Ethics Are in the News — Now a Daily Feature?
July 27, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
It is déjà vu all over again, or it seems like it. I read “Judge Finds Forensic Scientist Henry Lee Liable for Fabricating Evidence in a Murder Case.” Yep, that is the story. Scientist Lee allegedly has a knack for fiction presented as non-fiction; that is, making up stuff or arranging items in a special way. One of my relatives founded Hartford, Connecticut, in 1635. I am not sure he would have been on board with this make-stuff-up approach to data. (According to our family lore, John Arnold was into beating people with a stick.) Dr. Lee is a big wheel because he worked on the 1995 running-through-airports trial. The cited article includes this interesting sentence:
[Scientist] Lee’s work in several other cases has come under scrutiny…
No one is watching. A noted scientist helps himself to the cookies in the lab’s cookie jar. He is heard mumbling, “Cookies. I love cookies. I am going to eat as many of these suckers as I can because I am alone. And who cares about anyone else in this lab? Not me.” Chomp chomp chomp. Thanks, MidJourney. You depicted an okay scientist but refused to create an image of a great leader whom I identified by proper name. For this I paid money?
Let me mention three ethics incidents which for one reason or another hit my radar:
- MIT accepting cash from every young person’s friend Jeffrey Epstein. He allegedly killed himself. He’s off the table.
- The Harvard ethics professor who made up data. She’s probably doing consulting work now. I don’t know if she will get back into the classroom. If she does it might be in the Harvard Business School. Those students have a hunger for information about ethics.
- The soon-to-be-departed president of Stanford University. He may find a future using ChatGPT or an equivalent to write technical articles while angling for a gig on cable TV.
What do these allegedly true incidents tell us about the moral fiber of some people in positions of influence? I have a few ideas. Now the task is remediation. When John Arnold chopped wood in Hartford, justice involved ostracism, possibly a public shaming, or rough justice played out to the theme from Hang ‘Em High.
Harvard, MIT, and Stanford: Aren’t universities supposed to set an example for impressionable young minds? What are the students learning? Anything goes? Prevaricate? Cut corners? Grub money?
Imagine sweatshirts with the college logo and this word on the front and back of the garment: “Winner.” Some at Amazon, Apple, Facebook, Google, Microsoft, and OpenAI might wear them to the next off-site. I would wager that one turns up in the Rayburn House Office Building wellness room.
Stephen E Arnold, July 27, 2023
AI Leaders and the Art of Misdirection
July 27, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Lately, leaders at tech companies seem to have slipped into a sci-fi movie.
“Trust me. AI is really good. I have been working to create a technology which will help the world. I want to make customers and you, Senator, trust us. I and other AI executives want to save whales. We want the snail darter to thrive. We want the homeless to have suitable housing. AI will deliver this and more, plus power and big bucks to us!” asserts the sincere AI wizard with a PhD and an MBA.
Rein in our algorithmic monster immediately before it takes over the world and destroys us all! But AI Snake Oil asks, “Is Avoiding Extinction from AI Really an Urgent Priority?” Or is it a red herring? Writers Seth Lazar, Jeremy Howard, and Arvind Narayanan consider:
“Start with the focus on risks from AI. This is an ambiguous phrase, but it implies an autonomous rogue agent. What about risks posed by people who negligently, recklessly, or maliciously use AI systems? Whatever harms we are concerned might be possible from a rogue AI will be far more likely at a much earlier stage as a result of a ‘rogue human’ with AI’s assistance. Indeed, focusing on this particular threat might exacerbate the more likely risks. The history of technology to date suggests that the greatest risks come not from technology itself, but from the people who control the technology using it to accumulate power and wealth. The AI industry leaders who have signed this statement are precisely the people best positioned to do just that. And in calling for regulations to address the risks of future rogue AI systems, they have proposed interventions that would further cement their power. We should be wary of Prometheans who want to both profit from bringing the people fire, and be trusted as the firefighters.”
Excellent point. But what, specifically, are the rich and powerful trying to distract us from here? Existing AI systems are already causing harm, and have been for some time. Without mitigation, this problem will only worsen. There are actions that can be taken, but who can focus on that when our very existence is (supposedly) at stake? Probably not our legislators.
Cynthia Murrell, July 27, 2023
Google, You Are Constantly Surprising: Planned Obsolescence, Allegations of IP Impropriety, and Gardening Leave
July 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I find Google to be an interesting company, possibly more intriguing than the tweeter X outfit. As I zipped through my newsfeed this morning while dutifully riding the exercise machine, I noticed three stories. Each provides a glimpse of the excitement that Google engenders. Let me share these items with you because I am not sure each will get the boost from the tweeter X outfit.
Google is in the news and causing consternation in the mind of this MidJourney creation. At least one Google advocate finds the information shocking. Imagine: planned obsolescence, alleged theft of intellectual property, and sending a Googler with a 13-year work history home to “garden.”
The first story comes from Oakland, California. California is a bastion of good living and clear thinking. “Thousands of Chromebooks Are ‘Expiring,’ Forcing Schools to Toss Them Out” explains that Google has designed obsolescence into Chromebooks used in schools. Why, one may ask? Here’s the answer:
Google told OUSD [Oakland Unified School District] the baked-in death dates are necessary for security and compatibility purposes. As Google continues to iterate on its Chromebook software, older devices supposedly can’t handle the updates.
Yes, security, compatibility, and the march of Googleware. My take is that green talk is PR. The reality is landfill.
The second story is from the Android Authority online news service. One would expect good news or semi-happy information about my beloved Google. But, alas, there is the story “Google Ordered to Pay $339M for Stealing the Very Idea of Chromecast.” The operative word is “stealing.” Wow. The Google? The write up states:
Google opposed the complaint, arguing that the patents are “hardly foundational and do not cover every method of selecting content on a personal device and watching it on another screen.”
Yep, “hardly,” but stealing nonetheless. That’s quite an allegation. It raises the question, “Are there any other Google actions which have suggested similar behavior; for example, an architecture-related method, an online advertising process, or alleged misuse of intellectual property?” Oh, my.
The third story is a personnel matter. Google has a highly refined human resource methodology. “Google’s Indian-Origin Director of News Laid Off after 13 Years: In Privileged Position” reveals this actual factual tidbit:
Google has sent Chinnappa on a “gardening leave…
Ah, ha, Google is taking steps to further its green agenda. I wonder if the “Indian origin Xoogler” will dig a hole and fill it with Chromebooks from the Oakland school district.
Amazing, beloved Google. Amazing.
Stephen E Arnold, July 25, 2023
Harvard Approach to Ethics: Unemployed at Stanford
July 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I attended such lousy schools no one bothered to cheat. No one was motivated. No parents cared. It was a glorious educational romp because the horizons for someone in a small town in the dead center of Illinois led nowhere. The proof? Visit a small town in Illinois and what do you see? Not much. Think of Cairo, Illinois, as a portent. In the interest of full disclosure, I did sell math and English homework to other students in that intellectual wasteland. Now you know how bad my education was. People bought “knowledge” from me. Go figure.
“You have been cheating,” says the old-fashioned high school teacher. The student who would rise to fame as a brilliant academician and consummate campus politician replies, “No, no, I would never do such a thing.” The student sitting next to this would-be future beacon of proper behavior snarls, “Oh, yes you were. You did not read Aristotle’s Ethics, so you copied exactly what I wrote in my blue book. You are disgusting. And your suspenders are stupid.”
But in big-name schools, cheating apparently is a thing. Competition is keen. The stakes are high. I suppose that’s why an ethics professor at Harvard made some questionable decisions. I thought that somewhat scandalous situation would have motivated big-name universities to sweep cheating even further under the rug.
But no, no, no.
The Stanford student newspaper — presumably written by humanoid students awash in Philz Coffee — wrote “Stanford President Resigns over Manipulated Research, Will Retract at Least Three Papers.” The subtitle is cute; to wit:
Marc Tessier-Lavigne failed to address manipulated papers, fostered unhealthy lab dynamic, Stanford report says
Okay, this respected administrator and thought leader for the students who want to grow up to be just like Larry, Sergey, and Peter, among other luminaries, took some liberties with data.
The presumably humanoid-written article reports:
Tessier-Lavigne defended his reputation but acknowledged that issues with his research, first raised in a Daily investigation last autumn, meant that Stanford requires a president “whose leadership is not hampered by such discussions.”
I am confident reputation management firms and a modest convocation of legal eagles will explain this Harvard-echoing matter. With regard to the soon-to-be former president, I really don’t care about him, his allegedly fiddled research, and his tear-inducing explanation which will appear soon.
Here’s what I care about:
- Is it any wonder why graduates of Stanford University — plug in your favorite Sillycon Valley wizard who graduated from the prestigious university — find trust difficult to manifest? I don’t. I am not sure “trust,” excellence, and Stanford are words that can nest comfortably on campus.
- Is any academic research reproducible? I know that ballpark estimates suggest that as much as 40 percent of published research may manifest the tiny problem of results that cannot be duplicated. Is it time to think about what these actions are teaching students about what’s okay and what’s not?
- Does what I shall call “ethics rot” extend outside of academic institutions? My hunch is that big-time universities have had some challenges with data in the past. No one bothered to check too closely. I know that the estimable William James looked for mistakes in the writings of those who disagreed with his radical empiricism, but today? Yeah, today.
Net net: Ethical rot, not academic excellence, seems to be a growth business. Now, which Stanford graduates’ businesses have taken ethical shortcuts to revenue? I hear crickets.
PS: Is it three, five, or an unknown number of papers with allegedly fakey wakey information? Perhaps the Stanford humanoids writing the article were hallucinating when counting the fiddled articles. Let’s ask Bard. Oh, right, a Stanford-infused service. The analogy: an institution as bereft as pathetic Cairo, Illinois. Check out some pictures here.
Stephen E Arnold, July 25, 2023
Citation Manipulation: Fiddling for Fame and Grant Money Perhaps?
July 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A fact about science and academia is that these fields are incredibly biased. Researchers, scientists, and professors are always on the hunt for funding and prestige. While these professionals state they uphold ethical practices, they are still human. In other words, they will violate their ethics for a decent reward. Another prize for these individuals is being published, but even publishers are becoming partial, says Nature in “Researchers Who Agree To Manipulate Citations Are More Likely To Get Their Papers Published.”
A former university researcher practices his new craft: rigging dice for gangs running craps games. He said to my fictional interviewer, “The skills are directly transferable. I use dice manufactured by other people. I manipulate them. My degrees in statistics allow me to calculate what weights are needed to tip the odds. This new job pays well too. I do miss the faculty meetings, but the gang leaders often make it clear that if I need anything special, those fine gentlemen will accommodate my wishes.” MidJourney seems to have an affinity for certain artistic creations like people who create loaded dice.
A study published in Research Policy discovered that researchers are coerced by editors into including superfluous citations in their papers. Those who give in to the editors have a higher chance of getting published. If the citations are relevant to the researchers’ topic, what is the big deal? The problem is that the citations might not accurately represent the research or augment the original data. There is also the pressure to comply with industry politics:
“When scientists are coerced into padding their papers with citations, the journal editor might be looking to boost either their journal’s or their own citation counts, says study author Eric Fong, who studies research management at the University of Alabama in Huntsville. In other cases, peer reviewers might try to persuade authors to cite their work. Citation rings, in which multiple scholars or journals agree to cite each other excessively, can be harder to spot, because there are several stakeholders involved, instead of just two academics disproportionately citing one another.”
The study is over a decade old, but its results pertain to today’s scientific and academic environment. Academic journals want to inflate their citations to “justify” their importance to the industry, and maybe even to keep the paywall incentive alive. Researchers are also pressured to add more authors because doing so helps someone pad a résumé.
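Citation-ring spotting is, at bottom, a graph exercise. Below is a toy sketch of the simplest two-party case: flag author pairs whose mutual citations dominate each author’s citing behavior. To be clear, this is my own illustration, not anything from the Research Policy study; the names, counts, and thresholds are invented.

```python
# Toy heuristic for spotting a two-party citation ring: flag author pairs
# who cite each other heavily AND devote most of their citations to each other.
# All names, counts, and thresholds here are hypothetical.
from collections import defaultdict

# citations[(a, b)] = number of times author a cites author b
citations = defaultdict(int)
records = [
    ("smith", "jones"), ("smith", "jones"), ("smith", "jones"),
    ("jones", "smith"), ("jones", "smith"), ("jones", "smith"),
    ("smith", "lee"), ("jones", "garcia"), ("lee", "garcia"),
]
for citing, cited in records:
    citations[(citing, cited)] += 1

def out_degree(author):
    """Total citations an author hands out to everyone."""
    return sum(n for (a, _), n in citations.items() if a == author)

def suspicious_pairs(min_mutual=3, min_share=0.5):
    """Pairs with heavy, reciprocal, disproportionate citation."""
    flagged = set()
    for (a, b), n_ab in citations.items():
        n_ba = citations.get((b, a), 0)
        if n_ab >= min_mutual and n_ba >= min_mutual:
            if (n_ab / out_degree(a) >= min_share
                    and n_ba / out_degree(b) >= min_share):
                flagged.add(tuple(sorted((a, b))))
    return flagged

print(suspicious_pairs())  # {('jones', 'smith')}
```

Rings with several stakeholders would require cycle detection across the whole citation graph rather than a pairwise check, which is exactly why the study’s author says they are harder to spot.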
These are not good practices for protecting science and academia’s integrity, but citation padding is still better than lying about results.
Whitney Grace, July 24, 2023
AI Commitments: But What about Chipmunks and the Bunny Rabbits?
July 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI sent executives to a meeting held in “the White House” to agree on some ground rules for “artificial intelligence.” AI is available from a number of companies and as free, open source downloads. Rumors have reached me suggesting that active research and development are underway in government agencies, universities, and companies located in a number of countries other than the U.S. Some believe the U.S. is the Zeus of AI, assisted by Naiads. Okay, but you know those Greek gods can be unpredictable.
Thus, what’s a commitment? I am not sure what the word means today. I asked You.com, a smart search system, to define the term for me. The system dutifully returned this explanation:
commitment is defined as “an agreement or pledge to do something in the future; the state or an instance of being obligated or emotionally impelled; the act of committing, especially the act of committing a crime.” In general, commitment refers to a promise or pledge to do something, often with a strong sense of dedication or obligation. It can also refer to a state of being emotionally invested in something or someone, or to the act of carrying out a particular action or decision.
Several words and phrases jumped out at me; namely, “do something in the future.” What does “do” mean? What is “the future”? Next week, next month, a decade from a specific point in time, etc.? “Obligated” is an intriguing word. What compels the obligation? A threat, a sense of duty, an understanding of a shared ethical fabric? “Promise” evokes a young person’s statement to a parent when caught drinking daddy’s beer; for example, “Mom, I promise I won’t do that again.” The “emotional” investment is an angle that reminds me that 40 to 50 percent of first marriages end in divorce. Commitments — even when bound by social values — are flimsy things for some. Would I fly on a commercial airline whose crash rate was 40 to 50 percent? Would you?
“Okay, we broke the window. Now what do we do?” asks the leader of the pack. “Run,” says the brightest of the group. “If we are caught, we just say, ‘Okay, we will fix it.’” “Will we?” asks the smallest of the gang. “Of course not,” replies the leader. Thanks, MidJourney, you create original kid images well.
Why make any noise about commitment?
I read “How Do the White House’s A.I. Commitments Stack Up?” The write up is a personal opinion about an agreement between “the White House” and the big US players in artificial intelligence. The focus was understandable because those in attendance are wrapped in the red, white, and blue; presumably pay taxes; and want to do what’s right, save the rain forest, and be green.
Some of the companies participating in the meeting have testified before Congress. I recall at least one of the firms’ senior managers saying, “Senator, thank you for that question. I don’t know the answer. I will have my team provide that information to you…” My hunch is that a few of the companies in attendance at the White House meeting could use the phrase or a similar one at some point in the “future.”
The table below lists most of the commitments to which the AI leaders showed some receptivity. The left-hand column presents the commitments; the right-hand column offers some hypothesized reactions from a nation state quite opposed to the United States, the US dollar, the hegemony of US technology, baseball, apple pie, etc.
| Commitments | Gamed Responses |
|---|---|
| Security testing before release | Based on historical security activities, not to worry |
| Sharing AI information | Let’s order pizza and plan a front company based in Walnut Creek |
| Protect IP about models | Let’s canvass our AI coders and pick some to get jobs at these outfits |
| Permit pentesting | Yes, pentesting. Order some white hats with happy faces |
| Tell users when AI content is produced | Yes, let’s become registered users. Who has a cousin in Mountain View? |
| Report about use of the AI technologies | Make sure we are on the mailing list for these reports |
| Research AI social risks | Do we own a research firm? Can we buy the research firm assisting these US companies? |
| Use AI to fix up social ills | What is a social ill? Call the general, please, and ask. |
The PR angle is obvious. I wonder if the commitments will work. The firms have one objective; that is, to meet the expectations of their stakeholders. In order to do that, the firms must operate from a baseline of self-interest.
Net net: A plot of techno-land now has a few big outfits working and thinking hard about how to buy up the best parcels. What about zoning, government regulations, and doing good things for small animals and wildflowers? Yeah. No problem.
Stephen E Arnold, July 23, 2023
Silicon Valley and Its Busy, Busy Beavers
July 21, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Several stories caught my attention. These are:
- The story “Google Pitches AI to Newsrooms As Tool to Help Reporters Write News Stories.” The main idea is that the “test” will allow newspaper publishers to become more efficient.
- The story “YouTube Premium Price Increase 2023: Users Calls for Lawsuit” explains that, to improve the experience, Google will raise its price for YouTube Premium. Was that service positioned as fixed price?
- The story “Google Gives a Peek at What a Quantum Computer Can Do” resurfaces the quantum supremacy assertion. Like high school hot rodders, Google suggests that its hardware is the most powerful, fastest, and slickest one in the Quantum School for Mavens.
- The story “Meta, Google, and OpenAI Promise the White House They’ll Develop AI Responsibly” reports that Google and other big tech outfits cross their hearts and hope to die that they will not act in an untoward manner.
Google’s busy beavers have been active: AI, pricing tactics, quantum goodness, and team building. Thanks, MidJourney, but you left out the computing devices which no high-value beaver goes without.
Google has allowed its beavers to gnaw on some organic material to build some dams. Specifically, the newspapers which have been affected by Google’s online advertising (no, I am not forgetting Craigslist.com; I am just focusing on the Google at the moment) can avail themselves of AI. The idea is… cost cutting. Could there be some learnings for the Google? What I mean is that such a series of tests or trials provides the Google with telemetry. That telemetry allows the Google to refine its news writing capabilities. The trajectory of such knowledge may allow the Google to embark on its own newspaper experiment. Where will that lead? I don’t know, but it does not bode well for real journalists or some other entities.
The YouTube price increase is positioned as a better experience. Could the sharp increase in ads before, during, and after a YouTube video be part of a strategy? What I am hypothesizing is that more ads will force users to pay to watch a YouTube video without being driven crazy by ads for cheap mobile phones, health products, and gun belts. Degrading the experience allows a customer to buy a better experience. Could that be semi-accurate?
The quantum supremacy thing strikes me as 100 percent PR with a dash of high school braggadocio. The write up speaks to me this way: “I got a higher score on the SAT.” Snort snort snort. The snorts are a sound track to putting down those whose machines just don’t have the right stuff. I wonder if this is how others perceive the article.
And the busy beavers turned up at the White House. The beavers say, “We will be responsible with this AI stuff. We AI promise.” Okay, I believe this because I don’t know what these creatures mean when the word “responsible” is used. I can guess, however.
Net net: The ethicist from Harvard and the soon-to-be-former president of Stanford are available to provide advisory services. Silicon Valley is a metaphor for many good things, especially for the companies and their senior executives. Life will get better and better with certain high technology outfits running the show, pulling the strings, and controlling information, won’t it?
Stephen E Arnold, July 21, 2023
Sam the AI-Man Explains His Favorite Song, My Way, to the European Union
July 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
It seems someone is uncomfortable with AI regulation despite asking for regulation. TIME posts this “Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation.” OpenAI insists AI must be regulated posthaste. CEO Sam Altman even testified to Congress about it. But when push comes to legislative action, the AI-man balks. At least when it affects his company. Reporter Billy Perrigo tells us:
“The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation. But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company.”
What, to Altman’s mind, makes OpenAI exempt from the much-needed regulation? Their product is a general-purpose AI, as opposed to a high-risk one. So it contributes to benign projects as well as consequential ones. How’s that for logic? Apparently it was good enough for EU regulators. Or maybe they just caved to OpenAI’s empty threat to pull out of Europe.
Is it true that Mr. AI-Man only follows the rules he promulgates? Thanks for the Leonardo-like image of students violating a university’s Keep Off the Grass rule.
We learn:
“The final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called ‘foundation models,’ or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments.”
Of course, all of this may be a moot point given the catch-22 of asking legislators to regulate technologies they do not understand. Tech companies’ lobbying dollars seem to provide the most clarity.
Cynthia Murrell, July 18, 2023
When Wizards Flail: The Mysteries of Smart Software
July 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
How about that smart software stuff? VCs are salivating. Whiz kids are emulating Sam AI-man. Users are hoping there is a job opening for a Wal-Mart greeter. But there is a hitch in the git along; specifically, some bright experts are not able to understand what smart software does to generate output. The cloud of unknowing is thick and has settled over the Land of Obfuscation.
“Even the Scientists Who Build AI Can’t Tell You How It Works” has a particularly interesting kicker:
“We built it, we trained it, but we don’t know what it’s doing.”
A group of artificial intelligence engineers struggling with the question, “What the heck is the system doing?” A click of the slide rule for MidJourney for this dramatic depiction of AI wizards at work.
The write up (which is an essay-interview confection) includes some thought-provoking comments. Here are five; you can visit the cited article for more scintillating insights:
Item 1: “… with reinforcement learning, you say, “All right, make this entire response more likely because the user liked it, and make this entire response less likely because the user didn’t like it.”
Item 2: “… The other big unknown that’s connected to this is we don’t know how to steer these things or control them in any reliable way. We can kind of nudge them…”
Item 3: “We don’t have the concepts that map onto these neurons to really be able to say anything interesting about how they behave.”
Item 4: “… we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree.”
Item 5: “… because there’s so much we don’t know about these systems, I imagine the spectrum of positive and negative possibilities is pretty wide.”
For more of this type of “explanation,” please, consult the source document cited above.
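For what it is worth, Item 1 does describe a concrete mechanism. Here is a deliberately tiny sketch of that feedback loop, assuming a “model” that is nothing but a softmax over three canned responses; the real systems adjust billions of parameters at once, which is exactly why nobody can say what changed inside. The response names and learning rate are my inventions, not anything from the cited article.

```python
# Minimal sketch of the Item 1 idea: make an entire response more likely
# when the user liked it, less likely when the user did not.
# The three canned responses and the learning rate are invented for illustration.
import math

logits = {"response_a": 0.0, "response_b": 0.0, "response_c": 0.0}

def probabilities(weights):
    """Softmax over the response logits."""
    z = sum(math.exp(v) for v in weights.values())
    return {k: math.exp(v) / z for k, v in weights.items()}

def reinforce(response, liked, lr=0.5):
    """Nudge the whole response up or down based on user feedback."""
    reward = 1.0 if liked else -1.0
    logits[response] += lr * reward

print(probabilities(logits))          # uniform before any feedback
reinforce("response_a", liked=True)   # a thumbs up
reinforce("response_c", liked=False)  # a thumbs down
print(probabilities(logits))          # mass shifts toward response_a
```

The nudge is easy to code; explaining which internal “neurons” made response_a attractive in the first place is the part the wizards admit they cannot do.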
Several observations:
- I like the nudge and watch approach. Humanoids learning about what their code does may be useful.
- The nudging is subjective (a human skill), and the tree reference amounts to growing something without knowing exactly how the growth happens. Just do the bonsai thing. Interesting, but is it efficient? Will it work? Sure, or at least as far as Silicon Valley thinking permits.
- The wide spectrum of good and bad. My reaction is to ask the striking writers and actors what their view of the bad side of the deal is. What if the writers get frisky and start throwing spit balls or (heaven forbid) old IBM Selectric type balls? Scary.
Net net: Perhaps Google knows best? Tensors, big computers, need for money, and control of advertising — I think I know why Google tries so hard to frame the AI discussion. A useful exercise is to compare what Google’s winner in the smart software power struggle has to say about Google’s vision. You can find that PR emission at this link. Be aware that the interviewer’s questions are almost as long as the interview subject’s answers. Does either suggest downsides comparable to the five items cited in this blog post?
Stephen E Arnold, July 18, 2023
Hit Delete. Save Money. Data Liability Is Gone. Is That Right?
July 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“Reddit Removed Your Chat History from before 2023” stated:
… legacy chats were being migrated to the new chat platform and that only 2023 data is being brought over, adding that they “hope” a data export will help the user get back the older chats. The admin told another user asking whether there was an option to stay on the legacy chat that no, there isn’t, and Reddit is “working on making new chats better.”
A young attorney studies ancient Reddit data from 2023. That’s when information began, because a great cataclysm destroyed any previous, possibly useful data for a legal matter. But what about the Library of Congress? But what about the Internet Archive? But what about backup tapes at assorted archives? Yeah, right. Thanks for the data in amber, MidJourney.
The cited article does not raise the following obviously irrelevant questions:
- Are there backups which can be consulted?
- Are there copies of the Reddit chat data?
- Was the action taken to reduce costs or legal liability?
I am not a Reddit user, nor do I affix site:reddit or append the word “reddit” to my queries. Some may find the service useful, but I am a dinobaby and hopelessly out of touch with where the knowledge action is.
As an outsider, my initial reaction is that dumping data has two immediate paybacks: reduced storage costs and a lower likelihood that a group of affable lawyers will ask for historic data about a Reddit user’s activity. My hunch is that users of a free service cannot fathom why a commercial enterprise would downgrade or eliminate a free service. Gee, why?
I think I would answer the question with one word, “Adulting.”
Stephen E Arnold, July 17, 2023