Harvard Approach to Ethics: Unemployed at Stanford
July 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I attended such lousy schools no one bothered to cheat. No one was motivated. No parents cared. It was a glorious educational romp because the horizons for someone in a small town in the dead center of Illinois led nowhere. The proof? Visit a small town in Illinois and what do you see? Not much. Think of Cairo, Illinois, as a portent. In the interest of full disclosure, I did sell math and English homework to other students in that intellectual wasteland. Now you know how bad my education was. People bought “knowledge” from me. Go figure.
“You have been cheating,” says the old-fashioned high school teacher. The student who would rise to fame as a brilliant academician and consummate campus politician replies, “No, no, I would never do such a thing.” The student sitting next to this would-be future beacon of proper behavior snarls, “Oh, yes you were. You did not read Aristotle’s Ethics, so you copied exactly what I wrote in my blue book. You are disgusting. And your suspenders are stupid.”
But in big name schools, cheating apparently is a thing. Competition is keen. The stakes are high. I suppose that’s why an ethics professor at Harvard made some questionable decisions. I thought that somewhat scandalous situation would have motivated big name universities to sweep cheating even farther under the rug.
But no, no, no.
The Stanford student newspaper — presumably written by humanoid students awash with Philz Coffee — wrote “Stanford President Resigns over Manipulated Research, Will Retract at Least Three Papers.” The subtitle is cute; to wit:
Marc Tessier-Lavigne failed to address manipulated papers, fostered unhealthy lab dynamic, Stanford report says
Okay, this respected administrator and thought leader for the students who want to grow up to be just like Larry, Sergey, and Peter, among other luminaries, took some liberties with data.
The presumably humanoid-written article reports:
Tessier-Lavigne defended his reputation but acknowledged that issues with his research, first raised in a Daily investigation last autumn, meant that Stanford requires a president “whose leadership is not hampered by such discussions.”
I am confident reputation management firms and a modest convocation of legal eagles will explain this Harvard-echoing matter. With regard to the soon-to-be former president, I really don’t care about him, his allegedly fiddled research, and his tear-inducing explanation which will appear soon.
Here’s what I care about:
- Is it any wonder why graduates of Stanford University — plug in your favorite Sillycon Valley wizard who graduated from the prestigious university — find trust difficult to manifest? I don’t. I am not sure “trust”, excellence, and Stanford are words that can nest comfortably on campus.
- Is any academic research reproducible? I know that ballpark estimates suggest that as much as 40 percent of published research may manifest the tiny problem of results that cannot be duplicated. Is it time to think about what actions are teaching students what’s okay and what’s not?
- Does what I shall call “ethics rot” extend outside of academic institutions? My hunch is that big time universities have had some challenges with data in the past. No one bothered to check too closely. I know that the estimable William James looked for mistakes in the writings of those who disagreed with his radical empiricism stuff, but today? Yeah, today.
Net net: Ethical rot, not academic excellence, seems to be a growth business. Now, which Stanford graduates’ businesses have taken ethical short cuts to revenue? I hear crickets.
PS. Is it three, five, or an unknown number of papers with allegedly fakey wakey information? Perhaps the Stanford humanoids writing the article were hallucinating when working with the number of fiddled articles? Let’s ask Bard. Oh, right, a Stanford-infused service. The analogy is an institution as bereft as pathetic Cairo, Illinois. Check out some pictures here.
Stephen E Arnold, July 25, 2023
Citation Manipulation: Fiddling for Fame and Grant Money Perhaps?
July 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A fact about science and academia is that these fields are incredibly biased. Researchers, scientists, and professors are always on the hunt for funding and prestige. While these professionals state they uphold ethical practices, they are still human. In other words, they violate their ethics for a decent reward. Another prize for these individuals is being published, but even publishers are becoming partial, says Nature in “Researchers Who Agree To Manipulate Citations Are More Likely To Get Their Papers Published.”
A former university researcher practices his new craft: rigging dice for gangs running craps games. He said to my fictional interviewer, “The skills are directly transferable. I use dice manufactured by other people. I manipulate them. My degrees in statistics allow me to calculate what weights are needed to tip the odds. This new job pays well too. I do miss the faculty meetings, but the gang leaders often make it clear that if I need anything special, those fine gentlemen will accommodate my wishes.” MidJourney seems to have an affinity for certain artistic creations, like people who create loaded dice.
A recent study from Research Policy discovered that researchers are coerced by editors to include superfluous citations in their papers. Those who give in to the editors have a higher chance of getting published. If the citations are relevant to the researchers’ topic, what is the big deal? The problem is that the citations might not accurately represent the research or augment the original data. There is also the pressure to comply with industry politics:
“When scientists are coerced into padding their papers with citations, the journal editor might be looking to boost either their journal’s or their own citation counts, says study author Eric Fong, who studies research management at the University of Alabama in Huntsville. In other cases, peer reviewers might try to persuade authors to cite their work. Citation rings, in which multiple scholars or journals agree to cite each other excessively, can be harder to spot, because there are several stakeholders involved, instead of just two academics disproportionately citing one another.”
The study is over a decade old, but its results pertain to today’s scientific and academic environment. Academic journals want to inflate their citation counts to “justify” their importance to the industry and perhaps even to preserve the paywall incentive. Researchers are also pressured to add more authors because it helps someone pad a resume.
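To see why rings are harder to spot than two-party back scratching, consider a toy sketch in Python. Everything here is invented for illustration — the journal names, the counts, the threshold — and this is not the method used in the Research Policy study. A simple reciprocity check catches two journals citing each other heavily, but a three-party ring slips past the very same check:

```python
from itertools import combinations

# Toy citation counts: cites[a][b] = times journal a cites journal b.
# Invented names and numbers for illustration only.
cites = {
    "A": {"B": 40},   # A and B cite each other heavily...
    "B": {"A": 38},   # ...classic two-party reciprocity
    "X": {"Y": 25},   # X -> Y -> Z -> X form a ring: plenty of inflation,
    "Y": {"Z": 25},   # yet no single pair cites each other reciprocally
    "Z": {"X": 25},
}

def flag_reciprocal_pairs(cites, threshold=20):
    """Flag journal pairs that cite each other suspiciously often."""
    flagged = []
    for a, b in combinations(sorted(cites), 2):
        ab = cites.get(a, {}).get(b, 0)
        ba = cites.get(b, {}).get(a, 0)
        if ab >= threshold and ba >= threshold:
            flagged.append((a, b, ab, ba))
    return flagged

# Catches the A-B pair but misses the X-Y-Z ring entirely, which is
# the quoted point: more stakeholders, harder detection.
print(flag_reciprocal_pairs(cites))  # [('A', 'B', 40, 38)]
```

The design point: a pairwise rule only sees two stakeholders at a time, so a ring becomes visible only when the analysis considers cycles, not just pairs.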
These are not good practices for protecting science and academia’s integrity, but they are better than lying about results.
Whitney Grace, July 24, 2023
AI Commitments: But What about Chipmunks and the Bunny Rabbits?
July 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI sent executives to a meeting held at “the White House” to agree on some ground rules for “artificial intelligence.” AI is available from a number of companies and as free downloads as open source. Rumors have reached me suggesting that active research and development are underway in government agencies, universities, and companies located in a number of countries other than the U.S. Some believe the U.S. is the Zeus of AI, assisted by Naiads. Okay, but you know those Greek gods can be unpredictable.
Thus, what’s a commitment? I am not sure what the word means today. I asked You.com, a smart search system, to define the term for me. The system dutifully returned this explanation:
commitment is defined as “an agreement or pledge to do something in the future; the state or an instance of being obligated or emotionally impelled; the act of committing, especially the act of committing a crime.” In general, commitment refers to a promise or pledge to do something, often with a strong sense of dedication or obligation. It can also refer to a state of being emotionally invested in something or someone, or to the act of carrying out a particular action or decision.
Several words and phrases jumped out at me; namely, “do something in the future.” What does “do” mean? What is “the future”? Next week, next month, a decade from a specific point in time, etc.? “Obligated” is an intriguing word. What compels the obligation? A threat, a sense of duty, an understanding of a shared ethical fabric? “Promise” evokes a young person’s statement to a parent when caught drinking daddy’s beer; for example, “Mom, I promise I won’t do that again.” The “emotional” investment is an angle that reminds me that 40 to 50 percent of first marriages end in divorce. Commitments — even when bound by social values — are flimsy things for some. Would I fly on a commercial airline whose crash rate was 40 to 50 percent? Would you?
“Okay, we broke the window? Now what do we do?” asks the leader of the pack. “Run,” says the brightest of the group. “If we are caught, we just say, ‘Okay, we will fix it.’” “Will we?” asks the smallest of the gang. “Of course not,” replies the leader. Thanks, MidJourney, you create original kid images well.
Why make any noise about commitment?
I read “How Do the White House’s A.I. Commitments Stack Up?” The write up is a personal opinion about an agreement between “the White House” and the big US players in artificial intelligence. The focus was understandable because those in attendance are wrapped in the red, white, and blue; presumably pay taxes; and want to do what’s right, save the rain forest, and be green.
Some of the companies participating in the meeting have testified before Congress. I recall at least one of the firms’ senior managers saying, “Senator, thank you for that question. I don’t know the answer. I will have my team provide that information to you…” My hunch is that a few of the companies in attendance at the White House meeting could use the phrase or a similar one at some point in the “future.”
The table below lists most of the commitments to which the AI leaders showed some receptivity. The table presents the commitments in the left-hand column, and the right-hand column offers some hypothesized reactions from a nation state quite opposed to the United States, the US dollar, the hegemony of US technology, baseball, apple pie, etc.
| Commitments | Gamed Responses |
|---|---|
| Security testing before release | Based on historical security activities, not to worry |
| Sharing AI information | Let’s order pizza and plan a front company based in Walnut Creek |
| Protect IP about models | Let’s canvass our AI coders and pick some to get jobs at these outfits |
| Permit pentesting | Yes, pentesting. Order some white hats with happy faces |
| Tell users when AI content is produced | Yes, let’s become registered users. Who has a cousin in Mountain View? |
| Report about use of the AI technologies | Make sure we are on the mailing list for these reports |
| Research AI social risks | Do we own a research firm? Can we buy the research firm assisting these US companies? |
| Use AI to fix up social ills | What is a social ill? Call the general, please, and ask. |
The PR angle is obvious. I wonder if commitments will work. The firms have one objective; that is, to meet the expectations of their stakeholders. In order to do that, the firms must operate from the baseline of self-interest.
Net net: A plot of techno-land now has a few big outfits working and thinking hard about how to buy up the best plots. What about zoning, government regulations, and doing good things for small animals and wild flowers? Yeah. No problem.
Stephen E Arnold, July 23, 2023
Silicon Valley and Its Busy, Busy Beavers
July 21, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Several stories caught my attention. These are:
- The story “Google Pitches AI to Newsrooms As Tool to Help Reporters Write News Stories.” The main idea is that the “test” will allow newspaper publishers to become more efficient.
- The story “YouTube Premium Price Increase 2023: Users Calls for Lawsuit” explains that, to improve the experience, Google will raise its price for YouTube Premium. Was that service positioned as fixed price?
- The story “Google Gives a Peek at What a Quantum Computer Can Do” resurfaces the quantum supremacy assertion. Like high school hot rodders, Google suggests that its hardware is the most powerful, fastest, and slickest one in the Quantum School for Mavens.
- The story “Meta, Google, and OpenAI Promise the White House They’ll Develop AI Responsibly” reports that Google and other big tech outfits cross their hearts and hope to die that they will not act in an untoward manner.
Google’s busy beavers have been active: AI, pricing tactics, quantum goodness, and team building. Thanks, MidJourney, but you left out the computing devices which no high-value beaver goes without.
Google has allowed its beavers to gnaw on some organic material to build some dams. Specifically, the newspapers which have been affected by Google’s online advertising (no, I am not forgetting Craigslist.com; I am just focusing on the Google at the moment) can avail themselves of AI. The idea is… cost cutting. Could there be some learnings for the Google? What I mean is that such a series of tests or trials provides the Google with telemetry. Such telemetry allows the Google to refine its news writing capabilities. The trajectory of such knowledge may allow the Google to embark on its own newspaper experiment. Where will that lead? I don’t know, but it does not bode well for real journalists or some other entities.
The YouTube price increase is positioned as a better experience. Could the sharp increase in ads before, during, and after a YouTube video be part of a strategy? What I am hypothesizing is that more ads will force users to pay to be able to watch a YouTube video without being driven crazy by ads for cheap mobile, health products, and gun belts. Degrading the experience allows a customer to buy a better experience. Could that be semi-accurate?
The quantum supremacy thing strikes me as 100 percent PR with a dash of high school braggadocio. The write up speaks to me this way: “I got a higher score on the SAT.” Snort snort snort. The snorts are a sound track to putting down those whose machines just don’t have the right stuff. I wonder if this is how others perceive the article.
And the busy beavers turned up at the White House. The beavers say, “We will be responsible with this AI stuff. We AI promise.” Okay, I believe this because I don’t know what these creatures mean when the word “responsible” is used. I can guess, however.
Net net: The ethicist from Harvard and the soon-to-be-former president of Stanford are available to provide advisory services. Silicon Valley is a metaphor for many good things, especially for the companies and their senior executives. Life will get better and better with certain high technology outfits running the show, pulling the strings, and controlling information, won’t it?
Stephen E Arnold, July 21, 2023
Sam the AI-Man Explains His Favorite Song, My Way, to the European Union
July 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
It seems someone is uncomfortable with AI regulation despite asking for regulation. TIME posts this “Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation.” OpenAI insists AI must be regulated posthaste. CEO Sam Altman even testified to Congress about it. But when push comes to legislative action, the AI-man balks. At least when it affects his company. Reporter Billy Perrigo tells us:
“The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation. But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company.”
What, to Altman’s mind, makes OpenAI exempt from the much-needed regulation? Their product is a general-purpose AI, as opposed to a high-risk one. So it contributes to benign projects as well as consequential ones. How’s that for logic? Apparently it was good enough for EU regulators. Or maybe they just caved to OpenAI’s empty threat to pull out of Europe.
Is it true that Mr. AI-Man only follows the rules he promulgates? Thanks for the Leonardo-like image of students violating a university’s Keep Off the Grass rule.
We learn:
“The final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called ‘foundation models,’ or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments.”
Of course, all of this may be a moot point given the catch-22 of asking legislators to regulate technologies they do not understand. Tech companies’ lobbying dollars seem to provide the most clarity.
Cynthia Murrell, July 18, 2023
When Wizards Flail: The Mysteries of Smart Software
July 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
How about that smart software stuff? VCs are salivating. Whiz kids are emulating Sam AI-man. Users are hoping there is a job opening for a Wal-Mart greeter. But there is a hitch in the git along; specifically, some bright experts are not able to understand what smart software does to generate output. The cloud of unknowing is thick and has settled over the Land of Obfuscation.
“Even the Scientists Who Build AI Can’t Tell You How It Works” has a particularly interesting kicker:
“We built it, we trained it, but we don’t know what it’s doing.”
A group of artificial intelligence engineers struggles with the question, “What the heck is the system doing?” A click of the slide rule for MidJourney for this dramatic depiction of AI wizards at work.
The write up (which is an essay-interview confection) includes some thought-provoking comments. Here are five; you can visit the cited article for more scintillating insights:
Item 1: “… with reinforcement learning, you say, “All right, make this entire response more likely because the user liked it, and make this entire response less likely because the user didn’t like it.”
Item 2: “… The other big unknown that’s connected to this is we don’t know how to steer these things or control them in any reliable way. We can kind of nudge them…”
Item 3: “We don’t have the concepts that map onto these neurons to really be able to say anything interesting about how they behave.”
Item 4: “… we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree.”
Item 5: “… because there’s so much we don’t know about these systems, I imagine the spectrum of positive and negative possibilities is pretty wide.”
For more of this type of “explanation,” please, consult the source document cited above.
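Item 1 is concrete enough to sketch. Below is a minimal toy in Python — a softmax policy over three canned responses, entirely hypothetical and not anyone’s production system — showing what “make this entire response more likely” means mechanically. A thumbs-up nudges the chosen response’s probability up; a thumbs-down nudges it down; and the numbers never explain why the model favored an answer in the first place, which is the wizards’ point.

```python
import math

# A cartoon of the quoted idea: nudge an entire response up or down in
# probability based on user feedback. Hypothetical toy, not production RLHF.
responses = ["answer A", "answer B", "answer C"]
logits = [0.0, 0.0, 0.0]

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def update(chosen, feedback, lr=0.5):
    """Make the chosen response more (+1) or less (-1) likely.

    Uses the gradient of log p(chosen): the chosen logit moves up and
    the others move down, or vice versa for negative feedback.
    """
    probs = softmax(logits)
    for i in range(len(logits)):
        grad = (1.0 if i == chosen else 0.0) - probs[i]
        logits[i] += lr * feedback * grad

update(chosen=1, feedback=+1)  # a user liked "answer B"
update(chosen=2, feedback=-1)  # a user disliked "answer C"
for r, p in zip(responses, softmax(logits)):
    print(f"{r}: {p:.3f}")   # B now more probable, C less -- but why
                             # either answer was produced stays opaque
```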
Several observations:
- I like the nudge and watch approach. Humanoids learning about what their code does may be useful.
- The nudging is subjective (a human skill), as is the reference to growing a tree without knowing exactly how the tree works. Just do the bonsai thing. Interesting, but is it efficient? Will it work? Sure, or at least as much as Silicon Valley thinking permits.
- The wide spectrum of good and bad. My reaction is to ask the striking writers and actors what their views of the bad side of the deal are. What if the writers get frisky and start throwing spit balls or (heaven forbid) old IBM Selectric type balls? Scary.
Net net: Perhaps Google knows best? Tensors, big computers, need for money, and control of advertising — I think I know why Google tries so hard to frame the AI discussion. A useful exercise is to compare what Google’s winner in the smart software power struggle has to say about Google’s vision. You can find that PR emission at this link. Be aware that the interviewer’s questions are almost as long as the interview subject’s answers. Does either suggest downsides comparable to the five items cited in this blog post?
Stephen E Arnold, July 18, 2023
Hit Delete. Save Money. Data Liability Is Gone. Is That Right?
July 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“Reddit Removed Your Chat History from before 2023” stated:
… legacy chats were being migrated to the new chat platform and that only 2023 data is being brought over, adding that they “hope” a data export will help the user get back the older chats. The admin told another user asking whether there was an option to stay on the legacy chat that no, there isn’t, and Reddit is “working on making new chats better.”
A young attorney studies ancient Reddit data from 2023. That’s when information began, because a great cataclysm destroyed any previous, possibly useful data for a legal matter. But what about the Library of Congress? But what about the Internet Archive? But what about back up tapes at assorted archives? Yeah, right. Thanks for the data in amber, MidJourney.
The cited article does not raise the following obviously irrelevant questions:
- Are there backups which can be consulted?
- Are there copies of the Reddit chat data?
- Was the action taken to reduce costs or legal liability?
I am not a Reddit user, nor do I affix site:reddit or append the word “reddit” to my queries. Some may find the service useful, but I am a dinobaby and hopelessly out of touch with where the knowledge action is.
As an outsider, my initial reaction is that dumping data has two immediate paybacks: reduced storage costs and a reduced likelihood that a group of affable lawyers will ask for historic data about a Reddit user’s activity. My hunch is that users of a free service cannot fathom why a commercial enterprise would downgrade or eliminate a free service. Gee, why?
I think I would answer the question with one word, “Adulting.”
Stephen E Arnold, July 17, 2023
Refining Open: The AI Weak Spot during a Gold Rush
July 13, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Nope, I will make no reference to selling picks and denim pants to those involved in a gold rush. I do want to highlight the essay “AI Weights Are Not Open Source.” There is a nifty chart with rows and columns setting forth some conceptual facets of smart software. Please, navigate to the cited document so you can read the text in the rows and columns.
For me, the most important sentence in the essay is this one:
Many AI weights with the label “open” are not open source.
How are these “weights” determined or contrived? Are they derived by proprietary systems and methods? Are they assigned by a subject matter expert, by a software engineer using guess-timation, or by low-wage workers pressed into the task?
The answers to these questions reveal how models are configured to generate “good enough” results. Present models are prone to providing incomplete, incorrect, or pastiche information.
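For readers who wonder what a released “weights” file physically is, here is a minimal sketch. The layer names, shapes, and NumPy file format are my assumptions for illustration; real models use other formats and billions of parameters. The point survives the simplification: the released artifact is arrays of numbers, which reveal nothing about the data, code, or labor that produced them.

```python
import numpy as np

# What a released "weights" file actually is: arrays of numbers.
# Layer names, shapes, and values are invented for illustration.
rng = np.random.default_rng(seed=0)
weights = {
    "layer1_w": rng.normal(size=(4, 8)),
    "layer1_b": np.zeros(8),
    "layer2_w": rng.normal(size=(8, 2)),
    "layer2_b": np.zeros(2),
}
np.savez("model_weights.npz", **weights)  # the "open" artifact

# Anyone can load the arrays and run the model forward...
loaded = np.load("model_weights.npz")
x = rng.normal(size=(1, 4))
hidden = np.tanh(x @ loaded["layer1_w"] + loaded["layer1_b"])
output = hidden @ loaded["layer2_w"] + loaded["layer2_b"]
print(output)  # a prediction, of sorts

# ...but nothing in the file says how the weights were "determined or
# contrived": no training data, no training code, no labeling process.
```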
Furthermore, the popularity of obtaining images of Mr. Trump in an orange jumpsuit illustrates how “censorship” is applied to certain requests for information. Try it yourself. Navigate to MidJourney. Jump through the Discord hoops. Input the command “President Donald Trump in an orange jumpsuit.” Get the improper request flag. Then ask yourself, “How does BoingBoing keep creating Mr. Trump in an orange jumpsuit?”
Net net: The power of AI rests with the weights and controls which allow certain information and disallow other types of information. “Open” does not mean open like “the door is open.” For AI, “open” is, in my opinion, a means to obtain power and exert control.
Stephen E Arnold, July 13, 2023
Business Precepts for Silicon Valley: Shouting at the Grand Canyon?
July 13, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I love people with enthusiasm and passion. What’s important about these two qualities is that they often act like little dumpsters at the Grand Canyon. People put a range of discarded items into them, and hard-working contractors remove the contents and dump them in a landfill. (I hear some countries no longer accept trash from the US. Wow. Imagine that.)
Many years ago, the late industrial photographer John C Evans and I visited the Grand Canyon. We were visiting uranium mines and snapping pictures for a client. I don’t snap anything; I used to get paid to be in charge of said image making. I know. Quite a responsibility. I did know enough not to visit the uranium mine face. The photographer? Yeah, well, I did not provide too much information about dust, radiation, and the efficacy of breathing devices in 1973. Senior manager types are often prone to forgetting some details.
Back to the Grand Canyon.
There was an older person who was screaming into or at the Grand Canyon. Most visitors avoided the individual. I, however, walked over and listened to him. He was explaining that everyone had to embrace the sacred nature of the Grand Canyon and stop robbing the souls of the deceased by taking pictures. He provided other outputs about the evils of modern society, the cost of mule renting, and the prices in the “official” restaurants. Since I had no camera, he ignored me. He did yell at John C Evans, who smiled and snapped pictures.
I asked MidJourney to replicate this individual who thought the Grand Canyon, assorted unseen spirits, and the visitors were listening. Here’s what the estimable art system output:
I thought of this individual when I read “Seven Rules For Internet CEOs To Avoid Enshittification.” The write up, inspired by a real journalist, surfs on the professional terminology for ruining a free service. I find the term somewhat offensive, and I am amused at the broad use the neologism has found.
The article provides what I think are statements similar to the precepts outlined in a revered religious book or a collection of Ogden Nash statements. Let me point out that these statements contain elements of truth and would probably reduce philosophers like A.E.O. Taylor and William James to tears of joy because of their fundamental radical empiricism. Yeah. (These fellows would have told the photographer about the visit to the uranium mine face too.)
The write up lays out a Code of Conduct for some Silicon Valley-type companies. Let me present three of the seven statements and urge you to visit the original article to internalize the precepts as a whole. You may want to consider screaming these out loud in a small group of friends or possibly visiting a local park and shouting at the pedestal where a Civil War statue once stood ignored.
Selected precept one:
Tell your investors that you’re in this for the long haul and they need to be too.
Selected precept two:
Find ways to make money that don’t undermine the community or the experience.
Selected precept three and remember there are four more in the original write up:
Never charge for what was once free.
I selected three of these utterances because each touches upon money. Investors provide money to get more money in return. Power and fame are okay, but money is the core objective. Telling investors to wait or be patient is like telling a TikTok influencer to wait, stand in line like everyone else, or calm down. Tip: That does not work. Investors want money and in a snappy manner. Goals and timelines are part of the cost of taking their money. The Golden Rule: Those with the gold rule.
The idea of giving up money for community or the undefined experience is okay as long as it does not break the Golden Rule. If it does, those providing the funding will get someone who follows the Golden Rule. The mandate to never charge for what was once free is like a one-liner at a Comedy Club. Quite a laugh because money trumps customers and the fuzzy wuzzy notion of experience.
What’s my take on these and the full listing of precepts? Think about the notion of a senior manager retaining information for self-preservation. Think about the crazy person shouting rules into the Grand Canyon. Now ask how quickly certain Silicon Valley-type outfits will operate in a different way. Free insight: The Grand Canyon does not listen. The trash is removed by contractors. The old person shouting eventually gets tired, goes to the elder care facility or back to van life, and Silicon Valley steps boldly toward enshittification. That’s the term, right?
Stephen E Arnold, July 13, 2023
Understanding Reality: A Job for MuskAI
July 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Elon Musk Launches His Own xAI Biz to Understand Reality.” Upon reading this article, I was immediately perturbed. The name of the company should be MuskAI (pronounced mus-key, like the lovable muskox, Ovibos moschatus). This imposing and aromatic animal can tip the scales at up to 900 pounds. Take that to the cage match and watch the opposition wilt or at least scrunch up its nose.
I also wanted to interpret the xAI as AIX. IBM, discharger of dinobabies, could find that amusing. (What happens when AIX memory is corrupted? Answer: Aches in the posterior. Snort snort.)
Finally, my thoughts coalesced around the name Elon-AI, illustrated below by the affable MidJourney:
Bummer. Elon AI is the name of a “coin.” And the proper name Elonai means “a person who has the potential to attain spiritual enlightenment.” A natural!
The article reports:
Elon Musk is founding his own AI company with some lofty ambitions. According to the billionaire, his xAI venture is being formed “to understand reality.” Those hoping to get a better explanation than Musk’s brief tweet by visiting xAI’s website won’t find much to help them understand what the company actually plans to do there, either. “The goal of xAI is to understand the true nature of the universe,” xAI said of itself…
I have a number of questions. Let me ask one:
Will Elon AI go after the Zuck AI?
And another:
Will the two AIs power an unmanned fighter jet, each loaded with live ordnance?
And the must-ask:
Will the AIs attempt to kill one another?
The mano-a-mano fight in Las Vegas (maybe in that weird venue appliquéd with itsy-bitsy LEDs) is less interesting to me than watching two warbirds from the Dayton Air Museum gear up and dogfight.
Imagine a YouTube video, then some TikToks, and finally a Netflix original released to the few remaining old-fashioned theaters.
That’s entertainment. Sigh. I mean xAI.
Stephen E Arnold, July 12, 2023