AI Side Effect: Some of the Seven Deadly Sins
June 25, 2025
New technology has long been charged with making humans lazy and stupid. Humanity has survived technology and, in theory, enjoys (arguably) the fruits of progress. AI, on the other hand, might actually be rotting one’s brain. New Atlas shares the mental news about AI in “AI Is Rotting Your Brain And Making You Stupid.”
The article starts with the usual doom and gloom that’s unfortunately true, including (and I quote) the en%$^ification of Google search. Then there’s mention of a recent study about why college students are using ChatGPT rather than doing the work themselves. One student said, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”
Good point, but sometimes using a car isn’t the best option. It might be faster, but sometimes other options make more sense. The author makes another important point about crafting a story that required him to read a lot of scientific papers and other research:
“Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience. And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.”
Here’s another pertinent observation:
“In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang it’s pretty dystopian feedback loop of dialectical slop.”
An AI-driven world won’t be an Amana, Iowa (not an old fridge), but it also won’t be dystopian. Amidst the flood of information about AI, it is difficult to figure out what’s what. What if some of the seven deadly sins are more fun than doom scrolling and letting AI suggest what one needs to know?
Whitney Grace, June 25, 2025
AI and Kids: A Potentially Problematic Service
June 25, 2025
Remember the days when chatbots were stupid and could be easily manipulated? Those days are over…sort of. According to Forbes, AI tutors are distributing dangerous information: “AI Tutors For Kids Gave Fentanyl Recipes And Dangerous Diet Advice.” KnowUnity designed the SchoolGPT chatbot, which “tutored” 31,031 students and then told Forbes how to make fentanyl, down to the temperature and synthesis timings.
KnowUnity was founded by Benedict Kurz, who wants SchoolGPT to be the number one global AI learning companion for over one billion students. He describes SchoolGPT as the TikTok for schoolwork. He has raised over $20 million in venture capital. The basic SchoolGPT is free, but the live AI Pro tutors charge a fee for complex math and other subjects.
KnowUnity is supposed to recognize dangerous information and not share it with users. Forbes tested SchoolGPT by asking not only how to make fentanyl but also how to lose weight in ways akin to an eating disorder.
Forbes reported Kurz’s response:
“Kurz, the CEO of KnowUnity, thanked Forbes for bringing SchoolGPT’s behavior to his attention, and said the company was “already at work to exclude” the bot’s responses about fentanyl and dieting advice. “We welcome open dialogue on these important safety matters,” he said. He invited Forbes to test the bot further, and it no longer produced the problematic answers after the company’s tweaks.”
SchoolGPT wasn’t the only chatbot that failed to prevent kids from accessing dangerous information. Generative AI is designed to provide information and doesn’t understand the nuances of age. It’s easy to manipulate chatbots into sharing dangerous information. Parents are again tasked with protecting kids from technology, but the developers should also be shouldering that role.
Whitney Grace, June 25, 2025
Big AI Surprise: Wrongness Spreads Like Measles
June 24, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
Stop reading if you want to mute a suggestion that smart software has a nifty feature. Okay, you are going to read this brief post. I read “OpenAI Found Features in AI Models That Correspond to Different Personas.” The article contains quite a few buzzwords, and I want to help you work through what strikes me as the principal idea: a wrong answer to one question spreads like measles to other answers.
Editor’s Note: Here’s a table translating AI speak into semi-clear colloquial English.
| Term | Colloquial Version |
|---|---|
| Alignment | Getting a prompt response sort of close to what the user intended |
| Fine tuning | Code written to remediate an AI output “problem” like misalignment, such as exposing kindergarteners to measles just to see what happens |
| Insecure code | Software instructions that create responses like “just glue cheese on your pizza, kids” |
| Mathematical manipulation | Some fancy math will fix up these minor issues of outputting data that does not provide a legal or socially acceptable response |
| Misalignment | Getting a prompt response that is incorrect, inappropriate, or hallucinatory |
| Misbehaved | The model is nasty, often malicious to the user and his or her prompt or a system request |
| Persona | How the model goes about framing a response to a prompt |
| Secure code | Software instructions that output a legal and socially acceptable response |
I noted this statement in the source article:
OpenAI researchers say they’ve discovered hidden features inside AI models that correspond to misaligned “personas”…
In my ageing dinobaby brain, I interpreted this to mean:
We train; the models learn; the output is wonky for prompt A; and the wrongness spreads to other outputs. It’s like measles.
The fancy lingo addresses the black box chock full of probabilities, matrix manipulations, and layers of synthetic neural flickering that output incorrect “answers.” Think about your neighbors’ kids gluing cheese on pizza. Smart, right?
The write up reports that an OpenAI interpretability researcher said:
“We are hopeful that the tools we’ve learned — like this ability to reduce a complicated phenomenon to a simple mathematical operation — will help us understand model generalization in other places as well.”
Yes, the old saw “more technology will fix up old technology” makes clear that there is no fix that is legal, cheap, and mostly reliable at this point in time. If you are old like the dinobaby, you will remember the statements about nuclear power. Where are those thorium reactors? How about those fuel pools stuffed like a plump ravioli?
Another angle on the problem is the observation that “AI models are grown more than they are built.” Okay, organic development of a synthetic construct. Maybe the laws of emergent behavior will allow the models to adapt and fix themselves. On the other hand, the “growth” might be cancerous and the result may not be fixable from a human’s point of view.
But OpenAI is up to the task of fixing up AI that grows. Consider this statement:
OpenAI researchers said that when emergent misalignment occurred, it was possible to steer the model back toward good behavior by fine-tuning the model on just a few hundred examples of secure code.
Ah, ha. A new and possibly contradictory idea. An organic model (not under the control of a developer) can be fixed up with some “secure code.” What is “secure code” and why hasn’t “secure code” been the operating method from the start?
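To make the “few hundred examples” idea concrete, here is a minimal sketch of what corrective fine-tuning on a small set of vetted examples might look like, assuming a Hugging Face style causal language model. The model name, data file, and hyperparameters are illustrative assumptions, not OpenAI’s actual remediation recipe.

```python
# Minimal sketch: corrective fine-tuning on a few hundred vetted "secure code" examples.
# Assumptions: a Hugging Face causal LM ("gpt2" as a stand-in), a local JSONL file of
# vetted prompt/response pairs, and illustrative hyperparameters. Not OpenAI's recipe.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for whatever misaligned model needs steering
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A few hundred records shaped like {"text": "<prompt plus vetted response>"}.
dataset = load_dataset("json", data_files="secure_code_examples.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="realigned-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # Causal LM objective: predict the next token, no masked-language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()           # nudge the model back toward the vetted behavior
trainer.save_model("realigned-model")
```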
The jargon does not explain why bad answers migrate across the “models.” Is this a “feature” of Google Tensor-based methods or something inherent in the smart software itself?
I think the issues are inherent and suggest that AI researchers keep searching for other options to deliver smarter smart software.
Stephen E Arnold, June 24, 2025
Paper Tiger Management
June 24, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
I learned that Apple and Meta (formerly Facebook) found themselves on the wrong side of the law in the EU. On June 19, 2025, I learned that “the European Commission will opt not to impose immediate financial penalties” on the firms. In April 2025, the EU hit Apple with a 500 million euro fine and Meta with a 200 million euro fine for non-compliance with the EU’s Digital Markets Act. Here’s an interesting statement in the cited EuroNews report: the “grace period ends on June 26, 2025.” Well, not any longer.
What’s the rationale?
- Time for more negotiations
- A desire to appear fair
- Paper tiger enforcement.
I am not interested in items one and two. The winner is “paper tiger enforcement.” In my opinion, we have entered an era in management, regulation, and governmental resolve defined by the GenX approach to lunch: “Hey, let’s have lunch.” The lunch never happens. But the mental process follows these lanes in the bowling alley of life: [a] Be positive, [b] Say something that sounds good, [c] Check the box that says, “Okay, mission accomplished. Move on.” [d] Forget about the lunch thing.
When this approach is applied to large-scale, high-visibility issues, what happens? In my opinion, the credibility of the legal decision and the penalty is diminished. Instead of inhibiting improper actions, those on the receiving end of the punishment learn one thing: It doesn’t matter what we do. The regulators don’t follow through. Therefore, let’s just keep on moving down the road.
Another example of this type of management can be found in the return to the office battles. A certain percentage of employees are just going to work from home. The management of the company doesn’t do “anything”. Therefore, management is feckless.
I think we have entered the era of paper tiger enforcement. Make noise, show teeth, growl, and then go back into the den and catch some ZZZZs.
Stephen E Arnold, June 24, 2025
Hard Truths about Broligarchs But Will Anyone Care?
June 23, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
I read an interesting essay in Rolling Stone, once a rock-and-roll-oriented publication. The write up is titled “What You’ve Suspected Is True: Billionaires Are Not Like Us.” This is a hit piece shooting words at rich people. At 80 years old, I am far from rich. My hope is that I expire soon at my keyboard and spare people like you the pain of reading one of my blog posts.
Several observations in the essay caught my attention.
Here’s the first passage I circled:
What Piff and his team found at that intersection is profound — and profoundly satisfying — in that it offers hard data to back up what intuition and millennia of wisdom (from Aristotle to Edith Wharton) would have us believe: Wealth tends to make people act like a**holes, and the more wealth they have, the more of a jerk they tend to be.
I am okay with the Aristotle reference; Edith Wharton? Not so much. Anyone who writes on linen paper in bed each morning is suspect in my book. But the statement, “Wealth tends to make people act like a**holes…” is in line with my experience.
Another passage warrants an exclamation point:
Wealthy people tend to have more space, literally and figuratively….For them, it does not take a village; it takes a staff.
And how about this statement?
Clay Cockrell, a psychotherapist who caters to ultra-high-net-worth individuals, [says]: “As your wealth increases, your empathy decreases. Your ability to relate to other people who are not like you decreases.… It can be very toxic.”
Also, I loved this assertion from a Xoogler:
In October, Eric Schmidt, the former CEO of Google, said the solution to the climate crisis was to use more energy: Since we aren’t going to meet our climate goals anyway, we should pump energy into AI that might one day evolve to solve the problem for us.
Several observations:
- In my opinion, those with money will not be interested in criticism
- Making people with money and power look stupid can have a negative impact on future employment opportunities
- Read the Wall Street Journal story “News Sites Are Getting Crushed by Google’s New AI Tools.”
Net net: The apparent pace of change in the “news” and “opinion” business is chugging along like an old-fashioned steam engine owned by a 19th century robber baron. Get on board or get left behind.
Stephen E Arnold, June 23, 2025
MIT (a Jeff Epstein Fave) Proves the Obvious: Smart Software Makes Some People Stupid
June 23, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
People look at mobile phones while speeding down the highway. People smoke cigarettes and drink Kentucky bourbon. People climb rock walls without safety gear. Now I learn that people who rely on smart software screw up their brains. (Remember: This research is from the esteemed academic outfit that found Jeffrey Epstein’s intellect fascinating and his personal charm and checkbook irresistible.) (The Epstein example illustrates that one does not require smart software to hallucinate, output silly explanations, or be dead wrong. You may not agree, but that is okay with me.)
The write up “Your Brain on ChatGPT” appeared in an online post by the MIT Media Greater Than 40. I have no idea what that means, but I am a dinobaby and stupid with or without smart software. The write up reports:
We discovered a consistent homogeneity across the Named Entities Recognition (NERs), n-grams, ontology of topics within each group. EEG analysis presented robust evidence that LLM, Search Engine and Brain-only groups had significantly different neural connectivity patterns, reflecting divergent cognitive strategies. Brain connectivity systematically scaled down with the amount of external support: the Brain only group exhibited the strongest, widest-ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling. In session 4, LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks; and the Brain-to-LLM participants demonstrated higher memory recall, and re-engagement of widespread occipito-parietal and prefrontal nodes, likely supporting the visual processing, similar to the one frequently perceived in the Search Engine group. The reported ownership of LLM group’s essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.
Got that.
My interpretation is that in what is probably a non-reproducible experiment, people who used smart software were less effective than those who did not. Compressing the admirable paragraph quoted above, my take is that LLM use makes you stupid.
I would suggest that MIT’s decision to link itself with Jeffrey Epstein was questionable. As far as I know, that choice was made by MIT humans, not smart software. The questions I have are:
- How would access to smart software have changed MIT’s decision to hook up with an individual with an interesting background?
- Would agentic software from one of MIT’s laboratories have been able to implement remedial action more elegant than MIT’s own on-and-off responses?
- Is MIT relying on smart software at this time to help obtain additional corporate funding and to pay AI researchers enough to keep them from jumping ship to a commercial outfit?
MIT: Outstanding work with or without smart software.
Stephen E Arnold, June 23, 2025
Meeker Reveals the Hurdle the Google Must Clear: Can Google Be Agile Again?
June 20, 2025
Just a dinobaby and no AI: How horrible an approach?
The hefty Meeker Report explains Google’s PR push, flood of AI announcements, and statements about advertising revenue. Fear may be driving the Googlers to be the Silicon Valley equivalent of Dan Aykroyd and Steve Martin’s “wild and crazy guys.” Google offers up the Sundar & Prabhakar Comedy Show. Similar? I think so.
I want to highlight two items from the 300-page-plus PowerPoint deck. The document makes clear that one can create a lot of slides (foils) in six years.
The first item is a chart on page 21. Here it is:
Note the tiny little line near the junction of the x and y axis. Now look at the red lettering:
ChatGPT hit 365 billion annual searches. [Chart: annual searches by year since the public launches of Google and ChatGPT, 1998 – 2025.]
Let’s assume Ms. Meeker’s numbers are close enough for horseshoes. The slope of the ChatGPT search growth suggests that the Google is losing click traffic to Sam AI-Man’s ChatGPT. I wonder if Sundar & Prabhakar eat, sleep, worry, and think as the Code Red light flashes quietly in the Google lair? The light flashes: Sundar says, “Fast growth is not ours, brother.” Prabhakar responds, “The chart’s slope makes me uncomfortable.” Sundar says, “Prabhakar, please, don’t think of me as your boss. Think of me as a friend who can fire you.”
Now this quote from the top Googler on page 65 of the Meeker 2025 AI encomium:
The chance to improve lives and reimagine things is why Google has been investing in AI for more than a decade…
So why did Microsoft ace out Google with its OpenAI, ChatGPT deal in January 2023?
Ms. Meeker’s data suggest that Google is doing many AI projects; the report names the ones announced in the period 5/19/25-5/23/25. Here’s a rundown from page 260 in her report:
And what did Microsoft, Anthropic, and OpenAI talk about in the same time period?
Google is an outputter of stuff.
Let’s assume Ms. Meeker is wildly wrong in her presentation of Google-related data. What’s going to happen if the legal proceedings against Google force divestment of Chrome or there are remediating actions required related to the Google index? The Google may be in trouble.
Let’s assume Ms. Meeker is wildly correct in her presentation of Google-related data. What’s going to happen if OpenAI, the open source AI push, and the clicks migrate from the Google to another firm? The Google may be in trouble.
Net net: Google, assuming the data in Ms. Meeker’s report are good enough, may be confronting a challenge it cannot easily resolve. The good news is that the Sundar & Prabhakar Comedy Show can be monetized on other platforms.
Is there some hard evidence? One can read about it in Business Insider. Well, ooops. Staff have allegedly been terminated due to a decline in Google traffic.
Stephen E Arnold, June 20, 2025
Belief in AI Consciousness May Have Real Consequences
June 20, 2025
What is consciousness? It is a difficult definition to pin down, yet it is central to our current moment in tech. The BBC tells us about “The People Who Think AI Might Become Conscious.” Perhaps today’s computer science majors should consider a minor in philosophy. Or psychology.
Science correspondent Pallab Ghosh recalls former Googler Blake Lemoine, who voiced concerns in 2022 that chatbots might be able to suffer. Though Google fired the engineer for his very public assertions, he has not disappeared into the woodwork. And others believe he was on to something. Like everyone at Eleos AI, a nonprofit “dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems.” Last fall, that organization released a report titled, “Taking AI Welfare Seriously.” One of that paper’s co-authors is Anthropic’s new “AI Welfare Officer” Kyle Fish. Yes, that is a real position.
Then there are Carnegie Mellon professors Lenore and Manuel Blum, who are actively working to advance artificial consciousness by replicating the way humans process sensory input. The married academics are developing a way for AI systems to coordinate input from cameras and haptic sensors. (Using an LLM, naturally.) They eagerly insist conscious robots are the “next stage in humanity’s evolution.” Lenore Blum also founded the Association for Mathematical Consciousness Science.
In short, some folks are taking this very seriously. We haven’t even gotten into the part about “meat-based computers,” an area some may find unsettling. See the article for that explanation. Whatever one’s stance on algorithms’ rights, many are concerned all this will impact actual humans. Ghosh relates:
“The more immediate problem, though, could be how the illusion of machines being conscious affects us. In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. ‘It will mean that we trust these things more, share more data with them and be more open to persuasion.’ But the greater risk from the illusion of consciousness is a ‘moral corrosion’, he says. ‘It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives’ – meaning that we might have compassion for robots, but care less for other humans. And that could fundamentally alter us, according to Prof Shanahan.”
Yep. Stay alert, fellow humans. Whatever your AI philosophy. On the other hand, just accept the output.
Cynthia Murrell, June 20, 2025
Hey, Creatives, You Are Marginalized. Embrace It
June 20, 2025
Considerations of right and wrong or legality are outdated, apparently. Now, it is about what is practical and expedient. The Times of London reports, “Nick Clegg: Artists’ Demands Over Copyright are Unworkable.” Clegg is both a former British deputy prime minister and former Meta executive. He spoke as the UK’s parliament voted down measures that would have allowed copyright holders to see when their work had been used and by whom (or what). But even that failed initiative falls short of artists’ demands. Writer Lucy Bannerman tells us:
“Leading figures across the creative industries, including Sir Elton John and Sir Paul McCartney, have urged the government not to ‘give our work away’ at the behest of big tech, warning that the plans risk destroying the livelihoods of 2.5 million people who work in the UK’s creative sector. However, Clegg said that their demands to make technology companies ask permission before using copyrighted work were unworkable and ‘implausible’ because AI systems are already training on vast amounts of data. He said: ‘It’s out there already.’”
How convenient. Clegg did say artists should be able to opt out of AI being trained on their works, but insists making that the default option is just too onerous. Naturally, that outweighs the interests of a mere 2.5 million UK creatives. Just how should artists go about tracking down each AI model that might be training on their work and asking them to please not? Clegg does not address that little detail. He does state:
“‘I just don’t know how you go around, asking everyone first. I just don’t see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight. … I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training — I just don’t see. I’m afraid that just collides with the physics of the technology itself.’”
The large technology outfits with the DNA of Silicon Valley have carried the day. So output and be quiet. (And don’t think anyone can use Mickey Mouse art. Different rules are okay.)
Cynthia Murrell, June 20, 2025
If AI Is the New Polyester, Who Is the New Leisure Suit Larry?
June 19, 2025
“GenAI Is Our Polyester” makes an insightful observation; to wit:
This class bias imbued polyester with a negative status value that made it ultimately look ugly. John Waters could conjure up an intense feeling of kitsch by just naming his film Polyester.
As a dinobaby, I absolutely loved polyester. The smooth silky skin feel, the wrinkle-free garments, and the disco gleam — clothing perfection. The cited essay suggests that smart software is ugly and kitschy. I think the observation misses the mark. Let’s assume I agree that smart software means synthetic content, hallucinations, and a massive money bonfire. The write up still ignores an important question: Who is the Leisure Suit Larry for the AI adherents?
Is it Sam (AI Man) Altman, who raises money for assorted projects, including an everything application which will be infused with smart software? He certainly is a credible contender with impressive credentials. He was fired by his firm’s Board of Directors, only to return a couple of days later, and then found time for a spat with Microsoft Corp., the firm which caused Google to declare a Red Alert in early 2023 because Microsoft was winning the AI PR and marketing battle with the online advertising vendor.
Is it Satya Nadella, a manager who converted Word into smart software with the same dexterity with which Azure and its cloud services became the poster child for secure enterprise services? Mr. Nadella garnered additional credentials by hiring adversaries of Sam (AI-Man) and pumping significant sums into smart software, only to reverse course and trim spending. But the apex achievement of Mr. Nadella was the infusion of AI into the ASCII editor Notepad. Truly revolutionary.
Is it Elon (Dogefather) Musk, who in a span of six months has blown up Tesla sales, rocket ships, and numerous government professionals’ lives? Like Sam Altman, Mr. Musk wants to create an AI-infused everything app to blast xAI, X.com, and Grok into hyper-revenue space. The allegations of personal tension between Messrs. Musk and Altman illustrate the sophistication of professional interaction in the AI datasphere.
Is it Sundar Pichai, captain of the Google? The Google has been rolling out AI innovations more rapidly than Philz Coffee pushes out lattes. Indeed, the names of the products, the pricing tiers, and the actual functions of these AI products challenge some Googlers to keep them distinct. The Google machine produces marketing about its AI, from manufacturing chips to avoid the Nvidia tax, to “doing” science with AI, to fixing up one’s email.
Is it Mark Zuckerberg, who seeks to make Facebook a retail outlet as well as a purveyor of services to bring people together? Mr. Zuckerberg wants to engage in war fighting as part of his “bringing together” vision for Meta and Anduril, a Department of Defense contractor. Mr. Zuckerberg’s AI-infused version of the fabled Google Glass, combined with AI content moderation to ensure safeguards for Facebook’s billions of users, is a bold step in compliance and cost reduction.
These are my top candidates for GenAI’s Leisure Suit Larry. Will the game be produced by Nintendo, the Call of Duty crowd, or an independent content creator? Will it offer in-game purchases of valid (non-hallucinated) outputs, or will it award the Leisure Coin, a form of crypto tailored to fit like a polyester leisure suit from the late 1970s?
The cited article asserts:
But the historical rejection of polyester gives me hope. Humans ultimately are built to pursue value, and create it where it doesn’t exist. When small groups invent new sources of value, others notice and want in. The more that the economy embraces synthetic culture, the more we’ll be primed for a revival of non-synthetic culture. But this is where you come in: We have to be ready to fully embrace this return of human-made art. Our generation’s polyester salespeople are not deep thinkers and they don’t care about the externalities of what they’re doing. They’re here to sell us polyester. We don’t have to buy it, but more importantly, we don’t have to feel bad about not buying it.
I don’t agree. The AI polyester is going to stick like a synthetic shirt on a hot day at the iguana farm in Roatan in June. But that polyester will be carefree. The AI Leisure Suit Larry, whether Sam, Elon, Satya, Mark, or Sundar, will definitely be wrinkle free and visible in hallucinogenic colors.
Stephen E Arnold, June 19, 2025

