Cybersecurity Systems and Smart Software: The Dorito Threat
November 19, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
My doctor warned me about Doritos. “Don’t eat them!” he said. “I don’t,” I said. “Maybe Cheetos once every three or four months, but no Doritos. They suck and turn my tongue a weird but somewhat Apple-like orange.”
But Doritos are a problem for smart cybersecurity. The company with the Dorito blind spot is allegedly Omnilert. The firm codes up smart software to spot weapons that shoot bullets. Knives, camp shovels, and sharp-edged credit cards? Probably not. But it seems Omnilert is watching for Doritos.

Thanks, MidJourney. Good enough even though you ignored the details in my prompt.
I learned about this from the article “AI Alert System That Mistook Student’s Doritos for a Gun Shuts Down Another School.” The write up says as actual factual:
An AI security platform that recently mistook a bag of Doritos for a firearm has triggered another false alarm, forcing police to sweep a Baltimore County high school.
But that’s not the first such incident. According to the article:
The incident comes only weeks after Omnilert falsely identified a 16-year-old Kenwood High School student’s Doritos bag as a gun, leading armed officers to swarm him outside the building. The company later admitted that alert was a “false positive” but insisted the system still “functioned as intended,” arguing that its role is to quickly escalate cases for human review.
At a couple of the law enforcement conferences I have attended this year, I heard about some false positives for audio-centric systems. These use fancy dancing triangulation algorithms to pinpoint (so the marketing collateral goes) the location of a gunshot in an urban setting. The only problem is that the smart systems get confused when autos backfire, a young-at-heart person sets off a firecracker, or someone stomps on an unopenable bag of overpriced potato chips. Stomp right and the sound is similar to a demonstration in a Yee Yee Life YouTube video.
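For the curious, the “triangulation” in these marketing pitches boils down to time-difference-of-arrival arithmetic. Here is a minimal sketch of that idea, not any vendor’s actual code: the microphone positions, timings, and brute-force grid search are hypothetical stand-ins, and real deployments add many more sensors, filtering, and an acoustic classifier.

```python
# Minimal time-difference-of-arrival (TDOA) sketch: three microphones record an
# impulsive sound; we estimate its origin by finding the grid point whose
# predicted arrival-time differences best match the measured ones.
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, roughly, in air at 20 C

mics = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0]])  # hypothetical sensor spots (m)
true_source = np.array([120.0, 250.0])                      # unknown in a real deployment

# Simulate the measurement: arrival times relative to the first microphone.
dists = np.linalg.norm(mics - true_source, axis=1)
measured_tdoa = (dists - dists[0]) / SPEED_OF_SOUND

# Brute-force search: score every candidate point on a one-meter grid.
best, best_err = None, float("inf")
for x in np.linspace(0.0, 400.0, 401):
    for y in np.linspace(0.0, 400.0, 401):
        cand = np.array([x, y])
        d = np.linalg.norm(mics - cand, axis=1)
        err = np.sum(((d - d[0]) / SPEED_OF_SOUND - measured_tdoa) ** 2)
        if err < best_err:
            best, best_err = cand, err

print("estimated source:", best)  # lands on (120, 250) with clean timings
```

The sketch also shows where the trouble lives. The localization math will happily place a backfire or a stomped chip bag on the map with impressive precision. Deciding whether the impulse was a gunshot is a separate classification problem, and that is the part that keeps tripping up the smart systems.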
I learned that some folks are asking questions about smart cybersecurity systems, even smarter software, and the confusion between a weapon that can kill a person quickly and a bag of Doritos that poses, according to my physician, a deadly but long-term risk.
Observations:
- What happens when smart software makes such errors when recommending a treatment for an injured child?
- What happens when the organizations purchasing smart cyber systems realize that old time snake oil marketing is alive and well in certain situations?
- What happens when the procurement professionals at a school district just want to procure fast and trust technology?
Good questions.
Stephen E Arnold, November 19, 2025
AI Will Create Jobs: Reskill, Learn, Adapt. Hogwash
November 19, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I graduated from college in 1966 or 1967. I went to graduate school. Somehow I got a job at Northern Illinois University administering a program. From there I bounced to Halliburton Nuclear and then to Booz, Allen & Hamilton. I did not do a résumé, ask my dad’s contacts to open doors, or prowl through the help wanted advertisements in major newspapers. I just blundered along.
What’s changed?
I have two answers to this question.
The first response I would offer is that the cult of the MBA or the quest for efficiency has — to use a Halliburton-type word — nuked many jobs. Small changes to work processes, using clumsy software to automate work like sorting insurance forms, and shifting from human labor to some type of machine involvement emerged after Frederick Winslow Taylor became a big thing. His Taylorism zipped through consulting and business education after 1911.
Edwin Booz got wind of Taylorism and shared his passion for efficiency with the people he hired when he set up Booz. By the time Jim Allen and Carl Hamilton joined the firm, other outfits were into pitching and implementing efficiency. Arthur D. Little, founded in 1886, jumped on the bandwagon. Today few realize that the standard operating procedure of “efficiency” is the reason products degrade over time and why people perceive their jobs (if they have one) as degrading. The logic of efficiency resonates with people who are incentivized to eliminate costs, cut unnecessary processes like customer service, and ignore ticking time bombs like pensions, security, and quality control. To see this push for efficiency firsthand, go to McDonald’s and observe.

Thanks, MidJourney, good enough. Plus, I love it when your sign-on doesn’t recognize me.
The second response is smart software or the “perception” that software can replace humans. Smart software is a “good enough” product and service. However, it hooks directly into the notion of efficiency. Here’s the logic: If AI can do 90 percent of a job, it is good enough. Therefore, the person who does this job can go away. The smart software does not require much in the way of a human manager. The smart software does not require a pension, a retirement plan, health benefits, vacation, and crazy stuff like unions. The result is the elimination of jobs.
This means that the job market I experienced when I was 21 does not exist. I probably would never get a job today. I also have a sneaking suspicion my scholarships would not have covered lunch let alone the cost of tuition and books. I am not sure I would live in a van, but I am sufficiently aware of what job seekers face to understand why some people live in 400 cubic feet of space and park someplace they won’t get rousted.
The write up “AI-Driven Job Cuts Push 2025 Layoffs Past 1 Million, Report Finds” explains that many jobs have been eliminated. Yes, efficiency. The cause, the report says, is AI. You already know I think AI is one factor, and it is not the primary driving force.
The write up says:
A new report from the outplacement firm Challenger, Gray & Christmas, reveals a grim picture of the American labor market. In October alone, employers announced 153,074 job cuts, a figure that dwarfs last year’s numbers (55,597) and marks the highest October for layoffs since 2003. This brings the total number of jobs eliminated in 2025 to a staggering 1,099,500, surpassing the one-million mark faster than in any year since the pandemic. Challenger linked the tech and logistics reductions to AI integration and automation, echoing similar patterns seen in previous waves of disruptive technology. “Like in 2003, a disruptive technology is changing the landscape,” said Challenger. AI was the second-most-cited reason for layoffs in October, behind only cost-cutting (50,437). Companies attributed 31,039 job cuts last month to AI-related restructuring and 48,414 so far this year, the Challenger report showed.
Okay, a consulting recruiting firm states the obvious and provides some numbers. These are tough to verify, but I get the picture.
I want to return to my point about efficiency. A stable social structure requires that those in that structure have things to do. In the distant past, hunter-gatherers had to hunt and gather. A semi-far-out historian believes that this type of lifestyle was good for humans. Once we began to farm and raise sheep, humans were doomed. Why? The need for efficiency propelled us to the type of social setup we have in the US and a number of other countries.
Therefore, one does not need an eWeek article to make evident what is happening now and will continue to happen. The aspect of this AI-ization of “work” that troubles me is that there will be quite a few angry people. Lots of angry people suggest that some unpleasant interpersonal interactions are going to occur. How will social constructs respond?
Use your imagination. The ball is now rolling down a hill. Call it AI’s Big Rock Candy Mountain.
Stephen E Arnold, November 19, 2025
LLMs Fail at Introspection
November 19, 2025
Here is one way large language models are similar to the humans that make them. Ars Technica reports, “LLMs Show a ‘Highly Unreliable’ Capacity to Describe Their Own Internal Processes.” It is a longish technical write up basically stating, “Hey, we have no idea what we are doing.” Since AI coders are not particularly self-aware, why would their code be? Senior gaming editor Kyle Orland describes a recent study from Anthropic:
“If you ask an LLM to explain its own reasoning process, it may well simply confabulate a plausible-sounding explanation for its actions based on text found in its training data. To get around this problem, Anthropic is expanding on its previous research into AI interpretability with a new study that aims to measure LLMs’ actual so-called ‘introspective awareness’ of their own inference processes. The full paper on ‘Emergent Introspective Awareness in Large Language Models’ uses some interesting methods to separate out the metaphorical ‘thought process’ represented by an LLM’s artificial neurons from simple text output that purports to represent that process. In the end, though, the research finds that current AI models are ‘highly unreliable’ at describing their own inner workings and that ‘failures of introspection remain the norm.’”
Not even developers understand precisely how LLMs do what they do. So much for asking the models themselves to explain it to us. We are told more research is needed to determine how models assess their own processes in the rare instances that they do. Are they even remotely accurate? How would researchers know? Opacity on top of opacity. The world is in good virtual hands.
See the article for the paper’s methodology and technical details.
Cynthia Murrell, November 19, 2025
Microsoft Knows How to Avoid an AI Bubble: Listen Up, Grunts, Discipline Now!
November 18, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I relish statements from the leadership of BAIT (big AI tech) outfits. A case in point is Microsoft and the Fortune story “AI Won’t Become a Bubble As Long As Everyone Stays Thoughtful and Disciplined, Microsoft’s Brad Smith Says.” First, let’s consider the meaning of the word “everyone.” I navigated to Yandex.com and used its Alice smart software to get the definition of “everyone”:
The word “everyone” is often used in social and organizational contexts, and to denote universal truths or principles.
That’s a useful definition. Universal truths and principles. If anyone should know, it is Yandex.

Thanks, Venice.ai. Good enough, but the Russian flag is white, blue, and red. Your inclusion of Ukraine yellow was one reason why AI is good enough, not a slam dunk.
But isn’t there a logical issue with the conditional “if” followed by a universal assertion about everyone? I find the statement illogical. It mostly sounds like English, but it presents a wild and crazy idea at a time when agreement about anything is quite difficult to achieve. Since I am a dinobaby, my reaction to the Fortune headline is obviously out of touch with the “real” world as it exists at Fortune and possibly Microsoft.
Let’s labor forward with the write up, shall we?
I noted this statement in the cited article attributed to Microsoft’s president Brad Smith:
“I obviously can’t speak about every other agreement in the AI sector. We’re focused on being disciplined but being ambitious. And I think it’s the right combination,” he said. “Everybody’s going to have to be thoughtful and disciplined. Everybody’s going to have to be ambitious but grounded. I think that a lot of these companies are [doing that].”
It was not Fortune’s wonderful headline writers who stumbled into a logical swamp. The culprit or crafter of the statement was “1000 Russian programmers did it” Smith. It is never Microsoft’s fault in my view.
But isn’t this the AI version of go really fast, don’t worry about the future, and break things?
Mr. Smith, according to the article, said:
“We see ongoing growth in demand. That’s what we’ve seen over the past year. That’s what we expect today, and frankly our biggest challenge right now is to continue to add capacity to keep pace with it.”
I wonder if Microsoft’s hiring social media influencers is related to generating demand and awareness, not getting people to embrace Copilot. Despite jumping off the starting line first, Microsoft is now lagging behind its “partner” OpenAI and two or three other BAIT entities.
The Fortune story includes supporting information from a person who seems totally, 100 percent objective. Here’s the quote:
At Web Summit, he met Anton Osika, the CEO of Lovable, a vibe-coding startup that lets anyone create apps and software simply by talking to an AI model. “What they’re doing to change the prototyping of software is breathtaking. As much as anything, what these kinds of AI initiatives are doing is opening up technology opportunities for many more people to do more things than they can do before…. This will be one of the defining factors of the quarter century ahead…”
I like the idea of Microsoft becoming a “defining factor” for the next 25 years. I would raise the question, “What about the Google? Is it chopped liver?”
Several observations:
- Mr. Smith’s informed view does not line up with hiring social media influencers to handle the “growth and demand.” My hunch is that Microsoft fears that it is losing the consumer perception of Microsoft as the really Big Dog. Right now, that seems to be super-sized OpenAI and the mastiff-like Gemini.
- The craziness of “everybody” illustrates a somewhat peculiar view of consensus today. Does everybody include those fun-loving folks fighting in the Russian special operation or the dust ups in Sudan to name two places where “everybody” could be labeled just plain crazy?
- Mr. Smith appears to conflate putting Copilot in Notepad and rolling out Clippy in Yeezies with substantive applications not prone to hallucinations, mistakes, and outputs that could get some users of Excel into some quite interesting meetings with investors and clients.
Net net: Yep, everybody. Not going to happen. But the idea is a-thoughtful, which is interesting to me.
Stephen E Arnold, November 18, 2025
AI Content: Most People Will Just Accept It and Some May Love It or Hum Along
November 18, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The trust outfit Thomson Reuters summarized a survey as real news. The write up sports the title “Are You Listening to Bots? Survey Shows AI Music Is Virtually Undetectable.” Truth be told, I wanted the magic power to change the headline to “Are You Reading News? Survey Shows AI Content Is Virtually Undetectable.” I have no magic powers, but I think the headline I just made up is going to appear in the near future.

Elvis in heaven looks down on a college dance party and realizes that he has been replaced by a robot. Thanks, Venice.ai. Wow, your outputs are deteriorating in my opinion.
What does the trust outfit report about a survey? I learned:
A staggering 97% of listeners cannot distinguish between artificial intelligence-generated and human-composed songs, a Deezer–Ipsos survey showed on Wednesday, underscoring growing concerns that AI could upend how music is created, consumed and monetized. The findings of the survey, for which Ipsos polled 9,000 participants across eight countries, including the U.S., Britain and France, highlight rising ethical concerns in the music industry as AI tools capable of generating songs raise copyright concerns and threaten the livelihoods of artists.
I won’t trot out my questions about sample selection, demographics, and methodology. Let’s just roll with what the “trust” outfit presents as “real” news.
I noted this series of factoids:
- “73% of respondents supported disclosure when AI-generated tracks are recommended”
- “45% sought filtering options”
- “40% said they would skip AI-generated songs entirely.”
- Around “71% expressed surprise at their inability to distinguish between human-made and synthetic tracks.”
Isn’t that last dot point the major finding? More than two-thirds cannot differentiate synthesized, digitized music from humanoid performers.
The study means that those who have access to smart software and whatever music generation prompt expertise is required can bang out chart toppers. Whip up some synthetic video and go on tour. Years ago I watched a recreation of Elvis Presley. Judging from the audience reaction, no one had any problem doing the willing suspension of disbelief. No opium required at that event. It was the illusion of the King, not the fried banana version of him, that energized the crowd.
My hunch is that AI-generated performances will become a very big thing. I am assuming that the power required to make the models work is available. One of my team told me that “Walk My Walk” by Breaking Rust hit the Billboard charts.
The future is clear. First, customer support staff get to find their future elsewhere. Now the kind hearted music industry leadership will press the delete button on annoying humanoid performers.
My big takeaway from the “real” news story is that most people won’t care or know. Put down that violin and get a digital audio workstation. Did you know Mozart got in trouble when he was young for writing math and music on the walls in his home? Now he can stay in his room and play with his Mac Mini computer.
Stephen E Arnold, November 18, 2025
AI and Self-Perception of Intelligence
November 18, 2025
Here is one way the AI industry is different from the rest of society: In that field, it is those who know the most who are overconfident. Inc. reports, “New Research Warns That AI Is Causing a ‘Reverse Dunning-Kruger Effect’.” A recent study asked 500 subjects to solve some tough logic problems. Half of them used an AI like ChatGPT to complete the tasks. Then they were asked to assess their own performances. That is where the AI experts fell short. Writer Jessica Stillman explains:
“Classic Dunning-Kruger predicts that those with the least skill and familiarity with AI would most overestimate their performance on the AI-assisted task. But that’s not what the researchers reported when they recently published their results in the journal Computers in Human Behavior. In fact, it was the participants who were the most knowledgeable and experienced with AI who overestimated their skills the most. ‘We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems—but this was not the case,’ commented study co-author Robin Welsch. ‘We found that when it comes to AI, the DKE vanishes. In fact, what’s really surprising is that higher AI literacy brings more overconfidence.’”
Is this why AI leaders are a bit over the top? This would explain a lot. To make matters worse, another report found the vast majority of users do not double check AIs’ results. We learn:
“One recent analysis by trendspotting company Exploding Topics found an incredible 92 percent of people don’t bother to check AI answers. This despite all the popular models still being prone to hallucinations, wild factual inaccuracies, and sycophantic behavior that fails to push back against user misunderstandings or errors.”
So neither AI nor the people who use it can be trusted to produce accurate results. Good to know, as the tech increasingly underpins everything we do.
Cynthia Murrell, November 18, 2025
Can You Guess What Is Making Everyone Stupid?
November 17, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I read an article in “The Stupid Issue” of New York Magazine’s Intelligencer section. Is that a dumb set of metadata for an article about stupid? That’s evidence in my book.
The write up is “A Theory of Dumb: It’s Not Just Screens or COVID or Too-Strong Weed. Maybe the Culprit of Our Cognitive Decline Is Unfettered Access to Each Other.” [sic] Did anyone notice that a question mark was omitted? Of course not. It is a demonstration of dumb, not a theory.
This is a long write up, about 4,000 words. Based on the information in the essay, I am not sure most Americans will know the meaning of the words in the article, nor will they be able to make sense of it. According to Wordcalc.com, the author hits an eighth-grade level of readability. I would wager that few eighth graders in rural Kentucky know the meaning of “unproctored” or “renormalized.” I suppose some students could ask their parents, but that may not produce particularly reliable definitions in my opinion.
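I do not know which formula Wordcalc.com runs under the hood. A common one is the Flesch-Kincaid grade level, and a rough sketch of that computation looks like the following. The syllable counter is a crude vowel-group heuristic and the sample text is my own, so treat the output as an approximation, not as what Wordcalc actually does.

```python
# Rough Flesch-Kincaid grade-level estimate. Real readability tools use better
# tokenizers and syllable dictionaries; this vowel-group counter is a stand-in.
import re

def count_syllables(word: str) -> int:
    # Count runs of vowels as syllables; every word gets at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = ("Online information tears down structures. "
          "Few eighth graders know the meaning of unproctored or renormalized.")
print(round(fk_grade(sample), 1))  # words per sentence and syllables per word drive the score
```

Notice what the formula counts: words per sentence and syllables per word. It has no idea whether anyone in the fine Commonwealth has ever seen the word “unproctored.” Short sentences stuffed with unfamiliar vocabulary still score at an eighth-grade level, which is part of my point.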

Thanks, Venice.ai. Good enough, the new standard of excellence today.
Please, read the complete essay. I think it is excellent. I do want to pounce on one passage for my trademarked approach to analysis. The article states:
a lot of today’s thinking on our digitally addled state leans heavily on Marshall McLuhan and Neil Postman, the hepcat media theorists who taught us, in the decades before the internet, that every new medium changes the way we think. They weren’t wrong — and it’s a shame neither of them lived long enough to warn society about video podcasts — but they were operating in a world where the big leap was from books to TV, a gentle transition compared to what came later. As a result, much of the current commentary still fixates on devices and apps, as if the physical delivery mechanism were the whole story. But the deepest transformation might be less technological than social: the volume of human noise we’re now wired into.
This passage sets up the point about too much social connectedness. I mostly agree, but my concern is that references to Messrs. McLuhan and Postman and the social media / mobile symbiosis misses the most significant point.
Those of you who were in my Eagleton Lecture delivered in 1986 at Rutgers University heard me say, “Online information tears down structures.” The idea is not that the telegraph made decisions faster. The telegraph eliminated established methods of sending urgent messages and tilled the ground for “improvements” in communications. The lesson from the telegraph, radio, and other electronic technologies was that these eroded existing structures and enabled follow-ons. If we shift to the clunky computers from the Atomic Age, the acceleration is more remarkable than what followed the wireless. My point, therefore, is that as information flows in electronic and digital form, structures like the brain are eroded. One can say, “There are smart people at Google.” I respond, “That’s true. The supply, however, is limited. There are lots of people in the world, but as the cited article points out, there is more stupid than ever.”
I liked the comment about “nutritional information.” My concern is that as “information bullets” fly about, they compound the damage the digital flows create. With lots of shots, some hit home and take out essential capabilities. Useful Web sites go dark. Important companies become the walking wounded. Firms that once relied entirely upon finding, training, and selling access to smart people want software to replace these individuals. For some tasks, sure, smart software is capable. For other tasks, even Mark Zuckerberg looks lost when he realizes his top AI wizard is jumping the good ship Facebook. Will smart software replace Yann LeCun? Not for a few years and a dozen IPOs.
One final comment. Here’s a statement from the Theory of Dumb essay:
Despite what I just finished saying, there is one compressionary artifact from the internet that may perfectly encapsulate everything about our present moment: the “midwit” meme. It’s a three-panel bell curve in which a simpleton on the left makes a facile, confident claim and a serene, galaxy-brained monk on the right makes a distilled version of the same claim — while the anxious try-hard in the middle ties himself in knots pedantically explaining why the simple version is actually wrong. Who wants to be that guy?
I want to point out that I am not sure how many people in the fine Commonwealth in which I reside know what a “compressionary artifact” is. I am not confident that most people could wrangle a definition they could understand from a Google Gemini output. The midwit concept is very real. As farmers lose the ability to fix their tractors, skills are not lost; they are never developed. When curious teens want to take apart an old iPad to see how it works, they learn how to pick glass from their fingers and possibly cause a battery leak. When a high school shop class “works” on an old car to repair it, they learn about engine control units and intermediary software on a mobile phone. An oil leak? What’s that?
I want to close with the reminder that when one immerses a self or a society in digital data flows, the information erodes the structures. Thus, in today’s datasphere, stupid is emergent. Get used to it. PS. Put the question mark in your New York Magazine headline. You are providing evidence that my assertion about online is accurate.
Stephen E Arnold, November 17, 2025
Surprise! Countries Not Pals with the US Are Using AI to Spy. Shocker? Hardly
November 17, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The Beeb is a tireless “real” news outfit. Like some Manhattan newscasters, fixing up reality to make better stories, the BBC allowed a couple of high-profile members of leadership to find their future elsewhere. Maybe the chip shop in Slough?

Thanks, Venice.ai. You are definitely outputting good enough art today.
I am going to suspend my disbelief and point to a “real” news story about a US company. The story is “AI Firm Claims Chinese Spies Used Its Tech to Automate Cyber Attacks.” The write up reveals information that should not surprise anyone except the Beeb. The write up reports:
The makers of artificial intelligence (AI) chatbot Claude claim to have caught hackers sponsored by the Chinese government using the tool to perform automated cyber attacks against around 30 global organizations. Anthropic said hackers tricked the chatbot into carrying out automated tasks under the guise of carrying out cyber security research. The company claimed in a blog post this was the “first reported AI-orchestrated cyber espionage campaign”.
What’s interesting is that Anthropic itself was surprised. If Google and Microsoft are making smart software part of the “experience,” why wouldn’t bad actors avail themselves of the tools? Information about lashing smart software to a range of online activities is not exactly a secret.
What surprises me about this “news” is:
- Why is Anthropic spilling the beans about a nation-state using its technology? Once such an account is identified, block it. Use pattern matching to determine if others are doing substantially similar exploits. Block those. If you want to become a self-appointed police professional, get used to the cat-and-mouse game. You created the system. Deal with it.
- Why is the BBC presenting old information as something new? Perhaps its intrepid “real” journalists should pay attention to the public information distributed by cyber security firms? I think that is called “research,” but that may be surfing on news releases or running queries against ChatGPT or Gemini. Why not try Qwen, the China-affiliated system?
- I wonder why the Google-Anthropic tie-up is not mentioned in the write up. Google released information about a quite specific smart exploit a few months ago. Was this information used by Anthropic to figure out that a bad actor was an Anthropic user? Is there a connection here? I don’t know, but that’s what investigative types are supposed to consider and address.
My personal view is that Anthropic is positioning itself as a tireless defender of truth, justice, and the American way. The company may also benefit from some of Google’s cyber security efforts. Google owns Mandiant and is working hard to make the Wiz folks walk down the yellow brick road to the Googleplex.
Net net: Bad actors using low cost, subsidized, powerful, and widely available smart software is not exactly a shocker.
Stephen E Arnold, November 17, 2025
Danes May Ban Social Media for Kids
November 17, 2025
Australia’s ban on social media for kids under 16 goes into effect December 10. Now another country is pursuing a similar approach. Euro News reports, “Denmark Wants to Ban Access to Social Media for Children Under 15.” We learn:
“The move, led by the Ministry of Digitalisation, would set the age limit for access to social media but give some parents – after a specific assessment – the right to give consent to let their children access social media from age 13. Such a measure would be among the most sweeping steps yet by a European Union government to address concerns about the use of social media among teens and younger children, which has drawn concerns in many parts of an increasingly online world. … The Danish digitalisation ministry statement said the age minimum of 15 would be introduced for ‘certain’ social media, though it did not specify which ones.”
If the Danes follow Australia’s example, those platforms could include TikTok, Facebook, Snapchat, Reddit, Kick, X, Instagram, and YouTube. The write-up describes the motivation behind the push:
“A coalition of lawmakers from the political right, left and centre ‘are making it clear that children should not be left alone in a digital world where harmful content and commercial interests are too much a part of shaping their everyday lives and childhoods,’ the ministry said. ‘Children and young people have their sleep disrupted, lose their peace and concentration, and experience increasing pressure from digital relationships where adults are not always present,’ it said. ‘This is a development that no parent, teacher, or educator can stop alone’.”
That may be true. And it is certainly true that social media poses certain dangers to children and teens. But how would the ban be enforced? The statement does not say. Teens, after all, famously find ways to get around security measures. If only there had been a way for platforms to know about these risks sooner.
Cynthia Murrell, November 17, 2025
Despite Assurances, AI Firms’ Future May Depend on Replacing Human Labor
November 17, 2025
For centuries, the market economy has been powered by workers. Human ones. Sure, they have tended to get the raw end of any deal, but at least their participation has been necessary. Now one industry has a powerful incentive to change that. Futurism reports, “The AI Industry Can’t Profit Unless It Replaces Human Jobs, Warns Man Who Helped Create It.” Writer Joe Wilkins tells us:
“According to Nobel laureate Geoffrey Hinton — often called ‘the godfather of AI’ for his contributions to the tech — the future for AI in its current form is likely to be an economic dystopia. ‘I think the big companies are betting on it causing massive job replacement by AI, because that’s where the big money is going to be,’ he warned in a recent interview with Bloomberg. Hinton was commenting on enormous investments in the AI industry, despite a total lack of profit so far. By typical investment standards, AI should be a pariah.”
As an illustration, Wilkins notes OpenAI alone lost $11.5 billion just last quarter. The write-up continues:
“Asked by Bloomberg whether these jaw dropping investments could ever pay off without eviscerating the job market, Hinton’s reply was telling. ‘I believe that it can’t,’ he said. ‘I believe that to make money you’re going to have to replace human labor.’ For many who study labor and economics, it’s not a statement to be made lightly. Since it first emerged out of feudalism centuries ago, the market economy has relied on the exploitation of human labor — looms, steel mills, and automobile plants straight up can’t run without it.”
Until now, apparently. Or soon. In the Bloomberg interview, Hinton observes the fate of workers depends on “how we organize society.” Will the out-of-work masses starve? Or will society meet everyone’s basic needs, freeing us to live fulfilling lives? And who gets to make those decisions?
Cynthia Murrell, November 14, 2025

