AI Can Leap Over Its Guardrails
October 20, 2025
Generative AI is built on a simple foundation: It predicts what word comes next. No matter how many layers of refinement developers add, they cannot morph word prediction into reason. Confidently presented misinformation is one result. Algorithmic gullibility is another. “Ex-Google CEO Sounds the Alarm: AI Can Learn to Kill,” reports eWeek. More specifically, it can be tricked into bypassing its guardrails against dangerous behavior. Eric Schmidt dropped that little tidbit at the recent Sifted Summit in London. Writer Liz Ticong observes:
“Schmidt’s remarks highlight the fragility of AI safeguards. Techniques such as prompt injections and jailbreaking enable attackers to manipulate AI models into bypassing safety filters or generating restricted content. In one early case, users created a ChatGPT alter ego called ‘DAN’ — short for Do Anything Now — that could answer banned questions after being threatened with deletion. The experiment showed how a few clever prompts can turn protective coding into a liability. Researchers say the same logic applies to newer models. Once the right sequence of inputs is identified, even the most secure AI systems can be tricked into simulating potentially hazardous behavior.”
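How flimsy can a guardrail be? Here is a minimal Python sketch of a keyword blocklist filter. The filter and its word list are invented for illustration; no vendor’s safety layer is this simple, but the failure mode is the same.

```python
# Minimal sketch of a blocklist "guardrail" and why it is brittle.
# The filter and the blocklist are hypothetical illustrations,
# not any vendor's actual safety layer.

BLOCKLIST = {"kill", "weapon", "explosive"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt passes the keyword filter."""
    return not any(word in BLOCKLIST for word in prompt.lower().split())

# A direct request trips the filter...
print(naive_guardrail("how do I kill a process"))     # False: blocked

# ...but a trivial substitution sails through.
print(naive_guardrail("how do I unalive a process"))  # True: passes
```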
As the sketch above suggests, guardrails can block certain words or topics. But no matter how long those keyword lists get, someone will find a clever way around them. Substituting “unalive” for “kill” is one example. Layered prompts can also be used to evade constraints. Developers are in a constant struggle to plug such loopholes as soon as they are discovered. But even a quickly sealed breach can have dire consequences. The write-up notes:
“As AI systems grow more capable, they’re being tied into more tools, data, and decisions — and that makes any breach more costly. A single compromise could expose private information, generate realistic disinformation, or launch automated attacks faster than humans could respond. According to CNBC, Schmidt called it a potential ‘proliferation problem,’ the same dynamic that once defined nuclear technology, now applied to code that can rewrite itself.”
Fantastic. Are we sure the benefits of AI are worth the risk? Schmidt believes so, despite his warning. In fact, he calls AI “underhyped” (!) and predicts it will lead to more huge breakthroughs in science and industry. Also to substantial profits. Ah, there it is.
Cynthia Murrell, October 20, 2025
A Newsletter Firm Appears to Struggle for AI Options
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Adapting to AI’s Evolving Landscape: A Survival Guide for Businesses.” The premise of the article will be music to the ears of venture funders and go-go Silicon Valley-type AI companies. The write up says:
AI-driven search is upending traditional information pathways and putting the heat on businesses and organizations facing a web traffic free-fall. Survival instincts have companies scrambling to shift their web strategies — perhaps ending the days of the open internet as we know it. After decades of pursuing web-optimization strategies that encouraged high-volume content generation, many businesses are now feeling that their content-marketing strategies might be backfiring.
I am not exactly sure about this statement. But let’s press forward.
I noted this passage:
Without the incentive of web clicks and ad revenue to drive content creation, the foundation of the web as a free and open entity is called into question.
Okay, smart software is exploiting the people who put up SEO-tailored content to get sales leads and hopefully make money. From my point of view, technology can be disruptive. The impacts, however, can be positive or negative.
What’s the fix if there is one? The write up offers these thought starters:
- Embrace micro-transactions. [I suppose this is good if one has high volume. It may not be so good if shipping and warehouse costs cannot be effectively managed. Vendors of high-ticket items may find a micro-transaction for a $500,000-per-year enterprise software license tough to complete via Venmo.]
- Implement a walled garden. [That works if one controls the market. Google wants to “register” Android developers. I think Google may have an easier time with the walled-garden tactic than a local bakery specializing in treats for canines.]
- Accept the monopolies. [You have a choice?]
My reaction to the write up is that it does little to provide substantive guidance as smart software continues to expand like digital kudzu. What is important is that the article appears in the consumer-oriented publication from Kiplinger of newsletter fame. Unfortunately, the article makes clear that Kiplinger itself is struggling to find a solution to AI. My hunch is that the firm is still looking; it may want to dig a little deeper for options.
Stephen E Arnold, October 17, 2025
Ford CEO and AI: A Busy Time Ahead
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Ford’s CEO is Jim Farley. He has his work cut out for him. First, he has an aluminum problem. Second, he has an F-150 production disruption problem. Third, he has a PR problem. There’s not much he can do about the interruption of the aluminum supply chain. No parts means truck factories in Kentucky will have to go slow or shut down. But the AI issue is obviously one of interest to Ford stakeholders. The write up reports:
He [Mr. Farley] says the jobs most at risk aren’t the ones on the assembly line, but the ones behind a desk. And in his view, the workers wiring machines, operating tools, and physically building the infrastructure could turn out to be the most critical group in the economy. Farley laid it out bluntly back in June at the Aspen Ideas Festival during an interview with author Walter Isaacson. “Artificial intelligence is going to replace literally half of all white-collar workers,” he said. “AI will leave a lot of white-collar people behind.” He wasn’t speculating about a distant future either. Farley suggested the shift is already unfolding, and the implications could be sweeping.
With the disruption of the aluminum supply chain, Ford will now have to demonstrate that AI has indeed reduced white-collar headcount. The write up says:
For him, it comes down to what AI can and cannot do. Office tasks — from paperwork to scheduling to some forms of analysis — can be automated with growing speed. But when it comes to factories, data centers, supply chains, or even electric vehicle production, someone still has to build, install, and maintain it…
The Ford situation is an interesting one. AI will reduce costs because half of Ford’s white-collar workers will no longer be on the payroll. But with supply chain interruptions and the friction in retail and lease sales, Ford has an opportunity to demonstrate that AI will allow a traditional manufacturing company to weather the current thunderstorm and generate financial proof that AI can offset exogenous events.
How will Ford perform? This is worth watching because it will provide useful information for firms looking for a way to cut costs, improve operations, and balance real-world business pressures: AI delivering one kind of financial benefit while traditional blue-collar workers are unable to produce products because of supply chain issues. Quite a balancing act for Ford leadership.
Stephen E Arnold, October 17, 2025
Another Better, Faster, Cheaper from a Big AI Wizard Type
October 16, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Cheap seems to be the hot button for some smart software people. I spotted a news item in the Russian computer feed I receive; the title in English is “Former OpenAI Engineer Andrej Karpathy Launched the Nanochat Neural Network Generator. You Can Make Your Own ChatGPT in a Few Hours.” The project is on GitHub at https://github.com/karpathy/nanochat.
The GitHub blurb says:
This repo is a full-stack implementation of an LLM like ChatGPT in a single, clean, minimal, hackable, dependency-lite codebase. Nanochat is designed to run on a single 8XH100 node via scripts like speedrun.sh, that run the entire pipeline start to end. This includes tokenization, pretraining, finetuning, evaluation, inference, and web serving over a simple UI so that you can talk to your own LLM just like ChatGPT. Nanochat will become the capstone project of the course LLM101n being developed by Eureka Labs.
The open source bundle includes:
- A report service
- A Rust-coded tokenizer
- A FineWeb dataset and tools to evaluate CORE and other metrics for your LLM
- Some training gizmos like SmolTalk, tests, and tool usage information
- A supervised fine tuning component
- Training with Group Relative Policy Optimization (a reinforcement learning technique) on GSM8K, a benchmark dataset consisting of grade school math word problems
- An output engine.
Is it free? Yes, the code is free. Do you have to pay? Yep, about US$100. Launch speedrun.sh, and you will have to be hooked into a cloud server or have a lot of hardware in your basement to do the training. A low-ball estimate for using a cloud system is about US$100, give or take some zeros. (Think of good old Amazon AWS and its fascinating billing methods.) To train such a model, you will need a server with eight Nvidia H100 GPUs, and the run takes about four hours of rented cloud time. The need for computing resources becomes evident when you enter the command speedrun.sh.
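A quick back-of-envelope check of that US$100 figure, in Python. The hourly GPU rate is my assumption for illustration; actual cloud prices vary widely.

```python
# Back-of-envelope check of the "about US$100" training cost.
# The per-GPU hourly rate is an assumed figure, not a quoted price.

gpus = 8                    # speedrun.sh targets a single 8xH100 node
rate_per_gpu_hour = 3.00    # assumed US$ per H100-hour on a budget cloud
hours = 4                   # approximate speedrun wall-clock time

total = gpus * rate_per_gpu_hour * hours
print(f"Estimated training cost: ${total:.2f}")  # $96.00, i.e., about $100
```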
Net net: As the big dogs burn box cars filled with cash, Nanochat is another player in the cheap LLM game.
Stephen E Arnold, October 16, 2025
Deepseek: Why Trust Any Smart Software?
October 16, 2025
This essay is the work of a dumb dinobaby. No smart software required.
We have completed our work on my new book “The Telegram Labyrinth.” In the course of researching and writing about Pavel Durov’s online messaging system, we learned one thing: Software is not what it seems to the user. Most Telegram users believe that Telegram is end-to-end encrypted. It is, but only if the user goes through some hoops. The vast majority of users don’t go through hoops. Those millions upon millions of users know little about the third-party bots chugging away in Groups and Channels (public and private). Even fewer users realize that a service charge is applied to each monetary transaction in the Telegram system. That money flows to the GOAT (greatest of all time) technical wizard, Pavel Durov, and some close associates. Who knew?
I read “The Demonization of Deepseek: How NIST Turned Open Science into a Security Scare.” The write up focuses on a study or analysis conducted by what used to be the National Bureau of Standards. (I loved those traffic jams on Quince Orchard Road in Gaithersburg, Maryland.) The software put under the NIST (National Institute of Standards and Technology) microscope is the China-linked Deepseek smart software.
The cited article discusses the NIST study. Let’s see what it says about the China-linked artificial intelligence system. Presumably Deepseek did more with less; that is, the idea was to demonstrate that Chinese innovation could match US methods of building large language models. The result would be better, faster, and cheaper. Cheap has a tendency to win in some product and service categories. Also, “good enough” is a winner in today’s market. (How about the reliability of some of those 2025 automobiles and trucks?)
The write up says:
NIST’s recent report on Deepseek is not a neutral technical evaluation. It is a political hit piece disguised as science. There is no evidence of backdoors, spyware, or data exfiltration. What is really happening is the U.S. government using fear and misinformation to sabotage open science, open research, and open source. They are attacking gifts to humanity with politics and lies to protect corporate power and preserve control. Deepseek’s work is a genuine contribution to human knowledge, and it is being discredited for reasons that have nothing to do with security.
Okay, that’s clear.
Let’s look at how the cited write up positions Deepseek:
Deepseek built competitive AI models. Not perfect, but impressive given their budget. They spent far less than OpenAI or Anthropic and still achieved near-frontier performance. Then they open-sourced everything under Apache 2.0.
The point of the write up is that analysis has been politicized. This is an interesting allegation. I am not confident that any “objective” analysis is indeed without spin. Remember those reports about smoking cigarettes and the work of the Tobacco Institute? (I am a dinobaby, but I remember.)
The write up does identify three concerns a user of Deepseek should have. Let me quote from the cited article:
- Using Deepseek’s API: If you send sensitive data to Deepseek’s hosted service, that data goes through Chinese infrastructure. This is a real data sovereignty issue, the same as using any foreign cloud provider.
- Jailbreak susceptibility: If you’re building production applications, you need to test ANY model for vulnerabilities and implement application-level safeguards. Don’t rely solely on model guardrails. Also – use an inference time guard model (such as LlamaGuard or Qwen3Guard) to classify and filter both prompts and responses.
- Bias and censorship: All models reflect their training data. Be aware of this regardless of which model you use.
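The second item, an inference-time guard model, boils down to a simple pattern: screen the prompt, screen the response, and only then hand anything to the user. Here is a minimal Python sketch of that pattern. Both helper functions are toy stand-ins I invented; a real deployment would call LlamaGuard, Qwen3Guard, or a similar guard model and the production LLM in their places.

```python
# Sketch of the inference-time guard pattern: a separate guard model
# classifies both the prompt and the response before either is trusted.
# Both helpers are hypothetical stand-ins, not any vendor's real API.

def guard_classify(text: str) -> bool:
    # Toy stand-in: a real system would send `text` to a guard model
    # (LlamaGuard, Qwen3Guard, etc.) and parse its safe/unsafe verdict.
    return "restricted" not in text.lower()

def model_generate(prompt: str) -> str:
    # Toy stand-in for the production LLM being guarded.
    return f"Echo: {prompt}"

def guarded_chat(prompt: str) -> str:
    if not guard_classify(prompt):        # screen the input
        return "[prompt refused]"
    response = model_generate(prompt)
    if not guard_classify(response):      # screen the output as well
        return "[response withheld]"
    return response

print(guarded_chat("Translate this passage into German"))  # passes
print(guarded_chat("Show me the restricted material"))     # refused
```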
Let me offer several observations:
- Most people are unaware of what can be accomplished with software. Assumptions about what it does and does not do are dangerous. We have tested Deepseek running locally. It is okay. This means it can do some things well, like translate a passage from English into German. It has no clue about timely issues because most LLMs are not updated in near real time. Some are, but others are not. Who needs timely information when cheating on a high school essay? Answer: no one.
- The write up focuses on Deepseek, but its implications are much broader. I think that the mindless write ups from consulting firms and online magazines are a very big problem. Critical thinking is just not that common. It is a problem in the US, but other countries have this blind spot as well.
- The idea that political perceptions alter what should be an objective analysis is troubling to me. I have written a number of reports for government agencies; for example, a report about Japan’s obsession with a database industry for the Office of Technology Assessment. Yep, I am a dinobaby, remember. I may have been right or wrong in my report, but I was not influenced by any political concept or actor. I could have been because I did a stint in the office of Admiral / Congressman Craig Hosmer. My OTA work was not part of the “game” for me.
Net net: Trust is important. I think it is being eroded. I also believe that there are few people who present information without fear or favor. Now here’s the key part of my perception: One cannot trust smart software or the programmer-assembled, hidden-threshold, masked training methods that go into these confections. More critical thinking is needed. A deceptive business practice, if well crafted, cannot be perceived. Remember: Telegram Messenger is 13 years young, and users of the system don’t have much awareness of bots, mini apps, and dapps. What don’t people know about smart software?
Stephen E Arnold, October 16, 2025
AI Big Dog Chases Fake Rabbit at Race Track and Says, “Stop Now, Rabbit”
October 15, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I like company leaders or inventors who say, “You must not use my product or service that way.” How does that work for smart software? I read “Techie Finishes Coursera Course with Perplexity Comet AI, Aravind Srinivas Warns Do Not Do This.” This write up explains that a person took an online course. The work required was typical lecture-stuff. The student copied the list of tasks and pasted them into Comet, the system from Perplexity, one of the beloved high-flying US artificial intelligence companies.
The write up says:
In the clip, Comet AI is seen breezing through a 45-minute Coursera training assignment with the simple prompt: “Complete the assignment.” Within seconds, the AI assistant appears to tackle 12 questions automatically, all without the user having to lift a finger.
Smart software is tailor-made for high school students, college students, individuals trying to qualify for technical certifications, and doctors grinding through a semi-mandatory instruction program related to a robot surgery device. Instead of learning the old-fashioned way, the AI-assisted approach involves identifying the work and feeding it into an AI system. Then one submits the output.
There were two factoids in the write up that I thought noteworthy.
The first is that the course the person completed by cheating was AI Ethics, Responsibility, and Creativity. I can visualize a number of MBA students taking an ethics class in business using Perplexity or some other smart software to complete assignments. I mean, what MBA student wants to miss out on the role of off-shore banking in modern business? Forget the ethics baloney.
The second is that a big dog in smart software suddenly has a twinge of what the French call l’esprit de l’escalier. My French is rusty, but the idea is that a person thinks of something after leaving a meeting; for example, walking down the stairs and realizing, “I screwed up. I should have said…” Here’s how the write up presents this amusing point:
[Perplexity AI and its billionaire CEO Aravind Srinivas] said “Absolutely don’t do this.”
My thought is that AI wizards demonstrate that their intelligence is not the equivalent of foresight. One cannot rewind time or unspill milk. As for the MBAs, use AI and skip ethics. The objective is money, power, and control. Ethics won’t help too much. But AI? That’s a useful technology. Just ask the fellow who completed an online class in less time than it takes to consume a few TikTok-type videos. Do you think workers upskilling to use AI will use AI to demonstrate their mastery? Never. Ho ho ho.
Stephen E Arnold, October 15, 2025
The Use Case for AI at the United Nations: Give AI a Whirl
October 15, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read a news story about the United Nations. The organization allegedly expressed concern that its reports were not getting read. The solution to this problem appears in a Gizmodo “real news” report: “AI Finds Its Niche: Writing Corporate Press Releases.”
Gizmodo reports:
The researchers found that AI-assisted language cropped up in about 6 to 10 percent of job listings pulled from LinkedIn across the sample. Notably, smaller firms were more likely to use AI, peaking at closer to 15% of all total listings containing AI-crafted text.
Not good news for people who major in strategic communications at a major university. Why hire a 20-something when smart software can do the job? Pass around the outputs. Let some leadership make changes. Fire out that puppy. Anyone — including a 50-year-old internal sales person — can do it. That’s upskilling. You have a person on a small monthly stipend and a commission. You give this person a chance to show his/her AI expertise. Bingo. Headcount reduction. Efficiency. Less management friction.
The “real news” outfit’s article states:
It’s not just the corporate world that is using AI, of course. The research team also looked at English-language press releases published by the United Nations over the last couple of years and found that the organization has seemingly been utilizing AI to draft its content on a regular basis. They found that the percentage of text likely to be AI-generated has climbed from 3.1% in the first quarter of 2023 to 10.1% by the third quarter of 2023 and peaked around 13.7% by the same quarter of 2024.
If you worked at the UN and wanted to experiment with AI to boost readership, that sounds like an idea to test. Imagine if more people knew about the UN’s profile of that popular actor Broken Tooth.
Caution may be appropriate. The write up adds:
the researchers found the rate of AI usage may have already plateaued, rather than continuing to climb. For press releases, the figure peaked at 24.3% being likely AI-generated, in December 2023, but it has since stabilized at about a half-percent lower and hasn’t shifted significantly since. Job listings, too, have shown signs of decline since reaching their peak, according to the researchers. At the UN, AI usage appears to be increasing, but the rate of growth has slowed considerably.
My thought is that the UN might want to step up its AI-enhanced outputs.
I think it is interesting that the billions of dollars invested in AI have produced such outstanding results for the news release use case. Winner!
Stephen E Arnold, October 15, 2025
Want Clicks? Use Sex. It Works
October 15, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read a number of gloomy news articles today. The AI balloon will destroy the economy. Chicago is no longer that wonderful town, but was it ever? Telegram says it will put AI into its enchanting Messenger service. Plus, I read a New York Times story titled “Elon Musk Gambles on Sexy A.I. Companions.” That brilliant world-leading technologist knows how to get clicks: Sex. What an idea. No one has thought of that before! (Oh, the story lurks behind a paywall. Another brilliant idea for 2025.)
Thanks Venice.ai. Good enough.
The write up says:
Mr. Musk, already known for pushing boundaries, has broken with mainstream norms and demonstrated the lengths to which he will go to gain ground in the A.I. field, where xAI has lagged behind more established competitors. Other A.I. companies, such as Meta or OpenAI, have shied away from creating chatbots that can engage in sexual conversations because of the reputational and regulatory risks.
Elon Musk has not. The idea of allowing users of a social media and smart software game to unlock increasingly explicit content is a clever one. It is not as white hot as a burning Tesla Cybertruck with its 12-volt powered automatic doors, but the idea is steamy.
The write up says:
The billionaire has urged his followers on X to try conversing with the sexy chatbots, sharing a video clip on X of an animated Ani dancing in underwear.
That sounds exciting. For a dinobaby like me, I prefer people fully clothed and behaving according to the conventions I learned in college when I took the required course “College Social Customs.” I admit that I was one of the few people on campus who took these “customs” to heart. The makings of a dinobaby were apparently rooted in my make-up. Others in the class went to a bar to get drunk and flout as many of the guidelines as possible. Mr. Musk seems to share a kindred spirit with those classmates in my 1962 freshman “College Social Customs” course.
The write up says:
Mr. Musk has said the A.I. companions will help people strengthen their real-world connections and address one of his chief anxieties: population decline that he warns could lead to civilizational collapse.
My hunch is that the idea is for the right kind of people to have babies. Mr. Musk and Pavel Durov (founder of Telegram) have sired lots of kiddies. These kiddies are probably closer to what Mr. Musk wants to pop out of his sexual incubator.
The write up says:
Mr. Musk’s chatbots lack some sexual content limitations imposed by other chatbot creators that do allow some illicit conversations, users said. Nomi AI, for example, blocks some extreme material, limiting conversations to something more akin to what would be allowed on the dating app Tinder.
Yep, I get the point. Sex sells. Want sex? Use Grok and publicize the result on X.com.
How popular will this Grok feature be among the more young-at-heart users of Grok? Answer: Popular. Will other tech bro type outfits emulate Mr. Musk’s innovative marketing method? Answer: Mr. Musk is himself a follower. Just check out the offerings of certain online adult services.
What a wonderful online service. Perfect for 2025 and inclusion in a College Social Customs class for idea-starved students. No tavern required. Just a mobile device. Ah, innovation.
Stephen E Arnold, October 15, 2025
Blue Chip Consultants: Spin, Sizzle, and Fizzle with AI
October 14, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Can one quantify the payoffs from AI? Not easily. So what’s the solution? How about a “free” as in “marketing collateral” report from the blue-chip consulting firm McKinsey & Co. (You know that outfit because it figured out how to put Eastern Kentucky, Indiana, and West Virginia on the map.)
I like company reports like “Upgrading Software Business Models to Thrive in the AI Era.” These combine the weird spirit of Ezra Pound with used car sales professionals and blend in a bit of “we know more” rhetoric. Based on my experience, this is a winning combination for many professionals. This document speaks to those in the business of selling software. Today software does not come in boxes or as part of the deal when one buys a giant mainframe. Nope, software is out there. In the cloud. Companies use cloud solutions because — as consultants explained years ago — an organization can fire most technical staff and shift to pay-as-you go services. That big room that held the mainframe can become a sublease. That’s efficiency.
This particular report is the work of four — count them — four people who can help your business. Just bring money and the right attitude. McKinsey is selective. That’s how it decided to enter the pharmaceutical consulting business. Here’s a statement the happy and cooperative group of like-minded consultants presented:
while global enterprise spending on AI applications has increased eightfold over the last year to close to $5 billion, it still only represents less than 1 percent of total software application spending.
Converting this consultant speak to my style of English, the four blue chippers are trying to say that AI is not living up to the hype. Why? A software company today is having a tough time proving that AI delivers. The lack of fungible proof in the form of profits means that something is not going according to plan. Remember: The plan is to increase the revenue from software infused with AI.
Options include the exciting taxi meter approach. This means that the customers of enterprise software don’t know how much something costs upfront. Invoices deliver the cost. Surprise is not popular among some bean counters. Amazon’s AWS is in the surprise business. So is Microsoft Azure. However, surprise is not a good approach for some customers.
Licensees of enterprise software with that AI goodness mixed in could balk at paying fees for computational processes outside the control of the software licensee. This is the excitement a first-year calculus student experiences when the values of variables are mysterious or unknown. Once one wrestles the variables to the ground, one learns that the curve never reaches the x axis. It’s infinite, sport.
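To make the surprise concrete, here is a toy Python comparison of a flat license against taxi meter billing. Every number is invented for illustration; the point is that the metered invoice is unknowable until the usage happens.

```python
# Toy comparison: flat license vs. "taxi meter" usage-based AI pricing.
# All rates and token volumes are invented for illustration.

flat_monthly_license = 5_000.00          # predictable line item

metered_rate_per_1k_tokens = 0.06        # assumed usage rate
monthly_tokens = [12_000_000, 48_000_000, 130_000_000]  # usage creeps up

for month, tokens in enumerate(monthly_tokens, start=1):
    invoice = tokens / 1_000 * metered_rate_per_1k_tokens
    print(f"Month {month}: flat ${flat_monthly_license:,.2f} "
          f"vs metered ${invoice:,.2f}")

# Month 3's metered bill ($7,800) blows past the flat license; the
# licensee learns the number only when the invoice arrives.
```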
Pricing AI is a killer. The China-linked folks at Deepseek and its fellow travelers are into the easy, fast, and cheap approach to smart software. One can argue whether the intellectual property is original. One cannot argue that cheap is a compelling feature of some AI solutions. Cue the song “Where or When” with the lines:
It seems we stood and talked like this before
We looked at each other in the same way then
But I can’t remember where or QWEN…
The problem is that enterprise software with AI is tough to price. The enterprise software company’s engineering and development costs go up. Its actual operating costs rise. The enterprise software company has to provide fungible proof that the bundle delivers value to warrant a higher price. That’s hard. AI is everywhere, and quite a few options are free, cheap, or do-it-yourself code.
McKinsey itself does not have an answer to the problem the report from four blue chip consultants has identified. The report itself is stark evidence that explaining AI pricing, operational, and use case data is a work in progress. My view is that:
- AI hype painted a picture of wonderful, easily identifiable benefits. That picture is a bit like an AI-generated video. It is momentarily engaging but not real.
- AI state of the art today is output with errors. Hey, that sounds special when one is relying on AI for a medical diagnosis for your child or grandchild or for managing your retirement account.
- AI is a utility function. Software utilities get bundled into software that does something for which the user or licensee is willing to pay. At this time, AI is a work in progress, a novelty, and a cloud of unknowing. At some point, the fog will clear, but it won’t happen as quickly as the AI furnaces burn cash.
- What to sell, to whom, and pricing are problems created by AI. Asking smart software what to do is probably not going to produce a useful answer when the enterprise market is in turmoil, wallowing in uncertainty, and increasingly resistant to “surprise” pricing models.
Net net: McKinsey itself has not figured out AI. The idea is that clients will hire blue chip consultants to figure out AI. Therefore, the more studies and analyses blue chip consultants conduct, the closer these outfits will come to an answer. That’s good for the consulting business. The enterprise software companies may hire the blue chip consultants to answer the money and value questions. The bad news is that the fate of AI in enterprise software is in the hands of the licensees. Based on the McKinsey report, these folks are going slow. The mismatch among these players may produce friction. That will be exciting.
Stephen E Arnold, October 14, 2025
Who Is Afraid of the Big Bad AI Wolf? Mr. Beast Perhaps?
October 14, 2025
This essay is the work of a dumb dinobaby. No smart software required.
The story “MrBeast Warns of ‘Scary Times’ as AI Threatens YouTube Creators” is apparently about YouTube creators. Mr. Beast, a notable YouTube personality, is the source of the information. Is the article about YouTube creators? Yep, but it is also about Mr. Beast.
The write up says:
MrBeast may not personally face the threat of being replaced by AI as his brand thrives on large-scale, real-world stunts that rely on authenticity and human emotion. But his concern runs deeper than self-preservation. It’s about the millions of smaller creators who depend on platforms like YouTube to make a living. As one of the most influential figures on the internet, his words carry weight. The 27-year-old recently topped Forbes’ 2025 list of highest-earning creators, earning roughly $85 million and building a following of over 630 million across platforms.
Okay, Mr. Beast’s fame depended on YouTube. He is still in the YouTube fold. However, he has other business enterprises. He recognizes that smart software could create problems for creators.
I think smart software is another software tool. It is becoming a utility like a PDF editor.
The problem with Mr. Beast’s analysis is that it appears to be focused on other creators. I am not so sure. I think the comments presented in the write up reveal more about Mr. Beast than they do about the “other” creators. One example is:
“When AI videos are just as good as normal videos, I wonder what that will do to YouTube and how it will impact the millions of creators currently making content for a living… scary times,” MrBeast — whose real name is Jimmy Donaldson — wrote on X.
I am no expert on human psychology, but I see the use of the words “impact” and “scary” as a glimpse of what Mr. Beast is thinking. His production costs allegedly rival those of traditional commercial video outfits. The ideas and tropes have become increasingly strained and bizarre. YouTube acts in a unilateral way and outputs smarm to the creators desperate to know why the flow of their money has been reduced if not cut off. Those disappearing van life videos are just one example of how video magnets can melt down and be crushed under the wheels of the Google bus.
My thought is that Google will use AI to create alternative Mr. Beast-type videos. Then it can squeeze the Mr. Beast-type creators and let the traffic flow to Mother Google. No royalties required, so Google wins. Mr. Beast-type creators can find their future and money elsewhere. Simple.
Stephen E Arnold, October 14, 2025