Blue Chip Consultants: Spin, Sizzle, and Fizzle with AI
October 14, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Can one quantify the payoffs from AI? Not easily. So what’s the solution? How about a “free” as in “marketing collateral” report from the blue-chip consulting firm McKinsey & Co. (You know that outfit because it figured out how to put Eastern Kentucky, Indiana, and West Virginia on the map.)
I like company reports like “Upgrading Software Business Models to Thrive in the AI Era.” These combine the weird spirit of Ezra Pound with used car sales professionals and blend in a bit of “we know more” rhetoric. Based on my experience, this is a winning combination for many professionals. This document speaks to those in the business of selling software. Today software does not come in boxes or as part of the deal when one buys a giant mainframe. Nope, software is out there. In the cloud. Companies use cloud solutions because — as consultants explained years ago — an organization can fire most technical staff and shift to pay-as-you go services. That big room that held the mainframe can become a sublease. That’s efficiency.
This particular report is the work of four — count them — four people who can help your business. Just bring money and the right attitude. McKinsey is selective. That’s how it decided to enter the pharmaceutical consulting business. Here’s a statement the happy and cooperative group of like-minded consultants presented:
while global enterprise spending on AI applications has increased eightfold over the last year to close to $5 billion, it still only represents less than 1 percent of total software application spending.
Converting this consultant speak to my style of English, the four blue chippers are trying to say that AI is not living up to the hype. Why? A software company today is having a tough time proving that AI delivers. The lack of fungible proof in the form of profits means that something is not going according to plan. Remember: The plan is to increase the revenue from software infused with AI.
Options include the exciting taxi meter approach. This means that the customers of enterprise software doesn’t know how much something costs upfront. Invoices deliver the cost. Surprise is not popular among some bean counters. Amazon’s AWS is in the surprise business. So is Microsoft Azure. However, surprise is not a good approach for some customers.
Licensees of enterprise software with that AI goodness mixed in could balk at paying fees for computational processes outside the control of the software licensee. This is the excitement a first year calculus student experiences when the values of variables are mysterious or unknown. Once one wrestles the variables to the ground, then one learns that the curve never reaches the x axis. It’s infinite, sport.
Pricing AI is a killer. The China-linked folks at Deepseek and its fellow travelers are into the easy, fast, and cheap approach to smart software. One can argue whether the intellectual property is original. One cannot argue that cheap is a compelling feature of some AI solutions. Cue the song: Where or When with the lines:
It seems we stood and talked like this before
We looked at each other in the same way then
But I can’t remember where or QWEN…
The problem is that enterprise software with AI is tough to price. The enterprise software company’s engineering and development costs go up. Their actual operating costs rise. The enterprise software company has to provide fungible proof that the bundle delivers value to warrant a higher price. That’s hard. AI is everywhere, and quite a few services are free, cheap or, or do it yourself code.
McKinsey itself does not have an answer to the problem the report from four blue chip consultants has identified. The report itself is start evidence that explaining AI pricing, operational, and use case data is a work in progress. My view is that:
- AI hype painted a picture of wonderful, easily identifiable benefits. That picture is a bit like a AI generated video. It is momentarily engaging but not real.
- AI state of the art today is output with errors. Hey, that sounds special when one is relying on AI for a medical diagnosis for your child or grandchild or managing your retirement account.,
- AI is a utility function. Software utilities get bundled into software that does something for which the user or licensee is willing to pay. At this time, AI is a work in progress, a novelty, and a cloud of unknowing. At some point, the fog will clear, but it won’t happen as quickly as the AI furnaces burn cash.
- What to sell, to whom, and pricing are problems created by AI. Asking smart software what to do is probably not going to produce a useful answer when the enterprise market is in turmoil, wallowing in uncertainty, and increasingly resistant to “surprise” pricing models.
Net net: McKinsey itself has not figured out AI. The idea is that clients will hire blue chip consultants to figure out AI. Therefore, the more studies and analyses blue chip consultants conduct, the closer these outfits will come to an answer. That’s good for the consulting business. The enterprise software companies may hire the blue chip consultants to answer the money and value questions. The bad news is that the fate of AI in enterprise software developers is in the hands of the licensees. Based on the McKinsey report, these folks are going slow. The mismatch among these players may produce friction. That will be exciting.
Stephen E Arnold, October 14, 2025
AI and America: Not a Winner It Seems
October 13, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Los Alamos National Laboratory perceives itself as one of the world’s leading science and research facilities. Jason Pruet is the Director of Los Alamos’s National Security AI Office and he was interviewed in “Q&A With Jason Pruet.” Pruet’s job is to prepare the laboratory for AI integration. He used to view AI as another tool for advancement, but Pruet now believes AI would disrupt the fundamental landscape of science, security, and more.
In the interview, Pruet states that the US government invested more in AI than any time in the past. He compared this investment to the World War II paradigm of science for the public good. Pruet explained that before the war, the US government wasn’t involved with science. After the war, Los Alamos shifted the dynamic and shaped modern America’s dedication to science, engineering, etc.
One of the biggest advances in AI technology is transformer architecture that allows huge progress to scale AI models, especially for mixing different information types. Pruet said that China is treating AI like a general purpose technology (i.e electricity) and they’ve launched a National AI strategy. The recent advances in AI are changing power structures. It’s turning into a new international arms race but that might not be the best metaphor:
“[Pruet:] All that said, I’m increasingly uncomfortable viewing this through the lens of a traditional arms race. Many thoughtful and respected people have emphasized that AI poses enormous risks for humanity. There are credible reports that China’s leadership has come to the same view, and that internally, they are trying to better balance the potential risks rather than recklessly seek advantage. It may be that the only path for managing these risks involves new kinds of international collaborations and agreements.”
Then Pruet had this to say about the state of the US’s AI development:
“Like we’re behind. The ability to use machines for general-purpose reasoning represents a seminal advance with enormous consequences. This will accelerate progress in science and technology and expand the frontiers of knowledge. It could also pose disruptions to national security paradigms, educational systems, energy, and other foundational aspects of our society. As with other powerful general-purpose technologies, making this transition will depend on creating the right ecosystem. To do that, we will need new kinds of partnerships with industry and universities.”
The sentiment seems to be focused on going faster and farther than any other country in the AI game. With the circular deals OpenAI has been crafting, AI seems to be more about financial innovation than technical innovation.
Whitney Grace, October 13, 2025
Parenting 100: A Remedial Guide to Raising Children
October 13, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I am not sure what’s up this week (October 6 to 10, 2025). I am seeing more articles about the impact of mobile devices, social media, doom scrolling, and related cheerful subjects in my newsfeed. A representative article is “Lazy Parents Are Giving Their Toddlers ChatGPT on Voice Mode to Keep Them Entertained for Hours.”
Let’s take a look at a couple of passages that I thought were interesting:
with the rise of human-like AI chatbots, a generation of “iPad babies” could seem almost quaint: some parents are now encouraging their kids to talk with AI models, sometimes for hours on end…
I get it. Parents are busy today. If they are lucky enough to have jobs, automatic meeting services keep them hopping. Then there is the administrivia of life. Children just add to the burden. Why not stick the kiddie in a playpen with an iPad. Tim Apple will be happy.
What’s the harm? How about this factoid (maybe an assertion from smart software?) from the write up:
AI chatbots have been implicated in the suicides of several teenagers, while a wave of reports detail how even grown adults have become so entranced by their interactions with sycophantic AI interlocutors that they develop severe delusions and suffer breaks with reality — sometimes with deadly consequences.
Okay, bummer. The write up includes a hint of risk for parents about these chat-sitters; to wit:
Andrew McStay, a professor of technology and society at Bangor University, isn’t against letting children use AI — with the right safeguards and supervision. But he was unequivocal about the major risks involved, and pointed to how AI instills a false impression of empathy.
Several observations seem warranted:
- Which is better? Mom and dad interacting with the kiddo. Maybe grandma could be a good stand in? Or, letting the kid tune in and drop out?
- Imagine sending a chat surfer to school. Human interaction is not going to be as smooth and stress free as having someone take the kiddo’s animal crackers and milk or pouting until kiddo can log on again.
- Visualize the future: Is this chat surfer going to be a great employee and colleague? Answer: No.
I find it amazing that decades after these tools became available that people do not understand the damage flowing bits do to thinking, self esteem, and social conventions. Empathy? Sure, just like those luminaries at Silicon Valley type AI companies. Warm, caring, trustworthy.
Stephen E Arnold, October 13, 2025
Weaponization of LLMs Is a Thing. Will Users Care? Nope
October 10, 2025
This essay is the work of a dumb dinobaby. No smart software required.
A European country’s intelligence agency learned about my research into automatic indexing. We did a series of lectures to a group of officers. Our research method, the results, and some examples preceded a hands on activity. Everyone was polite. I delivered versions of the lecture to some public audiences. At one event, I did a live demo with a couple of people in the audience. Each followed a procedure, and I showed the speed with which the method turned up in the Google index. These presentations took place in the early 2000s. I assumed that the behavior we discovered would be disseminated and then it would diffuse. It was obvious that:
- Weaponized content would be “noted” by daemons looking for new and changed information
- The systems were sensitive to what I called “pulses” of data. We showed how widely used algorithms react to sequences of content
- The systems would alter what they would output based on these “augmented content objects.”
In short, online systems could be manipulated or weaponized with specific actions. Most of these actions could be orchestrated and tuned to have maximum impact. One example in my talks was taking a particular word string and making it turn up in queries where one would not expect that behavior. Our research showed that a few as four weaponized content objects orchestrated in a specific time interval would do the trick. Yep, four. How many weaponized write ups can my local installation of LLMs produce in 15 minutes? Answer: Hundreds. How long does it take to push those content objects into information streams used for “training.” Seconds.
Fish live in an environment. Do fish know about the outside world? Thanks, Midjourney. Not a ringer but close enough in horseshoes.
I was surprised when I read “A Small Number of Samples Can Poison LLMs of Any Size.” You can read the paper and work through the prose. The basic idea is that selecting or shaping training data or new inputs to recalibrate training data can alter what the target system does. I quite like the phrase “weaponize information.” Not only does the method work, it can be automated.
What’s this mean?
The intentional selection of information or the use of a sample of information from a domain can generate biases in what the smart software knows, thinks, decides, and outputs. Dr. Timnit Gebru and her parrot colleagues were nibbling around the Google cafeteria. Their research caused the Google to put up a barrier to this line of thinking. My hunch is that she and her fellow travelers found that content that is representative will reflect the biases of the authors. This means that careful selection of content for training or updating training sets can be steered. That’s what the Anthropic write up make clear.
Several observations are warranted:
- Whoever selects training data or the information used to update and recalibrate training data can control what is displayed, recommended, or included in outputs like recommendations
- Users of online systems and smart software are like fish in a fish bowl. The LLM and smart software crowd are the people who fill the bowl and feed the fish. Fish have a tough time understanding what’s outside their bowl. I don’t like the word “bubble” because these pop. An information fish bowl is tough to escape and break.
- As smart software companies converge into essentially an oligopoly using the types of systems I described in the early 2000s with some added sizzle from the Transformer thinking, a new type of information industrial complex is being assembled on a very large scale. There’s a reason why Sam AI-Man can maintain his enthusiasm for ChatGPT. He sees the potential of seemingly innocuous functions like apps within ChatGPT.
There are some interesting knock on effects from this intentional or inadvertent weaponization of online systems. One is that the escalating violent incidents are an output of these online systems. Inject some René Girard-type content into training data sets. Watch what those systems output. “Real” journalists are explaining how they use smart software for background research. Student uses online systems without checking to see if the outputs line up with what other experts say. What about investment firms allowing smart software to make certain financial decisions.
Weaponize what the fish live in and consume. The fish are controlled and shaped by weaponized information. How long has this quirk of online been known? A couple of decades, maybe more. Why hasn’t “anything” been done to address this problem? Fish just ask, “What problem?”
Stephen E Arnold, October x, 2025
I spotted
ChatGPT Finds Humans Useful
October 10, 2025
OpenAI is chasing consumers during primetime football games, we learn from 9to5Mac’s piece, “Pressure Mounts on Siri as ChatGPT Ads Start Airing on Primetime TV.” The first of these ads premiered during NFL Primetime. We are told the campaign focuses on ways people are using ChatGPT in their everyday lives, like creating recipes or fitness plans. So wholesome! (We assume they are leaving out the many downsides of overreliance on the tech.) Does this mean firm’s second Super Bowl ad will be more down to earth than its first one?
Writer Ben Lovejoy asserts this campaign highlights how embarrassingly far Apple’s Siri is behind ChatGPT. iPhone users have the option to get an answer from ChatGPT when Siri fails them. But, as Lovejoy notes, the permission prompt serves as a spotlight on Siri’s inadequacies.
The ad campaign comes with an interesting caveat. We learn:
“With growing concern in the creative sector around the use of AI, the company has gone out of its way to ensure that no artificial intelligence was used for the actual creative work. Creative Review reports: Crucially, the campaign was created largely through human endeavour, with the team at OpenAI noting that: ‘Human craft was central to the campaign’s creation. Every frame was shot on film, shaped by directors, photographers, producers and many more masters of craft.’ That ‘largely’ rider reflects that ChatGPT was used for some background work, with ‘streamlining shot lists and organising schedules’ given as examples.”
Will this acknowledgement that real life is better than AI fakery backfire on the premier AI company? And no Sora?
Cynthia Murrell, October 10, 2025
AI Has a Secret: Humans Do the Work
October 10, 2025
A key component of artificial intelligence output is not artificial at all. The Guardian reveals “How Thousands of ‘Overworked, Underpaid’ Humans Train Google’s AI to Seem Smart.” From accuracy to content moderation, Google Gemini and other AI models rely on a host of humans employed by third-party contractors. Humans whose jobs get harder and harder as they are pressured to churn through the work faster and faster. Gee, what could go wrong?
Reporter Varsha Bansal relates:
“Each new model release comes with the promise of higher accuracy, which means that for each version, these AI raters are working hard to check if the model responses are safe for the user. Thousands of humans lend their intelligence to teach chatbots the right responses across domains as varied as medicine, architecture and astrophysics, correcting mistakes and steering away from harmful outputs.”
Very important work—which is why companies treat these folks as valued assets. Just kidding. We learn:
“Despite their significant contributions to these AI models, which would perhaps hallucinate if not for these quality control editors, these workers feel hidden. ‘AI isn’t magic; it’s a pyramid scheme of human labor,’ said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. ‘These raters are the middle rung: invisible, essential and expendable.’”
And, increasingly, rushed. The write-up continues:
“[One rater’s] timer of 30 minutes for each task shrank to 15 – which meant reading, fact-checking and rating approximately 500 words per response, sometimes more. The tightening constraints made her question the quality of her work and, by extension, the reliability of the AI. In May 2023, a contract worker for Appen submitted a letter to the US Congress that the pace imposed on him and others would make Google Bard, Gemini’s predecessor, a ‘faulty’ and ‘dangerous’ product.”
And that is how we get AI advice like using glue on pizza or adding rocks to one’s diet. After those actual suggestions went out, Google focused on quality over quantity. Briefly. But, according to workers, it was not long before they were again told to emphasize speed over accuracy. For example, last December, Google announced raters could no longer skip prompts on topics they knew little about. Think workers with no medical expertise reviewing health advice. Not great. Furthermore, guardrails around harmful content were perforated with new loopholes. Bansal quotes Rachael Sawyer, a rater employed by Gemini contractor GlobalLogic:
“It used to be that the model could not say racial slurs whatsoever. In February, that changed, and now, as long as the user uses a racial slur, the model can repeat it, but it can’t generate it. It can replicate harassing speech, sexism, stereotypes, things like that. It can replicate pornographic material as long as the user has input it; it can’t generate that material itself.”
Lovely. It is policies like this that leave many workers very uncomfortable with the software they are helping to produce. In fact, most say they avoid using LLMs and actively discourage friends and family from doing so.
On top of the disillusionment, pressure to perform full tilt, and low pay, raters also face job insecurity. We learn GlobalLogic has been rolling out layoffs since the beginning of the year. The article concludes with this quote from Sawyer:
‘I just want people to know that AI is being sold as this tech magic – that’s why there’s a little sparkle symbol next to an AI response,’ said Sawyer. ‘But it’s not. It’s built on the backs of overworked, underpaid human beings.’
We wish we could say we are surprised.
Cynthia Murrell, October 10, 2025
AI Embraces the Ethos of Enterprise Search
October 9, 2025
This essay is the work of a dumb dinobaby. No smart software required.
In my files, I have examples of the marketing collateral generated by enterprise search vendors. I have some clippings from trade publications and other odds and ends dumped into my enterprise search folder. One of these reports is “Fastgründer John Markus Lervik dømt til fengsel.” The article is no longer online, but you can read my 2014 summary at this Beyond Search link. The write up documents an enterprise search vendor who used some alleged accounting methods to put a shine on the company. In 2008, Microsoft purchased Fast Search & Transfer putting an end to this interesting company.
A young CPA MBA BA (with honors) is jockeying a spreadsheet. His father worked for an enterprise search vendor based in the UK. His son is using his father’s template but cannot get the numbers to show positive cash flows across six quarters. Thanks, Venice.ai. Good enough.
Why am I mentioning Fast Search & Transfer? The information in Fortune Magazine’s “‘There’s So Much Pressure to Be the Company That Went from Zero to $100 Million in X Days’: Inside the Sketchy World of ARR and Inflated AI Startup Accounting” jogged my memory about Fast Search and a couple of other interesting companies in the enterprise search sector.
Enterprise search was the alleged technology to put an organization’s information at the fingertips of employees. Enterprise search would unify silos of information. Enterprise search would unlock the value of an organization’s “hidden” or “dark” data. Enterprise search would put those hours wasted looking for information to better use. (IDC was the cheerleader for the efficiency payoff from enterprise search.)
Does this sound familiar? It should every vendor applying AI to an organization’s information challenges is either recycling old chestnuts from the Golden Age of Enterprise Search or wandering in the data orchard discovering these glittering generalities amidst nuggets of high value jargon.
The Fortune article states:
There’s now a massive amount of pressure on AI-focused founders, at earlier stages than ever before: If you’re not generating revenue immediately, what are you even doing? Founders—in an effort to keep up with the Joneses—are counting all sorts of things as “long-term revenue” that are, to be blunt, nothing your Accounting 101 professor would recognize as legitimate. Exacerbating the pressure is the fact that more VCs than ever are trying to funnel capital into possible winners, at a time where there’s no certainty about what evaluating success or traction even looks like.
Would AI start ups fudge numbers? Of course not. Someone at the start up or investment firm took a class in business ethics. (The pizza in those study groups was good. Great if it could be charged to another group member’s Visa without her knowledge. Ho ho ho.)
The write up purses the idea that ARR or annual recurring revenue is a metric that may not reflect the health of an AI business. No kidding? When an outfit has zero revenue resulting from dumping investor case into a burning dumpster fire, it is difficult for me to understand how people see a payoff from AI. The “payoff” comes from moving money around, not from getting cash from people or organizations on a consistent basis. Subscription-like business models are great until churn becomes a factor.
The real point of the write up for me is that financial tricks, not customers paying for the product or service, are the name of the game. One big enterprise search outfit used “circular” deals to boost revenue. I did some small work for this outfit, so I cannot identify it. The same method is now part of the AI revolution involving Nvidia, OpenAI, and a number of other outfits. Whose money is moving? Who gets it? What’s the payoff? These are questions not addressed in depth in the information to which I have access?
I think financial intermediaries are the folks taking home the money. Some vendors may get paid like masters of black art accounting. But investor payoff? I am not so sure. For me the good old days of enterprise search are back again, just with bigger numbers and more impactful financial consequences.
As an aside, the Fortune article uses the word “shit” twice. Freudian slip or a change in editorial standards at Fortune? That word was applied by one of my team when asked to describe the companies I profiled in the Enterprise Search Report I wrote many years ago. “Are you talking about my book or enterprise search?” I asked. My team member replied, “The enterprise search thing.”
Stephen E Arnold, October 2025
With or Without AI: Winners Win and Losers Lose
October 8, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Some outfits are just losers. That’s the message I got after reading “AI Magnifies Your Teams’ Strengths – and Weaknesses, Google Report Finds.” Keep in mind that this report — the DORA Report or DevOps Research & Assessment — is Googley. The write up makes clear that Google is not hallucinating. The outstanding company:
surveyed 5,000 software development professionals across industries and followed up with more than 100 hours of interviews. It may be one of the most comprehensive studies of AI’s changing role in software development, especially at the enterprise level.

Winners with AI win bigger. Losers with AI continue to lose. Is that sad team mascot one of Sam Altman’s AI cheerleaders. I think it is. Thanks, MidJourney. Good enough.
Obviously the study is “one of the most comprehensive”; of course, it is Google’s study!
The big finding seems to be:
… AI has moved from hype to mainstream in the enterprise software development world. Second, real advantage isn’t about the tools (or even the AI you use). It’s about building solid organizational systems. Without those systems, AI has little advantage. And third, AI is a mirror. It reflects and magnifies how well (or poorly) you already operate.
I interpret the findings of the DORA Report in an easy-to-remember way: Losers still lose even if their teams use AI. I think of this as a dominant football team. The team has the money to induce or direct events. As a result, the team has the best players. The team has the best coaches (leadership). The team has the best infrastructure. In short, when one is the best, AI makes the best better.
On the other hand, a losing team composed of losers will use AI and still lose.
I noted that the report about DORA did not include:
- Method of sample selection
- Questions asked
- Methodology for generating the numerous statistics in the write up.
What happens if one conducts a study to validate the idea that winners win and losers keep on losing? I think it sends a clear signal that a monopoly-type of outfit has a bit of an inferiority or fear-centric tactical view. Even the quantumly supreme need a marketing pick me up now and then.
Stephen E Arnold, October 8, 2025
Slopity Slopity Slop: Nice Work AI Leaders
October 8, 2025
Remember that article about academic and scientific publishers using AI to churn out pseudoscience and crap papers? Or how about that story relating to authors’ works being stolen to train AI algorithms? Did I mention they were stealing art too?
Techdirt literally has the dirt on AI creating more slop: “AI Slop Startup To Flood The Internet With Thousands Of AI Slop Podcasts, Calls Critics Of AI Slop ‘Luddites’.” AI is a helpful tool. It’s great to assist with mundane things of life or improve workflows. Automation, however, has become the newest sensation. Big Tech bigwigs and other corporate giants are using it to line their purses, while making lives worse for others.
Note this outstanding example of a startup that appears to be interested in slop:
“Case in point: a new startup named Inception Point AI is preparing to flood the internet with a thousands upon thousands of LLM-generated podcasts hosted by fake experts and influencers. The podcasts cost the startup a dollar or so to make, so even if just a few dozen folks subscribe they hope to break even…”
They’ll make the episodes for less than a dollar. Podcasting is already a saturated market, but Point AI plans to flush it with garbage. They don’t care about the ethics. It’s going to be the Temu of podcasts. It would be great if people would flock to true human-made stuff, but they probably won’t.
Another reason we’re in a knowledge swamp with crocodiles.
Whitney Grace, October 9, 2025
The Future: Autonomous Machines
October 7, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Does mass customization ring a bell? I cannot remember whether it was Joe Pine or Al Toffler who popularized the idea. The concept has become a trendlet. Like many high-technology trends, a new term is required to help communication the sizzle of “new.”
An organization is now an “autonomous machine.” The concept is spelled out in “This Is Why Your Company Is Transforming into an Autonomous Machine.” The write up asserts:
Industries are undergoing a profound transformation as products, factories, and companies adopt the autonomous machine design model, treating each element as an integrated system that can sense, understand, decide, and act (SUDA business operating system) independently or in coordination with other platforms.
I assume SUDA rhymes with OODA (Observe, Orient, Decide, Act), but who knows?
The inspiration for the autonomous machine may be Elon Musk, who allegedly said: “I’m really thinking of the factory like a product.” Gnomic stuff.
The write up adds:
The Tesla is a cyber-physical system that improves over time through software updates, learns from millions of other vehicles, and can predict maintenance needs before problems occur.
I think this is an interesting idea. There is a logical progression at work; specifically:
- An autonomous “factory”
- Autonomous “companies” but I think one could just think about organizations and not be limited to commercial enterprises
- Agentic enterprises.
The future appears to be like this:
The path to becoming an autonomous enterprise, using a hybrid workforce of humans and digital labor powered by AI agents, will require constant experimentation and learning. Go fast, but don’t hurry. A balanced approach, using your organization’s brains and hearts, will be key to success. Once you start, you will never go back. Adopt a beginner’s mindset and build. Companies that are built like autonomous machines no longer have to decide between high performance and stability. Thanks to AI integration, business leaders are no longer forced to compromise. AI agents and physical AI can help business leaders design companies like a stealth aircraft. The technology is ready, and the design principles are proven in products and production. The fittest companies are autonomous companies.
I am glad I am a dinobaby, a really old dinobaby. Mass customization alright. Oligopolies producing what they want for humans who are supposed to have a job to buy the products and services. Yeah.
Stephen E Arnold, October 7, 2025

