ChatGPT: Smoked by GenX MBA Data
December 8, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I saw this chart from Sensor Tower in several online articles. Examples include TechCrunch, LinkedIn, and a couple of others. Here’s the chart as presented by TechCrunch on December 5, 2025:

Yes, I know it is difficult to read. Complain to WordPress, not me, please.
The seven columns are labeled Date starting on January 2025. I am not sure if this is December 2024 data compiled in January 2025 or end of January 2025 data. Meta data would be helpful, but I am a dinobaby and this is a very GenX-type of Excel chart. The chart then presents what I think are mobile installs or some action related to the “event” captured when the Sensor Tower data receives a signal. I am not sure, and some remarks about how the data were collected would be helpful to a person disguised as a dinobaby. The column heads are not in alphabetical order. I assume the hassle of alphabetizing was too much work for whoever created the table. Here’s the order:
- ChatGPT
- Microsoft 365 Copilot
- Google Gemini
- Perplexity
- Grok
- Claude
The second thing I noticed was that the data do not reflect individual installs or uses. Thus, these data are of limited use to a dinobaby like me. Sure, I can see that ChatGPT’s growth slowed (if the numbers are on the money) and Gemini’s grew. But ChatGPT has a bigger base and it may be finding it ore difficult to attract installs or events so the percent increase seems to shout, “Bad news, Sam AI-Man.”
Then there is the issue of number of customers. We are now shifting from the impression some may have that these numbers represent individual humans to the fuzzy notion of app events. Why does this matter? Google and Microsoft have many more corporate and individual users than the other firms combined. If Google or Microsoft pushes or provides free access, those events will appeal to the user base and the number of “events” will jump. The data narrow Microsoft’s AI to Microsoft 365 Copilot. Google’s numbers are not narrowed. They may be, but there is not metadata to help me out. Here’s the Microsoft column:

As a result, the graph of the Microsoft 365 Copilot looks like this:

What’s going on from May to August 2025? I have no clue. Vacations maybe? Again that old fashioned metadata, footnotes, and some information about methodology would be helpful to a dinobaby. I mention the Microsoft data for one reason: None of the other AI systems listed in the Sensor Tower data table have this characteristic. Don’t users of ChatGPT, Google, et al, go on vacation? If one set of data for an important company have an anomaly, can one trust the other data. Those data are smooth.
If I look at the complete array of numbers, I expected to see more ones. There is some weird Statistics 101 “law” about digit frequency, and it seems to this dinobaby that it’s not being substantiated in the table. I can overlook how tidy the numbers are because why not round big numbers. It works for Fortune 1000 budgets and for many government agencies’ budgets.
A person looking at these data will probably think “number of users.” Nope, number of events recorded by Sensor Tower. Some of the vendors can force or inject AI into a corporate, governmental, or individual user stream. Some “events” may be triggered by workflows that use multiple AI systems. There are probably a few people with too much time and no money sense paying for multiple services and using them to explore a single topic or area in inquiry; for example, what is the psychological make up of a GenX MBA who presents data that can be misinterpreted.
Plus, the AI systems are functionally different and probably not comparable using “event” data. For example, Copilot may reflect events in corporate document editing. The Google can slam AI into any of its multi-billion user, system, or partner activities. I am not sure about Claude (Anthropic) or Grok. What about Amazon? Nowhere to be found I assume. The Chinese LLMs? Nope. Mistral? Crickets.
Finally, should I raise the question of demographics? Ah, you say, “No.” Okay, I am easy. Forget demos; there aren’t any.
Please, check out the cited article. I want to wrap up by quoting one passage from the TechCrunch write up:
Gemini is also increasing its share of the overall AI chatbot market when compared across all top apps like ChatGPT, Copilot, Claude, Perplexity, and Grok. Over the past seven months (May-November 2025), Gemini increased its share of global monthly active users by three percentage points, the firm estimates.
This sounds like Sensor Tower talking.
Net net: I am not confident in GenX “event” data which seems to say, “ChatGPT is losing the AI race.” I may agree in part with this sentiment, but the data from Sensor Tower do influence me. But marketing is marketing.
Stephen E Arnold, December 8, 2025
Clippy, How Is Copilot? Oh, Too Bad
December 8, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
In most of my jobs, rewards landed on my desk when I sold something. When the firms silly enough to hire me rolled out a product, I cannot remember one that failed. The sales professionals were the early warning system for many of our consulting firm’s clients. Management provided money to a product manager or R&D whiz with a great idea. Then a product or new service idea emerged, often at a company event. Some were modest, but others featured bells and whistles. One such roll out had a big name person who a former adviser to several presidents. These firms were either lucky or well managed. Product dogs, diseased ferrets, and outright losers were identified early and the efforts redirected.

Two sales professionals realize that their prospects resist Microsoft’s agentic pawing. Mortgages must be paid. Sneakers must be purchased. Food has to be put on the table. Sales are needed, not push backs. Thanks, Venice.ai. Good enough.
But my employers were in tune with what their existing customer base wanted. Climbing a tall tree and going out on a limb were not common occurrences. Even Apple, which resides in a peculiar type of commercial bubble, recognizes a product that does not sell. A recent example is the itsy bitsy, teeny weenie mobile thingy. Apple bounced back with the Granny Scarf designed to hold any mobile phone. The thin and light model is not killed; its just not everywhere like the old reliable orange iPhone.
Sales professionals talk to prospects and customers. If something is not selling, the sales people report, “Problemo, boss.”
In the companies which employed me, the sales professionals knew what was coming and could mention in appropriately terms to those in the target market. This happened before the product or service was in production or available to clients. My employers (Halliburton, Booz, Allen, and a couple of others held in high esteem) had the R&D, the market signals, the early warning system for bad ideas, and the refinement or improvement mechanism working in a reliable way.
I read “Microsoft Drops AI Sales Targets in Half after Salespeople Miss Their Quotas.” The headline suggested three things to me instantly:
- The pre-sales early warning radar system did not exist or it was broken
- The sales professionals said in numbers, “Boss, this Copilot AI stuff is not selling.”
- Microsoft committed billions of dollars and significant, expensive professional staff time on something that prospects and customers do not rush to write checks, use, or tell their friends about the next big thing.”
The write up says:
… one US Azure sales unit set quotas for salespeople to increase customer spending on a product called Foundry, which helps customers develop AI applications, by 50 percent. Less than a fifth of salespeople in that unit met their Foundry sales growth targets. In July, Microsoft lowered those targets to roughly 25 percent growth for the current fiscal year. In another US Azure unit, most salespeople failed to meet an earlier quota to double Foundry sales, and Microsoft cut their quotas to 50 percent for the current fiscal year. The sales figures suggest enterprises aren’t yet willing to pay premium prices for these AI agent tools. And Microsoft’s Copilot itself has faced a brand preference challenge: Earlier this year, Bloomberg reported that Microsoft salespeople were having trouble selling Copilot to enterprises because many employees prefer ChatGPT instead.
Microsoft appears to have listened to the feedback. The adjustment, however, does not address the failure to implement the type of marketing probing process used by Halliburton and Booz, Allen: Microsoft implemented the “think it and it will become real.” The thinking in this case is that software can perform human work roles in a way that is equivalent to or better than a human’s execution.
I may be a dinobaby, but I figured out quickly that smart software has been for the last three years a utility. It is not quite useless, but it is not sufficiently robust to do the work that I do. Other people are on the same page with me.
My take away from the lower quotas is that Microsoft should have a rethink. The OpenAI bet, the AI acquisitions, the death march to put software that makes mistakes in applications millions use in quite limited ways, and the crazy publicity output to sell Copilot are sending Microsoft leadership both audio and visual alarms.
Plus, OpenAI has copied Google’s weird Red Alert. Since Microsoft has skin in the game with OpenAI, perhaps Microsoft should open its eyes and check out the beacons and listen to the klaxons ringing in Softieland sales meetings and social media discussions about Microsoft AI? Just a thought. (That Telegram virtual AI data center service looks quite promising to me. Telegram’s management is avoiding the Clippy-type error. Telegram may fail, but that outfit is paying GPU providers in TONcoin, not actual fiat currency. The good news is that MSFT can make Azure AI compute available to Telegram and get paid in TONcoin. Sounds like a plan to me.)
Stephen E Arnold, December 8, 2025
Telegram’s Cocoon AI Hooks Up with AlphaTON
December 5, 2025
[This post is a version of an alert I sent to some of the professionals for whom I have given lectures. It is possible that the entities identified in this short report will alter their messaging and delete their Telegram posts. However, the thrust of this announcement is directionally correct.]
Telegram’s rapid expansion into decentralized artificial intelligence announced a deal with AlphaTON Capital Corp. The Telegram post revealed that AlphaTON would be a flagship infrastructure and financial partner. The announcement was posted to the Cocoon Group within hours of AlphaTON getting clear of U.S. SEC “baby shelf” financial restrictions. AlphaTON promptly launched a $420.69 million securities push. Telegram and AlphaTON either acted in a coincidental way or Pavel Durov moved to make clear his desire to build a smart, Telegram-anchored financial service.
AlphaTON, a Nasdaq microcap formerly known as Portage Biotech rebranded in September 2025. The “new” AlphaTON claims to be deploying Nvidia B200 GPU clusters to support Cocoon, Telegram’s confidential-compute AI network. The company’s pivot from oncology to crypto-finance and AI infrastructure was sudden. Plus AlphaTON’s CEO Brittany Kaiser (best known for Cambridge Analytica) has allegedly interacted with Russian political and business figures during earlier data-operations ventures. If the allegations are accurate, Ms. Kaiser has connections to Russia-linked influence and financial networks. Telegram is viewed by some organizations like Kucoin as a reliable operational platform for certain financial activities.
Telegram has positioned AlphaTON as a partner and developer in the Telegram ecosystem. Firms like Huione Guarantee allegedly used Telegram for financial maneuvers that resulted in criminal charges. Other alleged uses of the Telegram platform have included other illegal activities identified in the more than a dozen criminal charges for which Pavel Durov awaits trial in France. Telegram’s instant promotion of AlphaTON, combined with the firm’s new ability to raise hundreds of millions, points to a coordinated strategy to build an AI-enabled financial services layer using Cocoon’s VAIC or virtual artificial intelligence complex.
The message seems clear. Telegram is not merely launching a distributed AI compute service; it is enabling a low latency, secrecy enshrouded AI-crypto financial construct. Telegram and AlphaTON both see an opportunity to profit from a fusion of distributed AI, cross jurisdictional operation, and a financial pay off from transactions at scale. For me and my research team, the AlphaTON tie-up signals that Telegram’s next frontier may blend decentralized AI, speculative finance, and actors operating far from traditional regulatory guardrails.
In my monograph “Telegram Labyrinth” (available only to law enforcement, US intelligence officers, and cyber attorneys in the US), Telegram requires close monitoring and a new generation of intelware software. Yesterday’s tools were not designed for what Telegram is deploying itself and with its partners. Thank you.
Stephen E Arnold, December 5, 2025, 1034 am US Eastern time
AI Bubble? What Bubble? Bubble?
December 5, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I read “JP Morgan Report: AI Investment Surge Backed by Fundamentals, No Bubble in Sight.” The “report” angle is interesting. It implies unbiased, objective information compiled and synthesized by informed individuals. The content, however, strikes me as a bit of fancy dancing.
Here’s what strikes me as the main point:
A recent JP Morgan report finds the current rally in artificial intelligence (AI) related investments to be justified and sustainable, with no evidence of a bubble forming at this stage.
Feel better now? I don’t. The report strikes me as bank marketing with a big dose of cooing sounds. You know, cooing like a mother to her month old baby. Does the mother makes sense? Nope. The point is that warm cozy feeling that the cooing imparts. The mother knows she is doing what is necessary to reduce the likelihood of the baby making noises for sustained periods. The baby knows that mom’s heart is thudding along and the comfort speaks volumes.

Financial professionals in Manhattan enjoy the AI revolution. They know there is no bubble. I see bubbles (plural). Thanks, MidJourney. Good enough.
Sorry. The JP Morgan cooing is not working for me.
The write up says, quoting the estimable financial institution:
“The ingredients are certainly in place for a market bubble to form, but for now, at least, we believe the rally in AI-related investments is justified and sustainable. Capex is massive, and adoption is accelerating.”
What about this statement in the cited article?
JP Morgan contrasts the current AI investment environment to previous speculative cycles, noting the absence of cheap speculative capital or financial structures that artificially inflate prices. As AI investment continues, leverage may increase, but current AI spending is being driven by genuine earnings growth rather than assumptions of future returns.
After stating the “no bubble” argument three times, I think I understand.
Several observations:
- JP Morgan needed to make a statement that the AI data center thing, the depreciation issue, the power problem, and the potential for an innovation that derails the current LLM-type of processing are not big deals. These issues play no part in the non-bubble environment.
- The report is a rah rah for AI. Because there is no bubble, organizations should go forward and implement the current versions of smart software despite their proven “feature” of making up answers and failing to handle many routine human-performed tasks.
- The timing is designed to allow high net worth people a moment to reflect upon the wisdom of JP Morgan and consider moving money to the estimable financial institution for shepherding in what others think are effervescent moments.
My view: Consider the problems OpenAI has: [a] A need for something that knocks Googzilla off the sidewalk on Shoreline Drive and [b] more cash. Amazon — ever the consumer’s friend — is involved in making its own programmers use its smart software, not code cranked out by a non-Amazon service. Plus, Amazon is in the building mode, but it has allegedly government money to spend, a luxury some other firms are denied. Oracle is looking less like a world beater in databases and AI and more of a media-type outfit. Perplexity is probably perplexed because there are rumors that it may be struggling. Microsoft is facing some backlash because of its [a] push to make Copilot everyone’s friend and [b] dealing with the flawed updates to its vaunted Windows 11 software. Gee, why is FileManager not working? Let’s ask Copilot. On the other hand, let’s not.
Net net: JP Morgan is marketing too hard, and I am not sure it is resonating with me as unbiased and completely objective. As sales collateral, the report is good. As evidence there is no bubble, nope.
Stephen E Arnold, December 5, 2025
Mid Tier Consulting Firm Labels AI As a Chaos Agent.
December 5, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
A mid tier consulting firm (Forrester) calls smart software a chaos agent. Is the company telling BAIT (big AI tech) firms not to hire them for consulting projects? I am a dinobaby. When I worked at a once big time blue chip outfit, labeling something that is easy to sell a problem was not a standard practice. But what do I know? I am a dinobaby.
The write up in the content marketing-type publication is not exactly a sales pitch. Could it be a new type of article? Perhaps it is an example of contrarianism and a desire to make sure people know that smart software is an expensive boondoggle? I noted a couple of interesting statements in “Forrester: Gen AI Is a Chaos Agent, Models Are Wrong 60% of the Time.”
Sixty percent is, even with my failing math skills, is more than half of something. I think the idea is that smart software is stupid, and it gets an F for failure. Let’s look at a couple of statements from the write up:
Forrester says, gen AI has become that predator in the hands of attackers: The one that never tires or sleeps and executes at scale. “In Jaws, the shark acts as the chaos agent,” Forrester principal analyst Allie Mellen told attendees at the IT consultancy firm’s 2025 Security and Risk Summit. “We have a chaos agent of our own today… And that chaos agent is generative AI.”
This is news?
How about this statement?
Of the many studies Mellen cited in her keynote, one of the most damning is based on research conducted by the Tow Center for Digital Journalism at Columbia University, which analyzed eight different AI models, including ChatGPT and Gemini. The researchers found that overall, models were wrong 60% of the time; their combined performance led to more failed queries than accurate ones.
I think it is fair to conclude that Forrester is not thrilled with smart software. I don’t know if the firm uses AI or just reads about AI, but its stance is crystal clear. Need proof? A Forrester wizard recycled research that says “specialized enterprise agents all showed systemic patterns of failure. Top performers completed only 24% of tasks autonomously.
Okay, that means today’s AI gets an F. How do the disappointed parents at BAIT outfits cope with Claude, Gemini, and Copilot getting sent to a specialized school? My hunch is that the leadership in BAIT firms will ignore the criticism, invest in data centers, and look for consultants not affiliated with an outfit that dumps trash at their headquarters.
Forrester trots out a solution of course. The firm does sell time and expertise. What’s interesting is that Venture Beat rolled out some truisms about smart software, including buzzwords like red team and machine speed.
Net net: AI will be wrong most of the time. AI will be used by bad actors to compromise organizations. AI gets an F; threat actors find that AI delivers a slam dunk A. Okay, which is it? I know. It’s marketing.
Stephen E Arnold, December 5, 2025
Apples Misses the AI Boat Again
December 4, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Apple and Telegram have a characteristic in common. Both did not recognize the AI boomlet that began in 2020 or so. Apple was thinking about Granny scarfs that could hold an iPhone and working out ways to cope with its dependence on Chinese manufacturing. Telegram was struggling with the US legal system and trying to create a programming language that a mere human could use to code a distributed application.
Apple’s ship has sailed, and it may dock at Google’s Gemini private island or it could decide to purchase an isolated chunk of real estate and build its de-perplexing AI system at that location.

Thanks, MidJourney. Good enough.
I thought about missing a boat or a train. The reason? I read “Apple AI Chief John Giannandrea Retiring After Siri Delays.” I simply don’t know who has been responsible for Apple AI. Siri did not work when I looked at it on my wife’s iPhone many years ago. Apparently it doesn’t work today. Could that be a factor in the leadership changes at the Tim Apple outfit?
The write up states:
Giannandrea will serve as an advisor between now and 2026, with former Microsoft AI researcher Amar Subramanya set to take over as vice president of AI. Subramanya will report to Apple engineering chief Craig Federighi, and will lead Apple Foundation Models, ML research, and AI Safety and Evaluation. Subramanya was previously corporate vice president of AI at Microsoft, and before that, he spent 16 years at Google.
Apple will probably have a person who knows some people to call at Softie and Google headquarters. However, when will the next AI boat arrive. Apple excelled at announcing AI, but no boat arrived. Telegram has an excuse; for example, our owner Pavel Durov has been embroiled in legal hassles and arm wrestling with the reality that developing complex applications for the Telegram platform is too difficult. One would have thought that Apple could have figured out a way to improve Siri, but it apparently was lost in a reality distortion field. Telegram didn’t because Pavel Durov was in jail in Paris, then confined to the country, and had to report to the French judiciary like a truant school boy. Apple just failed.
The write up says:
Giannandrea’s departure comes after Apple’s major iOS 18 Siri failure. Apple introduced a smarter, “?Apple Intelligence?” version of ?Siri? at WWDC 2024, and advertised the functionality when marketing the iPhone 16. In early 2025, Apple announced that it would not be able to release the promised version of ?Siri? as planned, and updates were delayed until spring 2026. An exodus of Apple’s AI team followed as Apple scrambled to improve ?Siri? and deliver on features like personal context, onscreen awareness, and improved app integration. Apple is now rumored to be partnering with Google for a more advanced version of ?Siri? and other ?Apple Intelligence? features that are set to come out next year.
My hunch is that grafting AI into the bizarro world of the iPhone and other Apple computing devices may be a challenge. Telegram’s solution is to not do hardware. Apple is now an outfit distinguishing itself by missing the boat. When does the next one arrive?
Stephen E Arnold, December 4, 2025
Microsoft Demonstrates Its Commitment to Security. Right, Copilot
December 4, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read on November 20, 2025, an article titled “Critics Scoff after Microsoft Warns AI Feature Can Infect Machines and Pilfer Data.” My immediate reaction was, “So what’s new?” I put the write up aside. I had to run an errand, so I grabbed the print out of this Ars Technica story in case I had to wait for the shop to hunt down my dead lawn mower.

A hacking club in Moscow celebrates Microsoft’s decision to enable agents in Windows. The group seems quite happy despite sanctions, food shortages, and the special operation. Thanks, MidJourney. Good enough.
I worked through the short write up and spotted a couple of useful (if true) factoids. It may turn out that the information in this Ars Technica write up provide insight about Microsoft’s approach to security. If I am correct, threat actors, assorted money laundering outfits, and run-of-the-mill state actors will be celebrating. If I am wrong, rest easy. Cyber security firms will have no problem blocking threats — for a small fee of course.
The write up points to what the article calls a “warning” from Microsoft on November 18, 2025. The report says:
an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data
Yep, Ars Technica then puts a cherry on top with this passage:
Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”
But don’t worry. Users can use these Copilot actions:
if you understand the security implications.
Wow, that’s great. We know from the psycho-pop best seller Thinking Fast and Slow that more than 80 percent of people cannot figure out how much a ball costs if the total is $1.10 and the ball costs one dollar more. Also, Microsoft knows that most Windows users do not disable defaults. I think that even Microsoft knows that turning on agentic magic by default is not a great idea.
Nevertheless, this means that agents combined with large language models are sparking celebrations among the less trustworthy sectors of those who ignore laws and social behavior conventions. Agentic Windows is the new theme part for online crime.
Should you worry? I will let you decipher this statement allegedly from Microsoft. Make up your own mind, please:
“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs,” Microsoft said. “Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”
I thought this sub head in the article exuded poetic craft:
Like macros on Marvel superhero crack
The article reports:
Microsoft’s warning, one critic said, amounts to little more than a CYA (short for cover your ass), a legal maneuver that attempts to shield a party from liability. “Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious,” critic Reed Mideke said. “The solution? Shift liability to the user. Just like every LLM chatbot has a ‘oh by the way, if you use this for anything important be sure to verify the answers” disclaimer, never mind that you wouldn’t need the chatbot in the first place if you knew the answer.”
Several observations are warranted:
- How about that commitment to security after SolarWinds? Yeah, I bet Microsoft forgot that.
- Microsoft is doing what is necessary to avoid the issues that arise when the Board of Directors has a macho moment and asks whoever is the Top Dog at the time, “What about the money spent on data centers and AI technology? You know, How are you going to recoup those losses?
- Microsoft is not asking its users about agentic AI. Microsoft has decided that the future of Microsoft is to make AI the next big thing. Why? Microsoft is an alpha in a world filled with lesser creatures. The answer? Google.
Net net: This Ars Technica article makes crystal clear that security is not top of mind among Softies. Hey, when’s the next party?
Stephen E Arnold, December 4, 2025
Titanic Talk: This Ship Will Not Fail
December 4, 2025
It’s too big to fail! How many times have we heard that phrase? There’s another common expression that makes more sense: The bigger they are the harder they fall. On his blog, Will Gallego writes about that idea: “Big Enough To Fail.” Through a lot of big words (intelligently used BTW), Gallego explains that big stuff fails all the time.
It’s actually a common occurrence, because crap happens. Outages daily occur, Mother Nature shows her wraith, acts of God happen, and systems fail due to mistakes. Gallego makes the observation that we’ve accepted these issues and he explains why:
- “It’s so exceptional (or feels that way). This is less so about frequency but that when a company becomes so big you just assume they’re impervious to failure, a shock and awe to the impossible.
- The lack of choices in services informs your response. Are there other providers? Sure, but with the continuous consolidation of businesses, we have fewer options every day.
- You’re locked in on your choices. Are you going to knock on Google’s door and complain, take three years to move out of one virtual data center and into another, while retraining your staff, updating your internal documents, and updating your code? No, you’re likely not.
- Failover is costly. Similarly, those at the sharp end know that the level of effort in building failover for something like this is frequently impractical. It would cost too much to set up, to maintain as developers, it would remove effort that could be put towards new features, and the financial cost backing that might be considered infeasible.
- The brittleness is everywhere. The level of complexity and the highly coupled nature of interconnected services means we’ve become brittle to failures. Doubly so when those services are the underpinnings of what we build on. “The internet is down today” as the saying goes, despite the internet having no principle nucleus. This is considered acceptable.
- We’re all in it together. When a service as large as these goes down, there’s a good chance we’re seeing so many failures in so many places that it becomes reasonable to also be down. Your competitors are likely down, your customers might be – there might be too much failure to go around to cast it in any one direction.
Ultimately, this leads into resilience engineering which is “reframing how we look incidents.” Gallego ends the article by saying we should take everything in stride, show some show patience, and give a break to the smaller players in the game. His approach is more human aka realistic unlike the egotistical rants that sank the Titanic. It’s unsinkable or it won’t fail! Yes, it will. Prepare for the eventualities. Whitney Grace, December 4, 2025
A New McKinsey Report Introduces New Jargon for Its Clients
December 3, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I read “Agents, Robots, and Us: Skill Partnerships in the Age of AI.” The write up explains that lots of employees will be terminated. Think of machines displacing seamstresses. AI is going to do that to jobs, lots of jobs.
I want to focus on a different aspect of the McKinsey Global Institute Report (a PR and marketing entity not unlike Telegram’s TON Foundation in my view).

Thanks, Vencie. Your cartoon contains neither females nor minorities. That’s definitely a good enough approach. But you have probably done a number on a few graphic artists.
First, the report offers you the potential client an opportunity to use McKinsey’s AI chatbot. The service is a test, but I have a hunch that it is not much of a test. The technology will be deployed so that McKinsey can terminate those who underperform in certain work related tasks. The criteria for keeping one’s job at a blue chip consulting firm varies from company to company. But those who don’t quit to find greener or at least less crazy pastures will now work under the watchful eye of McKinsey AI. It takes a great deal of time to write a meaningful analysis of a colleague’s job performance. Let AI do it with exceptions made for “special” hires of course. Give it a whirl.
Second, the report what I call consultant facts. These are statements which link the painfully obvious with a rationale. Let me give you an example from this pre-Thanksgiving sales document. McKinsey states:
Two thirds of US work hours require only nonphysical capabilities
The painfully obvious: Most professional work is not “physical.” That means 67 percent of an employee’s fully loaded cost can be shifted to smart or semi-smart, good enough AI agentic systems. Then the obvious and the implication of financial benefits is supported by a truly blue chip chart. I know because as you can see, the graphics are blue. Here’s a segment of the McKinsey graph:

Notice that the chart is presented so that a McKinsey professional can explain the nested bar charts and expand on such data as “5 percent of a health care workforce can be agentized.” Will that resonate with hospital administrators working for roll up of individual hospitals. That’s big money. Get the AI working in one facility and then roll it out. Boom. An upside that seems credible. That’s the key to the consultant facts. Additional analysis is needed to tailor these initial McKinsey graph data to a specific use case. As a sales hook, this works and has worked for decades. Fish never understand hooks with plastic bait. Deer never quite master automobiles and headlights.
Third, the report contains sales and marketing jargon for 2026 and possibly beyond. McKinsey hopes for decades to come I think. Here’s a small selection of the words that will be used, recycled, and surface in lectures by AI experts to quite large crowds of conference attendees:
AI adjacent capabilities
AI fluency
Embodied AI
HMC or human machine collaboration
High prevalence skills
Human-agent-robot roles
technical automation potential
If you cannot define these, you need to hire McKinsey. If you want to grow as a big time manager, send that email or FedEx with a handwritten note on your engraved linen stationery.
Fourth, some humans will be needed. McKinsey wants to reassure its clients that software cannot replace the really valuable human. What do you think makes a really valuable worker beyond AI fluency? [a] A professional who signed off on a multi-million McKinsey consulting contract? [b] A person who helped McKinsey consultants get the needed data and interviews from an otherwise secretive client with compartmentalized and secure operating units? [b] A former McKinsey consultant now working for the firm to which McKinsey is pitching an AI project.
Fifth, the report introduces a new global index. The data in this free report is unlikely to be free in the future. McKinsey clients can obtain these data. This new global index is called the Skills Change Index. Here’s an example. You can get a bit more marketing color in the cited report. Just feast your eyes on this consultant fact packed chart:

Several comments. The weird bubble in the right hand page’s margin is your link to the McKinsey AI system. Give it a whirl, please. Look at the wonderland of information in a single chart presented in true blue, “just the facts, mam” style. The hawk-eyed will see that “leadership” seems immune to AI. Obviously senior management smart enough to hire McKinsey will be AI fluent and know the score or at least the projected financial payoff resulting from terminating staff who fail to up their game when robots do two thirds of the knowledge workers’ tasks.
Why has McKinsey gone to such creative lengths to create an item like this marketing collateral? Multiple teams labored on this online brochure. Graphic designers went through numerous versions of the sliding panels. McKinsey knows there is money in those AI studies. The firm will apply its intellectual method to the wizards who are writing checks to AI companies to build big data centers. Even Google is hedging its bets by packaging its data centers as providers to super wary customers like NATO. Any company can benefit from AI fluency-centric efficiency inputs. Well, not any. The reason is that only companies who can pay McKinsey fees quality to be clients.
The 11 people identified as the authors have completed the equivalent of a death march. Congratulations. I applaud you. At some point in your future career, you can look back on this document and take pride in providing a road map for companies eager to dump human workers for good enough AI systems. Perhaps one of you will be able to carry a sign in a major urban area that calls attention to your skills? You can look back and tell your friends and family, “I was part of this revolution.” So Happy holidays to you, McKinsey, and to the other blue chip firms exploiting good enough smart software.
Stephen E Arnold, December 3, 2025
An SEO Marketing Expert Is an Expert on Search: AI Is Good for You. Adapt
December 2, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I found it interesting to learn that a marketer is an expert on search and retrieval. Why? The expert has accrued 20 years of experience in search engine optimization aka SEO. I wondered, “Was this 20 years of diverse involvement in search and retrieval, or one year of SEO wizardry repeated 20 times?” I don’t know.
I spotted information about this person’s view of search in a newsletter from a group whose name I do not know how to pronounce. (I don’t know much.) The entity does business as BrXnd.ai. After some thought (maybe two seconds) I concluded that the name represented the concept “branding” with a dollop of hipness or ai.
Am I correct? I don’t know. Hey, that’s three admissions of intellectual failure a 10 seconds. Full disclosure: I know does not care.

Agentic SEO will put every company on the map. Relevance will become product sales. The methodology will be automated. The marketing humanoids will get fat bonuses. The quality of information available will soar upwards. Well, probably downwards. But no worries. Thanks, Venice.ai. Good enough.
The article is titled “The Future of Search and the Death of Links // BRXND Dispatch vol 96.” It points to a video called “The Future of Search and the Death of Links.” You can view the 22 minute talk at this link. Have at it, please.
Let me quote from the BrXnd.ai write up:
…we’re moving from an era of links to an era of recommendations. AI overviews now appear on 30-40% of search results, and when they do, clicks drop 20-40%. Google’s AI Mode sends six times fewer clicks than traditional search.
I think I have heard that Google handles 75 to 85 percent of global searches. If these data are on the money or even close to the eyeballs Google’s advertising money machine flogs, the estimable company will definitely be [a] pushing for subscriptions to anything and everything it once subsidized with oodles of advertisers’ cash; [b] sticking price tags on services positioned as free; [c] charging YouTube TV viewers the way disliked cable TV companies squeezed subscribers for money; [d] praying to the gods of AI that the next big thing becomes a Google sandbox; and [e] embracing its belief that it can control governments and neuter regulators with more than 0.01 milliliters of testosterone.
The write up states:
When search worked through links, you actively chose what to click—it was manual research, even if imperfect. Recommendations flip that relationship. AI decides what you should see based on what it thinks it knows about you. That creates interesting pressure on brands: they can’t just game algorithms with SEO tricks anymore. They need genuine value propositions because AI won’t recommend bad products. But it also raises questions about what happens to our relationship with information when we move from active searching to passive receiving.
Okay, let’s work through a couple of the ideas in this quoted passage.
First, clicking on links is indeed semi-difficult and manual job. (Wow. Take a break from entering 2.3 words and looking for a likely source on the first page of search results. Demanding work indeed.) However, what if those links are biased by inept programmers, the biases of the data set processed by the search machine, or intentionally manipulated to weaponize content to achieve a goal?
Second, the hook for the argument is that brands can no longer can game algorithms. Bid farewell to keyword stuffing. There is a new game in town: Putting a content object in as many places as possible in multiple formats, including the knowledge nugget delivered by TikTok-type services. Most people it seems don’t think about this and rely on consultants to help them.
Finally, the notion of moving from clicking and reading to letting a BAIT (big AI tech) company define one’s knowledge universe strikes me as something that SEO experts don’t find problematic. Good for them. Like me, the SEO mavens think the business opportunities for consulting, odd ball metrics, and ineffectual work will be rewarding.
I appreciate BrXnd.ai for giving me this glimpse of a the search and retrieval utopia I will now have available. Am I excited? Yeah, sure. However, I will not be dipping into the archive of the 95 previous issues of BrXnd “dispatches.” I know this to be a fact.
Stephen E Arnold, December 2, 2025

