Microsoft Knows How to Avoid an AI Bubble: Listen Up, Grunts, Discipline Now!
November 18, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I relish statements from the leadership of BAIT (big AI tech) outfits. A case in point is Microsoft. The Fortune story “AI Won’t Become a Bubble As Long As Everyone Stays Thoughtful and Disciplined, Microsoft’s Brad Smith Says.” First, let’s consider the meaning of the word “everyone.” I navigated to Yandex.com and used its Alice smart software to get the definition of “everyone”:
The word “everyone” is often used in social and organizational contexts, and to denote universal truths or principles.
That’s a useful definition. Universal truths and principles. If anyone should know, it is Yandex.

Thanks, Venice.ai. Good enough, but the Russian flag is white, blue, and red. Your inclusion of Ukraine yellow was one reason why AI is good enough, not a slam dunk.
But isn’t there a logical issue with the subjective flag “if” and then a universal assertion about everyone? I find the statement illogical. It mostly sounds like English, but it presents a wild and crazy idea at a time when agreement about anything is quite difficult to achieve. Since I am a dinobaby, my reaction to the Fortune headline is obviously out of touch with the “real” world as it exists at Fortune and possibly Microsoft.
Let’s labor forward with the write up, shall we?
I noted this statement in the cited article attributed to Microsoft’s president Brad Smith:
“I obviously can’t speak about every other agreement in the AI sector. We’re focused on being disciplined but being ambitious. And I think it’s the right combination,” he said. “Everybody’s going to have to be thoughtful and disciplined. Everybody’s going to have to be ambitious but grounded. I think that a lot of these companies are [doing that].”
It was not Fortune’s wonderful headline writers who stumbled into a logical swamp. The culprit or crafter of the statement was “1000 Russian programmers did it” Smith. It is never Microsoft’s fault in my view.
But isn’t this the AI mantra of go really fast, don’t worry about the future, and break things?
Mr. Smith, according to the article, said:
“We see ongoing growth in demand. That’s what we’ve seen over the past year. That’s what we expect today, and frankly our biggest challenge right now is to continue to add capacity to keep pace with it.”
I wonder if Microsoft’s hiring of social media influencers is related to generating demand and awareness, not getting people to embrace Copilot. Despite its jumping off the starting line first, Microsoft is now lagging behind its “partner” OpenAI and two or three other BAIT entities.
The Fortune story includes supporting information from a person who seems totally, 100 percent objective. Here’s the quote:
At Web Summit, he met Anton Osika, the CEO of Lovable, a vibe-coding startup that lets anyone create apps and software simply by talking to an AI model. “What they’re doing to change the prototyping of software is breathtaking. As much as anything, what these kinds of AI initiatives are doing is opening up technology opportunities for many more people to do more things than they can do before…. This will be one of the defining factors of the quarter century ahead…”
I like the idea of Microsoft becoming a “defining factor” for the next 25 years. I would raise the question, “What about the Google? Is it chopped liver?”
Several observations:
- Mr. Smith’s informed view does not line up with hiring social media influencers to handle the “growth and demand.” My hunch is that Microsoft fears that it is losing the consumer perception of Microsoft as the really Big Dog. Right now, that seems to be the super-sized OpenAI and the mastiff-like Gemini.
- The craziness of “everybody” illustrates a somewhat peculiar view of consensus today. Does everybody include those fun-loving folks fighting in the Russian special operation or the dust ups in Sudan to name two places where “everybody” could be labeled just plain crazy?
- Mr. Smith appears to conflate putting Copilot in Notepad and rolling out Clippy in Yeezies with substantive applications not prone to hallucinations, mistakes, and outputs that could get some users of Excel into some quite interesting meetings with investors and clients.
Net net: Yep, everybody. Not going to happen. But the idea is a-thoughtful, which is interesting to me.
Stephen E Arnold, November 18, 2025
AI Content: Most People Will Just Accept It and Some May Love It or Hum Along
November 18, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The trust outfit Thomson Reuters summarized a survey as real news. The write up sports the title “Are You Listening to Bots? Survey Shows AI Music Is Virtually Undetectable.” Truth be told, I wanted the magic power to change the headline to “Are You Reading News? Survey Shows AI Content Is Virtually Undetectable.” I have no magic powers, but I think the headline I just made up is going to appear in the near future.

Elvis in heaven looks down on a college dance party and realizes that he has been replaced by a robot. Thanks, Venice.ai. Wow, your outputs are deteriorating in my opinion.
What does the trust outfit report about a survey? I learned:
A staggering 97% of listeners cannot distinguish between artificial intelligence-generated and human-composed songs, a Deezer–Ipsos survey showed on Wednesday, underscoring growing concerns that AI could upend how music is created, consumed and monetized. The findings of the survey, for which Ipsos polled 9,000 participants across eight countries, including the U.S., Britain and France, highlight rising ethical concerns in the music industry as AI tools capable of generating songs raise copyright concerns and threaten the livelihoods of artists.
I won’t trot out my questions about sample selection, demographics, and methodology. Let’s just roll with what the “trust” outfit presents as “real” news.
I noted this series of factoids:
- “73% of respondents supported disclosure when AI-generated tracks are recommended”
- “45% sought filtering options”
- “40% said they would skip AI-generated songs entirely.”
- Around “71% expressed surprise at their inability to distinguish between human-made and synthetic tracks.”
Isn’t that last dot point the major finding? More than two thirds were surprised that they could not differentiate synthesized, digitized music from humanoid performers.
The study means that those who have access to smart software and whatever music generation prompt expertise is required can bang out chart toppers. Whip up some synthetic video and go on tour. Years ago I watched a recreation of Elvis Presley. Judging from the audience reaction, no one had any problem doing the willing suspension of disbelief. No opium required at that event. It was the illusion of the King, not the fried banana version of him that energized the crowd.
My hunch is that AI generated performances will become a very big thing. I am assuming that the power required to make the models work is available. One of my team told me that “Walk My Walk” by Breaking Rust hit the Billboard charts.
The future is clear. First, customer support staff get to find their future elsewhere. Now the kind hearted music industry leadership will press the delete button on annoying humanoid performers.
My big take away from the “real” news story is that most people won’t care or know. Put down that violin and get a digital audio workstation. Did you know Mozart got in trouble when he was young for writing math and music on the walls in his home? Now he can stay in his room and play with his Mac Mini computer.
Stephen E Arnold, November 18, 2025
Microsoft Could Be a Microsnitch
November 14, 2025
Remember when you were younger and the single threat of, “I’m going to tell!” was enough to send chills through your body? Now Microsoft plans to do the same thing except on an adult level. Lifehacker shares that “Microsoft Teams Will Soon Tell Your Boss When You’re Not In The Office.” The article makes an accurate observation that since the pandemic most jobs can be done from anywhere with an Internet connection.
Since the end of quarantine, offices are fighting to get their workers back into physical workspaces. Some of them have implemented hybrid working, while others have become more extreme by counting clock-ins and badge swipes. Microsoft is adding its own technology to the fight by making it possible to track remote workers.
“As spotted by Tom’s Guide, Microsoft Teams will roll out an update in December that will have the option to report whether or not you’re working from your company’s office. The update notes are sparse on details, but include the following: ‘When users connect to their organization’s [wifi], Teams will soon be able to automatically update their work location to reflect the building they’re working from. This feature will be off by default. Tenant admins will decide whether to enable it and require end-users to opt-in.’”
Microsoft whitewashed the new feature by suggesting employees use it to find their teammates. The article’s author says it all:
“But let’s be real. This feature is also going to be used by companies to track their employees, and ensure that they’re working from where they’re supposed to be working from. Your boss can take a look at your Teams status at any time, and if it doesn’t report you’re working from one of the company’s buildings, they’ll know you’re not in the office. No, the feature won’t be on by default, but if your company wants to, your IT can switch it on, and require that you enable it on your end as well.”
It is ridiculous to demand that employees return to offices, but at the same time many workers aren’t actually doing their job. The professionals are quiet quitting, pretending to do the work, and ignoring routine tasks. Surveillance seems to be a solution of interest.
It would be easier if humans were just machines. You know, meat AI systems. Bummer, we’re human. If we can get away with something, many will. But is Microsoft going too far here to make sales to ineffective “leadership”? Workers aren’t children, and the big tech company is definitely taking the phrase, “I’m going to tell!” to heart.
Whitney Grace, November 14, 2025
Walmart Plans To Change Shopping With AI
November 14, 2025
Walmart shocked the world when it deployed robots to patrol aisles. The purpose of the robots wasn’t to steal jobs but to report outages and messes to employees. Walmart has since backtracked on the robots, but it is turning to AI to enhance and forever alter the consumer shopping experience. According to MSN, “Walmart’s Newest Plan Could Change How You Shop Forever.”
Walmart plans to make the shopping experience smarter by using OpenAI’s ChatGPT. Samsung is also part of this partnership that will offer product suggestions to shoppers of both companies. The idea of incorporating ChatGPT takes the search bar and search query pattern to the next level:
“Far from just a search bar and a click experience, Walmart says the AI will learn your habits, can predict what you need, and even plan your shopping before you realize you’re in need of it. ‘Through AI-first shopping, the retail experience shifts from reactive to proactive as it learns, plans, and predicts, helping customers anticipate their needs before they do,’ Walmart stated in the release.”
Amazon, Walmart, and other big retailers have been tracking consumer habits for years and sending them coupons and targeted ads. This is a more intrusive way to make consumers spend money. What will they think of next? How about Kroger’s smart price displays? These can deliver dynamic prices to “help” the consumer and add a bit more cash to the retailer’s till. Yeah, AI is great.
Whitney Grace, November 14, 2025
Sweet Dreams of Data Centers for Clippy Version 2: The Agentic Operation System
November 13, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
If you have good musical recall, I want you to call up the tune for “Sweet Dreams (Are Made of This)” by the Eurythmics. Okay, with that sound track buzzing through your musical memory, put it on loop. I want to point you to two write ups about Microsoft’s plans for a global agentic operating system and its infrastructure. From hiring social media influencers to hitting the podcast circuit, Microsoft is singing its own songs to its often reluctant faithful. Let’s turn down “Sweet Dreams” and crank up the MSFT chart climbers.

Trans-continental AI infrastructure. Will these be silent, reduce pollution, and improve the life of kids who live near the facilities? Of course, because some mommies will say, “Just concentrate and put in your ear plugs. I am not telling you again.” Thanks, Venice. Good enough after four tries. Par for the AI course.
The first write up is by the tantalizingly named consulting firm doing business as SemiAnalysis. I smile every time I think about how some of my British friends laugh when they see a reference to a semi-truck. One quipped, “What, you don’t have complete trucks in the US?” That same individual would probably say in response to the company name SemiAnalysis, “What, you don’t have a complete analysis in the US?” I have no answer to either question, but “SemiAnalysis” does strike me as more amusing a moniker than Booz, Allen, McKinsey, or Bain.
You can find a 5,000-word-plus segment of a report with the remarkable title “Microsoft’s AI Strategy Deconstructed – From Energy to Tokens” online. To get the complete report, presumably not the semi report, one must subscribe. Thus, the document is content marketing, but I want to highlight three aspects of the MBA-infused write up. These reflect my biases, so if you are not into dinobaby think, click away, gentle reader.
The title “Microsoft’s AI Strategy Deconstructed” is a rah rah rah for Microsoft. I noted:
- Microsoft was first, now it is fifth, and it will be number one. The idea is that the inventor of Bob and Clippy was the first out of the gate with “AI is the future.” It stands fifth in terms of one survey’s ranking of usage. This “Microsoft’s AI Strategy Deconstructed” asserts that it is going to be a big winner. My standard comment to this blending of random data points and some brown nosing is, “Really?”
- Microsoft is building or at least promising to build lots of AI infrastructure. The write up does not address the very interesting challenge of providing power at a manageable cost to these large facilities. Aerial photos of some of the proposed data centers look quite a bit like airport runways stuffed with bland buildings filled with large numbers of computing devices. But power? A problem is looming, it seems.
- The write up does not pay much attention to the Google. I think that’s a mistake. From data centers in boxes to plans to put these puppies in orbit, the Google has been doing infrastructure, including fiber optic, chips, and interesting investments like its interest in digital currency mining operations. But Google appears to be of little concern to the Microsoft-tilted semi analysis from SemiAnalysis. Remember, I am a dinobaby, so my views are likely to rock the young wizards who crafted this “Microsoft is going to be a Big Dog.” Yeah, but the firm did Clippy. Remember?
The second write up picks up on the same theme: Microsoft is going to do really big things. “Microsoft Is Building Datacenter Superclusters That Span Continents” explains that the “100 trillion parameter models of the near future can’t be built in one place” and that the facilities will be “two stories tall, use direct-to-chip liquid cooling,” and consume “almost zero water.”
The write up adds:
Microsoft is famously one of the few hyperscalers that’s standardized on Nvidia’s InfiniBand network protocol over Ethernet or a proprietary data fabric like Amazon Web Service’s EFA for its high-performance compute environments. While Microsoft has no shortage of options for stitching datacenters together, distributing AI workloads without incurring bandwidth- or latency-related penalties remains a topic of interest to researchers.
The real estate broker Arvin Haddad uses the phrase “Can you spot the flaw?” Okay, let me ask, “Can you spot the flaw in Microsoft’s digital mansions?” You have five seconds. Okay. What happens if the text-centric technology upon which current AI efforts are based gets superseded by [a] a technical breakthrough that renders TensorFlow approaches obsolete, expensive, and slow, or [b] China dumping its chip and LLM technology into the market as cheap or open source? My thought is that the data centers that span continents may end up like the Westfield San Francisco Centre as a home for pigeons, graffiti artists, and security guards.
Yikes.
Building for the future of AI may be like shooting at birds not in sight. Sure, a bird could fly through the pellets, but probably not if it is nesting in a pond a mile away.
Net net: Microsoft is hiring influencers and shooting where ducks will be. Sounds like a plan.
Stephen E Arnold, November 13, 2025
Dark Patterns Primer
November 13, 2025
Here is a useful explainer for anyone worried about scams brought to us by a group of concerned designers and researchers. The Dark Patterns Hall of Shame arms readers with its Catalog of Dark Patterns. The resource explores certain misleading tactics we all encounter online. The group’s About page tells us:
“We are passionate about identifying dark patterns and unethical design examples on the internet. Our [Hall of Shame] collection serves as a cautionary guide for companies, providing examples of manipulative design techniques that should be avoided at all costs. These patterns are specifically designed to deceive and manipulate users into taking actions they did not intend. HallofShame.com is inspired by Deceptive.design, created by Harry Brignull, who coined the term ‘Dark Pattern’ on 28 July 2010. And as was stated by Harry on Darkpatterns.org: The purpose of this website is to spread awareness and to shame companies that use them. The world must know its ‘heroes.’”
Can companies feel shame? We are not sure. The first page of the Catalog provides a quick definition of each entry, from the familiar Bait-and-Switch to the aptly named Privacy Zuckering (“service or a website tricks you into sharing more information with it than you really want to.”) One can then click through to real-world examples pulled from the Hall of Shame write-ups. Some other entries include:
“Disguised Ads. What’s a Disguised Ad? When an advertisement on a website pretends to be a UI element and makes you click on it to forward you to another website.
Roach Motel. What’s a roach motel? This dark pattern is usually used for subscription services. It is easy to sign up for it, but it’s much harder to cancel it (i.e. you have to call customer support).
Sneak into Basket. What’s a sneak into basket? When buying something, during your checkout, a website adds some additional items to your cart, making you take the action of removing it from your cart.
Confirmshaming. What’s confirmshaming? When a product or a service is guilting or shaming a user for not signing up for some product or service.”
One case of Confirmshaming: the pop-up Microsoft presents when one goes to download Chrome through Edge. Been there. See the post for the complete list and check out the extensive examples. Use the information to protect yourself or the opposite.
Cynthia Murrell, November 13, 2025
Someone Is Not Drinking the AI-Flavored Kool-Aid
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The future of AI is in the hands of the digital P.T. Barnums. A day or so ago, I wrote about Copilot in Excel. Allegedly a spreadsheet can be enhanced by Microsoft. Google is beavering away with a new enthusiasm for content curation. This is a short step to weaponizing what is indexed, what is available to Googlers and Mama, and what is provided to Google users. Heroin dealers do not provide consumer-oriented labels with ingredients.

Thanks, Venice.ai. Good enough.
Here’s another example of this type of soft control: “I’ll Never Use Grammarly Again — And This Is the Reason Every Writer Should Care.” The author makes clear that Grammarly, developed and operated from Ukraine, now wants to change her writing style. The essay states:
What once felt like a reliable grammar checker has now turned into an aggressive AI tool always trying to erase my individuality.
Yep, that’s what AI companies and AI repackagers will do: Use the technology to improve the human. What a great idea! Just erase the fingerprints of the human. Introduce AI drivel and lowest common denominator thinking. Human, the AI says, take a break. Go to the yoga studio or grab a latte. AI has you covered.
The essay adds:
Superhuman [Grammarly’s AI solution for writers] wants to manage your creative workflow, where it can predict, rephrase, and automate your writing. Basically, a simple tool that helped us write better now wants to replace our words altogether. With its ability to link over a hundred apps, Superhuman wants to mimic your tone, habits, and overall style. Grammarly may call it personalized guidance, but I see it as data extraction wrapped with convenience. If we writers rely on a heavily AI-integrated platform, it will kill the unique voice, individual style, and originality.
One human dumped Grammarly, writing:
I’m glad I broke up with Grammarly before it was too late. Well, I parted ways because of my principles. As a writer, my dedication is towards original writing, and not optimized content.
Let’s go back to ubiquitous AI (some you know is there and other AI that operates in dark pattern mode). The object of the game for the AI crowd is to extract revenue and control information. By weaponizing information and making life easy, just think who will be in charge of many things in a few years. If you think humans will rule the roost, you are correct. But the number of humans pushing the buttons will be very small. These individuals have zero self awareness and believe that their ideas — no matter how far out and crazy — are the right way to run the railroad.
I am not sure most people will know that they are on a train taking them to a place they did not know existed and don’t want to visit.
Well, tough luck.
Stephen E Arnold, November 12, 2025
Temptation Is Powerful: Will Big AI Tech Take the Bait?
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I have been using the acronym BAIT for “big AI tech” in my talks. I find it an easy way to refer to the companies with the money and the drive to try to take over the use of software to replace most humans’ thinking. I want to raise a question, “Will BAIT take the bait?”
What is this lower case “bait”? In my opinion, lower case “bait” refers to information people and organizations would consider proprietary, secret, off limits, and out of bounds. Digital data about health, contracts, salaries, inventions, interpersonal relations, and similar categories of information would fall into the category of “none of your business” or something like “it’s secret.”

A calculating predator is about to have lunch. Thanks, Venice.ai. Not what I specified but good enough like most things in 2025.
Consider what happens when a large BAIT outfit gains access to the contents of a user’s mobile device, a personal computer, storage devices, images, and personal communications? What can a company committed to capturing information to make its smart software models more intelligent and better informed learn from these types of data? What if that data acquisition takes place in real time? In an organization or a personal life situation, an individual entity may not be able to cross tabulate certain data. The information is in the organization or the data stream for a household, but it is not connected. Data processing can acquire the information, perform the calculations, and “identify” the significant items. These can be used to predict or guess what response, service, action, or investment can be made.
Microsoft’s efforts with Copilot in Excel raise the possibility and opportunity to examine an organization’s or a person’s financial calculations as part of a routine “let’s make the Excel experience better.” If you don’t know that data are local or on a cloud provider server, access to that information may not be important to you. But are those data important to a BAIT outfit? I think those data are tempting, desirable, and ultimately necessary for the AI company to “learn.”
One possible scenario is for the BAIT company to tap into personal data while offering assurances that these types of information are not fodder for training smart software. Can people resist temptation? Some can. But others, with large amounts of money at stake, can’t.
Let’s consider a recent news announcement and then ask some hypothetical questions. I am just asking questions, and I am not suggesting that today’s AI systems are sufficiently organized to make use of the treasure trove of secret information. I do have enough experience to know that temptation is often hard to resist in a certain type of organization.
The article I noted today (November 6, 2025) is “Gemini Deep Research Can Tap into Your Gmail and Google Drive.” The write up reports what I assume to be accurate data:
After adding PDF support in May [2025], [Google] Gemini Deep Research can now directly tap information stored in your Gmail and Google Chat conversations, as well as Google Drive files…. Now, [Google] Deep Research can “draw on context from your [Google] Gmail, Drive and Chat and work it directly into your research.” [Google] Gemini will look through Docs, Slides, Sheets and PDFs stored in your Drive, as well as emails and messages across Google Workspace. [Emphasis added by Beyond Search for clarity]
Can Google resist the mouth watering prospect of using these data sources to train its large language models and its experimental AI technology?
There are some other hypotheticals to consider:
- What informational boundaries is Google allegedly crossing with this omnivorous approach to information?
- How can Google put meaningful barriers around certain information to prevent data leakage?
- What recourse do people or organizations have if Google’s smart software exposes sensitive information to a party not authorized to view these data?
- How will Google’s advertising algorithms use such data to shape or weaponize information for an individual or an organization?
- Will individuals know when a secret has been incorporated in a machine generated report for a government entity?
Somewhere in my reading I recall a statement attributed to Napoleon. My recollection is that in his letters or some other biographical document about Napoleon’s approach to war, he allegedly said something like:
Information is nine tenths of any battle.
The BAIT organizations are moving with purpose and possibly extreme malice toward systems and methods that give them access to data never meant to be used to train smart software. If Copilot in Excel happens and if Google processes data in their grasp, will these types of organizations be able to resist juicy, unique, high-calorie morsels of zeros and ones?
I am not sure these types of organizations can or will exercise self control. There is money and power and prestige at stake. Humans have a long track record of doing some pretty interesting things. Is this omnivorous taking of information wrapped in making one’s life easier an example of overreach?
Will BAIT outfits take the bait? Good question.
Stephen E Arnold, November 12, 2025
Innovation Cored, Diced, Cooked and Served As a Granny Scarf
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I do not pay much attention to Vogue, once a giant, fat fashion magazine. However, my trusty newsfeed presented this story to me this morning at 6:26 am US Eastern: “Apple and Issey Miyake Unite for the iPhone Pocket. It’s a Moment of Connecting the Dots.” I had no idea what an Issey Miyake was. I navigated to Yandex.com (a more reliable search service than Google which is going to bail out the sinking Apple AI rowboat) and learned:
Issey Miyake … the brand name under which designer clothing, shoes, accessories and perfumes are produced.
Okay, a Japanese brand selling collections of clothes, women’s clothes with pleating, watches, perfumes, and a limited edition of an Evian mineral water in bottles designed by someone somewhere, probably Southeast Asia.
But here’s the word that jarred me: Moment. A moment?
The Vogue write up explains:
It’s a moment of connecting the dots.
Moment? Huh.
Upon further investigation, the innovation is a granny scarf; that is, a knitted garment with a pocket for an iPhone. I poked around and here’s what the “moment” looks like:
Source: Engadget, November 2025
I don’t recall much about my great grandmother (my father’s mother had a mother; this person was called “Granny” or “Gussy”), but I know she was alive in 1958. She died at the age of 102 or 103. She knitted and tatted scarfs, odd little white cloths called antimacassars, and small circular or square items called doilies (singular “doily”).
Apple and the Japanese fashion icon have inadvertently emulated some of the outputs of my great grandmother “Granny” or “Gussy.” Were she, my grandmother, and my father alive, one or all of them would have taken legal action. But time makes us fools, and “the spirits of the wise sit in the clouds and mock” scarfs with pouches like an NBA bound baby kangaroo.
But the innovation, which may be either Miyake’s, Apple’s, or a combo brainstorm of Miyake and Apple, comes in short and long sizes. My Granny cranked out her knit confections like a laborer in a woolen mill in Ipswich in the 19th century. She gave her outputs away.
You can acquire this pinnacle of innovation for US $150 or US $230.
Several observations:
- Apple’s skinny phone flopped; Apple’s AI flopped. Therefore, Apple is into knitted scarfs to revivify its reputation for product innovation. Yeah, innovative.
- Next to Apple’s renaming Apple iTV as Apple TV, one may ask, “Exactly what is going on in Cupertino other than demanding that I log into an old iPhone I use to listen to podcasts?” Desperation gives off an interesting vibe. I feel it. Do you?
- Apple does good hardware. It does not do soft goods with the same élan. Has its leadership lost the thread?
Smell that desperation yet? Publicity hunger, the need to be fashionable and with it, and taking the hard edges off a discount Mac laptop.
Net net: I like the weird pink version, but why didn’t the geniuses behind the Genius Bar do the zippy orange of the new candy-bar but otherwise indistinguishable mobile device rolled out a short time ago? Orange? Not in the scarf palette.
Granny’s white did not make the cut.
Stephen E Arnold, November 11, 2025
Agentic Software: Close Enough for Horse Shoes
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I read a document that I would describe as tortured. The lingo was trendy. The charts and graphs sported trendy colors. The data gathering seemed to be a mix of “interviews” and other people’s research. Plus the write up was a bit scattered. I prefer the rigidity of old-fashioned organization. Nevertheless, I did spot one chunk of information that I found interesting.
The title of the research report (sort of an MBA- or blue chip consulting firm-type of document) is “State of Agentic AI: Founder’s Edition.” I think it was issued in March 2025, but with backdating popular, who knows. I had the research report in my files, and yesterday (November 3, 2025) I was gathering some background information for a talk I am giving on November 6, 2025. The document walked through data about the use of software to replace people. Actually, the smart software agents generally do several things according to the agent vendors’ marketing collateral. The cited document restated these items this way:
- Agents are set up to reach specific goals
- Agents are used to reason which means “break down their main goal … into smaller manageable tasks and think about the next best steps.”
- Agents operate without humans in India or Pakistan working invisibly behind the scenes
- Agents can consult a “memory” of previous tasks, “experiences,” work, etc.
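The four traits the vendors tout (a goal, decomposition into smaller tasks, autonomous execution, and a memory of prior work) can be sketched as a toy loop. This is a hypothetical illustration of the marketing claims, not any vendor’s actual implementation; every name below is made up.

```python
# A toy sketch of the four agent traits listed above. Hypothetical
# illustration only, not any vendor's actual implementation.

class ToyAgent:
    def __init__(self, goal):
        self.goal = goal      # trait 1: a specific goal
        self.memory = []      # trait 4: a "memory" of prior tasks

    def plan(self):
        # Trait 2: "reason" by breaking the main goal into smaller,
        # manageable tasks. A real agent would call an LLM here; the
        # sketch just splits the goal string on commas.
        return [step.strip() for step in self.goal.split(",")]

    def run(self):
        # Trait 3: execute every step with no human in the loop.
        for task in self.plan():
            result = f"done: {task}"  # stand-in for real tool calls
            self.memory.append(result)
        return self.memory

agent = ToyAgent("gather data, summarize findings, draft report")
print(agent.run())
```

Even this stub hints at where the 75 to 80 percent reliability figure comes from: each stand-in step is a place where a real agent can misfire, and errors compound across the loop.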
Agents, when properly set up and trained, can perform about as well as a human. I came away from the tan and pink charts with a ballpark figure of 75 to 80 percent reliability. Close enough for horseshoes? Yep.
There is a rundown of pricing options. Pricing seems to be a challenge for the vendors, with API usage charges and traditional software licensing each used by a third of the agentic vendors.
Now here’s the most important segment from the document:
We asked founders in our survey: “What are the biggest issues you have encountered when deploying AI Agents for your customers? Please rank them in order of magnitude (e.g. Rank 1 assigned to the biggest issue)” The results of the Top 3 issues were illuminating: we’ve frequently heard that integrating with legacy tech stacks and dealing with data quality issues are painful. These issues haven’t gone away; they’ve merely been eclipsed by other major problems. Namely:
- Difficulties in integrating AI agents into existing customer/company workflows, and the human-agent interface (60% of respondents)
- Employee resistance and non-technical factors (50% of respondents)
- Data privacy and security (50% of respondents).
Here’s the chart tallying the results:

Several ideas crossed my mind as I worked through this research data:
- Getting the human-software interface right is a problem. I know from my work at places like the University of Michigan, the Modern Language Association, and Thomson-Reuters that people have idiosyncratic ways of doing their jobs. Two people with similar jobs add the equivalent of extra dashboard lights and yard gnomes to the process. Agentic software at this time is not particularly skilled in the dashboard LED and concrete gnome facets of a work process. Maybe someday, but right now, that’s a common deal breaker. Employees say, “I want my concrete unicorn, thank you.”
- Humans say they are into mobile phones, smart in-car entertainment systems, and customer service systems that do not deliver any customer service whatsoever. As somebody from Harvard said in a lecture: “Change is hard.” Yeah, and it may not get any easier if the humanoid thinks he or she will be allowed to find a future pushing burritos at the El Nopal Restaurant.
- Agentic software vendors assume that licensees will allow their creations to suck up corporate data, keep company secrets, and avoid disappointing customers by presenting proprietary information to a competitor. Security in “regular” enterprise software is a bit of a challenge. Security in a new type of agentic software is likely to be the equivalent of a ride on a roller coaster that has tossed several middle school kids to their deaths and severed the foot of a popular young woman. She survived, but now has a non-smart, non-human replacement.
Net net: Agentic software will be deployed. Most of its work will be good enough. Why will this be tolerated in personnel, customer service, loan approvals, and similar jobs? The answer is reduced headcounts. Humans cost money to manage. Humans want health care. Humans want raises. Software which is good enough seems to cost less. Therefore, welcome to the agentic future.
Stephen E Arnold, November 11, 2025