Someone Is Not Drinking the AI-Flavored Kool-Aid
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
The future of AI is in the hands of the masters of the digital P.T. Barnums. A day or so ago, I wrote about Copilot in Excel. Allegedly a spreadsheet can be enhanced by Microsoft. Google is beavering away with a new enthusiasm for content curation. This is a short step to weaponizing what is indexed, what is available to Googlers and Mama, and what is provided to Google users. Heroin dealers do not provide consumer-oriented labels with ingredients.

Thanks, Venice.ai. Good enough.
Here’s another example of this type of soft control: “I’ll Never Use Grammarly Again — And This Is the Reason Every Writer Should Care.” The author makes clear that Grammarly, developed and operated from Ukraine, now wants to change her writing style. The essay states:
What once felt like a reliable grammar checker has now turned into an aggressive AI tool always trying to erase my individuality.
Yep, that’s what AI companies and AI repackagers will do: Use the technology to improve the human. What a great idea! Just erase the fingerprints of the human. Introduce AI drivel and lowest-common-denominator thinking. Human, the AI says, take a break. Go to the yoga studio or grab a latte. AI has you covered.
The essay adds:
Superhuman [Grammarly’s AI solution for writers] wants to manage your creative workflow, where it can predict, rephrase, and automate your writing. Basically, a simple tool that helped us write better now wants to replace our words altogether. With its ability to link over a hundred apps, Superhuman wants to mimic your tone, habits, and overall style. Grammarly may call it personalized guidance, but I see it as data extraction wrapped with convenience. If we writers rely on a heavily AI-integrated platform, it will kill the unique voice, individual style, and originality.
One human dumped Grammarly, writing:
I’m glad I broke up with Grammarly before it was too late. Well, I parted ways because of my principles. As a writer, my dedication is towards original writing, and not optimized content.
Let’s go back to ubiquitous AI (some AI you know is there; other AI operates in dark pattern mode). The object of the game for the AI crowd is to extract revenue and control information. By weaponizing information and making life easy, just think who will be in charge of many things in a few years. If you think humans will rule the roost, you are correct. But the number of humans pushing the buttons will be very small. These individuals have zero self-awareness and believe that their ideas — no matter how far out and crazy — are the right way to run the railroad.
I am not sure most people will know that they are on a train taking them to a place they did not know existed and don’t want to visit.
Well, tough luck.
Stephen E Arnold, November 11, 2025
Temptation Is Powerful: Will Big AI Tech Take the Bait?
November 12, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I have been using the acronym BAIT for “big AI tech” in my talks. I find it an easy way to refer to the companies with the money and the drive to try to take over the use of software to replace most humans’ thinking. I want to raise a question, “Will BAIT take the bait?”
What is this lower case “bait”? In my opinion, lower case “bait” refers to information people and organizations would consider proprietary, secret, off limits, and out of bounds. Digital data about health, contracts, salaries, inventions, interpersonal relations, and similar categories of information would fall into the category of “none of your business” or something like “it’s secret.”

A calculating predator is about to have lunch. Thanks, Venice.ai. Not what I specified but good enough like most things in 2025.
Consider what happens when a large BAIT outfit gains access to the contents of a user’s mobile device, a personal computer, storage devices, images, and personal communications. What can a company committed to capturing information to make its smart software models more intelligent and better informed learn from these types of data? What if that data acquisition takes place in real time? In an organization or a personal life situation, an individual entity may not be able to cross tabulate certain data. The information is in the organization or the data stream for a household, but it is not connected. Data processing can acquire the information, perform the calculations, and “identify” the significant items. These can be used to predict or guess what response, service, action, or investment can be made.
Microsoft’s efforts with Copilot in Excel raise the possibility and opportunity to examine an organization’s or a person’s financial calculations as part of a routine “let’s make the Excel experience better.” If you don’t know whether your data are local or on a cloud provider’s server, access to that information may not seem important to you. But are those data important to a BAIT outfit? I think those data are tempting, desirable, and ultimately necessary for the AI company to “learn.”
The likely move is for the BAIT company to tap into personal data while offering assurances that these types of information are not fodder for training smart software. Can people resist temptation? Some can. But others, with large amounts of money at stake, can’t.
Let’s consider a recent news announcement and then ask some hypothetical questions. I am just asking questions, and I am not suggesting that today’s AI systems are sufficiently organized to make use of the treasure trove of secret information. I do have enough experience to know that temptation is often hard to resist in a certain type of organization.
The article I noted today (November 6, 2025) is “Gemini Deep Research Can Tap into Your Gmail and Google Drive.” The write up reports what I assume to be accurate data:
After adding PDF support in May [2025], [Google] Gemini Deep Research can now directly tap information stored in your Gmail and Google Chat conversations, as well as Google Drive files…. Now, [Google] Deep Research can “draw on context from your [Google] Gmail, Drive and Chat and work it directly into your research.” [Google] Gemini will look through Docs, Slides, Sheets and PDFs stored in your Drive, as well as emails and messages across Google Workspace. [Emphasis added by Beyond Search for clarity]
Can Google resist the mouth-watering prospect of using these data sources to train its large language models and its experimental AI technology?
There are some other hypotheticals to consider:
- What informational boundaries is Google allegedly crossing with this omnivorous approach to information?
- How can Google put meaningful barriers around certain information to prevent data leakage?
- What recourse do people or organizations have if Google’s smart software exposes sensitive information to a party not authorized to view these data?
- How will Google’s advertising algorithms use such data to shape or weaponize information for an individual or an organization?
- Will individuals know when a secret has been incorporated in a machine generated report for a government entity?
Somewhere in my reading I recall a statement attributed to Napoleon. My recollection is that in his letters or some other biographical document about Napoleon’s approach to war, he allegedly said something like:
Information is nine tenths of any battle.
The BAIT organizations are moving with purpose and possibly extreme malice toward systems and methods that give them access to data never meant to be used to train smart software. If Copilot in Excel happens and if Google processes the data in its grasp, will these types of organizations be able to resist juicy, unique, high-calorie morsels of zeros and ones?
I am not sure these types of organizations can or will exercise self control. There is money and power and prestige at stake. Humans have a long track record of doing some pretty interesting things. Is this omnivorous taking of information wrapped in making one’s life easier an example of overreach?
Will BAIT outfits take the bait? Good question.
Stephen E Arnold, November 12, 2025
Innovation Cored, Diced, Cooked and Served As a Granny Scarf
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I do not pay much attention to Vogue, once a giant, fat fashion magazine. However, my trusty newsfeed presented this story to me this morning at 6:26 am US Eastern: “Apple and Issey Miyake Unite for the iPhone Pocket. It’s a Moment of Connecting the Dots.” I had no idea what an Issey Miyake was. I navigated to Yandex.com (a more reliable search service than Google, which is going to bail out the sinking Apple AI rowboat) and learned:
Issey Miyake … the brand name under which designer clothing, shoes, accessories and perfumes are produced.
Okay, a Japanese brand selling collections of clothes, women’s clothes with pleating, watches, perfumes, and a limited-edition Evian mineral water in bottles designed by someone somewhere, probably Southeast Asia.
But here’s the word that jarred me: Moment. A moment?
The Vogue write up explains:
It’s a moment of connecting the dots.
Moment? Huh.
Upon further investigation, the innovation is a granny scarf; that is, a knitted garment with a pocket for an iPhone. I poked around and here’s what the “moment” looks like:
Source: Engadget, November 2025
I do recall my great grandmother (my father’s mother’s mother). She was called “Granny” or “Gussy,” and I know she was alive in 1958. She died at the age of 102 or 103. She knitted and tatted scarves, odd little white cloths called antimacassars, and small circular or square items called doilies (singular “doily”).
Apple and the Japanese fashion icon have inadvertently emulated some of the outputs of my great grandmother “Granny” or “Gussy.” Were she, my grandmother, and my father alive, one or all of them would have taken legal action. But time makes us fools, and “the spirits of the wise sit in the clouds and mock” scarves with pouches like an NBA-bound baby kangaroo.
But the innovation, which may be Miyake’s, Apple’s, or a combo brainstorm of the two, comes in short and long sizes. My Granny cranked out her knit confections like a laborer in a woolen mill in Ipswich in the 19th century. She gave her outputs away.
You can acquire this pinnacle of innovation for US $150 or US $230.
Several observations:
- Apple’s skinny phone flopped; Apple’s AI flopped. Therefore, Apple is into knitted scarves to revivify its reputation for product innovation. Yeah, innovative.
- Next to Apple’s renaming Apple iTV as Apple TV, one may ask, “Exactly what is going on in Cupertino other than demanding that I log into an old iPhone I use to listen to podcasts?” Desperation gives off an interesting vibe. I feel it. Do you?
- Apple does good hardware. It does not do soft goods with the same élan. Has its leadership lost the thread?
Smell that desperation yet? Publicity hunger, the need to be fashionable and with it, and taking the hard edges off a discount Mac laptop.
Net net: I like the weird pink version, but why didn’t the geniuses behind the Genius Bar do the zippy orange of the new candy-bar-shaped but otherwise indistinguishable mobile device rolled out a short time ago? Orange? Not in the scarf palette.
Granny’s white did not make the cut.
Stephen E Arnold, November 11, 2025
Agentic Software: Close Enough for Horse Shoes
November 11, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I read a document that I would describe as tortured. The lingo was trendy. The charts and graphs sported trendy colors. The data gathering seemed to be a mix of “interviews” and other people’s research. Plus the write up was a bit scattered. I prefer the rigidity of old-fashioned organization. Nevertheless, I did spot one chunk of information that I found interesting.
The title of the research report (sort of an MBA- or blue chip consulting firm-type of document) is “State of Agentic AI: Founder’s Edition.” I think it was issued in March 2025, but with backdating popular, who knows. I had the research report in my files, and yesterday (November 3, 2025) I was gathering some background information for a talk I am giving on November 6, 2025. The document walked through data about the use of software to replace people. Actually, the smart software agents generally do several things according to the agent vendors’ marketing collateral. The cited document restated these items this way:
- Agents are set up to reach specific goals
- Agents are used to reason which means “break down their main goal … into smaller manageable tasks and think about the next best steps.”
- Agents operate without humans in India or Pakistan working invisibly behind the scenes
- Agents can consult a “memory” of previous tasks, “experiences,” work, etc.
Agents, when properly set up and trained, can perform about as well as a human. I came away from the tan and pink charts with a ballpark figure of 75 to 80 percent reliability. Close enough for horseshoes? Yep.
There is a rundown of pricing options. Pricing seems to be a challenge for the vendors, with API usage charges and traditional software licensing used by a third of the agentic vendors.
Now here’s the most important segment from the document:
We asked founders in our survey: “What are the biggest issues you have encountered when deploying AI Agents for your customers? Please rank them in order of magnitude (e.g. Rank 1 assigned to the biggest issue)” The results of the Top 3 issues were illuminating: we’ve frequently heard that integrating with legacy tech stacks and dealing with data quality issues are painful. These issues haven’t gone away; they’ve merely been eclipsed by other major problems. Namely:
- Difficulties in integrating AI agents into existing customer/company workflows, and the human-agent interface (60% of respondents)
- Employee resistance and non-technical factors (50% of respondents)
- Data privacy and security (50% of respondents).
Here’s the chart tallying the results:

Several ideas crossed my mind as I worked through this research data:
- Getting the human-software interface right is a problem. I know from my work at places like the University of Michigan, the Modern Language Association, and Thomson-Reuters that people have idiosyncratic ways to do their jobs. Two people with similar jobs add the equivalent of extra dashboard lights and yard gnomes to the process. Agentic software at this time is not particularly skilled in the dashboard LED and concrete gnome facets of a work process. Maybe someday, but right now, that’s a common deal breaker. Employees say, “I want my concrete unicorn, thank you.”
- Humans say they are into mobile phones, smart in-car entertainment systems, and customer service systems that do not deliver any customer service whatsoever. As somebody from Harvard said in a lecture: “Change is hard.” Yeah, and it may not get any easier if the humanoid thinks he or she will be allowed to find a future pushing burritos at the El Nopal Restaurant.
- Agentic software vendors assume that licensees will allow their creations to suck up corporate data, keep company secrets, and avoid disappointing customers by presenting proprietary information to a competitor. Security in “regular” enterprise software is a bit of a challenge. Security in a new type of agentic software is likely to be the equivalent of a ride on a roller coaster which has tossed several middle school kids to their deaths and cut off the foot of a popular female. She survived, but now has a non-smart, non-human replacement.
Net net: Agentic software will be deployed. Most of its work will be good enough. Why will this be tolerated in personnel, customer service, loan approvals, and similar jobs? The answer is reduced headcounts. Humans cost money to manage. Humans want health care. Humans want raises. Software which is good enough seems to cost less. Therefore, welcome to the agentic future.
Stephen E Arnold, November 11, 2025
Sure, Sam. We Trust You with Our Data
November 11, 2025
OpenAI released a new AI service called “company knowledge” that collects and analyzes all information within an organization. Why does this sound familiar? Because malware does the same thing for nefarious purposes. The story comes from Computer World and is entitled, “OpenAI’s Company Knowledge Wants Access To All Of Your Internal Data.”
A major problem is that OpenAI is still a relatively young company, and organizations are reluctant to share all of their data with it. AI is still an untested pool, and so much can go wrong when it comes to regulating security and privacy. Here’s another clincher in the deal:
“Making granting that trust yet more difficult is the lack of clarity around the ultimate OpenAI business model. Specifically, how much OpenAI will leverage sensitive enterprise data in terms of selling it, even with varying degrees of anonymization, or using it to train future models.”
What does Jeff Pollard, the vice-president and principal analyst at Forrester, say?
“The capabilities across all these solutions are similar, and benefits exist: Context and intelligence when using AI, more efficiency for employees, and better knowledge for management.”
But there’s a big but that Pollard makes clear:
“Data privacy, security, regulatory, compliance, vendor lock-in, and, of course, AI accuracy and trust issues. But for many organizations, the benefits of maximizing the value of AI outweighs the risks.”
The current AI situation is that applications are transiting from isolated to connected agents and agentic systems developed to maximize value for the users. In other words, according to Pollard, “high risk and high reward.” The rewards are tempting but the consequences are also alarming.
Experts say that companies won’t place all of their information and proprietary knowledge in the hands of a young company and untested technology. They could but there aren’t any regulations to protect them.
OpenAI should practice with its own company first, then see what happens.
Whitney Grace, November 11, 2025
Microsoft: Desperation or Inspiration? Copilot, Have We Lost an Engine?
November 10, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
Microsoft is an interesting member of the high-tech in-crowd. It is the oldest of the Big Players. It invented Bob and Clippy. It has not cracked the weirdness of Word’s numbering. Updates routinely kill services. I think it would be wonderful if Task Manager did not spawn multiple instances of itself.
Furthermore, Microsoft, the cloudy giant with oodles of cash, ignited the next-big-thing frenzy a couple of years ago with the announcement that Bob and Clippy would operate on AI steroids. Googzilla experienced the equivalent of a traumatic stress injury and blinked red, yellow, and orange for months. Crisis bells rang. Klaxons interrupted Foosball games. Napping in pods became difficult.
Imagine what’s happening at Microsoft now that this Sensor Tower chart is popping up in articles like “Microsoft Bets on Influencers Like Alix Earle to Close the Gap With ChatGPT.” Here’s the gasp inducer:

Source: The chart comes from Sensor Tower. It carries Bloomberg branding. But it appeared in an MSN.com article. Who crafted the data? How were the data assembled? What mathematical processes were used to produce such nice round numbers? I have no clue, but let’s assume those fat, juicy round numbers are “real,” not the weird imaginary “i” things electrical engineers enjoy each day.
The write up states:
Microsoft Corp., eager to boost downloads of its Copilot chatbot, has recruited some of the most popular influencers in America to push a message to young consumers that might be summed up as: Our AI assistant is as cool as ChatGPT. Microsoft could use the help. The company recently said its family of Copilot assistants attracts 150 million active users each month. But OpenAI’s ChatGPT claims 800 million weekly active users, and Google’s Gemini boasts 650 million a month. Microsoft has an edge with corporate customers, thanks to a long history of selling them software and cloud services. But it has struggled to crack the consumer market — especially people under 30.
Microsoft came up with a novel solution to its being fifth in the smart software league table. Is Microsoft developing useful AI-infused services for Notepad? Yes. Is Microsoft pushing Copilot and its hallucinatory functions into Excel? Yes. Is Microsoft using Copilot to help partners code widgets for their customers to use in Azure? Yeah, sort of, but I have heard that Anthropic Claude has some Certified Partners as fans.
The Microsoft professionals, the leadership, and the legions of consultants have broken new marketing ground. Microsoft is paying social media influencers to pitch Microsoft Copilot as the one true smart software. Forget that “God is my copilot” meme. It is now “Meme makers are Microsoft’s Copilot.”
The write up includes this statement about this stunningly creative marketing approach:
“We’re a challenger brand in this area, and we’re kind of up and coming,” Consumer Chief Marketing Officer Yusuf Mehdi
Excuse me, Microsoft was first when it announced its deal with OpenAI a couple of years ago. Microsoft was the only game in town. OpenAI was a Silicon Valley start-up with links to Sam AI-Man and Mr. Tesla. Now Microsoft, a giant outfit, is “up and coming.” No, I would suggest Microsoft is stalled and coming down.
A professor from that university / consulting outfit New York University is quoted in the cited write up. Here is that snippet:
Anindya Ghose, a marketing professor at New York University’s Stern School of Business, expressed surprise that Microsoft is using lifestyle influencers to market Copilot. But he can see why the company would be attracted to their cult followings. “Even if the perceived credibility of the influencer is not very high but the familiarity with the influencers is high, there are some people who would be willing to bite on that apple,” Ghose said in an interview.
The article presents proof that the Microsoft creative light saber has delivered. Here’s that passage:
Mehdi cited a video Earle posted about the new Copilot Groups feature as evidence that the campaign is working. “We can see very much people say, ‘Oh, I’m gonna go try that,’ and we can see the usage it’s driving.” The video generated 1.9 million views on Earle’s Instagram account and 7 million on her TikTok. Earle declined to comment for this story.
Following my non-creative approach, here are several observations:
- From first to fifth. I am not sure social media influencers are likely to address the reason the firm associated with Clippy occupies this spot.
- I am not sure Microsoft knows how to fix the “problem.” My hunch is that the Softies see the issue as one that is the fault of the users. It’s either the Russian hackers or the users of Microsoft products and services. Yeah, the problem is not ours.
- Microsoft, like Apple and Telegram, is struggling to graft smart software onto ageing platforms, software, and systems. Google is doing a better job, but it is in second place. Imagine that. Google in the “place” position in the AI Derby. But Google has its own issues to resolve, and it is thinking about putting data centers in space, keeping its allegedly booming Web search business cranking along at top speed, and sucking enough cash from online advertising to pay for its smart software ambitions. Those wizards are busy. But Googzilla is in second place and coping with acute stress reaction.
Net net: The big players have put huge piles of casino chips in the AI poker game. Desperation takes many forms. The sport of extreme marketing is just one of the disorder’s manifestations. Watch it on TikTok-type services.
Stephen E Arnold, November 10, 2025
Train Models on Hostility Oriented Slop and You Get Happiness? Nope, Nastiness on Steroids
November 10, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Informed discourse, factual reports, and the type of rhetoric loved by Miss Spurling in 1959 can have a positive effect. I wasn’t sure at the time if I wanted to say “Whoa, Nelly!” to my particular style of writing and speaking. She did her best to convert me from a somewhat weird 13-year-old into a more civilized creature. She failed, I fear.

A young student is stunned by the criticism of his approach to a report. An “F”. Failure. The solution is not to listen. Some AI vendors take the same approach. Thanks, Venice.ai, good enough.
When I read “Study: AI Models Trained on Clickbait Slop Result In AI Brain Rot, Hostility,” I thought about Miss Spurling and her endless supply of red pencils. Algorithms, it seems, have some of the characteristics of an immature young person. Feed that young person some unusual content, and you get some wild and crazy outputs.
The write up reports:
To see how these [large language] models would “behave” after subsisting on a diet of clickbait sewage, the researchers cobbled together a sample of one million X posts and then trained four different LLMs on varying mixtures of control data (long form, good faith, real articles and content) and junk data (lazy, engagement chasing, superficial clickbait) to see how it would affect performance. Their conclusion isn’t too surprising; the more junk data that is fed into an AI model, the lower quality its outputs become and the more “hostile” and erratic the model is …
But here’s the interesting point:
They also found that after being fed a bunch of ex-Twitter slop, the models didn’t just get “dumber”, they were (shocking, I know) far more likely to take on many of the nastier “personality traits” that now dominate the right wing troll platform …
The write up makes a point about the wizards creating smart software; to wit:
The problem with AI generally is a decidedly human one: the terrible, unethical, and greedy people currently in charge of it’s implementation (again, see media, insurance, countless others) — folks who have cultivated some unrealistic delusions about AI competency and efficiency (see this recent Stanford study on how rushed AI adoption in the workforce often makes people less efficient).
I am not sure that the highly educated experts at Google-type AI companies would agree. I did not agree with Miss Spurling. On many points, she was correct. Adolescent thinking produces some unusual humans as well as interesting smart software. I particularly like some of the newer use cases; for instance, driving some people wacky or appealing to the underbelly of human behavior.
Net net: Scale up, shut up, and give up.
Stephen E Arnold, November 10, 2025
Mobile Hooking People: Digital Drugs
November 10, 2025
Most of us know that spending too much time on our phones is a bad idea, especially for young minds. We also know the companies on the other end profit from keeping us glued to the screen. The Conversation examines the ways “Smartphones Manipulate our Emotions and Trigger our Reflexes – No Wonder We’re Addicted.” Yes – try taking a 12-year-old’s mobile phone and let us know how that goes.
Of course, social media, AI chatbots, games, and other platforms have their own ways of capturing our attention. This article, however, focuses on ways the phones themselves manipulate users. Author Stephen Monteiro writes:
“As I argue in my newly published book, Needy Media: How Tech Gets Personal, our phones — and more recently, our watches — have become animated beings in our lives. These devices can build bonds with us by recognizing our presence and reacting to our bodies. Packed with a growing range of technical features that target our sensory and psychological soft spots, smartphones create comforting ties that keep us picking them up. The emotional cues designed into these objects and interfaces imply that they need our attention, while in actuality, the devices are soaking up our data.”
The write-up explores how phones’ responsive features, like facial recognition, geolocation, touchscreen interactions, vibrations and sounds, and motion and audio sensing, combine to build a potent emotional attachment. Meanwhile, devices have drastically increased how much information they collect and when. They constantly record data on everything we do on our phones and even in our environments. One chilling example: With those sensors, software can build a fairly accurate record of our sleep patterns. Combine that with health and wellness apps, and that gives app-makers a surprisingly comprehensive picture. Have you seen any eerily insightful ads for fitness, medical, or mindfulness products lately? Soon, they may even be able to gauge our emotions through analysis of our facial expressions. Just what we need.
Given a cell phone is pretty much required to navigate life these days, what are we to do? Monteiro suggests:
“We can access device settings and activate only those features we truly require, adjusting them now and again as our habits and lifestyles change. Turning on geolocation only when we need navigation support, for example, increases privacy and helps break the belief that a phone and a user are an inseparable pair. Limiting sound and haptic alerts can gain us some independence, while opting for a passcode over facial recognition locks reminds us the device is a machine and not a friend. This may also make it harder for others to access the device.”
If these measures do not suffice, one can go retro with a “dumb” phone. Apparently, that is a trend among Gen Z. Perhaps there is hope for humanity yet.
Cynthia Murrell, November 10, 2025
How Frisky Will AI Become? Users Like Frisky… a Lot
November 7, 2025
OpenAI promised to create technology that would benefit humanity, much like Google and other Big Tech companies. We know how that has gone. Much to the worry of its team, OpenAI released a TikTok-like app powered by AI. What could go wrong? Well, we’re still waiting to see the fallout, but TechCrunch shares the possibilities in the story: “OpenAI Staff Grapples With The Company’s Social Media Push.”
OpenAI is headed into social media because that is where the money is. The push for social media is by OpenAI’s bigwigs. The new TikTok-like app is called Sora 2, and it has an AI-based feed. Past and present employees are concerned about how Sora 2 will benefit humanity. They are worried that Sora 2 will produce more AI slop, the equivalent of digital brain junk food, for consumers instead of benefitting humanity. Even OpenAI’s CEO Sam Altman is astounded by the amount of money allocated to AI social media projects:
“We do mostly need the capital for build [sic] AI that can do science, and for sure we are focused on AGI with almost all of our research effort,” said Altman. “It is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need. … When we launched chatgpt there was a lot of ‘who needs this and where is AGI,’” Altman continued. “[R]eality is nuanced when it comes to optimal trajectories for a company.”
Here’s another quote about the negative effects of AI:
“One of the big mistakes of the social media era was [that] the feed algorithms had a bunch of unintended, negative consequences on society as a whole, and maybe even individual users. Although they were doing the thing that a user wanted — or someone thought users wanted — in the moment, which is [to] get them to, like, keep spending time on the site.”
Let’s start taking bets on how long it will take bad actors to transform Sora 2 into a quite frisky service.
Whitney Grace, November 7, 2025
Copilot in Excel: Brenda Has Another Problem
November 6, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
Simon Willison posted an interesting snippet from a person whom I don’t know. The handle is @belligerentbarbies, who posts on TikTok. You can find the post “Brenda” on Simon Willison’s Weblog. The main idea in the write up is that a person in accounting or finance assembles an Excel worksheet. In many large outfits, the worksheets are templates or set up to allow the enthusiastic MBA to plug in a few numbers. Once the numbers are “in,” the bright overachiever hits Shift+F9 to recalculate the single worksheet. If it looks okay, the MBA mashes F9 and updates the linked spreadsheets. Bingo! A financial services firm has produced the numbers needed to slap into a public or private document. But, and here’s the best part…

Thanks, Venice.ai. Good enough.
Before the document leaves the office, a senior professional who has not used Excel checks the spreadsheet. Experience dictates to look at certain important cells of data. If those pass the smell test, then the private document is moved to the next stage of its life. It goes into production so that the high net worth individual, the clued in business reporter, the big customers, and people in the CEO’s bridge group get the document.
Because those “reports” can move a stock up or down or provide useful information about a deal not yet put into a numerical context, most outfits protect Excel spreadsheets. Heck, even the fill-in-the-blank templates are big-time secrets. Each of the investment firms for which I worked over the years followed the same process. Each used its own custom-tailored, carefully structured set of formulas to produce the quite significant reports, opinions, and marketing documents.
Brenda knows Excel. Most Big Dogs know some Excel, but as these corporate animals fight their way to Carpetland, those Excel skills atrophy. Now Simon Willison’s post enters and references Copilot. The post is insightful because it highlights a process gap. Specifically, if Copilot is involved in an Excel spreadsheet, Copilot might, just might in this hypothetical, make a change. The Big Dog in Carpetland does not catch the change. The Big Dog just sniffs a few spots in the forest or jungle of numbers.
Before Copilot, Brenda or a similar professional was involved. Copilot may make it possible to ignore Brenda and push the report out. If the financial whales make money, life is good. But what happens if the Copilot-tweaked worksheet is hallucinating? I am not talking about a few disco biscuits but mind-warping errors whipped up because AI is essentially operating at “good enough” levels of excellence.
Bad things transpire. As interesting as this problem is to contemplate, there’s another angle the Simon Willison post did not address. What if Copilot is phoning home? The idea is that a user’s interaction with a cloud-based service is designed to process data and add those data to the service’s training pipeline. The AI wizards have some jargon for this “learn as you go” approach.
The issue, however, is what happens if that proprietary spreadsheet or the “numbers” about a particular company find their way into a competitor’s smart output. What if financial firm A does not know this “process” has compromised the confidentiality of a worksheet? What if financial firm B spots the information and uses it to firm B’s advantage?
Where’s Brenda in this process? Who? She’s been RIFed. What about the Big Dog in Carpetland? That professional is clueless until someone spots the leak and the information ruins what was a calm day with no fires to fight. Now a burning Piper Cub is in the office. Not good, is it?
I know that Microsoft Copilot will be or is positioned as super secure. I know that hypotheticals are just that: made-up thought donuts.
But I think the potential for some knowledge leakage may exist. After all, Copilot, although marvelous, is not Brenda. Clueless leaders in Carpetland are not interested in fairy tales; they are interested in making money, reducing headcount, and enjoying days without a fierce fire ruining a perfectly good Louis XIV desk.
Net net: Copilot, how are you and Brenda communicating? What’s that? Brenda is not answering her company provided mobile. Wow. Bummer.
Stephen E Arnold, November 6, 2025