How to Get a Job in the Age of AI?
December 23, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Two interesting employment-related articles appeared in my newsfeeds this morning. Let’s take a quick look at each. I will try to add some humor to these write ups. Some may find them downright gloomy.
The first is “An OpenAI Exec Identifies 3 Jobs on the Cusp of Being Automated.” I want to point out that the OpenAI wizard’s own job seems to be secure from his point of view. The write up points out:
Olivier Godement, the head of product for business products at the ChatGPT maker, shared why he thinks a trio of jobs — in life sciences, customer service, and computer engineering — is on the cusp of automation.
Let’s think about each of these broad categories. I am not sure what life sciences means in OpenAI world. The term is like a giant umbrella. Customer service makes some sense. Companies have been trying for years to ignore, terminate, and prevent any money-sucking operation related to answering customers’ questions and complaints. No matter how lousy an AI model is, my hunch is that it will be slapped into a customer service role even if it is arguably worse than trying to understand the accent of a person who speaks English as a second or third language.
Young members of “leadership” realize that the AI system used to replace lower-level workers has taken their jobs. Selling crafts on Etsy.com is a career option. Plus, there is politics and maybe Epstein, Epstein, Epstein related careers for some. Thanks, Qwen, you just output a good enough image but you are free at this time (December 13, 2025).
Now we come to computer engineering. I assume the OpenAI person will position himself as an AI adept, which fits under the umbrella of computer engineering. My hunch is that the reference is to coders who do grunt work. The only problem is that the large language model approach to pumping out software can be problematic in some situations. That’s why the OpenAI person is probably not worrying about his job. An informed human has to be in the loop for machine-generated code. LLMs do make errors. If the software is autogenerated for one of those newfangled portable nuclear reactors designed to power football-field-sized data centers, someone will want a human to check that software. Traditional or next generation nuclear reactors can create some excitement if the software makes errors. Do you want a thorium reactor next to your domicile? What about one run entirely by smart software?
What’s amusing about this write up is that the OpenAI person seems blissfully unaware of the precarious financial situation that Sam AI-Man has created. When and if OpenAI experiences a financial hiccup, will those involved in business products keep their jobs? Olivier might want to consider that eventuality. Some investors are thinking about their options for Sam AI-Man related activities.
The second write up is the type I absolutely get a visceral thrill writing. A person with a connection (probably accidental or tenuous) lets me trot out my favorite trope — Epstein, Epstein, Epstein — as a way to capture the peculiarity of modern America. This article is “Bill Gates Predicts That Only Three Jobs Will Be Safe from Being Replaced by AI.” My immediate assumption upon spotting the article was that the type of work Epstein, Epstein, Epstein did would not be replaced by smart software. I think that impression is accurate, but, alas, the write up did not include Epstein, Epstein, Epstein work in its story.
What are the safe jobs? The write up identifies three:
- Biology. Remember OpenAI thinks life sciences are toast. Okay, which is correct?
- Energy expertise
- Work that requires creative and intuitive thinking. (Do you think that this category embraces Epstein, Epstein, Epstein work? I am not sure.)
The write up includes a statement from Bill Gates:
“You know, like baseball. We won’t want to watch computers play baseball,” he said. “So there’ll be some things that we reserve for ourselves, but in terms of making things and moving things, and growing food, over time, those will be basically solved problems.”
Several observations:
- AI will cause many people to lose their jobs.
- Young people will have to make knick knacks to sell on Etsy or find equally creative ways of supporting themselves.
- The assumption that people will have “regular” jobs, buy houses, go on vacations, and do the other things organization-man thinking took for granted is a goner.
Where’s the humor in this? Epstein, Epstein, Epstein and OpenAI debt, OpenAI debt, and OpenAI debt. Ho ho ho.
Stephen E Arnold, December 23, 2025
Telegram News: AlphaTON, About Face
December 22, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Starting in January 2026, my team and I will be writing about Telegram’s Cocoon, the firm’s artificial intelligence push. Unlike the “borrow, buy, hype, and promise” approach of some US firms, Telegram is going in a different direction. For Telegram, it is early days for smart software. The impact will be that posts in Beyond Search will decrease beginning Christmas week. The new Telegram News posts will be on a different URL or service. Our preliminary tests show that a different approach won’t make much difference to the Arnold IT team. Frankly, I am not sure how people will find the new service. I will post the links on Beyond Search, but with the exceptional indexing available from Bing, Google, et al, I have zero clue if these services will find our Telegram Notes.
Why am I making this shift?
Here’s one example. With a bit of fancy footwork, a publicly traded company popped into existence a couple of months ago. Telegram itself does not appear to have any connection to this outfit. However, the TON Foundation’s former president set up an outfit called the TON Strategy Co., which is listed on the US NASDAQ. Then following a similar playbook, AlphaTON popped up to provide those who believe in TONcoin a way to invest in a financial firm anchored to TONcoin. Yeah, I know that having these two public companies semi-linked to Telegram’s TON Foundation is interesting.
But even more fascinating is the news story about AlphaTON using some financial fancy dancing to link itself to Anduril. This is one of the companies familiar to those who keep track of certain Silicon Valley outfits generating revenue from Department of War contracts.
What’s the news?
The deal is off, according to “AlphaTON Capital Corp Issues Clarification on Anduril Industries Investment Program.” The word clarification is not one I would have chosen. The deal has vaporized. The write up says:
It has now come to the Company’s attention that the Anduril Industries common stock underlying the economic exposure that was contractually offered to our Company is subject to transfer restrictions and that Anduril will not consent to any such transfer. Due to these material limitations and risk on ownership and transferability, AlphaTON has made the decision to cancel the Anduril tokenized investment program and will not be proceeding with the transaction. The Company remains committed to strategic investments and the tokenization of desirable assets that provide clear ownership rights and align with shareholder value creation objectives.
I interpret this passage to mean, “Fire, Aim, Ready Maybe.”
With the stock of AlphaTON Capital at about $0.70 as of 11:30 am US Eastern on December 18, 2025, this fancy dancing may end this set with a snappy rendition of Mozart’s Requiem.
That’s why Telegram will be an interesting organization to follow in Telegram Notes. We think Pavel Durov’s trial in France, the two (or maybe one) surviving public companies, the two “foundations” linked to Telegram, and the new Cocoon AI play are going to be more interesting. If Mr. Durov goes to jail, the public company plays fail, and the Cocoon thing dies before it becomes a digital butterfly, I may flow more stories to Beyond Search.
Stay tuned.
Stephen E Arnold, December 22, 2025
How Do You Get Numbers for Copilot? Microsoft Has a Good Idea
December 22, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
In the past couple of days, I tested some of the latest and greatest from the big tech outfits destined to control information flow. I uploaded text to Gemini, asked it a question answered in the text, and it spit out an incorrect answer. Score one for the Googlers. Then I selected an output from ChatGPT and asked it to determine who was really innovating in a very, very narrow online market space. ChatGPT did not disappoint. It just made up a non-existent person. Okay, Sam AI-Man, I think you and Microsoft need to do some engineering.

Could a TV maker charge users to uninstall a high value service like Copilot? Could Microsoft make the uninstall app available for a fee via its online software store? Could both the TV maker and Microsoft just ignore the howls of the demented few who don’t love Copilot? Yeah, I go with ignore. Thanks, Venice.ai. Good enough.
And what did Microsoft do with its Copilot online service? According to Engadget, “LG quietly added an unremovable Microsoft Copilot app to TVs.” The write up reports:
Several LG smart TV owners have taken to Reddit over the past few days to complain that they suddenly have a Copilot app on the device
But Microsoft has a seductive way about its dealings. Engadget points out:
[LG TV owners] cannot uninstall it.
Let’s think about this. Most smart TVs come with baloney applications that are highly valuable to the TV maker. These can be uninstalled if one takes the time. I don’t watch TV very much, so I just leave the set the way it was. I routinely ignore pleas to update the software. I listen, so I don’t care if weird reminders obscure the visuals.
The Engadget article states:
LG said during the 2025 CES season that it would have a Copilot-powered AI Search in its next wave of TV models, but putting in a permanent AI fixture is sure to leave a bad taste in many customers’ mouths, particularly since Copilot hasn’t been particularly popular among people using AI assistants.
Okay, Microsoft has a vision for itself. It wants to be the AI operating system, just as Google and other companies desire. Microsoft has been a bit pushy. I suppose I would come up with ideas that build “numbers” and provide fodder for the Microsoft publicity machine. If I place my hypothetical self in a meeting at Microsoft (where I have been, but that was years ago), I would reason this way:
- We need numbers.
- Why not pay a TV outfit to install Copilot?
- Then either pay more or provide some inducements to our TV partner to make Copilot permanent; that is, the TV owner has no choice.
The pushback for this hypothetical suggestion would be:
- How much?
- How many for sure?
- How much consumer backlash?
I further hypothesize that I would say:
- We float some trial balloon numbers and go from there.
- We focus on high end models because those people are more likely to be willing to pay for additional Microsoft services.
- Who cares about consumer backlash? These are TVs and we are cloud and AI people.
Obviously my hypothetical suggestion, or something similar to it, took place at Microsoft. Then LG saw the light, or more likely the check with some big numbers imprinted on it, and the deal was done.
The painful reality of consumer-facing services is that something like 95 percent of consumers do not change the defaults. Making something impossible to uninstall will not even register as a problem for most consumers.
Therefore, the logic of the LG play is rock solid. Microsoft can add the LG TVs with Copilot to its confirmed Copilot user numbers. Win.
Microsoft is not in the TV business, so this is just advertising. Win.
Microsoft is not a consumer product company like a TV set company. Win.
As a result, the lack of an uninstall option makes sense. If a lawyer or some other important entity complains, making Copilot something a user can remove eliminates the problem.
Love those LGs. Next up: microwaves, freezers, smart lights, and possibly electric blankets. Numbers are important. Users provide proof that Microsoft is on the right path.
But what about revenue from Copilot? No problem. Raise the cost of other services. Charging Outlook users per message seems like an idea worth pursuing. My hypothetical self would argue for this type of toll or taxi-meter approach. A per-pixel charge in Paint seems plausible as well.
The reality is that I believe LG will backtrack. Does it need the grief?
Stephen E Arnold, December 22, 2025
Modern Management Method with and without Smart Software
December 22, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I enjoy reading and thinking about business case studies. The good ones are few and far between. Most are predictable, almost as if the author was relying on a large language model for help.
“I’m a Tech Lead, and Nobody Listens to Me. What Should I Do?” is an example of a bright human hitting on tactics to become more effective in his job. You can work through the full text of the article and dig out the gems that may apply to you. I want to focus on two points in the write up. The first is the matrix management diagram based on or attributed to Spotify, a music outfit. The second is a method for gaining influence in a modern, let’s go fast company.
Here’s the diagram that caught my attention:
[Diagram from the article: a Spotify-style matrix of alliances, tribes, chapters, squads, and trios]
Instead of the usual business school lingo, you will notice “alliance,” “tribe,” “squad,” and “trio.” I am not sure what these jazzy words mean, but I want to ask you a question, “Looking at this matrix, who is responsible when a problem occurs?” Take your time. I did spend some time looking at this chart, and I formulated several hypotheses:
- The creator wanted to make sure that a member of leadership would have a tough time figuring out who screwed up. If you disagree, that’s okay. I am a dinobaby, and I like those old fashioned flow diagrams with arrows and boxes. In those boxes is the name of the person who has to fix a problem. I don’t know about one’s tribe. I know Problem A is here. Person B is going to fix it. Simple.
- The matrix as displayed allows a lot of people to blame other people. For example, what if the coach is like the leader of the Cleveland Browns, who brilliantly equipped a young quarterback with the incorrect game plan for the first quarter of a football game. Do we blame the coach or do we chase down a product owner? What if the problem is a result of a dependency screw up involving another squad in a different tribe? In practical terms, there is no one with direct responsibility for the problem. Again: Don’t agree? That’s okay.
- The matrix has weird “leadership” or “employment categories” distributed across the X axis at the top of the chart. What’s a chapter? What’s an alliance? What’s self organized and autonomous in a complex technical system? My view is that this is pure baloney designed to make people feel important yet shield any one person from responsibility. I bet some reading this numbered point find my statement out of line. Tough.
The diagram makes clear that the organization is presented as one that will just muddle forward. No one will have responsibility when a problem occurs. No one will know how to fix the problem without dropping other work and reverse engineering what is happening. The chart almost guarantees bafflement when a problem surfaces.
The second item I noticed was this statement or “learning” from the individual who presented the case example. Here’s the passage:
When you solve a real problem and make it visible, people join in. Trust is also built that way, by inviting others to improve what you started and celebrating when they do it better than you.
This passage hooks into the one about solving a problem; to wit:
Helping people debug. I have never considered myself especially smart, but I have always been very systematic when connecting error messages, code, hypotheses, and system behavior. To my surprise, many people saw this as almost magical. It was not magic. It was a mix of experience, fundamentals, intuition, knowing where to look, and not being afraid to dive into third-party library code.
These two passages describe human interactions. Working with others can result in a collective effort greater than the sum of its parts. It is a human manifestation. One fellow described this as interaction efflorescence. Fancy words for what happens when a few people face a deadline and severe consequences for failure.
Why did I spend time pointing out an organizational structure purpose built to prevent assigning responsibility and the very human observations of the case study author?
The answer is, “What will happen when smart software is tossed into this management structure?” First, people will be fired. The matrix will have lots of empty boxes. Second, the human interaction will have to adapt to the smart software. The smart software is not going to adapt to humans. Don’t believe me? One smart software company defended itself by telling a court that its terms of service say suicide is not permissible. Therefore, the company is not responsible. The dead kid violated the TOS.
How functional will the company be as the very human insight about solving real problems interfaces with software? The man-machine interface? Will that be an issue in a go-fast outfit? Nope. The human will be excised as a consequence of efficiency.
Stephen E Arnold, December 22, 2025
Poor Meta! Allegations about Accepting Scam Advertising
December 19, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
That well managed, forward leaning, AI- and goggle-centric company is in the news again. This time the focus is advertising that is scammy. “Mark Zuckerberg’s Meta Allows Rampant Scam Ads from China While Raking in Billions, Explosive Report Says” states:
According to an investigation by Reuters, Meta earned more than $3 billion in China last year through scam ads for illegal gambling, pornography, and other inappropriate content. That figure represents nearly 19 percent of the company’s $18 billion in total ad revenue from China during the same period. Reuters had previously reported that 10 percent of Meta’s global revenue came from fraudulent ads.
The write up makes a pointed statement:
The investigation suggests Meta knew about the scale of the ad fraud problem on its platforms, but chose not to act because it would have affected revenue.

Guess what happens when senior managers at a large social media outfit pay little attention to what happens around them? Thanks, ChatGPT, good enough.
Let’s assume that the allegations are accurate and verifiable. The question is, “Why did Meta take in billions from scam ads?” My view is that there were several reasons:
- Revenue
- Figuring out what is and is not “spammy” is expensive. Spam may be like the judge’s comment years ago, “I know it when I see it.” Different people have different perceptions.
- Changing the ad sales incentive programs is tricky, time consuming, and expensive.
The logical decision, based on my limited understanding of how managerial decisions are made at Meta, is simple: Someone may have said, “Hey, keep doing it until someone makes us stop.”
Why would a very large company adopt this hypothetical response to spammy ads?
My hunch is that management looked the other way. Revenue is important.
Stephen E Arnold, December 19, 2025
First, Virtual AI Compute and Now a Virtual Supercomputation Complex
December 19, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Do you remember the good old days at AT&T? No Judge Green. No Baby Bells. Just Ma Bell. Devices were boxes or plastic gizmos. Western Electric paid people to throw handsets out of a multi-story building to make sure the stuff was tough. That was the old Ma Bell. Today one has virtual switches, virtual exchanges, and virtual systems. Software has replaced quite a bit of the tangible.
A few days ago, Pavel Durov rolled out his Cocoon. This is a virtual AI complex or VAIC. Skip that build out of data centers. Telegram is using software to provide an AI compute service to anyone with a mobile device. I learned today (December 6, 2025) that Stephen Wolfram has rolled out “instant supercompute.”
When those business plans don’t work out, the buggy whip boys decide to rent out their factory and machines. Too bad about those newfangled horseless carriages. Will the AI data center business work out? Stephen Wolfram and Pavel Durov seem to think that excess capacity is a business opportunity. Thanks, Venice.ai. Good enough.
A Mathematica user wants to run a computation at scale. According to “Instant Supercompute: Launching Wolfram Compute Services”:
Well, today we’ve released an extremely streamlined way to do that. Just wrap the scaled up computation in RemoteBatchSubmit and off it’ll go to our new Wolfram Compute Services system. Then—in a minute, an hour, a day, or whatever—it’ll let you know it’s finished, and you can get its results. For decades I’ve often needed to do big, crunchy calculations (usually for science). With large volumes of data, millions of cases, rampant computational irreducibility, etc. I probably have more compute lying around my house than most people—these days about 200 cores worth. But many nights I’ll leave all of that compute running, all night—and I still want much more. Well, as of today, there’s an easy solution—for everyone: just seamlessly send your computation off to Wolfram Compute Services to be done, at basically any scale.
And the payoff to those using Mathematica for big jobs:
One of the great strengths of Wolfram Compute Services is that it makes it easy to use large-scale parallelism. You want to run your computation in parallel on hundreds of cores? Well, just use Wolfram Compute Services!
One major point in the announcement is:
Wolfram Compute Services is going to be very useful to many people. But actually it’s just part of a much larger constellation of capabilities aimed at broadening the ways Wolfram Language can be used…. An important direction is the forthcoming Wolfram HPCKit—for organizations with their own large-scale compute facilities to set up their own back ends to RemoteBatchSubmit, etc. RemoteBatchSubmit is built in a very general way, that allows different “batch computation providers” to be plugged in.
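Here is a minimal sketch of what the workflow might look like, assuming the new service plugs into the documented RemoteBatchSubmit machinery the way existing batch providers do. The “WolframComputeServices” provider name is my guess, not something the announcement confirms, and I have not run this against the new back end:

(* The provider name below is hypothetical; RemoteBatchSubmit and the
   job-object properties are documented Wolfram Language constructs. *)
env = RemoteBatchSubmissionEnvironment["WolframComputeServices"];

(* Ship a crunchy computation off to the batch service. *)
job = RemoteBatchSubmit[env, Table[{n, PrimeQ[2^n - 1]}, {n, 1, 3000}]];

(* Come back in a minute, an hour, or a day and check on it. *)
job["JobStatus"]                  (* e.g., "Running" or "Succeeded" *)
result = job["EvaluationResult"]  (* the Table output, once the job finishes *)

The appeal of the design is that the remote computation is just an ordinary Wolfram Language expression, so, if the HPCKit comment above holds, the same call should run unchanged against any plugged-in batch provider.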
Does this suggest that Wolfram Compute Services is walking down the same innovation path as Pavel and Nikolai Durov? I see some similarities, but there are important differences. Telegram’s reputation is enhanced with some features of considerable value to a certain demographic. Wolfram Compute Services is closely associated with heavy duty math. Pavel Durov awaits trial in France on more than a dozen charges of untoward online activities. Stephen Wolfram collects awards and gives enthusiastic if often incomprehensible talks on esoteric subjects.
But the technology path is similar in my opinion. Both of these organizations want to use available compute resources; they are not too keen on buying GPUs, building data centers, and spending time in meetings about real estate.
The cost of running a job on the Supercompute system depends on a number of factors. A user buys “credits” and pays for a job with those. No specific pricing details are available to me at this time: 0800 US Eastern on December 6, 2025.
Net net: Two very intelligent people — Stephen Wolfram and Pavel Durov — seem to think that the folks with giant data centers will want to earn some money. Messrs. Wolfram and Durov are resellers of excess computing capacity. Will Amazon, Google, Microsoft, et al be signing up if the AI demand does not meet the somewhat robust expectations of big AI tech companies?
Stephen E Arnold, December 19, 2025
Windows Strafed by Windows Fanboys: Incredible Flip
December 19, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
When the Windows folding phone came out, I remember hunting around for blog posts, podcasts, and videos about this interesting device. Following links, I bumbled onto the Windows Central Web site. The two fellows who seemed to be front and center had a podcast (a quite irregularly published podcast, I might add). I was amazed at the pro-folding-gizmo enthusiasm. One of the write ups was panting with excitement. I thought then and think now that figuring out how to fold a screen is a laboratory exercise, not something destined to be part of my mobile phone experience.
I forgot about Windows Central and its unflagging ability to find something wonderfully bigly about the Softies. Then I followed a link to this story: “Microsoft Has a Problem: Nobody Wants to Buy or Use Its Shoddy AI Products — As Google’s AI Growth Begins to Outpace Copilot Products.”

An athlete failed at his Dos Santos II exercise. The coach, a tough love type, offers the injured gymnast a path forward with Mistral AI. Thanks, Qwen, do you phone home?
The cited write up struck me as a technology aficionado pulling off what is called a Dos Santos II. (If you are not into gymnastics, this exercise “trick” involves starting backward with a half twist into a double front in the layout position. Boom. Perfect 10.) From folding phone to “shoddy AI products.”
If I were curious, I would dig into the reasons for this change in tune, instruments, and concert hall. My hunch is that a new manager replaced a person who was talking (informally, of course) to individuals who provided the information without identifying the source. Reuters, the trust outfit, does this on occasion as do other “real” journalists. I prefer to say, here are my observations or my hypotheses about Topic X. Others just do the “anonymous” and move forward in life.
Here are a couple of snips from the write up that I find notable. These are not quite at the “shoddy AI products” level, but I find them interesting.
Snippet 1:
If there’s one thing that typifies Microsoft under CEO Satya Nadella‘s tenure: it’s a general inability to connect with customers. Microsoft shut down its retail arm quietly over the past few years, closed up shop on mountains of consumer products, while drifting haphazardly from tech fad to tech fad.
I like the idea that Microsoft is not sure what it is doing. Furthermore, I don’t think Microsoft ever connected with its customers. Connections come from the Certified Partners, the media lap dogs fawning at Microsoft CEO antics, and brilliant statements about how many Russian programmers it takes to hack into a Windows product. (Hint: The answer is a couple if the Telegram posts I have read are semi accurate.)
Snippet 2:
With OpenAI’s business model under constant scrutiny and racking up genuinely dangerous levels of debt, it’s become a cascading problem for Microsoft to have tied up layer upon layer of its business in what might end up being something of a lame duck.
My interpretation of this comment is that Microsoft hitched its wagon to one of AI’s Cybertrucks, and the buggy isn’t able to pull the Softie’s one-horse shay. The notion of a “lame duck” is that Microsoft cannot easily extricate itself from the money, the effort, the staff, and the weird “swallow your AI medicine, you fool” approach the estimable company has adopted for Copilot.
Snippet 3:
Microsoft’s “ship it now fix it later” attitude risks giving its AI products an Internet Explorer-like reputation for poor quality, sacrificing the future to more patient, thoughtful companies who spend a little more time polishing first. Microsoft’s strategy for AI seems to revolve around offering cheaper, lower quality products at lower costs (Microsoft Teams, hi), over more expensive higher-quality options its competitors are offering. Whether or not that strategy will work for artificial intelligence, which is exorbitantly expensive to run, remains to be seen.
A less civilized editor would have dropped in the industry buzzword “crapware.” But we are stuck with “ship it now fix it later” or maybe just never. So far we have customer issues, the OpenAI technology as a lame duck, and now the lousy software criticism.
Okay, that’s enough.
The question is, “Why the Dos Santos II” at this time? I think citing the third party “Information” is a convenient technique in blog posts. Heck, Beyond Search uses this method almost exclusively except I position what I do as an abstract with critical commentary.
Let me hypothesize (no anonymous “source” is helping me out):
- Whoever at Windows Central annoyed a Softie with power is now responding to the perceived injustice that followed.
- The people at Windows Central woke up one day and heard a little voice say, “Your cheerleading is out of step with how others view Microsoft.” The folks at Windows Central listened and, thus, the Dos Santos II.
- Windows Central did what the author of the article states in the article; that is, using multiple AI services each day. The Windows Central professional realized that Copilot was not as helpful writing “real” news as some of the other services.
Which of these is closer to the pin? I have no idea. Today (December 12, 2025) I used Qwen, Anthropic, ChatGPT, and Gemini. I want to tell you that these four services did not provide accurate output.
Windows Central gets a 9.0 for its floor exercise flooring Microsoft.
Stephen E Arnold, December 19, 2025
Waymo and a Final Woof
December 19, 2025
We’re dog lovers. Canines are the best thing on this green and blue sphere. We were sickened when we read this article in The Register about the death of a bow-wow: “Waymo Chalks Up Another Four-Legged Casualty On San Francisco Streets.”
Waymo is a self-driving car company based in San Francisco. The company unfortunately confirmed that one of its self-driving cars ran over a small, unleashed dog. The vehicle had adults and children in it. The children were crying after hearing the dog suffer. The status of the dog is unknown. Waymo wants to locate the dog’s family to offer assistance and veterinary services.
Waymo cars are popular in San Francisco, but…
“Many locals report feeling uneasy about the fleet of white Jaguar I-Paces roaming the city’s roads, although the offering has proven popular with tourists, women seeking safer rides, and parents in need of a quick, convenient way to ferry their children to school. Waymo currently operates in the SF Bay Area, Los Angeles, and Phoenix, and some self-driving rides are available through Uber in Austin and Atlanta.”
A Waymo car also ran over a famous stray cat named Kit Kat, known as the “Mayor of 16th Street.” May these animals rest in peace. Does the Waymo software experience regret? Yeah.
Whitney Grace, December 19, 2025
Mistakes Are Biological. Do Not Worry. Be Happy
December 18, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read a short summary of a longer paper written by a person named Paul Arnold. I hope this is not misinformation. I am not related to Paul. But this could be a mistake. This dinobaby makes many mistakes.
The article that caught my attention is titled “Misinformation Is an Inevitable Biological Reality Across Nature, Researchers Argue.” The short item was edited by a human named Gaby Clark. The short essay was reviewed by Robert Edan. I think the idea is to make clear that nothing in the article is made up and it is not misinformation.
Okay, but… let’s look at a couple of short statements from the write up about misinformation. (I don’t want to go “meta,” but the possibility exists that the short item is stuffed full of misinformation. What do you think?)

Here’s an image capturing a youngish teacher outputting misinformation to his students. Okay, Qwen. Good enough.
Here’s snippet one:
… there is nothing new about so-called “fake news…”
Okay, does this mean that software that predicts the next word and gets it wrong is part of this old, long-standing trajectory for biological creatures? For me, the idea that algorithms cobbled together get a pass because “there is nothing new about so-called fake news” shifts the discussion about smart software. Instead of worrying about getting only about two thirds of questions right, the smart software is deemed good enough.
A second snippet says:
Working with these [the models Paul Arnold and probably others developed] led the team to conclude that misinformation is a fundamental feature of all biological communication, not a bug, failure, or other pathology.
Introducing the notion of “pathology” adds a bit of context to misinformation. Is a human-assembled smart software system, trained on content that includes misinformation and processed by algorithms that may be biased in some way, just the way the world works? I am not sure I am ready to flash the green light for some of the AI outfits to output what is demonstrably wrong, distorted, weaponized, or non-verifiable.
What puzzled me is that the article points to itself and to an article by Ling-Wei Kong et al., “A Brief Natural History of Misinformation,” in the Journal of the Royal Society Interface.
Here’s the link to the original article. The authors of the publication are, if the information on the Web instance of the article is accurate, Ling-Wei Kong, Lucas Gallart, Abigail G. Grassick, Jay W. Love, Amlan Nayak, and Andrew M. Hein. Six people worked on the “original” article. The three people identified in the short version worked on that item. This adds up to nine people. Apparently the group believes that misinformation is part of the biological being. Therefore, there is no cause to worry. In fact, there are mechanisms to deal with misinformation. Obviously a duck quack that sends a couple of hundred mallards aloft can protect the flock. A minimum of one duck needs to check out the threat only to find nothing is visible. That duck heads back to the pond. Maybe others follow? Maybe the duck ends up alone in the pond. The ducks take the viewpoint, “Better safe than sorry.”
But when a system or a mobile device outputs incorrect or weaponized information to a user, there may not be a flock around. If there is a group of people, none of them may be able to identify the incorrect or weaponized information. Thus, the biological propensity to be wrong bumps into an output which may be shaped to cause a particular effect or to alter a human’s way of thinking.
Most people will not sit down and take a close look at this evidence of scientific rigor:
[Equation image from the paper: a KL divergence involving q(y)]
and then follow the logic that leads to:
[Equation image from the paper: a log posterior-to-prior ratio]
I am pretty old, but it looks as if Mildred Martens, my old math teacher, would suggest the KL divergence wants me to assume some things about q(y). On the right side, I think I see some good old Bayesian stuff, but I did not see the steps to take me from the KL divergence to the log posterior-to-prior ratio. Would Miss Martens ask a student like me to clarify the transitions, fix up the notation, and eliminate issues between expectation and pointwise values? Remember, please, that I am a dinobaby, and I could be outputting misinformation about misinformation.
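For those who want the algebra spelled out, here is a minimal sketch in my notation, not necessarily the authors’, assuming the paper’s information gain is the KL divergence between the posterior p(y|x) and the prior p(y):

\[
D_{\mathrm{KL}}\big(p(y\mid x)\,\|\,p(y)\big)
= \mathbb{E}_{p(y\mid x)}\!\left[\log \frac{p(y\mid x)}{p(y)}\right]
= \mathbb{E}_{p(y\mid x)}\!\left[\log \frac{p(x\mid y)}{p(x)}\right]
\]

The last step is Bayes’ rule, p(y|x)/p(y) = p(x|y)/p(x), which is presumably the good old Bayesian stuff. Note that the log posterior-to-prior ratio equals the KL divergence only in expectation; pointwise, the two differ. If the paper’s q(y) is not the prior p(y), additional assumptions are required, which is the gap I could not fill.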
Several observations:
- If one accepts this line of reasoning, misinformation is emergent. It is somehow part of the warp and woof of living and communicating. My take is that one should expect misinformation.
- Anything created by a biological entity will output misinformation. My take on this is that one should expect misinformation everywhere.
- I worry that researchers tackling information, smart software, and related disciplines may work very hard to prove that misinformation is inevitable but that biological organisms can carry on.
I am not sure I feel comfortable with the normalization of misinformation. To this dinobaby, the function of education is to anchor those completing a course of study in a collection of generally agreed upon facts. With misinformation everywhere, why bother?
Net net: One can read this research and the summary article as an explanation of why smart software is just fine. Accept the hallucinations and misstatements. Errors are normal. The ducks are fine. The AI users will be fine. The models will get better. Despite this framing, in which misinformation is everywhere, the results say, “Knock off the criticism of smart software. You will be fine.”
I am not so sure.
Stephen E Arnold, December 18, 2025
Tim Apple Convinces a Person That Its AI Juice Is Lemonade
December 18, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “Apple’s Slow AI Pace Becomes a Strength as Market Grows Weary of Spending.” [Please, note that the source I used may kill the link. If that happens, complain to Yahoo, not to me.]
Everyone, it seems, is into AI. The systems hallucinate; they fail to present verifiable information; they draw stuff with too many fingers; they even do videos purpose built for scamming grannies.
Apple has been content to talk about AI and not do much else other than experience staff turnover and some management waffling.
But that’s looking at Apple’s management approach to AI incorrectly. Apple was smart. Its missing the AI boat was brilliant. Just as doubts grow about the viability of using more energy than is available to create questionable outputs, Apple’s slow movement positions it to thrive.
The write up makes sweet lemonade out of what I thought was gallons of sour, lukewarm apple cider.
I quote:
Apple now has a $4.1 trillion market capitalization and the second biggest weight in the S&P 500, leaping over Microsoft and closing in on Nvidia. The shift reflects the market’s questioning of the hundreds of billions of dollars Big Tech firms are throwing at AI development, as well as Apple’s positioning to eventually benefit when the technology is ready for mass use.
The write up includes this statement from a financial whiz:
“The stock is expensive, but Apple’s consumer franchise is unassailable,” Moffett said. “At a time when there are very real concerns about whether AI is a bubble, Apple is understandably viewed as the safe place to hide.”
Yep, lemonade. Next, up is down and down is up. I am ready. The only problem for me is that Apple tried to do AI and announced features and services. Then Apple could only produce the Granny scarf to hold another look-alike candy bar mobile device. Apple needs Splenda in its mix!
Stephen E Arnold, December 18, 2025