Mistakes Are Biological. Do Not Worry. Be Happy
December 18, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read a short summary of a longer paper written by a person named Paul Arnold. I hope this is not misinformation. I am not related to Paul. But this could be a mistake. This dinobaby makes many mistakes.
The article that caught my attention is titled “Misinformation Is an Inevitable Biological Reality Across Nature, Researchers Argue.” The short item was edited by a human named Gaby Clark. The short essay was reviewed by Robert Edan. I think the idea is to make clear that nothing in the article is made up and it is not misinformation.
Okay, but…. Let’s look at a couple of short statements from the write up about misinformation. (I don’t want to go “meta,” but the possibility exists that the short item is stuffed full of misinformation. What do you think?)

Here’s an image capturing a youngish teacher outputting misinformation to his students. Okay, Qwen. Good enough.
Here’s snippet one:
… there is nothing new about so-called “fake news…”
Okay, does this mean that software that predicts the next word and gets it wrong is part of this old, long-standing trajectory for biological creatures? For me, the idea that cobbled-together algorithms get a pass because “there is nothing new about so-called ‘fake news’” shifts the discussion about smart software. Instead of worrying about getting only about two thirds of questions right, the smart software is deemed good enough.
A second snippet says:
Working with these [the models Paul Arnold and probably others developed] led the team to conclude that misinformation is a fundamental feature of all biological communication, not a bug, failure, or other pathology.
Introducing the notion of “pathology” adds a bit of context to misinformation. Is a human-assembled smart software system, trained on content that includes misinformation and processed by algorithms that may be biased in some way, just the way the world works? I am not sure I am ready to flash the green light for some of the AI outfits to output what is demonstrably wrong, distorted, weaponized, or non-verifiable.
What puzzled me is that the article points to itself and to an article by Ling-Wei Kong et al., “A Brief Natural History of Misinformation,” in the Journal of the Royal Society Interface.
Here’s the link to the original article. The authors of the publication are, if the information on the Web instance of the article is accurate, Ling-Wei Kong, Lucas Gallart, Abigail G. Grassick, Jay W. Love, Amlan Nayak, and Andrew M. Hein. Six people worked on the “original” article. The three people identified in the short version worked on that item. This adds up to nine people. Apparently the group believes that misinformation is part of being a biological creature. Therefore, there is no cause to worry. In fact, there are mechanisms to deal with misinformation. Obviously a duck quack that sends a couple of hundred mallards aloft can protect the flock. A minimum of one duck needs to check out the threat only to find nothing is visible. That duck heads back to the pond. Maybe others follow? Maybe the duck ends up alone in the pond. The ducks take the viewpoint, “Better safe than sorry.”
But when a system or a mobile device outputs incorrect or weaponized information to a user, there may not be a flock around. If there is a group of people, none of them may be able to identify the incorrect or weaponized information. Thus, the biological propensity to be wrong bumps into an output which may be shaped to cause a particular effect or to alter a human’s way of thinking.
Most people will not sit down and take a close look at this evidence of scientific rigor:

and then follow the logic that leads to:

I am pretty old, but it looks as if Mildred Martens, my old math teacher, would suggest the KL divergence wants me to assume some things about q(y). On the right side, I think I see some good old Bayesian stuff, but I didn’t see the steps to take me from the KL divergence to the log posterior-to-prior ratio. Would Miss Martens ask a student like me to clarify the transitions, fix up the notation, and sort out the expectation vs. pointwise values issue? Remember, please, that I am a dinobaby and I could be outputting misinformation about misinformation.
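If I had to guess at the step the paper leaves implicit, it is the standard identity that a KL divergence is the expectation of a pointwise log ratio. A minimal sketch, assuming q(y) plays the role of a prior (my reading, not necessarily the authors’):

```latex
% KL divergence as an expected log ratio (standard identity)
D_{\mathrm{KL}}\bigl(p(y \mid x)\,\|\,q(y)\bigr)
  = \mathbb{E}_{y \sim p(y \mid x)}\!\left[\log \frac{p(y \mid x)}{q(y)}\right]
% If q(y) is taken to be the prior p(y), Bayes' rule rewrites the pointwise ratio:
\log \frac{p(y \mid x)}{p(y)} = \log \frac{p(x \mid y)}{p(x)}
```

In words: the KL divergence is the expectation, under the posterior, of the pointwise log posterior-to-prior ratio. That is exactly where the expectation vs. pointwise wrinkle hides, and Miss Martens would still dock points for skipping it.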
Several observations:
- If one accepts this line of reasoning, misinformation is emergent. It is somehow part of the warp and woof of living and communicating. My take is that one should expect misinformation.
- Anything created by a biological entity will output misinformation. My take on this is that one should expect misinformation everywhere.
- I worry that researchers tackling information, smart software, and related disciplines may work very hard to prove that misinformation is inevitable but the biological organisms can carry on.
I am not sure I feel comfortable with the normalization of misinformation. As a dinobaby, I believe the function of education is to anchor those completing a course of study in a collection of generally agreed-upon facts. With misinformation everywhere, why bother?
Net net: One can read this research and the summary article as an explanation of why smart software is just fine. Accept the hallucinations and misstatements. Errors are normal. The ducks are fine. The AI users will be fine. The models will get better. With this “misinformation is everywhere” framing, the results say, “Knock off the criticism of smart software. You will be fine.”
I am not so sure.
Stephen E Arnold, December 18, 2025
Tim Apple Convinces a Person That Its AI Juice Is Lemonade
December 18, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read “Apple’s Slow AI Pace Becomes a Strength as Market Grows Weary of Spending.” [Please note that the source I used may kill the link. If that happens, complain to Yahoo, not to me.]
Everyone, it seems, is into AI. The systems hallucinate; they fail to present verifiable information; they draw stuff with too many fingers; they even do videos purpose-built for scamming grannies.
Apple has been content to talk about AI and not do much else, other than experience staff turnover and some management waffling.
But that’s looking at Apple’s management approach to AI incorrectly. Apple was smart. Its missing the AI boat was brilliant. Just as doubts mount about the viability of using more energy than is available to create questionable outputs, Apple’s slow movement positions it to thrive.
The write up makes sweet lemonade out of what I thought was gallons of sour, lukewarm apple cider.
I quote:
Apple now has a $4.1 trillion market capitalization and the second biggest weight in the S&P 500, leaping over Microsoft and closing in on Nvidia. The shift reflects the market’s questioning of the hundreds of billions of dollars Big Tech firms are throwing at AI development, as well as Apple’s positioning to eventually benefit when the technology is ready for mass use.
The write up includes this statement from a financial whiz:
“The stock is expensive, but Apple’s consumer franchise is unassailable,” Moffett said. “At a time when there are very real concerns about whether AI is a bubble, Apple is understandably viewed as the safe place to hide.”
Yep, lemonade. Next, up is down and down is up. I am ready. The only problem for me is that Apple tried to do AI and announced features and services. Then Apple could only produce the Granny scarf to hold another look-alike candy bar mobile device. Apple needs Splenda in its mix!
Stephen E Arnold, December 18, 2025
AI and Management: Look for Lists and Save Time
December 18, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
How does a company figure out whom to terminate? [a] Ask around. [b] Consult “objective” performance reviews. [c] Examine a sales professional’s booked deals. [d] Look for a petition signed by employees unhappy with company policies. The answer is at the end of this short post.

A human resources professional has figured out which employees are at the top of the reduction-in-force list. Thanks, Venice.ai. How many graphic artists did you annoy today?
I read “More Than 1,000 Amazon Employees Sign Open Letter Warning the Company’s AI Will Do Staggering Damage to Democracy, Our Jobs, and the Earth.”* The write up states:
The letter was published last week with signatures from over 1,000 unnamed Amazon employees, from Whole Foods cashiers to IT support technicians. It’s a fraction of Amazon’s workforce, which amounts to about 1.53 million, according to the company’s third-quarter earnings release. In it, employees claim the company is “casting aside its climate goals to build AI,” forcing them to use the tech while working toward cutting its workforce in favor of AI investments, and helping to build “a more militarized surveillance state with fewer protections for ordinary people.”
Okay, grousing employees. Signatures. Amazon AI. Hmm. I wonder if some of that old-time cross-correlation will highlight these individuals and their “close” connections in the company. Who are the managers of these individuals? Are the signers and their close connections linked by other factors; for example, a manager? What if a manager has a disproportionate number of grousers? These are made-up questions in a purely hypothetical scenario. But they crossed my mind.
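For the record, here is a purely hypothetical sketch of that kind of correlation. The org chart, the signer list, and every name are invented for illustration; nothing here reflects any actual Amazon data or process.

```python
# Hypothetical: which managers have a disproportionate number of "grousers"?
# All names and the org chart below are invented for illustration only.
from collections import Counter

manager_of = {"alice": "mgr_1", "bob": "mgr_1", "cara": "mgr_2"}  # invented org chart
signers = ["alice", "bob"]  # invented signer list

# Count signers per manager; a lopsided count is the "cluster" a curious
# analyst might notice.
counts = Counter(manager_of[s] for s in signers if s in manager_of)
for manager, n in counts.most_common():
    print(manager, n)  # mgr_1 2
```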
Do you think someone in Amazon leadership might think along similar lines?
The write up says:
Amazon announced in October it would cut around 14,000 corporate jobs, about 4% of its 350,000-person corporate workforce, as part of a broader AI-driven restructuring. Total corporate cuts could reach up to 30,000 jobs, which would be the company’s single biggest reduction ever, Reuters reported a day prior to Amazon’s announcement.
My reaction was, “Just 1,000 employees signed the grousing letter?” A company I worked at with pretty good in-person customer support had a rule of thumb: “One complaint means 100 people are annoyed but too lazy to call us.” I wonder if this rule of thumb would apply to an estimable firm like Amazon. It only took me 30 minutes to get a refund for the prone-to-burn-or-explode mobile phone battery. Pretty swift, but not exactly the type of customer service the company at which I worked delivered.
The write up concludes with a quote from a person in carpetland at Amazon:
“What we need to remember is that the world is changing quickly. This generation of AI is the most transformative technology we’ve seen since the Internet, and it’s enabling companies to innovate much faster than ever before,” Beth Galetti, Amazon’s senior vice president of people and experience, wrote in the memo.
I like the royal “we” or the parental “we.” I don’t think it is the in-the-trenches “we,” but that is my personal opinion. I like the emphasis on faster and innovation. That move-fast-and-break-things method is just an outstanding approach to dealing with complex problems.
Ah, Amazon, why does my Kindle iPad app no longer work when I don’t have an Internet connection? You are definitely innovating.
And the correct answer to the multiple choice test? [d] Names on a list. Just sayin’.
———————
* This is one of those wonky Yahoo news URLs. If it doesn’t work, don’t hassle me. Speak with that well-managed outfit Yahoo, not someone who is 81 and not well managed.
Stephen E Arnold, December 18, 2025
Un-Aliving Violates TOS and Some Linguistic Boundaries
December 18, 2025
Ah, lawyers.
Depression is a dark emotional state and sometimes makes people take their own lives. Before people unalive themselves, they usually investigate the act and/or reach out to trusted sources. These days the “trusted sources” are the host of AI chatbots that populate the Internet. Ars Technica shares the story about how one teenager committed suicide after using a chatbot: “OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide.”
OpenAI is facing a total of five lawsuits about wrongful deaths associated with ChatGPT. The first lawsuit came to court, and OpenAI defended itself by claiming that the teen in question, Adam Raine, violated the terms of service because they prohibit self-harm and suicide. While pursuing the world’s “most engaging chatbot,” OpenAI relaxed its safety measures for ChatGPT, which became Raine’s suicide coach.
OpenAI’s lawyers argued that Raine’s parents selected the most damaging chat logs. They also claim that the logs show that Raine had had suicidal ideations since age eleven and that his medication increased his un-aliving desires.
Along with the usual allegations about shifting the blame onto the parents and others, OpenAI says that people use the chatbot at their own risk. It’s a way to avoid any accountability.
“To overcome the Raine case, OpenAI is leaning on its usage policies, emphasizing that Raine should never have been allowed to use ChatGPT without parental consent and shifting the blame onto Raine and his loved ones. ‘ChatGPT users acknowledge their use of ChatGPT is ‘at your sole risk and you will not rely on output as a sole source of truth or factual information,’ the filing said, and users also “must agree to ‘protect people’ and ‘cannot use [the] services for,’ among other things, ‘suicide, self-harm,’ sexual violence, terrorism or violence.’”
OpenAI employees were also alarmed by the “liberties” taken to make the chatbot more engaging.
How far will OpenAI go with ChatGPT to make it intuitive, human-like, and intelligent? Raine already had underlying conditions that contributed to his death, but ChatGPT did exacerbate them. Remember the terms of service.
Whitney Grace, December 18, 2025
Meta: An AI Management Issue Maybe?
December 17, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I really try not to think about Facebook, Mr. Zuckerberg, his yachts, and Llamas. I mean the large language model, not the creatures I associate with Peru. (I have been there, and I did not encounter any reptilian snakes. Cuy chactado, si. Vibora, no.)
I read in the pay-walled orange newspaper online “Inside Mark Zuckerberg’s Turbulent Bet on AI.” Hmm. Turbulent. I was thinking about synonyms I would have suggested; for example, unjustifiable, really big, wild and crazy, and a couple of others. I am not a real journalist, so I will happily accept turbulent. The word means, however, “relating to or denoting flow of a fluid in which the velocity at any point fluctuates irregularly and there is continual mixing rather than a steady or laminar flow pattern” according to the Google’s opaque system. I think the idea is that Meta is operating in a chaotic way. What about “juiced-up ‘move fast and break things’”? Yep. Chaos, a modern management method that is supposed to just work.
A young executive with oodles of money hears an older person, probably a blue chip consultant, asking one of those probing questions about a top dog’s management method. Will this top dog listen or just fume and keep doing what worked for more than a decade? Thanks, Qwen. Good enough.
What does the write up present? Please, sign up for the FT and read the original article. I want to highlight two snippets.
The first is:
Investors are also increasingly skittish. Meta’s 2025 capital expenditures are expected to hit at least $70bn, up from $39bn the previous year, and the company has started undertaking complex financial maneuverings to help pay for the cost of new data centers and chips, tapping corporate bond markets and private creditors.
Not RIFed employees, not users, not advertisers, and not government regulators. The FT focuses on investors who are skittish. The point is that when investors get skittish, an already unsettled condition is sufficiently significant to increase anxiety. Investors do not want to be anxious. Has Mr. Zuckerberg mismanaged the investors who help keep his massive investments in to-be technology chugging along? First, there was the metaverse. That may arrive in some form, but for Meta I perceive it as a dumpster fire for cash.
Now investors are anxious and the care and feeding of these entities is more important. The fact that the investors are anxious suggests that Mr. Zuckerberg has not managed this important category of professionals in a way that calms them down. I don’t think the FT’s article will do much to alleviate their concern.
The second snippet is:
But the [Meta] model performed worse than those by rivals such as OpenAI and Google on jobs including coding tasks and complex problem solving.
This suggests to me that Mr. Zuckerberg did not manage the process in an optimal way. Some wizards left for greener pastures. Others just groused about management methods. Regardless of the signals one receives about Meta, the message I receive is that management itself is the disruptive factor. Mismanagement is, I think, part of the method at Meta.
Several observations:
- Meta, like the other AI outfits with money to toss into the smart software dumpster fire, is in the midst of realizing “if we think it, it will become reality” is not working. Meta is spinning off chunks of flaming money bundles, and some staff don’t want to get burned.
- Meta is a technology follower, and it may have been aced by its messaging and social media competitor Telegram. If Telegram’s approach is workable, Meta may be behind another AI eight ball.
- Mr. Zuckerberg is a wonder of American business. He began as a boy wonder. Now as an adult wonder, the question is, “Why are investors wondering about his current wonder-fulness?”
Net net: Meta faces a management challenge. The AI tech is embedded in that. Some of its competitors lack management finesse, but some of them are plugging along and not yet finding their companies presented in the Financial Times as outfits making investors “increasingly skittish.” Perhaps in the future, but right now, the laser focus of the Financial Times is on Meta. The company is an easy target in my opinion.
Stephen E Arnold, December 17, 2025
The Google Has a New Sheep Herder: An AI Boss to Nip at the Heels of the AI Beasties
December 17, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Staffing turmoil appears to be the end-of-year trend in some Silicon Valley outfits. Apple is spitting out executives. Meta is thrashing. OpenAI is doing the Code Red alert thing amidst unsettled wizards. And, today I learned that Google has a chief technologist for AI infrastructure. I think that means data centers, but it could extend some oversight to the new material science lab in the UK that will use AI (of course) to invent new materials. “Exclusive / Google Names New Chief of AI Infrastructure Buildout” reports:
Amin Vahdat, who joined Google from academia roughly 15 years ago, will be named chief technologist for AI infrastructure, according to the memo, and become one of 15 to 20 people reporting directly to CEO Sundar Pichai. Google estimates it will have spent more than $90 billion on capital expenditures by the end of 2025, most of it going into the part of the company Vahdat will now oversee.

The sheep dog attempts to herd the little data center doggies away from environmental issues, infrastructure inconsistencies, and roll-your-own engineering. Woof. Thanks, Venice.ai. Close enough for horseshoes.
I read this as making clear the following:
- Google spent “more than $90 billion” on infrastructure in 2025
- No one was paying attention to this investment
- For 2026, a former academic steeped in Googliness will herd the sheep.
I assume that is part of the McKinsey way: Fire, Aim, Ready! Dinobabies like me with some blue chip consulting experience feel slightly more comfortable with the old school Ready, Aim, Fire! But the world today is different from the one I traveled through decades ago. Nostalgia does not cut it in the “we have to win AI” business environment today.
Here’s a quote making clear that planning and organizing were not part of the 2025 check writing. I quote:
“This change establishes AI Infrastructure as a key focus area for the company,” wrote Google Cloud CEO Thomas Kurian in the Wednesday memo congratulating Vahdat.
The cited article puts this sheep herder in context:
In August, Google disclosed in a paper co-authored by Vahdat that the amount of energy used to run the median prompt on its AI models was equivalent to watching less than nine seconds of television and consuming five drops of water. The numbers were far less than what some critics had feared and competitors had likely hoped for. There’s no single answer for how to best run an AI data center. It’s small, coordinated efforts across disparate teams that span the globe. The job of coordinating it all now has an official title.
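A quick back-of-the-envelope check of the television comparison is possible. The sketch below uses the roughly 0.24 watt-hour median-prompt figure Google reported in August 2025; the 100-watt television is my assumption, not Google’s.

```python
# Back-of-the-envelope: convert a median prompt's energy into seconds of TV.
median_prompt_wh = 0.24  # watt-hours per median prompt (Google's reported figure)
tv_watts = 100           # assumed television power draw (my assumption)

tv_seconds = median_prompt_wh / tv_watts * 3600  # Wh / W = hours; x 3600 = seconds
print(f"{tv_seconds:.1f} seconds of TV")  # ~8.6, i.e., "less than nine seconds"
```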
See and understand. The power consumption for the Google AI data centers is trivial. The Google can plug these puppies into the local power grid, nip at the heels of the people who complain about rising electricity prices and brownouts, and nuzzle the folks who:
- Promise small, local nuclear power generation facilities. No problems with licensing, component engineering, and nuclear waste. Trivialities.
- Repurpose jet engines from a sort of real supersonic jet source. Noise? No problem. Heat? No problem. Emission control? No problem.
- Pitch brand-spanking-new pressurized water reactors built by the old school nuclear crowd. No problem. Time? No problem. The new folks are accelerationists.
- Recommission turned-off (deactivated) nuclear power stations. No problem. Costs? No problem. Components? No problem. Environmental concerns? Absolutely no problem.
Google is tops in strategic planning and technology. It should be. It crafted its expertise selling advertising. AI infrastructure is a piece of cake. I think sheep dogs herding AI can do the job, which apparently was not done for more than a year. When a problem becomes too big to ignore, restructure. Grrr or Woof, not Yipe, little herder.
Stephen E Arnold, December 17, 2025
Tech Whiz Wants to Go Fishing (No, Not Phishing), Hook, Link, Sinker Stuff
December 17, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
My deeply flawed service that feeds me links produced a rare gem. “I Work in Tech But I Hate Everything Big Tech Has Become” is interesting because it states clearly what I have heard from other Silicon Valley types recently. I urge you to read the essay because the discomfort the author feels jumps off the screen or printed page if you are a dinobaby. If the essay has a rhetorical weakness, it is the lack of a resolution. My hunch is that the author has found himself in a digital construct with No Exit signs on the door.

Thanks, Venice.ai. Good enough.
The essay states:
We try to build products that help people. We try to solve mostly problems we ourselves face using tech. We are nerds, misfits, borderline insane people driven by a passion to build. we could probably get a job in big tech if we tried as hard as we try building our own startup. but we don’t want to. in fact we can’t. we’d have to kill a little (actually a lot) of ourselves to do that.
This is an interesting comment. I interpreted it to mean that the tech workers and leadership who build “products that help people” have probably “killed” some of their inner selves. I never thought of the luminaries who head the outfits pushing AI or deploying systems that governments have to ban for users under a certain age as being dead inside. Is it true? I am not sure. Thought provoking notion? Yes.
The essay states:
I hate everything big tech stands for today. Facebook openly admitting they earn millions from scam ads. VCs funding straight up brain rot or gambling. Big tech is not even pretending to be good these days.
The word “hate” provides a glimpse of how the author is responding to the current business set up in certain sectors of the technology industry. What a dinobaby like me might call “ethical behavior” is now viewed as abnormal by many people. My personal view is that this idea of doing whatever it takes to reach a goal operates across many demographics. Is this a-ethical behavior now the norm?
The essay states:
If tech loses people like us, all it’ll have left are psychopaths. Look I’m not trying to take a holier-than-thou stance here. I’m just saying objectively it seems insane what’s happening in mainstream tech these days.
I noted a number of highly charged words. These make sense in the context of the author’s personal situation. I noted “psychopaths” and “insane.” When many instances of a-ethical behavior bubble up from technical, financial, and political sectors, a-ethics means one cannot trust, rely on, or believe words. Actions alone must be scrutinized.
The author wants to “keep fighting,” but against whom or what system? Deception, trickery, double dealing, and criminal activity can be identified in most business interactions.
The author mentions going fishing. The caution I would offer is to make sure you are not charged a dynamic price based on your purchasing profile. Shop around if any fishing stores are open. If not, Amazon will deliver what you need.
Stephen E Arnold, December 17, 2025
Australia: Kangaroos and Putting Kids in a Secure Pouch
December 17, 2025
Australia became the first country in the world to ban social media for kids under sixteen. It did so in a bid to protect the younger set from addictive behaviors, online bullies, and predators. CNN details the ban in “Millions Of Australian Children Just Lost Access To Social Media. What’s Happening And Will It Work?”
The ten platforms banned for kids under sixteen are X, Twitch, Reddit, Kick, TikTok, Snapchat, Threads, Facebook, YouTube, and Instagram.
The ban will be implemented using age-verification technology, but the platforms don’t believe it will make kids safer. The Australian prime minister believes differently:
“Prime Minister Anthony Albanese said it was a “proud day” for Australia. ‘This is the day when Australian families are taking back power from these big tech companies. They are asserting the right of kids to be kids and for parents to have greater peace of mind,’ Albanese told the public broadcaster ABC Wednesday. But he conceded ‘it won’t be simple.’”
The platforms will use age-verification technology such as video selfies, email addresses, or official documents. The video selfies use facial data points to estimate age.
There are workarounds, such as parents creating accounts for their kids and fallback social media platforms. People are saying it’s a game of whack-a-mole that the Australian government won’t win. There aren’t any punishments for parents who do make accounts for their kids.
A follow-up from The Nightly says, “Australian Under-16s Social Media Ban: Kids Claim Ban Didn’t Work As They Troll Anthony Albanese On TikTok.” The younger set took to TikTok and did what kids do best: make fun of the incident. They’re trolling the Prime Minister with memes, videos, comments, and anything else to prove the ban isn’t working.
There are kinks still to work out, but maybe the ban will work. Some youngsters have good technical know-how. Workarounds are inevitable. Even baby roos leave the pouch.
Whitney Grace, December 17, 2025
Google: Trying Hard Not to Be Noticed in a Crypto Club
December 16, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Google continues to creep into crypto. Google has interacted with ANT Financial. Google has invested in some interesting compute services. And now Google will, if “Exclusive: YouTube Launches Option for U.S. Creators to Receive Stablecoin Payouts through PayPal” is on the money, give crypto a whirl among its creators.

A friendly creature warms up in a yoga studio. Few notice the suave green beast. But one person spots a subtle touch: Pink gym shoes purchased with PayPal crypto. Such a deal. Thanks, Venice.ai. Good enough.
The Fortune article reports as actual factual:
A spokesperson for Google, which owns YouTube, confirmed the video site has added payouts for creators in PayPal’s stablecoin but declined to comment further. YouTube is already an existing customer of PayPal’s and uses the fintech giant’s payouts service, which helps large enterprises pay gig workers and contractors.
How does this work?
Based on the research we did for our crypto lectures, a YouTuber in the US would have to have a PayPal account. Google puts the payment, denominated in PayPal’s stablecoin, into that account. The YouTuber would then use PayPal to convert the stablecoin into US dollars. Then the YouTuber could move the US dollars to his or her US bank account. Allegedly there would be no gas fee slapped on the transactions, but there is an opportunity to add service charges at some point. (I mean what self-respecting MBA angling for a promotion wouldn’t propose that money-making idea?)
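A minimal sketch of that flow, assuming the stablecoin stays pegged one-to-one to the dollar. The service_fee_rate parameter is hypothetical; the article says no gas fee applies today, but a future charge is the opportunity I mention above.

```python
# Hypothetical payout flow: stablecoin payout -> US dollars in a bank account.
def payout_to_usd(payout_stablecoin: float, service_fee_rate: float = 0.0) -> float:
    """Convert a stablecoin payout (assumed pegged 1:1 to USD) into dollars."""
    usd = payout_stablecoin * 1.0   # one-to-one peg assumed
    usd -= usd * service_fee_rate   # hypothetical future service charge
    return usd

print(payout_to_usd(1_000.00))                         # 1000.0 -- no fee today
print(payout_to_usd(1_000.00, service_fee_rate=0.02))  # 980.0 -- the MBA scenario
```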
Several observations:
- In my new monograph “The Telegram Labyrinth,” available only to law enforcement officials, we identified Google as one of the firms moving in what we call the “Telegram direction.” The Google crypto creep plus PayPal reinforces that observation. Why? Money and information.
- How Google’s activities in crypto will conform to assorted money-related rules and regulations is not clear to me. Furthermore, as we completed our “The Telegram Labyrinth” research in early September 2025, not too many people were thinking about Google as a crypto player. But that GOOGcoin does seem like something even the lowest-level wizard at Alphabet could envision, doesn’t it?
- Google has a track record of doing what it wants. Therefore, in my opinion, more little tests, baby steps, and semi-low profile moves probably are in the wild. Hopefully someone will start looking.
Net net: Google does do pretty much what it wants to do. From gaining new training data from its mobile-to-ear-bud translation service to expanding its AI capabilities with its new silicon, the Google is a giant creature doing some low impact exercises. When the Google shifts to lifting big iron, a number of interesting challenges will arise. Are regulators ready? Are online fraud investigators ready? Is Microsoft ready?
What’s your answer?
Stephen E Arnold, December 16, 2025
Ka-Ching: The EU Cash Register Tolls for the Google
December 16, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Thomson Reuters, the trust outfit because the company says it is, published another ka-ching story titled “Exclusive: Google Faces Fines Over Google Play if It Doesn’t Make More Concessions, Sources Say.” The story reports:
Alphabet’s Google is set to be hit with a potentially large EU fine early next year if it does not do more to ensure that its app store complies with EU rules aimed at ensuring fair access and competition, people with direct knowledge of the matter said.
An elected EU official introduces the new and permanent member of the parliament. Thanks, Venice.ai. Not exactly what I specified, but saving money on compute cycles is the name of the game today. Good enough.
I can hear the “Sorry. We’re really, really sorry” statement now. I can even anticipate the sequence of events; hence and herewith:
- Google says, “We believe we have complied.”
- The EU says, “Pay up.”
- Google says, “Let’s go to trial.”
- The EU says, “Fine with us.”
- The Google says, “We are innocent and have complied.”
- The EU says, “You are guilty and owe $X million.” (Note: The EU generates more revenue by fining US big tech companies than it does from certain tax streams, I have heard.)
- The Google says, “Let’s negotiate.”
- The EU says, “Fine with us.”
- Google negotiates and says, “We have a deal plus we did nothing wrong.”
- The EU says, “Pay the $X million less the $Y million we agree to deduct based on our fruitful negotiations.”
The actual factual article says:
DMA fines can be as much as 10% of a company’s global annual revenue. The Commission has also charged Google with favoring its associated search services in Google Search, and is investigating its use of online content for its artificial intelligence tools and services and its spam policy.
My interpretation of this snippet is that the EU has on deck another case of Google’s alleged law breaking. This is predictable, and the approach does generate revenue from companies with lots of cash.
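For scale, a back-of-the-envelope calculation of that 10% ceiling. The revenue figure below is a hypothetical round number for illustration, not Alphabet’s reported revenue.

```python
# What "up to 10% of global annual revenue" could mean, illustratively.
hypothetical_global_revenue = 350_000_000_000  # USD, invented round number
max_dma_fine = 0.10 * hypothetical_global_revenue
print(f"Maximum DMA fine: ${max_dma_fine:,.0f}")  # Maximum DMA fine: $35,000,000,000
```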
Stephen E Arnold, December 16, 2025