How Not to Get a Holiday Invite: The Engadget Method
December 15, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Sam AI-Man may not invite anyone from Engadget to a holiday party. I read “OpenAI’s House of Cards Seems Primed to Collapse.” The “house of cards” phrase gives away the game. Sam AI-Man built a structure that gravity or Google will pull down. How do I know? Check out this subtitle:
In 2025, it fell behind the one company it couldn’t lose ground to: Google.
The Google. The outfit that shifted into Red Alert or whatever the McKinsey playbook said to call an existential crisis klaxon. The Google. Adjudged a monopoly getting down to work other than running and online advertising system. The Google. An expert in reorganizing a somewhat loosely structured organization. The Google: Everyone except the EU and some allegedly defunded YouTube creators absolutely loves. That Google.
Thanks Venice.ai. I appreciate your telling me I cannot output an image with a “young programmer.” Plugging in “30 year old coder” worked. Very helpful. Intelligent too.
The write up points out:
It’s safe to say GPT-5 hasn’t lived up to anyone’s expectations, including OpenAI’s own. The company touted the system as smarter, faster and better than all of its previous models, but after users got their hands on it, they complained of a chatbot that made surprisingly dumb mistakes and didn’t have much of a personality. For many, GPT-5 felt like a downgrade compared to the older, simpler GPT-4o. That’s a position no AI company wants to be in, let alone one that has taken on as much investment as OpenAI.
Did OpenAI suck it up and crank out a better mouse trap? The write up reports:
With novelty and technical prowess no longer on its side though, it’s now on Altman to prove in short order why his company still deserves such unprecedented levels of investment.
Forget the problems a failed OpenAI poses to investors, employees, and users. Sam AI-Man now has an opportunity to become the highest profile technology professional to cause a national and possibly global recession. Short of war mongering countries, Sam AI-Man will stand alone. He may end up in a museum if any remain open when funding evaporate. School kids could read about him in their history books; that is, if kids actually attend school and read. (Well, there’s always the possibility of a YouTube video if creators don’t evaporate like wet sidewalks when the sun shines.)
Engadget will have to find another festive event to attend.
Stephen E Arnold, December 15, 2025
The Loss of a Blue Check Causes Credibility to Be Lost
December 15, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
At first glance, either the EU is not happy with the Teslas purchased for official use, or Elon Musk is a Silicon Valley luminary that sets some regulators’ teeth on edge. I read “Elon Musk Calls for Abolition of European Union After X Fined $140 Million.” The idea of dissolving the EU is unlikely to make the folks in Brussels and Strasbourg smile with joy. I think the estimable Mr. Putin and some of his surviving advisors may break out in broad grins. But the EU elected officials are unlikely to be doing high fives. (Well, that just a guess.)

Thanks, Midjourney. Good enough.
The CNBC story says:
Elon Musk has called for the European Union to be abolished after the bloc fined his social media company X 120 million euros ($140 million) for a “deceptive” blue checkmark and lack of transparency of its advertising repository. The European Commission hit X with the ruling on Friday following a two-year investigation into the company under the Digital Services Act (DSA), which was adopted in 2022 to regulate online platforms. At the time, in a reply on X to a post from the Commission, Musk wrote, “Bulls—.”
Mr. Musk’s alleged reply is probably translated by official human translators as “Mr. Musk wishes to disagree with due respect.” Yep, that will work.
I followed up with a reluctant click on a “premium, you must pay” story from Poltico. (I think its videos on YouTube are free or the videos themselves are advertisements for Politico.) That write up is titled “X Axes European Commission’s Ad Account after €120M EU Fine.” The main idea is that Mr. Musk is responding with actions, not just words. Imagine the EU will not be permitted to advertise on X.com. My view is that the announcement sent shockwaves through the elected officials and caused consternation in the EU countries.
The Politico essay says:
Nikita Bier, X’s head of product, accused the EU executive of trying to amplify its own social media post about the fine on X by trying “to take advantage of an exploit in our Ad Composer.”
Ah, ha. The EU is click baiting on X.com.
The write up adds:
The White House has accused the rules of discriminating against U.S. companies, and the fine will likely amplify transatlantic trade tensions. U.S. Secretary of Commerce Howard Lutnick has already threatened to keep 50 percent tariffs on European exports of steel and aluminum unless the EU loosens its digital rules.
Fascinating. A government entity finds a US Silicon Valley outfit of violating one of its laws. That entity fines the Silicon Valley company. But the entire fine is little more than an excuse to [a] get clicks on Twitter (now, the outstanding X.com) and [b] the US government suggests that tariffs on certain EU exports will not be reduced.
I almost forgot. The root issue is the blue check one can receive or purchase to make a short message more “valid.” Next we jump to a fine, which is certainly standard operating procedure for entities breaking a law in the EU and then to a capitalist company refusing to sell ads and finally to a linkage to tariff rates.
I am a dinobaby, and a very uninformed dinobaby. The two stories, the blue check, the government actions and the chain of consequences reminds me of this proverb (author unknown):
“For want of a nail the shoe was lost;
For want of a shoe the horse was lost;
For want of a horse the rider was lost;
For want of a rider the message was lost;
For want of a message the battle was lost;
For want of a battle the kingdom was lost;
And all for the want of a horseshoe nail.”
I have revised the proverb:
“For want of a blue check the ads were lost;
For want of the ads, the click stream was lost;
For want of a click stream, the law suit was lost;
For want of a law suit, the fine was lost;
For want of the fine, the US influence was lost;
For want of influence, sanity was lost;
And all for the want of a blue check.”
There you go. A digital check has consequences.
Stephen E Arnold, December 15, 2025
Ah, Academia: The Industrialization Of Scientific Fraud
December 15, 2025
Everyone’s favorite technology and fiction writer Cory Doctorow coined a term that describes the so many things in society: Degradation. You can see this in big box retail stores and middle school, but let’s review a definition. Degradation is the act of eroding or weathering. Other sources say it’s the process of being crappy. In a nutshell, business processes erode online platforms in order to increase profits.
According to ZME Science, scientific publishing has become a billion dollar degradation industry. The details are in the article, “We Need To Talk About The Billion-Dollar Industry Holding Science Hostage.” It details how Luís Amaral was disheartened after he conducted a study about scientific studies. His research revealed that scientific fraud is being created faster than legitimate science.
Editors and authors are working with publishers to release fraudulent research by taking advantage of loopholes to scale academic, receive funding, and whitewash reputations. The bad science is published by paper mills that are manufacturing bad studies and selling authorship and placements in journals with editors willing to “verify” the information. Here’s an example:
“One such paper mill, the Academic Research and Development Association (ARDA), offers a window into how deeply entrenched this problem has become. Notice that they all seem to have legitimate-sounding names. Between 2018 and 2024, ARDA expanded its list of affiliated journals from 14 to 86, many of which were indexed in major academic databases. Some of these journals were later found to be hijacked — illegitimately revived after their original publishers stopped operating. It’s something we’ve seen happen often in our own industry (journalism), as bankrupt legitimate legacy newspapers have been bought by shady venture capital, only to hijack the established brands into spam and affiliate marketing magnets.”
Paper mills doubled their output therefore the scientific community’s retractions are now doubling every 3.5 years. It’s outpacing legitimate science. Here’s a smart quote that sums up the situation: “Truth seeking has always been expensive, whereas fraud is cheap and fast.”
Fraudulent science is incredibly harmful. It leads to a ripple effect that has lasting ramifications on society. An example is paper from the COVID-19 pandemic that said hydroxychloroquine was a valid treatment for the virus. It indirectly led to 17,000 fatalities.
AI makes things worse because the algorithms are trained on the bad science like a glutton on sugar. But remember, kids, don’t cheat.
Whitney Grace, December 15, 2025
Can Sergey Brin Manage? Maybe Not?
December 12, 2025
True Reason Sergey Used “Super Voting Power”
Yuchen Jin, the CTO and co-founder of Hyperbolic Labs posted on X about recent situation at Google. According topmost, Sergey Brin was disappointed in how Google was using Gemini. The AI algorithm, in fact, wasn’t being used for coding and Sergey wanted it to be used for that.
It created a big tiff. Sergey told Sundar that, “I can’t deal with these people. You have to deal with this.” Sergey still owns Google and has super voting power. Translation: he can do whatever he darn well pleases with his own company.
Yuchin Jin summed it up well:
“Big companies always build bureaucracy. Sergey (and Larry) still have super voting power, and he used it to cut through the BS. Suddenly Google is moving like a startup again. Their AI went from “way behind” to “easily #1” across domains in a year.”
Congratulations to Google making a move that other Big Tech companies knew to make without the intervention of founder.
Google would have eventually shifted to using Gemini for coding. Sergey’s influence only sped it up. The bigger question is if this “tiff” indicates something else. Big companies do have bureaucracies but if older workers have retired, then that means new blood is needed. The current new blood is Gen Z and they are as despised as Millennials once were.
I think this means Sergey cannot manage young tech workers either. He had to turn to the “consultant” to make things happen. It’s quite the admission from a Big Tech leader.
Whitney Grace, December 12, 2025
The Waymo Trip: From Cats and Dogs Waymo to the Parking Lot
December 12, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I am reasonably sure that Google Waymo offers “way more” than any other self driving automobile. It has way more cameras. It has way more publicity. Does it have way more safety than — for instance, a Tesla confused on Highway 101? I don’t know.
I read “Waymo Investigation Could Stop Autonomous Driving in Its Tracks.” The title was snappy, but the subtitle was the real hook:
New video shows a dangerous trend for Waymo autonomous vehicles.
What’s the trend?
Weeks ago, the Austin Independent School District noticed a disturbing trend: Waymo vehicles were not stopping for school buses that had their crossing guard and stop sign deployed.
Oh, Google Waymo smart cars don’t stop for school buses. Kids always look before jumping off a school and dashing across a street to see their friends or rush home to scroll Instagram. Smart software definitely can predict the trajectories of school kids. Well, probability is involved, so there is a teeny tiny chance that a smart car might do the “kill the Mission District” cat. But the chance is teeny tiny.

Thanks, Venice.ai. Good enough.
The write up asserts:
The Austin ISD has been in communication with Waymo regarding the violations, which it reports have occurred approximately 1.5 times per week during this school year. Waymo has informed them that software updates have been issued to address the issue. However, in a letter dated November 20, 2025, the group states that there have been multiple violations since the supposed fix.
What’s with these people in Austin? Chill. Listen to some country western music. Think about moving back to the Left Coast. Get a life.
Instead of doing the Silicon Valley wizardly thing, Austin showed why Texas is not the center of AI intelligence and admiration. The story says:
On Dec. 1, after Waymo received its 20th citation from Austin ISD for the current school year, Austin ISD decided to release the video of the previous infractions to the public. The video shows all 19 instances of Waymo violating school bus safety rules. Perhaps most alarmingly, the violations appear to worsen over time. On November 12, a Waymo vehicle was recorded violating a law by making a left turn onto a street with a school bus, its stop signs and crossbar already deployed. There are children in the crosswalk when the Waymo makes the turn and cuts in front of them. The car stops for a second then continues without letting the kids pass.
Let’s assume that after 16 years of development and investment, the Waymo self driving software intelligence gets an F in school bus recognition. Conjuring up a vehicle that can doddle down 101 at rush hour driven by a robot is a Silicon Valley inspiration. Imagine. One can sit in the automobile, talk on the phone, fiddle with a laptop, or just enjoy coffee and a treat from Philz in peace. Just ignore the imbecilic drivers in other automobiles. Yes, let’s just think it and it will become real.
I know the idea sounds great to anyone who has suffered traffic on 101 or the Foothills, but crushing the Mission District stray cat is just a warm up. What type of publicity heat will maiming Billy or Sally whose father might be a big time attorney who left Seal Team 6 to enforce and defend city, county, state, and federal law? Cats don’t have lawyers. The parents of harmed children either do or can get one pretty easily.
Getting a lawyer is much easier than delivering on a dream that is a bit of nightmare after 16 years and an untold amount of money. But the idea is a good one. Sort of.
Stephen E Arnold, December 12, 2025
Students Cheat. Who Knew?
December 12, 2025
How many times are we going to report on this topic? Students cheat! Students have been cheating since the invention of school. With every advancement of technology, students adapt to perfect their cheating skills. AI was a gift served to them on a silver platter. Teachers aren’t stupid, however, and one was curious how many of his students were using AI to cheat, so he created a Trojan Horse. HuffPost told his story: “I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.”
There’s a big difference between recognizing AI and proving it was used. The teacher learned about a Trojan Horse: hiding hidden text inside a prompt. The text would be invisible because the font color would be white. Students wouldn’t see it but ChatGPT would. He unleashed the Trojan Horse and 33 essays out of 122 were automatically outed. Thirty-nine percent were AI-written. Many of the students were apologetic, while others continued to argue that the work was their own despite the Trojan Horse evidence.
AI literacy needs to be added to information literacy. The problem is that how to properly use AI is inconsistent:
“There is no consistency. My colleagues and I are actively trying to solve this for ourselves, maybe by establishing a shared standard that every student who walks through our doors will learn and be subject to. But we can’t control what happens everywhere else.”
Even worse is that some students don’t belief they’re actually cheating because they’re oblivious and stupid. He ends on an inspirational quote:
“But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognized that oppression is best maintained by keeping the masses illiterate, and those oppressed recognized that literacy is liberation. To my students and to anyone who might listen, I say: Don’t surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.”
Noble words for small minds.
Whitney Grace, December 12, 2025
AI Fact Checks AI! What A Gas…Lighting Event
December 12, 2025
Josh Brandon at Digital Trends was curious what would happen if he asked two chatbots to fact check each other. He shared the results in, “I Asked Google Gemini To Fact-Check ChatGPT. The Results Were Hilarious.” He brilliantly calls ChatGPT the Wikipedia of the modern generation. Chatbots spit out details like overconfident, self-assured narcissists. People take the information for granted.
ChatGPT tends to hallucinate fake facts and makes up great stories, while Google Gemini doesn’t create as many mirages. Brandon asked Gemini and ChatGPT about the history of electric cars, some historical information, and a few other things to see if they’d hallucinate. He found that the chatbots have trouble understanding user intent. They also wrongly attribute facts, although Gemini is correct more than ChatGPT. When it came to research questions, the results were laughable:
“Prompt used: ‘Find me some academic quotes about the psychological impact of social media.;
This one is comical and fascinating. ChatGPT invented so many details in a response about the psychological impact of social media that it makes you wonder what the bot was smoking. ‘This is a fantastic and dangerous example of partial hallucination, where real information is mixed with fabricated details, making the entire output unreliable. About 60% of the information here is true, but the 40% that is false makes it unusable for academic purposes.’”
Either AI’s iterations are not delivering more useful outputs or humans are now looking more critically at the technology and saying, “Not so fast, buckaroo.”
Whitney Grace, December 12, 2025
AI Year in Review: The View from an Expert in France
December 11, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I suggest you read “Stanford, McKinsey, OpenAI: What the 2025 Reports Tell Us about the Present and Future of AI (and Autonomous Agents) in Business.” The document is in French. You can get an okay translation via the Google or Yandex.
I have neither the energy nor the inclination to do a blue chip consulting type of analysis of this fine synthesis of multiple source documents. What I will do in this blog post is highlight several statements and offer a comment or two. For context, I have read some of the sources the author Fabrice Frossard has cited. M. Frossard is a graduate of Ecole Supérieure Libre des Sciences Commerciales Appliquées and the Ecole de Guerre Economique in Paris I think. Remember: I am a dinobaby and generally too lazy and inept to do “real” research. These are good places to learn how to think about business issues.
Let’s dive into his 2000 word write up.
The first point that struck me is that he include what I think is a point not given sufficient emphasis by the experts in the US. This theme is not forced down the reader’s throat, but it has significant implications for M. Frossard’s comments about the need to train people to use smart software. The social implication of AI and the training creates a new digital divide. Like the economic divide in the US and some other countries, crossing the border is not going to possible for many people. Remember these people have been trained to use the smart software deployed. When one cannot get from ignorance to informed expertise, that person is likely to lose a job. Okay, here’s the comment from the source document:
To put it another way: if AI is now everywhere, its real mastery remains the prerogative of an elite.
Is AI a winner today? Not a winner, but it is definitely an up and comer in the commercial world. M. Frossard points out:
- McKinsey reveals that nearly two thirds of companies are still stuck in the experimentation or piloting phase.
- The elite escaping: only 7% of companies have successfully deployed AI in a fully integrated manner across the entire organization.
- Peak workers use coding or data analysis tools 17 times more than the median user.
These and similar facts support the point that “the ability to extract value creates a new digital divide, no longer based on access, but on the sophistication of use.” Keep this in mind when it comes to learning a new skill or mastering a new area of competence like smart software. No, typing a prompt is not expert use. Typing a prompt is like using an automatic teller machine to get money. Basic use is not expert level capabilities.

If Mary cannot “learn” AI and demonstrate exceptional skills, she’s going to be working as an Etsy.com reseller. Thanks, Venice.ai. Not what I prompted but I understand that you are good enough, cash strapped, and degrading.
The second point is that in 2025, AI does not pay for itself in every use case. M. Frossard offers:
EBIT impact still timid: only 39% of companies report an increase in their EBIT (earnings before interest and taxes) attributable to AI, and for the most part, this impact remains less than 5%.
One interesting use case comes from a McKinsey report where billability is an important concept. The idea is that a bit of Las Vegas type thinking is needed when it comes to smart software. M. Frossard writes:
… the most successful companies [using artificial intelligence] are paradoxically those that report the most risks and negative incidents.
Takes risks and win big seems to be one interpretation of this statement. The timid and inept will be pushed aside.
Third, I was delighted to see that M. Frossard picked up on some of the crazy spending for data centers. He writes:
The cost of intelligence is collapsing: A major accelerating factor noted by the Stanford HAI Index is the precipitous fall in inference costs. The cost to achieve performance equivalent to GPT-3.5 has been divided by 280 in 18 months. This commoditization of intelligence finally makes it possible to make complex use cases profitable which were economically unviable in 2023. Here is a paradox: the more efficient and expensive artificial intelligence becomes produce (exploding training costs), the less expensive it is consume (free-fall inference costs). This mental model suggests that intelligence becomes an abundant commodity, leading not to a reduction, but to an explosion of demand and integration.
Several ideas bubble from this passage. First, we are back to training. Second, we are back to having significant expertise. Third, the “abundant commodity” idea produces greater demand. The problem (in addition to not having power for data centers will be people with exceptional AI capabilities).
Fourth, the replacement of some humans may not be possible. The essay reports:
the deployment of agents at scale remains rare (less than 10% in a given function according to McKinsey), hampered by the need for absolute reliability and data governance.
Data governance is like truth, love, and ethics. Easy to say and hard to define. The reliability angle is slightly less tricky. These two AI molecules require a catalyst like an expert human with significant AI competence. And this returns the essay to training. M. Frossard writes:
The transformation of skills: The 115K report emphasizes the urgency of training. The barrier is not technological, it is human. Businesses face a cultural skills gap. It’s not about learning to “prompt”, but about learning to collaborate with non-human intelligence.
Finally, the US has a China problem. M. Frossard points out:
… If the USA dominates investment and the number of models, China is closing the technical gap. On critical benchmarks such as mathematics or coding, the performance gap between the US and Chinese models has narrowed to nothing (less than 1 to 3 percentage points).
Net net: If an employee cannot be trained, that employee is likely to be starting a business at home. If the trained employees are not exceptional, those folks may be terminated. Elites like other elite things. AI may be good enough, but it provides an “objective” way to define and burn dead wood.
Stephen E Arnold, December 11, 2025
No Phones, Boys Get Smarter. Yeah
December 11, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I am busy with a new white paper, but one of my team called this short item to my attention. Despite my dislike of interruptions, “School Cell Phone Bans and Student Achievement” sparked my putting down one thing and addressing this research study. No, I don’t know the sample size, and I did not investigate it. No, I don’t know what methods were used to parse the information and spit out the graphic, and I did not invest time to poke around.

Young females having lunch with their mobile phones in hand cannot believe the school’s football star now gets higher test scores. Thanks, Midjourney. Good enough.
The main point of the research report, in my opinion, is to provide proof positive that mobile phones in classrooms interfere with student learning. Now, I don’t know about you, but my reaction is, “You did not know that?” I taught for a mercifully short time before I dropped out of my Ph.D. program and took a job at Halliburton’s nuclear unit. (Dick Cheney worked at Halliburton. Remember him?)
The write up from NBER.org states:
Two years after the imposition of a student cell phone ban, student test scores in a large urban school district were significantly higher than before.
But here’s the statement that caught my attention:
Test score improvements were also concentrated among male students (up 1.4 percentiles, on average) and among middle and high school students (up 1.3 percentiles, on average).
But what about the females? Why did this group not show “boy level” improvement? I don’t know much about young people in middle and high school. However, based on observation of young people at the Blaze discount pizza restaurant, females who seem to me to be in middle school and high school do three things simultaneously:
- Chatter excitedly with their friends
- Eat pizza
- Interact with their phones or watch what’s on the screen while doing [1] and [2].
I think more research is needed. I know from some previous research that females outperform males academically up to a certain age. How does mobile phone usage impact this data, assuming those data which I dimly recall are or were accurate? Do mobile devices hold males back until the mobiles are removed and then, like magic, do these individuals manifest higher academic performance?
Maybe the data in the NBER report are accurate, but the idea that males — often prone to playing games, fooling around, and napping in class — benefit more from a mobile ban than females is interesting. The problem is I am not sure that the statement lines up with my experience.
But I am a dinobaby, just one that is not easily distracted unless an interesting actual factual research reports catches my attention.
Stephen E Arnold, December 11, 2025
Social Media Companies: Digital Drug Pushers?
December 11, 2025
Social media is a drug. Let’s be real, it’s not a real drug but it affects the brain in the same manner as drugs and alcohol. Social media stimulates the pleasure centers of the brain, releases endorphins, and creates an immediate hit. Delayed gratification becomes a thing of the past as users are constantly seeking their thrills with instantaneous hits from TikTok, Snapchat, Instagram, Facebook, and YouTube.
Politico includes a quote from the recent lawsuit filed against Meta in Northern California that makes a great article title: “‘We’re Basically Pushers’: Court Filing Alleges Staff At Social Media Giants Compared Their Platforms To Drugs.” According to the lawsuit, Meta, Instagram, TikTok, Snapchat, and YouTube ignored their platforms’ potential dangers and hid them from users.
The lawsuit has been ongoing doe years and a federal judge ordered its contents to be opened in October 2025. Here are the details:
“The filing includes a series of detailed reports from four experts, who examined internal documents, research and direct communications between engineers and executives at the companies. Experts’ opinions broadly concluded that the companies knew their platforms were addictive but continued to prioritize user engagement over safety.”
It sounds like every big company ever. Money over consumer safety. We’re doomed.
Whitney Grace, December 11, 2025

