Titanic Talk: This Ship Will Not Fail
December 4, 2025
It’s too big to fail! How many times have we heard that phrase? There’s another common expression that makes more sense: the bigger they are, the harder they fall. On his blog, Will Gallego writes about that idea in “Big Enough To Fail.” Through a lot of big words (intelligently used, BTW), Gallego explains that big stuff fails all the time.
It’s actually a common occurrence, because crap happens. Outages occur daily, Mother Nature shows her wrath, acts of God happen, and systems fail due to mistakes. Gallego observes that we’ve accepted these issues, and he explains why:
- “It’s so exceptional (or feels that way). This is less so about frequency but that when a company becomes so big you just assume they’re impervious to failure, a shock and awe to the impossible.
- The lack of choices in services informs your response. Are there other providers? Sure, but with the continuous consolidation of businesses, we have fewer options every day.
- You’re locked in on your choices. Are you going to knock on Google’s door and complain, take three years to move out of one virtual data center and into another, while retraining your staff, updating your internal documents, and updating your code? No, you’re likely not.
- Failover is costly. Similarly, those at the sharp end know that the level of effort in building failover for something like this is frequently impractical. It would cost too much to set up, to maintain as developers, it would remove effort that could be put towards new features, and the financial cost backing that might be considered infeasible.
- The brittleness is everywhere. The level of complexity and the highly coupled nature of interconnected services means we’ve become brittle to failures. Doubly so when those services are the underpinnings of what we build on. “The internet is down today” as the saying goes, despite the internet having no principle nucleus. This is considered acceptable.
- We’re all in it together. When a service as large as these goes down, there’s a good chance we’re seeing so many failures in so many places that it becomes reasonable to also be down. Your competitors are likely down, your customers might be – there might be too much failure to go around to cast it in any one direction.”
Ultimately, this leads into resilience engineering, which is about “reframing how we look at incidents.” Gallego ends the article by saying we should take everything in stride, show some patience, and give the smaller players in the game a break. His approach is more human, aka realistic, unlike the egotistical rants that sank the Titanic. It’s unsinkable! It will not fail! Yes, it will. Prepare for the eventualities.
Whitney Grace, December 4, 2025
A New McKinsey Report Introduces New Jargon for Its Clients
December 3, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I read “Agents, Robots, and Us: Skill Partnerships in the Age of AI.” The write up explains that lots of employees will be terminated. Think of machines displacing seamstresses. AI is going to do that to jobs, lots of jobs.
I want to focus on a different aspect of the McKinsey Global Institute Report (a PR and marketing entity not unlike Telegram’s TON Foundation in my view).

Thanks, Venice.ai. Your cartoon contains neither females nor minorities. That’s definitely a good enough approach. But you have probably done a number on a few graphic artists.
First, the report offers you, the potential client, an opportunity to use McKinsey’s AI chatbot. The service is a test, but I have a hunch that it is not much of a test. The technology will be deployed so that McKinsey can terminate those who underperform in certain work-related tasks. The criteria for keeping one’s job at a blue chip consulting firm vary from company to company. But those who don’t quit to find greener or at least less crazy pastures will now work under the watchful eye of McKinsey AI. It takes a great deal of time to write a meaningful analysis of a colleague’s job performance. Let AI do it, with exceptions made for “special” hires of course. Give it a whirl.
Second, the report offers what I call consultant facts. These are statements which link the painfully obvious with a rationale. Let me give you an example from this pre-Thanksgiving sales document. McKinsey states:
Two thirds of US work hours require only nonphysical capabilities
The painfully obvious: Most professional work is not “physical.” That means 67 percent of an employee’s fully loaded cost can be shifted to smart or semi-smart, good enough AI agentic systems. Then the obvious statement and its implication of financial benefit are supported by a truly blue chip chart. I know because, as you can see, the graphics are blue. Here’s a segment of the McKinsey graph:

Notice that the chart is presented so that a McKinsey professional can explain the nested bar charts and expand on such data as “5 percent of a health care workforce can be agentized.” Will that resonate with hospital administrators working for a roll-up of individual hospitals? That’s big money. Get the AI working in one facility and then roll it out. Boom. An upside that seems credible. That’s the key to the consultant facts. Additional analysis is needed to tailor these initial McKinsey graph data to a specific use case. As a sales hook, this works and has worked for decades. Fish never understand hooks with plastic bait. Deer never quite master automobiles and headlights.
Third, the report contains sales and marketing jargon for 2026 and possibly beyond. McKinsey hopes for decades to come, I think. Here’s a small selection of the words that will be used, recycled, and surfaced in lectures by AI experts to quite large crowds of conference attendees:
AI adjacent capabilities
AI fluency
Embodied AI
HMC or human machine collaboration
High prevalence skills
Human-agent-robot roles
Technical automation potential
If you cannot define these, you need to hire McKinsey. If you want to grow as a big-time manager, send that email or a FedEx with a handwritten note on your engraved linen stationery.
Fourth, some humans will be needed. McKinsey wants to reassure its clients that software cannot replace the really valuable human. What do you think makes a really valuable worker beyond AI fluency? [a] A professional who signed off on a multi-million-dollar McKinsey consulting contract? [b] A person who helped McKinsey consultants get the needed data and interviews from an otherwise secretive client with compartmentalized and secure operating units? [c] A former McKinsey consultant now working for the firm to which McKinsey is pitching an AI project.
Fifth, the report introduces a new global index. The data in this free report is unlikely to be free in the future. McKinsey clients can obtain these data. This new global index is called the Skills Change Index. Here’s an example. You can get a bit more marketing color in the cited report. Just feast your eyes on this consultant-fact-packed chart:

Several comments. The weird bubble in the right-hand margin of the page is your link to the McKinsey AI system. Give it a whirl, please. Look at the wonderland of information in a single chart presented in true blue, “just the facts, ma’am” style. The hawk-eyed will see that “leadership” seems immune to AI. Obviously senior management smart enough to hire McKinsey will be AI fluent and know the score, or at least the projected financial payoff resulting from terminating staff who fail to up their game when robots do two thirds of the knowledge workers’ tasks.
Why has McKinsey gone to such creative lengths to create an item like this marketing collateral? Multiple teams labored on this online brochure. Graphic designers went through numerous versions of the sliding panels. McKinsey knows there is money in those AI studies. The firm will apply its intellectual method to the wizards who are writing checks to AI companies to build big data centers. Even Google is hedging its bets by packaging its data centers as offerings for super wary customers like NATO. Any company can benefit from AI fluency-centric efficiency inputs. Well, not any. The reason is that only companies that can pay McKinsey fees qualify to be clients.
The 11 people identified as the authors have completed the equivalent of a death march. Congratulations. I applaud you. At some point in your future career, you can look back on this document and take pride in providing a road map for companies eager to dump human workers for good enough AI systems. Perhaps one of you will be able to carry a sign in a major urban area that calls attention to your skills? You will be able to tell your friends and family, “I was part of this revolution.” So happy holidays to you, McKinsey, and to the other blue chip firms exploiting good enough smart software.
Stephen E Arnold, December 3, 2025
Meta: Flying Its Flag for Moving Fast and Breaking Things
December 3, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
Meta, a sporty outfit, is the subject of an interesting story in “Press Gazette,” an online publication. The article “News Publishers File Criminal Complaint against Mark Zuckerberg Over Scam Ads” asserts:
A group of news publishers have filed a police complaint against Meta CEO Mark Zuckerberg over scam Facebook ads which steal the identities of journalists. Such promotions have been widespread on the Meta platform and include adverts which purport to be authored by trusted names in the media.

Thanks, MidJourney. Good enough, the gold standard for art today.
I can anticipate the outputs from some Meta adherents; for example, “We are really, really sorry,” or “We have specific rules against fraudulent behavior and we will take action to address this allegation.” Or, “Please contact our legal representative in Sweden.”
The write up does not speculate as I just did in the preceding paragraph. The article takes a different approach, reporting:
According to Utgivarna: “These ads exploit media companies and journalists, cause both financial and psychological harm to innocent people, while Meta earns large sums by publishing the fraudulent content.” According to internal company documents, reported by Reuters, Meta earns around $16bn per year from fraudulent advertising. Press Gazette has repeatedly highlighted the use of well-known UK and US journalists to promote scam investment groups on Facebook. These include so-called pig-butchering schemes, whereby scammers win the trust of victims over weeks or months before persuading them to hand over money. [Emphasis added by Beyond Search]
On November 22, 2025, Time Magazine ran this allegedly accurate story “Court Filings Allege Meta Downplayed Risks to Children and Misled the Public.” In that article, the estimable social media company found itself in the news. That write up states:
Sex trafficking on Meta platforms was both difficult to report and widely tolerated, according to a court filing unsealed Friday. In a plaintiffs’ brief filed as part of a major lawsuit against four social media companies, Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that when she joined Meta in 2020 she was shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.”
I find it interesting that Meta is referenced in legal issues involving two particularly troublesome problems in many countries around the world. The one-two punch is sex trafficking and pig butchering. I — probably incorrectly — interpret these two allegations as kiddie crime and theft. But I am a dinobaby, and I am not savvy to the ways of the BAIT (big AI tech)-type companies. Getting labeled as a party to sex trafficking and pig butchering is quite interesting to me. Happy holidays to Meta’s PR and legal professionals. You may be busy and 100 percent billable over the holidays and into the new year.
Several observations may be warranted:
- There are some frisky BAIT outfits in Silicon Valley. Meta may well be competing for the title of Most Frisky Firm (MFF). I wonder what the prize is?
- Meta was annoyed with a “tell all” book written by a former employee. Meta’s pushback seemed a bit of a tell to me. Perhaps some of the information hit too close to the leadership of Meta? Now we have sex and fraud allegations. So…
- How will Facebook, Instagram, and WhatsApp innovate in ad sales once Meta’s AI technology is fully deployed? Will AI, for example, block ad sales that are questionable? AI does make errors, which might be a useful angle for Meta going forward.
Net net: Perhaps some journalist with experience in online crime will take a closer look at Meta. I smell smoke. I am curious about the fire.
Stephen E Arnold, December 3, 2025
Open Source Now for Rich Peeps
December 3, 2025
Once upon a time, open source was the realm of startups in a niche market. Nolan Lawson wrote about “The Fate Of ‘Small’ Open Source” on his blog Read The Tea Leaves. He explains that more developers are using AI in their work and that this is a step beyond how coding used to be done. He observed a societal change that has been happening since the invention of the Internet: “I do think it’s a future where we prize instant answers over teaching and understanding.”
Old-fashioned research is now an art that few decide to master except in some circumstances. However, that doesn’t help the open source libraries that built the foundation of modern AI and most systems. Lawson waxes poetic about the end of an era and wonders what the point is of writing something new in an old language. He uses a lot of big words and tech speak that most people won’t understand, but I did decipher that he’s upset that big corporations and AI chatbots are taking away the work.
He remains hopeful though:
“So if there’s a conclusion to this meandering blog post (excuse my squishy human brain; I didn’t use an LLM to write this), it’s just that: yes, LLMs have made some kinds of open source obsolete, but there’s still plenty of open source left to write. I’m excited to see what kinds of novel and unexpected things you all come up with.”
My squishy brain comprehends that the future is as bleak as the present, but it’s all relative, and it is what we decide to make it.
Whitney Grace, December 3, 2025
An SEO Marketing Expert Is an Expert on Search: AI Is Good for You. Adapt
December 2, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I found it interesting to learn that a marketer is an expert on search and retrieval. Why? The expert has accrued 20 years of experience in search engine optimization, aka SEO. I wondered, “Was this 20 years of diverse involvement in search and retrieval, or one year of SEO wizardry repeated 20 times?” I don’t know.
I spotted information about this person’s view of search in a newsletter from a group whose name I do not know how to pronounce. (I don’t know much.) The entity does business as BrXnd.ai. After some thought (maybe two seconds) I concluded that the name represented the concept “branding” with a dollop of hipness or AI.
Am I correct? I don’t know. Hey, that’s three admissions of intellectual failure in 10 seconds. Full disclosure: I know no one cares.

Agentic SEO will put every company on the map. Relevance will become product sales. The methodology will be automated. The marketing humanoids will get fat bonuses. The quality of information available will soar upwards. Well, probably downwards. But no worries. Thanks, Venice.ai. Good enough.
The article is titled “The Future of Search and the Death of Links // BRXND Dispatch vol 96.” It points to a video called “The Future of Search and the Death of Links.” You can view the 22-minute talk at this link. Have at it, please.
Let me quote from the BrXnd.ai write up:
…we’re moving from an era of links to an era of recommendations. AI overviews now appear on 30-40% of search results, and when they do, clicks drop 20-40%. Google’s AI Mode sends six times fewer clicks than traditional search.
I think I have heard that Google handles 75 to 85 percent of global searches. If these data are on the money or even close to the eyeballs Google’s advertising money machine flogs, the estimable company will definitely be [a] pushing for subscriptions to anything and everything it once subsidized with oodles of advertisers’ cash; [b] sticking price tags on services positioned as free; [c] charging YouTube TV viewers the way disliked cable TV companies squeezed subscribers for money; [d] praying to the gods of AI that the next big thing becomes a Google sandbox; and [e] embracing its belief that it can control governments and neuter regulators with more than 0.01 milliliters of testosterone.
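How big a dent could the quoted figures make? Here is a back-of-envelope calculation using the midpoints of the quoted ranges (the midpoints are my assumption, not BrXnd’s):

```python
# Rough math on the dispatch's numbers. Midpoints are my own guesses.
overview_share = 0.35  # AI overviews on ~30-40% of search results
click_drop = 0.30      # clicks fall ~20-40% when an overview appears

# Expected loss in total organic clicks across all searches:
total_click_loss = overview_share * click_drop
print(f"overall click volume down roughly {total_click_loss:.1%}")  # ~10.5%
```

Call it a 10 percent haircut on the traffic that feeds the advertising money machine. No wonder options [a] through [e] are on the table.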
The write up states:
When search worked through links, you actively chose what to click—it was manual research, even if imperfect. Recommendations flip that relationship. AI decides what you should see based on what it thinks it knows about you. That creates interesting pressure on brands: they can’t just game algorithms with SEO tricks anymore. They need genuine value propositions because AI won’t recommend bad products. But it also raises questions about what happens to our relationship with information when we move from active searching to passive receiving.
Okay, let’s work through a couple of the ideas in this quoted passage.
First, clicking on links is indeed a semi-difficult, manual job. (Wow. Take a break from entering 2.3 words and looking for a likely source on the first page of search results. Demanding work indeed.) However, what if those links are biased by inept programmers, by the biases of the data set processed by the search machine, or by intentional manipulation that weaponizes content to achieve a goal?
Second, the hook for the argument is that brands can no longer game algorithms. Bid farewell to keyword stuffing. There is a new game in town: putting a content object in as many places as possible in multiple formats, including the knowledge nugget delivered by TikTok-type services. Most people, it seems, don’t think about this and rely on consultants to help them.
Finally, the notion of moving from clicking and reading to letting a BAIT (big AI tech) company define one’s knowledge universe strikes me as something that SEO experts don’t find problematic. Good for them. Like me, the SEO mavens think the business opportunities for consulting, oddball metrics, and ineffectual work will be rewarding.
I appreciate BrXnd.ai for giving me this glimpse of the search and retrieval utopia I will now have available. Am I excited? Yeah, sure. However, I will not be dipping into the archive of the 95 previous issues of BrXnd “dispatches.” I know this to be a fact.
Stephen E Arnold, December 2, 2025
AI-Yai-Yai: Two Wizards Unload on What VCs and Consultants Ignore
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I read “Ilya Sutskever, Yann LeCun and the End of Just Add GPUs.” The write up is unlikely to find too many accelerationists printing it out and handing it to their pals at Philz Coffee. What does this indigestion maker say? Let’s take a quick look.
The write up says:
Ilya Sutskever – co-founder of OpenAI and now head of Safe Superintelligence Inc. – argued that the industry is moving from an “age of scaling” to an “age of research”. At the same time, Yann LeCun, VP & Chief AI Scientist at Meta, has been loudly insisting that LLMs are not the future of AI at all and that we need a completely different path based on “world models” and architectures like JEPA. [Beyond Search note because the author of the article was apparently making assumptions about what readers know: JEPA is shorthand for Joint Embedding Predictive Architecture. The idea is to find a recipe that lets machines learn about the world the way a human does.]
I like to try to make things simple. Simple things are easier for me to remember. This passage means: Dead end. New approaches needed. Your interpretation may be different. I want to point out that my experience with LLMs in the past few months has left me with a sense that a “No Outlet” sign is ahead.

Thanks, Venice.ai. The signs are pointing in weird directions, but close enough for horseshoes.
Let’s take a look at another passage in the cited article.
“The real bottleneck [is] generalization. For Sutskever, the biggest unsolved problem is generalization. Humans can:
learn a new concept from a handful of examples
transfer knowledge between domains
keep learning continuously without forgetting everything
Models, by comparison, still need:
huge amounts of data
careful evals (sic) to avoid weird corner-case failures
extensive guardrails and fine-tuning
Even the best systems today generalize much worse than people. Fixing that is not a matter of another 10,000 GPUs; it needs new theory and new training methods.”
I assume that, to AI wizards, “generalization” carries this freight of meaning. For me, this is a big word way of saying, “Current AI models don’t work or perform like humans.” I do like the clarity of “needs new theory and new training methods.” The “old” way of training has not made too many pals among those who hold copyright, in my opinion. The article calls this “new recipes.”
Yann LeCun points out:
LLMs, as we know them, are not the path to real intelligence.
Yann LeCun likes world models, which have these attributes (a toy code sketch of the core idea follows the list):
- “learn by watching the world (especially video)
- build an internal representation of objects, space and time
- can predict what will happen next in that world, not just what word comes next”
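To make “predict what will happen next in that world” concrete, here is a minimal numpy sketch of a JEPA-style objective: encode two consecutive observations and score the prediction error in latent space rather than pixel space. The encoder, predictor, dimensions, and toy data are my illustrative stand-ins, not Meta’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LATENT = 64, 8  # toy "frame" and latent sizes (arbitrary)

W_enc = 0.1 * rng.normal(size=(D_LATENT, D_IN))       # stand-in encoder
W_pred = 0.1 * rng.normal(size=(D_LATENT, D_LATENT))  # stand-in predictor

def encode(x):
    # Map an observation (e.g., a video frame) into latent space.
    return np.tanh(W_enc @ x)

def predict(z):
    # Guess the latent of the next observation from the current latent.
    return W_pred @ z

frame_t = rng.normal(size=D_IN)                      # the world now
frame_next = frame_t + 0.05 * rng.normal(size=D_IN)  # a moment later

# The JEPA twist: the prediction error lives in representation space,
# not raw pixels. (Real systems use a separate, slow-moving target
# encoder; one shared encoder keeps this toy short.)
z_pred = predict(encode(frame_t))
z_target = encode(frame_next)
loss = float(np.mean((z_pred - z_target) ** 2))
print(f"latent prediction error: {loss:.4f}")
```

The word-guessing LLM objective never appears; the model is judged on how well it anticipates the world’s next state.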
What’s the fix? You can navigate to the cited article and read the punch line to the experts’ views of today’s AI.
Several observations are warranted:
- Lots of money is now committed to what strikes these experts as dead ends
- The move fast and break things believers are in a spot where they may be going too fast to stop when the “Dead End” sign comes into view
- AI companies will likely keep demonstrating that they can wish, think, and believe they have the next big thing while operating with a willing suspension of disbelief.
I wonder if the positions presented in this article provide some insight into Google’s building dedicated AI data centers for big-buck, security-conscious clients like NATO and into Pavel Durov’s decision to build the SETI-type system he has announced.
Stephen E Arnold, December 2, 2025
Palantir Channels Moses, Blue Chip Consulting Baloney, and PR
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
Palantir Technologies is a company in search of an identity. You may know the company latched onto the Lord of the Rings as a touchstone. The Palantir team adopted the “seeing stone.” The idea was that its technology could do magical things. There are several hundred companies with comparable technology. Datawalk has suggested that its system is the equivalent of Palantir’s. Is this true? I don’t know, but when one company is used by another company to make sales, it suggests that Palantir has done something of note.
I am thinking about Palantir because I did a small job for i2 Ltd. when Mike Hunter was still engaged with the firm. Shortly after this interesting work, I learned that Palantir was engaged in litigation with i2 Ltd. The allegations included Palantir’s setting up a straw man company to license i2 Ltd.’s Analyst’s Notebook software development kit. i2 was the ur-intelware. Many of the companies marketing link analysis, analytics focused on making sense of call logs, and other arcana of little interest to most people are relatives of i2. Some acknowledge this bloodline. Others, particularly young intelware company employees working trade shows, just look confused if I mention i2 Ltd. Time is like sandpaper. Facts get smoothed, rounded, or worn to invisibility.

We have an illustration output by MidJourney. It shows a person dressed in a wardrobe that is out of step with traditional business attire. The machine-generated figure is trying to convince potential customers that the peculiarly garbed speaker can be trusted. The sign would have been viewed as good marketing centuries ago. Today it is just peculiar, possibly desperate on some level.
I read “Palantir Uses the ‘5 Whys’ Approach to Problem Solving — Here’s How It Works.” What struck me about the article is that Palantir’s CEO Alex Karp is recycling business school truisms as the insights that have powered the company to record government contracts. Toyota was one of the first companies to focus on asking “why” questions. That firm tried to approach selling automobiles in a way different from the American auto giants. US firms were the world leaders when Toyota was cranking out cheap vehicles. The company pushed songs, communal exercise, and ideas different from those of the chrome trim crowd in Detroit; for example, humility, something called genchi genbutsu or go and see first hand, employee responsibility regardless of paygrade, continuous improvement (usually not adding chrome trim), and thinking beyond quarterly results. To an American, Mr. Toyoda’s ideas were nutso.
The write up reports:
Karp is a firm believer in the Five Whys, a simple system that aims to uncover the root cause of an issue that may not be immediately apparent. The process is straightforward. When an issue arises, someone asks, “Why?” Whatever the answer may be, they ask “why?” again and again until they have done so five times. “We have found is that those who are willing to chase the causal thread, and really follow it where it leads, can often unravel the knots that hold organizations back” …
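To show how little is under the hood, here is the Five Whys as code. This is my own toy sketch in Python, not anything Palantir ships; the function name and prompt wording are invented for illustration:

```python
# The entire Five Whys "methodology" as a short interactive loop.
# A toy sketch only: prompts, names, and the bail-out rule are mine.
def five_whys(problem: str) -> list[str]:
    chain = [problem]
    for i in range(1, 6):
        answer = input(f"Why #{i}: why did '{chain[-1]}' happen? ").strip()
        if not answer:  # teams often run out of causes before round five
            break
        chain.append(answer)
    return chain  # the last entry is the candidate root cause

if __name__ == "__main__":
    chain = five_whys("The release slipped a quarter")
    print(" -> ".join(chain))
```

A dozen lines. No seeing stone required.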
The article adds this bit of color:
Palantir’s culture is almost as iconoclastic as its leader.
We have the Lord of the Rings, we have a Japanese auto company’s business method, and we have the CEO as an iconoclast.
Let’s think about this type of PR. Obviously Palantir and its formal and informal “leadership” want to be more than an outfit known for ending up in court as a result of a less-than-intelligent end run around an outfit whose primary market was law enforcement and intelligence professionals. Palantir is in the money, or at least money from government contracts, and it rarely mentions its long march to today’s apparent success. The firm was founded in May 2003. After a couple of years, Palantir landed its first customer: the US Central Intelligence Agency.
The company ingested about $3 billion in venture funding and reported its first profitable quarter in 2022. That’s 19 years, one interesting legal dust-up, and numerous attempts to establish long-term relationships with its “customers.” Palantir did some work for do-good outfits. It tried its hand at commercial projects. But the firm remained anchored to government agencies in the US and the UK.
But something was lacking. The article is part of a content marketing campaign to make the firm’s CEO a luminary among technology leaders. Thus, we have myth building blocks like the Five Whys. These are not exactly intellectual home runs. The method is not proprietary. The method breaks down in many engagements. People don’t know why something happened. Consultants or forward deployed engineers scurry around trying to figure out what’s going on. At some blue chip consulting firms, trotting out Toyota’s precepts as a way to deal with social media cyber security threats might result in the client saying, “No, thanks. We need a less superficial approach.”
I am not going to get a T shirt that says, “The knots that hold organizations back.” I favor

From my point of view, there are a couple of differences between Toyota in its “why” era and Palantir today; for instance, Toyota was into measured, mostly disciplined process improvement. Palantir is more like the “move fast, break things” Silicon Valley outfit. Toyota was reasonably transparent about its processes. I did see the lights-out factory near the Tokyo airport which was off limits to Kentucky people like me. Palantir is in my mind associated with faux secrecy, legal paperwork, and those i2-related sealed documents.
Net net: Palantir’s myth making PR campaign is underway. I have no doubt it will work for many people. Good for them.
Stephen E Arnold, December 2, 2025
AI Breaks Career Ladders
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
My father used to tell me that it was important to work at a company and climb the career ladder. I did not understand the concept. In my first job, I entered at a reasonably elevated level. I reported to a senior vice president and was given a big, messy project to fix up and make successful. In my second job, I was hired to report to the president of a “group” of companies. I don’t think I had a title. People referred to me as Mr. X’s “get things done person.” My father continued to tell me about the career ladder, but it did not resonate with me.
Thanks, Venice.ai. I fired five prompts before you came close to what I specified. Good work, considering.
Only later, when I ran my own small consulting firm, did the concept connect. I realized that as people worked on tasks, some demonstrated exceptional skill. I tried to find ways to expand those individuals’ capabilities. I think I succeeded, and several have contacted me years after I retired to tell me they were grateful for the opportunities I provided.
Imagine my surprise when I read “The Career Ladder Just Got Terminated: AI Kills Jobs Before They’re Born.” I understand. Co-workers have no way to learn and earn the right to pursue different opportunities in order to grow their capabilities.
The write up says:
Artificial intelligence isn’t just taking jobs. It’s removing the rungs of the ladder that turn rookies into experts.
Here’s a statement from the rock and roll magazine that will make some young, bright-eyed overachievers nervous:
In addition to making labor more efficient, it [AI] actually makes labor optional. And the disruption won’t unfold over generations like past revolutions; it’s happening in real time, collapsing decades of economic evolution into a few short years.
Forget optional. If software can replace hard to manage, unpredictable, and good enough humans, AI will get the nod. The goal of most organizations is to generate surplus cash. Then that cash is disbursed to stakeholders, deserving members of the organization’s leadership, and lavish off site meetings, among other important uses.
Here’s another passage that will unintentionally make art history majors, programmers, and, yes, even some MBAs with the right stuff think about becoming a plumber:
And this AI job problem isn’t confined to entertainment. It’s happening in law, medicine, finance, architecture, engineering, journalism — you name it. But not every field faces the same cliff. There’s one place where the apprenticeship still happens in real time: live entertainment and sports.
Perhaps there will be an MBA Comedy Club? Maybe some computer scientists will lean into their athletic prowess for table tennis or quoits?
Here’s another cause of heartburn for the young job hunter:
Today, AI isn’t hunting our heroes; it’s erasing their apprentices before they can exist. The bigger danger is letting short-term profits dictate our long-term cultural destiny. If the goal is simply to make the next quarter’s numbers look good, then automating and cutting is the easy answer. But if the goal is culture, originality and progress, then the choice is just as clear: protect the training grounds, take risks on the unknown and invest in the people who will surprise us.
I don’t see the BAIT (big AI technology companies) leaning into altruistic behavior for society. These outfits want to win, knock off the competition, and direct the masses to work within the bowling alley of life between two gutters. Okay, job hunters, have at it. As a dinobaby, I have no idea what impact AI’s early days will have on job hunting. Did I mention plumbing?
Stephen E Arnold, December 2, 2025
China Smart US Dumb: An AI Content Marketing Push?
December 1, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I have been monitoring the China Smart, US Dumb campaign for some time. Most of the methods are below the radar; for example, YouTube videos featuring industrious people who seem to be similar to the owner of the Chinese restaurant not far from my office, or posts on social media that remind me of the number of Chinese patents granted each year. Sometimes influencers tout the wonders of a China-developed electric vehicle. None of these sticks out like a semi-mainstream media push.

Thanks, Venice.ai, not exactly the hutong I had in mind but close enough for kung pao chicken in Kentucky.
However, that background “China Smart, US Dumb” messaging may be cranking up. I don’t know for sure, but this NBC News (not the Miss Now news) report caught my attention.
The subtitle is snappier than Girl Fixes Generator, but you judge for yourself:
AI Startups Are Seeing Record Valuations, But Many Are Building on a Foundation of Cheap, Free-to-Download Chinese AI Models.
The write up states:
Surveying the state of America’s artificial intelligence landscape earlier this year, Misha Laskin was concerned. Laskin, a theoretical physicist and machine learning engineer who helped create some of Google’s most powerful AI models, saw a growing embrace among American AI companies of free, customizable and increasingly powerful “open” AI models.
We have a Xoogler who is concerned. What troubles the wizardly Misha Laskin? NBC News intones in a Stone Phillips tone:
Over the past year, a growing share of America’s hottest AI startups have turned to open Chinese AI models that increasingly rival, and sometimes replace, expensive U.S. systems as the foundation for American AI products.
Ever cautious, NBC News asserts:
The growing embrace could pose a problem for the U.S. AI industry. Investors have staked tens of billions on OpenAI and Anthropic, wagering that leading American artificial intelligence companies will dominate the world’s AI market. But the increasing use of free Chinese models by American companies raises questions about how exceptional those models actually are — and whether America’s pursuit of closed models might be misguided altogether.
Bingo! The theme is China smart and the US “misguided.” And not just misguided, but “misguided altogether.”
NBC News slams the point home with more force than the generator-repairing Asian female closes the generator’s housing:
in the past year, Chinese companies like Deepseek and Alibaba have made huge technological advancements. Their open-source products now closely approach or even match the performance of leading closed American models in many domains, according to metrics tracked by Artificial Analysis, an independent AI benchmarking company.
I know from personal conversations that most of the people with whom I interact don’t care. Most just accept the belief that the US is chugging along. Not doing great. Not doing terribly. Just moving along. Therefore, I don’t expect you, gentle reader, to think much of this NBC News report.
That’s why the China Smart, US Dumb messaging is effective. But this single example raises the question, “What’s the next major messaging outlet to cover this story?”
Stephen E Arnold, December 1, 2025
AI ASICs: China May Have Plans for AI Software and AI Hardware
December 1, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy, that I used smart software. I am a grandpa but not a Grandma Moses.
I try to avoid wild and crazy generalizations, but I want to step back from the US-centric AI craziness and ask a question: “Why is the solution to anticipated AI growth more data centers?” Data centers seem like a trivial part of the broader AI challenge to some of the venture firms, BAIT (big AI technology) companies, and some online pundits. Building a data center means putting up a cheap building filled with racks of computers, some specialized gizmos, a connection to the local power company, and a handful of network engineers. Bingo. You are good to go.
But what happens if the compute is provided by Application-Specific Integrated Circuits or ASICs? When ASICs became available for cryptocurrency mining, individual or small-scale mining was no longer attractive. What happened is that large, industrialized crypto mining farms pushed out the individual miners and mom-and-pop data centers.
The Ghana ASIC rollout appears to have overwhelmed the person taking orders. Demand for cheap AI compute is strong. Is that person in the blue suit from Nvidia? Thanks, MidJourney. Good enough, the mark of excellence today.
Amazon, Google, and probably other BAIT outfits want to design their own AI chips. The problem is similar to moving silos of corn to a processing plant with a couple of pickup trucks. Capacity at chip fabrication facilities is constrained. Big chip ideas today may not be possible on the time scale set by the teams designing NFL-arena-size data centers in Rhode Island- or Mississippi-type locations.
One write up explains:
A Chinese startup founded by a former Google engineer claims to have created a new ultra-efficient and relatively low cost AI chip using older manufacturing techniques. Meanwhile, Google itself is now reportedly considering whether to make its own specialized AI chips available to buy. Together, these chips could represent the start of a new processing paradigm which could do for the AI industry what ASICs did for bitcoin mining.
What those ASICs did for crypto mining was shift calculations from individuals to large, centralized data centers. Yep, centralization is definitely better. Big is a positive as well.
The write up adds:
The Chinese startup is Zhonghao Xinying. Its Ghana chip is claimed to offer 1.5 times the performance of Nvidia’s A100 AI GPU while reducing power consumption by 75%. And it does that courtesy of a domestic Chinese chip manufacturing process that the company says is "an order of magnitude lower than that of leading overseas GPU chips." By "an order of magnitude lower," the assumption is that means well behind in technological terms given China’s home-grown chip manufacturing is probably a couple of generations behind the best that TSMC in Taiwan can offer and behind even what the likes of Intel and Samsung can offer, too.
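Taken at face value, the quoted claims imply a striking efficiency jump. Here is the arithmetic, using only the article’s own figures (whether they survive independent benchmarking is another question):

```python
# Performance per watt implied by the quoted Ghana chip claims,
# normalized against Nvidia's A100. Inputs come from the quote above.
relative_performance = 1.5  # "1.5 times the performance" of an A100
relative_power = 0.25       # "reducing power consumption by 75%"

perf_per_watt = relative_performance / relative_power
print(f"implied performance per watt vs. A100: {perf_per_watt:.1f}x")  # 6.0x
```

Six times the performance per watt, on an older process, would be remarkable if it holds up.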
The idea is that if these chips become widely available, they won’t be very good. Probably like the first Chinese BYD electric vehicles. But after some iterative engineering, the Chinese chips are likely to improve. If these improvements coincide with the turn on of the massive data centers the BAIT outfits are building, there might be rethinking required by the Silicon Valley wizards.
Several observations will be offered, though these are probably not viewed as warranted by anyone other than myself:
- China might subsidize its home-grown chips. The Googler is not the only person in the Middle Kingdom trying to find a way around the US approach to smart software. Cheap wins or is disruptive until neutralized in some way.
- New data centers based on the Chinese chips might find customers interested in stepping away from dependence on a technology that most AI companies are using for “me too,” imitative AI services. Competition is good, says Silicon Valley, until it impinges on our business. At that point, tough-to-predict actions come into play.
- Nvidia and other AI-centric companies might find themselves trapped in AI strategies that are comparable to a large US aircraft carrier. These ships are impressive, but it takes time to slow them down, turn them, and steam in a new direction. If Chinese AI ASICs hit the market and improve rapidly, the captains of the US-flagged Transformer vessels will have their hands full, and their financial officers will be clamoring for leadership’s attention.
Net net: Ponder this question: What is Ghana gonna do?
Stephen E Arnold, December 1, 2025

