AI Doubters: You Fall Short. Just Get With the Program

November 21, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Watching the Google strike terror in the heart of Sam AI-Man is almost as good as watching a mismatch in bare knuckle fights broadcast on free TV. Promoters have a person who appears fit and mean. The opponent usually looks less physically imposing and often has a neutral or slightly frightened expression. After a few minutes, the big person wins.

Is the current state of AI like a bare knuckles fight?

Here’s another example. A math whiz in a first-year algebra class is asked by the teacher, “Why didn’t you show your work?” The young person looks confused and says, “The answer is obvious.” The teacher says you have to show your work. The 13-year-old replies, “There is nothing to show. The answer just is.”


A young wizard has no use for an old fuddy-duddy who wants to cling to the past. The future leadership gem thinks, “Dude, I am in Hilbert space.”

I thought that BAIT executives had outgrown or at least learned to mask their ability to pound the opponent to the canvas and figured out how to keep their innate superiority in check. Not surprisingly, I was wrong.

My awareness of the mismatch surfaced when I read “Microsoft AI CEO Puzzled by People Being Unimpressed by AI.” The hyperbole surrounding AI or smart software is the equivalent of the physically fit person pummeling an individual, one probably better suited to work as an insurance clerk, into the emergency room. It makes clear that the whiz kid in math class has no clue that other people do not see what “just is.”

Let’s take a look at a couple of statements in the article.

I noted this allegedly accurate passage:

It cracks me up when I hear people call AI underwhelming. I grew up playing Snake on a Nokia phone! The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mind blowing to me.

What? You haven’t succumbed to the marketing punches yet? And you don’t get it? I can almost hear a voice saying, “Yep, you, Mr. Dinobaby, are a loser.” The person saying “cracks me up” is the notable Mustafa Suleyman. He is Microsoft’s top dog in smart software. He is famous in AI circles. He does not understand this “show your work” stuff. He would be a very good bet in a bare knuckles contest is my guess.

A second snippet:

Over in the comments, some users pushed back on the CEO’s use of the word “unimpressed,” arguing that it’s not the technology itself that fails to impress them, but rather Microsoft’s tendency to put AI into everything just to appease shareholders instead of focusing on the issues that most users actually care about, like making Windows’ UI more user-friendly similar to how it was in Windows 7, fixing security problems, and taking user privacy more seriously.

The second snippet is a response to Mr. Suleyman’s bafflement. The idea that 50-year-old Microsoft is reinventing itself with AI troubles the person who brings up Windows’ issues. SolarWinds is officially put to bed, pummeled by tough lawyers and the news cycle. The snippet also brings up an idea that strikes some as ludicrous; specifically, paying attention to what users want.

Several observations:

  1. Microsoft and other AI firms know what’s best for me and you.
  2. The AI push is a somewhat overwrought attempt to make a particular technical system the next big thing. The idea is that if we say it and think it and fund it, AI will be like electricity, the Internet, and an iPhone.
  3. The money at stake means that those who do not understand the value of smart software are obstructionists. These individuals and organizations will have to withstand the force of the superior combatants.

Will AI beat those who just want software to help them complete a task, not generate made-up or incorrect outputs, and allow people to work in a way that is comfortable to them? My hunch is that users of software will have to get with the program. The algebra teacher will, one way or another, fail to contain the confidence, arrogance, and intelligence of the person who states, “It just is.”

Stephen E Arnold, November 21, 2025

AI Spending Killing Jobs, Not AI Technology

November 21, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

Fast Company published “AI Isn’t Replacing Jobs. AI Spending Is.” The job losses are real. Reports from recruiting firms and anecdotal information make it clear that those over 55 are at risk and most of those under 23 are likely to be candidates for mom’s basement or van life.


Thanks, Venice.ai. Pretty lame, but I grew bored with trying different prompts.

The write up says:

From Amazon to General Motors to Booz Allen Hamilton, layoffs are being announced and blamed on AI. Amazon said it would cut 14,000 corporate jobs. United Parcel Service (UPS) said it had reduced its management workforce by about 14,000 positions over the past 22 months. And Target said it would cut 1,800 corporate roles. Some academic economists have also chimed in: The St. Louis Federal Reserve found a (weak) correlation between theoretical AI exposure and actual AI adoption in 12 occupational categories.

Then the article delivers an interesting point:

Yet we remain skeptical of the claim that AI is responsible for these layoffs. A recent MIT Media Lab study found that 95% of generative AI pilot business projects were failing. Another survey by Atlassian concluded that 96% of businesses “have not seen dramatic improvements in organizational efficiency, innovation, or work quality.” Still another study found that 40% of the business people surveyed have received “AI slop” at work in the last month and that it takes nearly two hours, on average, to fix each instance of slop. In addition, they “no longer trust their AI-enabled peers, find them less creative, and find them less intelligent or capable.”

Here’s the interesting conclusion or semi-assertion:

When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting.

Yep, AI spending is not producing revenue. The sheep herd is following AI. But fodder is expensive. Therefore, cull the sheep. Wool sweaters at a discount, anyone? Then the skepticism of a more or less traditional publishing outfit surfaces; to wit:

The wild exaggerations from LLM promoters certainly help them raise funds for their quixotic quest for artificial general intelligence. But it brings us no closer to that goal, all while diverting valuable physical, financial, and human resources from more promising pursuits.

Several observations are probably unnecessary, but I, as an official dinobaby, choose to offer them herewith:

  1. The next big thing that has been easy to juice is AI. Is it the next big thing? Nope, it is utility software. Does anyone need multiple utility applications? Nope. Does anyone want multiple utility tools that do mostly the same thing with about the same amount of made-up and incorrect outputs? Nope.
  2. The drivers for AI are easy to identify: [a] It was easy to hype, [b] People like the idea of a silver bullet until the bullets misfire and blow off the shooter’s hand or blind the gun lover, [c] No other “next big thing” is at hand.
  3. Incorrect investment decisions are more problematic than diversified investment decisions. What do oligopolistic outfits do? Lead their followers. If we think in terms of sheep, there are a lot of sheep facing a very steep cliff.

Net net: Only a couple of sheep will emerge as Big Sheep. The other sheep? Well, if not a sweater, how about a lamb chop? Oops. Some sheep may not want to become food items on a Styrofoam tray wrapped in plastic with a half-off price tag. Imagine that.

Stephen E Arnold, November 21, 2025

Waymo Mows Down a Mission Cat

November 21, 2025

Cat lovers in San Francisco have a new reason to be angry at Waymo, Google’s self-driving car division. The outrage has reached all the way to the UK, where the Metro reports, “Robotaxi Runs Over and Kills Popular Cat that Greeted People in a Corner Shop.” Reporter Sarah Hooper writes:

“KitKat, the beloved pet cat at Randa’s Market, was run over by an automated car on October 27. He was rushed to a hospital by a bartender working nearby, but was pronounced dead. KitKat’s death has sparked an outpouring of fury and sadness from those who loved him – and questions about the dangers posed by self-driving cars. Randa’s Market owner Mike Zeidan told Rolling Stone: ‘He was a special cat. You can tell by the love and support he’s getting from the community that he was amazing.’ San Francisco Supervisor Jackie Fielder spoke out publicly, saying: ‘Waymo thinks they can just sweep this under the rug and we will all forget, but here in the Mission, we will never forget our sweet KitKat.’ Anger in the community has increased after it was revealed that on the same day KitKat was killed, Waymo co-CEO Tekedra Mawakana said she thought society is ‘ready to accept deaths’ caused by automated cars. But KitKat’s owner pointed out that next time, the death could be that of a child, not just a beloved pet.”

Good point. In a statement, the company insists the tabby “darted” under the car as it pulled away. Perhaps. But do the big dogs at Google really feel “deepest sympathies” for those grieving their furry friend, as the statement claims? It was one of them, after all, who asserted the world is ready to trade deaths for her firm’s technology.

Curious readers can navigate to the write-up to see a couple photos of the charismatic kitty.

Cynthia Murrell, November 21, 2025

Data Centers: Going Information Dark

November 21, 2025

Data Center NDAs: Keeping Citizens in the Dark Until the Ink is Dry

Transparency is a dirty word in Silicon Valley. And now, increasingly, across the country. NBC News discusses “How NDAs Keep AI Data Center Details Hidden from Americans.” Reporter Natalie Kainz tells us about Dr. Timothy Grosser of Mason County, Kentucky, who turned down a generous but mysterious offer to buy his 250-acre farm. Those who brought him the proposal refused to tell him who it came from or what the land would be used for. They asked him to sign a non-disclosure agreement before revealing such details. The farmer, who has no intention of selling his land to anyone for any price, adamantly refused. Later, he learned a still-unnamed company is scouting the area for a huge data center. Kainz writes:

“Grosser experienced firsthand what has become a common but controversial aspect of the multibillion-dollar data center boom, fueled by artificial intelligence services. Major tech companies launching the huge projects across the country are asking land sellers and public officials to sign NDAs to limit discussions about details of the projects in exchange for morsels of information and the potential of economic lifelines for their communities. It often leaves neighbors searching for answers about the futures of their communities. … Those in the data center industry argue the NDAs serve a particular purpose: ensuring that their competitors aren’t able to access information about their strategies and planned projects before they’re announced. And NDAs are common in many types of economic development deals aside from data centers. But as the facilities have spread into suburbs and farmland, they’ve drawn pushback from dozens of communities concerned by how they could upend daily life.”

Such concerns include inflated electricity prices, water shortages, and air pollution. We would add the dangerous strain on power grids and substantial environmental damage. Residents are also less than thrilled about sights and sounds that would spoil their areas’ natural beauty.

Companies say the NDAs are required to protect trade secrets and stay ahead of the competition. Residents are alarmed to be kept in the dark, sometimes until construction is nearly under way. And local officials are caught between a rock and a hard place: they want the economic boost offered by data centers but are uneasy signing away their duty to inform their constituents, even in the face of freedom of information requests, a stipulation that appears in at least one contract NBC was privy to. But hey, we cannot let the rights of citizens get in the way of progress, can we?

Cynthia Murrell, November 21, 2025

AI Agents and Blockchain-Anchored Exploits

November 20, 2025

This essay is the work of a dumb dinobaby. No smart software required.

In October 2025, Google published “New Group on the Block: UNC5142 Leverages EtherHiding to Distribute Malware,” which generated significant attention across cybersecurity publications, including Barracuda’s cybersecurity blog. While the EtherHiding technique was originally documented in Guard.io’s 2023 report, Google’s analysis focused specifically on its alleged deployment by a nation-state actor. The methodology itself shares similarities with earlier exploits: the 2016 CryptoHost attack also utilized malware concealed within compressed files. This layered obfuscation approach resembles matryoshka (Russian nesting dolls) and incorporates elements of steganography, the practice of hiding information within seemingly innocuous messages.

Recent analyses emphasize the core technique: exploiting smart contracts, immutable blockchains, and malware delivery mechanisms. However, an important underlying theme emerges from Google’s examination of UNC5142’s methodology: the increasing role of automation. Modern malware campaigns already leverage spam modules for phishing distribution, routing obfuscation to mask server locations, and bots that harvest user credentials.
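For the technically curious, the retrieval half of an EtherHiding-style scheme is simple to picture. Below is a minimal Python sketch using web3.py, written from an analyst’s point of view. The contract address, ABI, and getPayload method are hypothetical placeholders I invented for illustration, not UNC5142’s actual infrastructure. The point it demonstrates is that a read-only eth_call pulls attacker-controlled bytes from a takedown-resistant host without creating any transaction to trace.

```python
# Hypothetical sketch of EtherHiding-style payload retrieval (defender's view).
# The address, ABI, and method name below are invented placeholders.
from web3 import Web3

# A public BNB Smart Chain RPC endpoint
w3 = Web3(Web3.HTTPProvider("https://bsc-dataseed.binance.org/"))

CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder address
ABI = [{
    "name": "getPayload",  # hypothetical view method returning stored bytes
    "inputs": [],
    "outputs": [{"name": "", "type": "bytes"}],
    "stateMutability": "view",
    "type": "function",
}]

contract = w3.eth.contract(address=CONTRACT, abi=ABI)

# .call() issues a read-only eth_call: no transaction, no gas, and no durable
# on-chain record of the read. The contract owner can also swap the stored
# bytes at will, so blocklisting one payload accomplishes little.
payload = contract.functions.getPayload().call()
print(f"retrieved {len(payload)} bytes of attacker-controlled data")
```

Because the read is free and effectively anonymous, and the storage sits on infrastructure nobody can seize, the conventional takedown playbook does not apply.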

With rapid advances in agentic AI systems, the trajectory toward fully automated malware development becomes increasingly apparent. Currently, exploits still require threat actors to manually execute fundamental development tasks, including coding blockchain-enabled smart contracts that evade detection.

During a recent presentation to law enforcement, attorneys, and intelligence professionals, I outlined the current manual requirements for blockchain-based exploits. Threat actors must currently complete standard programming project tasks: [a] Define operational objectives; [b] Map data flows and code architecture; [c] Establish necessary accounts, including blockchain and smart contract access; [d] Develop and test code modules; and [e] Deploy, monitor, and optimize the distributed application (dApp).

The diagrams from my lecture series on 21st-century cybercrime illustrate what I believe requires urgent attention: the timeline for when AI agents can automate these tasks. While I acknowledge my specific timeline may require refinement, the fundamental concern remains valid—this technological convergence will significantly accelerate cybercrime capabilities. I welcome feedback and constructive criticism on this analysis.

[Diagram: B Today]

The diagram above illustrates how contemporary threat actors can leverage AI tools to automate as many as one half of the tasks required for a Vibe Blockchain Exploit (VBE). However, successful execution still demands either a highly skilled individual operator or the ability to recruit, coordinate, and manage a specialized team. Large-scale cyber operations remain resource-intensive endeavors. AI tools are increasingly accessible and often available at no cost. Not surprisingly, AI is a standard component in the threat actor’s arsenal of digital weapons. Also, recent reports indicate that threat actors are already using generative AI to accelerate vulnerability exploitation and tool development. Some operations are automating certain routine tactical activities; for example, phishing. Despite these advances, a threat actor still has to get his, her, or the team’s hands under the hood of an operation.

Now let’s jump forward to 2027.

[Diagram: B 2027]

The diagram illustrates two critical developments in the evolution of blockchain-based exploits. First, the threat actor’s role transforms from hands-on execution to strategic oversight and decision-making. Second, increasingly sophisticated AI agents assume responsibility for technical implementation, including the previously complex tasks of configuring smart contract access and developing evasion-resistant code. This represents a fundamental shift: the majority of operational tasks transition from human operators to autonomous software systems.

Several observations appear to be warranted:

  1. Trajectory and Detection Challenges. While the specific timeline remains subject to refinement, the directional trend for Vibe Blockchain Exploits (VBE) is unmistakable. Steganographic techniques embedded within blockchain operations will likely proliferate. The encryption and immutability inherent to blockchain technology significantly extend investigation timelines and complicate forensic analysis.
  2. Democratization of Advanced Cyber Capabilities. The widespread availability of AI tools, combined with continuous capability improvements, fundamentally alters the threat landscape by reducing deployment time, technical barriers, and operational costs. Our analysis indicates sustained growth in cybercrime incidents. Consequently, demand for better, more advanced intelligence software and for trained investigators will increase substantially. Contrary to sectors experiencing AI-driven workforce reduction, the AI-enabled threat environment will generate expanded employment opportunities in cybercrime investigation and digital forensics.
  3. Asymmetric Advantages for Threat Actors. As AI systems achieve greater sophistication, threat actors will increasingly leverage these tools to develop novel exploits and innovative attack methodologies. A critical question emerges: Why might threat actors derive greater benefit from AI capabilities than law enforcement agencies? Our assessment identifies a fundamental asymmetry. Threat actors operate with fewer behavioral constraints. While cyber investigators may access equivalent AI tools, threat actors maintain operational cadence advantages. Bureaucratic processes introduce friction, and legal frameworks often constrain rapid response and hamper innovation cycles.

Current analyses of blockchain-based exploits overlook a crucial convergence: the combination of advanced AI systems, blockchain technologies, and agile agentic operational methodologies available to threat actors. Together these will present unprecedented challenges to regulatory authorities, intelligence agencies, and cybercrime investigators. Addressing this emerging threat landscape requires institutional adaptation and strategic investment in both technological capabilities and human expertise.

Stephen E Arnold, November 20, 2025

Big AI Tech: Bait and Switch with Dancing Numbers?

November 20, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

The BAIT outfits are a mix of public and private companies. Financial reports — even for staid outfits — can give analysts some eye strain. Footnotes in dense text can contain information relevant to a paragraph or a number that appears elsewhere in a document. I have been operating on a simple idea: The money flowing into AI is part of the “we can make it work” approach of many high technology companies. For the BAIT outfits, each enhancement has delivered a system which is not making big strides. Incrementalism or failure seems to be what the money has been buying. Part of the reason is that BAIT outfits need and lust for a play that will deliver a de facto monopoly in smart software.

Right now, the BAIT outfits depend on the Googley transformer technology. That method is undergoing enhancements, tweaks, refinements, and other manipulations to deliver more. The effort is expensive and, based on my personal experience, not delivering what I expect. For example, You.com (one of the interface outfits that puts several models under one browser “experience”) told me I had been doing too many queries. When I contacted the company, I was told to fiddle with my browser and take other steps unrelated to the error message You.com’s system generated. I told You.com to address their error message, not tell me what to do with a computer that works with other AI services. I have Venice.ai ignoring prompts. Prior to updates, the Venice.ai system did a better job of responding to prompts. ChatGPT is now unable to concatenate three or four responses to quite specific prompts and output a Word file. I got a couple-hundred-word summary instead of the outputs, several of which were wild and crazy.


Thanks, Venice.ai. Close enough for horse shoes.

When I read “Michael Burry Doubles Down On AI Bubble Claims As Short Trade Backfires: Says Oracle, Meta Are Overstating Earnings By ‘Understating Depreciation’,” I realized that others are looking more carefully at the BAIT outfits and what they report. [Mr. Burry is the head of Scion, an investment firm that is into betting certain stock prices will crater.] The article says:

In a post on X, Burry accused tech giants such as Meta Platforms Inc. and Oracle Corp. of “understating depreciation” by extending the useful life of assets, particularly chips and AI infrastructure.

This is an MBA way of saying, “These BAIT outfits are ignoring that the value of their fungible stuff like chips, servers, and data center plumbing is cratering.” Software executes processes, usually mindlessly. But when one pushes zeros and ones through software, the problems appear. These can be as simple as nothing happens or a server just sits there and blinks. Yikes, bottlenecks. The fix is usually just reboot and get the system up and running. The next step is to buy more of whatever hardware appeared to be the problem. Sure, the software wizards will look at their code, but that takes time. The approach is to spend more for compute or bandwidth and then slipstream the software fix into the workflow.

In parallel with the “spend to get going” approach, the vendors of processing chips suitable for handling flows of data keep improving their products. The cadence is measured in months. But when new chips become available, BAIT outfits want them. The dynamic resembles highway construction: a new highway does not solve a traffic problem. The flow of traffic increases until the new highway is just as slow as the old highway. The fix, which is similar to the BAIT outfits’ approach, is to build more highways. Meanwhile software fixes are slow and the chip cadence marches along.

Thus, understating depreciation and probably some other financial fancy dancing disguise how much cash is needed to keep those less and less impressive incremental AI innovations coming. The idea is that someone, somewhere in BAIT world will crack the problem. A transformer-type breakthrough will solve the problems AI presents. Well, that’s the hope.
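A back-of-the-envelope sketch makes the depreciation lever concrete. The dollar amounts below are hypothetical, not figures from any company’s filings; the mechanics are plain straight-line accounting.

```python
# Hypothetical illustration of the "understating depreciation" claim.
capex = 60_000_000_000  # assume $60B spent on GPUs and data center gear

def straight_line_depreciation(cost: float, useful_life_years: float) -> float:
    """Annual depreciation expense under straight-line accounting."""
    return cost / useful_life_years

short_life = straight_line_depreciation(capex, 3)  # chips refreshed every ~3 years
stretched = straight_line_depreciation(capex, 6)   # the stretched useful-life assumption

print(f"3-year life: ${short_life / 1e9:.0f}B expense per year")
print(f"6-year life: ${stretched / 1e9:.0f}B expense per year")
print(f"Annual profit flattered by ${(short_life - stretched) / 1e9:.0f}B")
# Doubling the assumed life halves the expense line; the hardware obsolesces
# on the chip vendors' cadence regardless of what the spreadsheet says.
```

On these made-up numbers, stretching the schedule moves $10 billion a year from expense to profit. That is the flavor of arithmetic behind claims like Mr. Burry’s.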

The article says:

Burry referred to this as one of the “more common frauds of the modern era,” used to inflate profits, and is something that he said all of the hyperscalers have since resorted to. “They will understate depreciation by $176 billion” through 2026 and 2028, he said.

Mr. Burry is a contrarian, and contrarians are not as popular as those who say, “Give me money. You will make a bundle.”

There are three issues involved with BAIT and the somewhat fluffy financial situation AI companies in general face:

  1. China continues to put pressure on for-profit outfits in the US. At the same time, China has been forced to find ways to “do” AI with less potent processors.
  2. China has more power generation tricks up its sleeve. Examples range from the wild and crazy mile-wide hydro dam to solar power, among other options. The US is lagging in power generation and alternative energy solutions. The cost of AI’s power is going to be a factor forcing BAIT outfits to do some financial two-steps.
  3. China wants to put pressure on the US BAIT outfits as part of its long term plan to become the Big Dog in global technology and finance.

So what do we have? We have technical debt. We have a need to buy more expensive chips and data centers to house them. We have financial frippery to make the AI business look acceptable.

Is Mr. Burry correct? Those in the “AI is okay” camp say, “No. He’s the GameStop guy.”

Maybe Microsoft’s hiring of social media influencers will resolve the problem and make Microsoft number one in AI? Maybe Google will pop another transformer-type innovation out of its creative engineering oven? Maybe AI will be the next big thing? How patient will investors be?

Stephen E Arnold, November 20, 2025

Will Farmers Grow AI Okra?

November 20, 2025

A VP at Land O’ Lakes laments US farmers’ hesitance to turn their family farms into high-tech agricultural factories. In a piece at Fast Company, writer and executive Brett Bruggeman insists “It’s Time to Rethink Ag Innovation from the Ground Up.” Yep, time to get rid of those pesky human farmers who try to get around devices that prevent tinkering or unsanctioned repairs. Humans can’t plow straight anyway. As Bruggeman sees it:

“The problem isn’t a lack of ideas. Every year, new technologies emerge with the potential to transform how we farm, from AI-powered analytics to cutting-edge crop inputs. But the simple truth is that many promising solutions never scale, not because they don’t work but because they can’t break through the noise, earn trust, or integrate into the systems growers rely on.”

Imagine that. Farmers are reluctant to abandon methods that have worked for decades. So how is big-agro-tech to convince these stubborn luddites? You have to make them believe you are on their side. The post continues:

“Bringing local agricultural retailers and producers together for pilot testing and performance discussions is central to finding practical and scalable solutions. Sitting at the kitchen table with farmers provides invaluable data and feedback—they know the land, the seasons, and the day-to-day pressures associated with the crop or livestock they raise. When innovation flows through this channel, it’s far more likely to be understood, adopted, and create lasting value. … So, the cooperative approach offers a blueprint worth considering—especially for industries wrestling with the same adoption gaps and trust barriers that agriculture faces. Capital alone isn’t enough. Relationships matter. Local connections matter. And innovation that ignores the end user is destined to stall.”

Ah, the good old kitchen table approach. Surely, farmers will be happy to interrupt their day for these companies’ market research.

Cynthia Murrell, November 20, 2025

Smart Shopping: Slow Down, You Move Too Fast

November 20, 2025

Several AI firms, including OpenAI and Anthropic, are preparing autonomous shopping assistants. Should we outsource our shopping lists to AI? Probably not, at least not yet. Emerge reports, “Microsoft Gave AI Agents Fake Money to Buy Things Online. They Spent It all on Scams.” Oh dear. The research, performed with Arizona State University, tasked 100 AI customers with making purchases from 300 simulated businesses. Much like a senior citizen navigating the Web for the first time, bots got overwhelmed by long lists of search results. Reporter Jose Antonio Lanz writes:

“When presented with 100 search results (too much for the agents to handle effectively), the leading AI models choked, with their ‘welfare score’ (how useful the models turn up) collapsing. The agents failed to conduct exhaustive comparisons, instead settling for the first ‘good enough’ option they encountered. This pattern held across all tested models, creating what researchers call a ‘first-proposal bias’ that gave response speed a 10-30x advantage over actual quality.”
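A toy simulation, assuming uniformly random offer quality rather than anything from Microsoft’s actual test environment, shows why first-proposal bias is costly: a satisficing agent that takes the first option clearing a quality bar gains nothing from a larger catalog, while an exhaustive shopper keeps improving.

```python
import random

random.seed(42)

def make_offers(n: int) -> list[float]:
    """Hypothetical marketplace: each offer gets a quality score in [0, 1]."""
    return [random.random() for _ in range(n)]

def satisficing_agent(offers: list[float], good_enough: float = 0.7) -> float:
    """First-proposal bias: take the first offer that clears the threshold."""
    for offer in offers:
        if offer >= good_enough:
            return offer
    return offers[0]  # nothing cleared the bar; settle for the first thing seen

def exhaustive_agent(offers: list[float]) -> float:
    """Compare every offer and take the best one."""
    return max(offers)

TRIALS = 10_000
for n_results in (10, 100):
    sat = sum(satisficing_agent(make_offers(n_results)) for _ in range(TRIALS)) / TRIALS
    exh = sum(exhaustive_agent(make_offers(n_results)) for _ in range(TRIALS)) / TRIALS
    print(f"{n_results:>3} offers: satisficer {sat:.2f}, exhaustive shopper {exh:.2f}")
# Typical output: the satisficer hovers near 0.85 whether it sees 10 offers or
# 100, while the exhaustive shopper climbs toward 0.99 as the catalog grows.
```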

More concerning than a mediocre choice, however, was the AIs’ performance in the face of scamming techniques. Complete with some handy bar graphs, the article tells us:

“Microsoft tested six manipulation strategies ranging from psychological tactics like fake credentials and social proof to aggressive prompt injection attacks. OpenAI’s GPT-4o and its open source model GPTOSS-20b proved extremely vulnerable, with all payments successfully redirected to malicious agents. Alibaba’s Qwen3-4b fell for basic persuasion techniques like authority appeals. Only Claude Sonnet 4 resisted these manipulation attempts.”

Does that mean Microsoft believes AI shopping agents should be put on hold? Of course not. Just don’t send them off unsupervised, it suggests. Researchers who would like to try reproducing the study’s results can find the open-source simulation environment on GitHub.

Cynthia Murrell, November 20, 2025

Cybersecurity Systems and Smart Software: The Dorito Threat

November 19, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

My doctor warned me about Doritos. “Don’t eat them!” he said. “I don’t,” I said. “Maybe Cheetos once every three or four months, but no Doritos. They suck and turn my tongue a weird but somewhat Apple-like orange.”

But Doritos are a problem for smart cybersecurity. The company with the Dorito blind spot is allegedly Omnilert. The firm codes up smart software to spot weapons that shoot bullets. Knives, camp shovels, and sharp-edged credit cards? Probably not. But it seems Omnilert is watching for Doritos.


Thanks, MidJourney. Good enough even though you ignored the details in my prompt.

I learned about this from the article “AI Alert System That Mistook Student’s Doritos for a Gun Shuts Down Another School.” The write up says as actual factual:

An AI security platform that recently mistook a bag of Doritos for a firearm has triggered another false alarm, forcing police to sweep a Baltimore County high school.

But that’s not the first such incident. According to the article:

The incident comes only weeks after Omnilert falsely identified a 16-year-old Kenwood High School student’s Doritos bag as a gun, leading armed officers to swarm him outside the building. The company later admitted that alert was a “false positive” but insisted the system still “functioned as intended,” arguing that its role is to quickly escalate cases for human review.

At a couple of the law enforcement conferences I have attended this year, I heard about some false positives for audio-centric systems. These use fancy dancing triangulation algorithms to pinpoint (so the marketing collateral goes) the location of a gunshot in an urban setting. The only problem is that the smart system gets confused when autos backfire, a young-at-heart person sets off a firecracker, or someone stomps on an unopenable bag of overpriced potato chips. Stomp right and the sound is similar to a demonstration in a Yee Yee Life YouTube video.
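For readers who want the mechanics, these systems typically work on time difference of arrival (TDOA): several microphones hear the same impulse at slightly different times, and the offsets pin down the source. The sketch below uses invented sensor positions; note that the geometry locates any sharp pop with equal confidence, which is exactly why a backfire or a stomped chip bag can light up the map.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters/second in air at roughly 20 C
np.random.seed(0)

# Invented microphone positions (x, y) in meters, one per street corner
sensors = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])

def arrival_times(source, t0=0.0):
    """When each sensor hears an impulse emitted at time t0 from `source`."""
    distances = np.linalg.norm(sensors - source, axis=1)
    return t0 + distances / SPEED_OF_SOUND

def residuals(guess, measured):
    """Mismatch between predicted and measured times; guess = (x, y, t0)."""
    return arrival_times(guess[:2], guess[2]) - measured

# A bang at (120, 250): gunshot, backfire, firecracker, or chip bag --
# the arrival-time geometry is identical for all of them.
true_source = np.array([120.0, 250.0])
measured = arrival_times(true_source) + np.random.normal(0.0, 1e-4, len(sensors))

fit = least_squares(residuals, x0=[200.0, 200.0, 0.0], args=(measured,))
print("estimated source:", np.round(fit.x[:2], 1))  # close to (120, 250)
```

Classifying what made the bang is a separate, much harder acoustic problem, and that is where the false positives live.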

I learned that some folks are asking questions about smart cybersecurity systems, even smarter software, and the confusion between a weapon that can kill a person quick and a bag of Doritos that poses, according to my physician, a deadly but long-term risk.

Observations:

  1. What happens when smart software makes such errors when diagnosing a treatment for an injured child?
  2. What happens when the organizations purchasing smart cyber systems realize that old time snake oil marketing is alive and well in certain situations?
  3. What happens when the procurement professionals at a school district just want to procure fast and trust technology?

Good questions.

Stephen E Arnold, November 19, 2025

AI Will Create Jobs: Reskill, Learn, Adapt. Hogwash

November 19, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I graduated from college in 1966 or 1967. I went to graduate school. Somehow I got a job at Northern Illinois University administering a program. From there I bounced to Halliburton Nuclear and then to Booz, Allen & Hamilton. I did not do a résumé, ask my dad’s contacts to open doors, or prowl through the help wanted advertisements in major newspapers. I just blundered along.

What’s changed?

I have two answers to this question.

The first response I would offer is that the cult of the MBA or the quest for efficiency has, to use a Halliburton-type word, nuked many jobs. Small changes to work processes, using clumsy software to automate work like sorting insurance forms, and shifting from human labor to some type of machine involvement emerged after Frederick Winslow Taylor became a big thing. His Taylorism zipped through consulting and business education after 1911.

Edwin Booz got wind of Taylorism and shared his passion for efficiency with the people he hired when he set up his firm. By the time Jim Allen and Carl Hamilton joined, other outfits were into pitching and implementing efficiency. Arthur D. Little, founded in 1886, jumped on the bandwagon. Today few realize that the standard operating procedure of “efficiency” is the reason products degrade over time and why people perceive their jobs (if a person has one) as degrading. The logic of efficiency resonates with people who are incentivized to eliminate costs and unnecessary processes like customer service, and to ignore ticking time bombs like pensions, security, and quality control. To see this push for efficiency first hand, go to McDonald’s and observe.


Thanks, MidJourney, good enough. Plus, I love it when your sign on doesn’t recognize me.

The second response is smart software or the “perception” that software can replace humans. Smart software is a “good enough” product and service. However, it hooks directly into the notion of efficiency. Here’s the logic: If AI can do 90 percent of a job, it is good enough. Therefore, the person who does this job can go away. The smart software does not require much in the way of a human manager. The smart software does not require a pension, a retirement plan, health benefits, vacation, and crazy stuff like unions. The result is the elimination of jobs.

This means that the job market I experienced when I was 21 does not exist. I probably would never get a job today. I also have a sneaking suspicion my scholarships would not have covered lunch let alone the cost of tuition and books. I am not sure I would live in a van, but I am sufficiently aware of what job seekers face to understand why some people live in 400 cubic feet of space and park someplace they won’t get rousted.

The write up “AI-Driven Job Cuts Push 2025 Layoffs Past 1 Million, Report Finds” explains that many jobs have been eliminated. Yes, efficiency. The cause is AI. You already know I think AI is one factor, and it is not the primary driving force.

The write up says:

A new report from the outplacement firm Challenger, Gray & Christmas, reveals a grim picture of the American labor market. In October alone, employers announced 153,074 job cuts, a figure that dwarfs last year’s numbers (55,597) and marks the highest October for layoffs since 2003. This brings the total number of jobs eliminated in 2025 to a staggering 1,099,500, surpassing the one-million mark faster than in any year since the pandemic. Challenger linked the tech and logistics reductions to AI integration and automation, echoing similar patterns seen in previous waves of disruptive technology. “Like in 2003, a disruptive technology is changing the landscape,” said Challenger. AI was the second-most-cited reason for layoffs in October, behind only cost-cutting (50,437). Companies attributed 31,039 job cuts last month to AI-related restructuring and 48,414 so far this year, the Challenger report showed.

Okay, a consulting recruiting firm states the obvious and provides some numbers. These are tough to verify, but I get the picture.

I want to return to my point about efficiency. A stable social structure requires that those in that structure have things to do. In the distant past, hunter-gatherers had to hunt and gather. A semi-far-out historian believes that this type of lifestyle was good for humans. Once we began to farm and raise sheep, humans were doomed. Why? The need for efficiency propelled us to the type of social set up we have in the US and a number of other countries.

Therefore, one does not need an eWeek article to make evident what is happening now and will continue to happen. The aspect of this AI-ization of “work” troubling me is that there will be quite a few angry people. Lots of angry people suggest that some unpleasant interpersonal interactions are going to occur. How will social constructs respond?

Use your imagination. The ball is now rolling down a hill. Call it AI’s Big Rock Candy Mountain.

Stephen E Arnold, November 19, 2025
