Think It. The *It* Becomes Real. Think Again?

August 27, 2025

No AI. Just a dinobaby working the old-fashioned way.

Fortune Magazine — once the gem for a now spinning-in-his-grave publisher — posted “MIT Report: 95% of Generative AI Pilots at Companies Are Failing.” I take a skeptical view of MIT. Why? The esteemed university found Jeffrey Epstein a swell person.

The thrust of the story is that people stick smart software into an organization, allow it time to steep, cook up a use case, and find the result unpalatable. Research is useful. When it evokes a “Duh!”, I don’t get too excited.

But there was a phrase in the write up which caught my attention: Learning gap. AI or smart software is a “belief.” The idea of the next big thing creates an opportunity to move money. Flow, churn, motion — these are positive values in some business circles.

AI fits the bill. The technology demonstrates interesting capabilities. Use cases exist. Companies like Microsoft have put money into the idea. Moving money is proof that “something” is happening. And today that something is smart software. AI is the “it” for the next big thing.

Learning gap, however, is the issue. The hurdle is not Sam Altman’s fears about the end of humanity or his casual observation that trillions of dollars are needed to make AI progress. We have a learning gap.

But the driving vision for Internet-era innovation is do something big, change the world, reinvent society. I think this idea goes back to the sales-oriented philosophy of visualizing a goal and aligning one’s actions to achieve that goal. I believe a fellow named Napoleon Hill pulled together some of these ideas and crafted “Think and Grow Rich.” Today one just promotes the “next big thing,” gets some cash moving, and an innovation like smart software will revolutionize, remake, or redo the world.

The “it” seems to be stuck in the learning gap. Here’s the proof, and I quote:

But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained. The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

Consider this question: What if smart software mostly works but makes humans uncomfortable in ways difficult for the user to articulate? What if humans lack the mental equipment to conceptualize what a smart system does? What if the smart software cannot answer certain user questions?

I find information about costs, failed use cases, hallucinations, and benefits plentiful. I don’t see much information about the “learning gap.” What causes a learning gap? Spell check makes sense. A click that produces a complete report on a complex topic is different. But in what way? What is the impact on the user?

I think the “learning gap” is a key phrase. I think there is money to be made in addressing it. I am not confident that visualizing a better AI is going to solve the problem which is similar to a bonfire of cash. The learning gap might be tough to fill with burning dollar bills.

Stephen E Arnold, August 27, 2025

Deal Breakers in Medical AI

August 26, 2025

No AI. Just a dinobaby working the old-fashioned way.

My newsfeed thing spit out a link to “Why Radiology AI Didn’t Work and What Comes Next.” I have zero interest in radiology. I don’t get too excited about smart software. So what did I do? Answer: I read the article. I was delighted to uncover a couple of points that, in my opinion, warrant capturing in my digital notebook.

The set up is that a wizard worked at a start up trying to get AI to make sense of the consistently fuzzy, murky, and baffling images cranked out by radiology gizmos. Tip: Follow the instructions and don’t wear certain items of jewelry. The start up fizzled. AI was part of the problem, but the Jaws-type shark lurking in the murky image explains this type of AI implosion.

Let’s run through the points that struck me.

First, let’s look at this passage:

Unlike coding or mathematics, medicine rarely deals in absolutes. Clinical documentation, especially in radiology, is filled with hedge language — phrases like “cannot rule out,” “may represent,” or “follow-up recommended for correlation.” These aren’t careless ambiguities; they’re defensive signals, shaped by decades of legal precedent and diagnostic uncertainty.

Okay, lawyers play a significant role in establishing thought processes and normalizing ideas that appear to be purpose-built to vaporize the smart system like one of those nifty tattoo-removing gadgets. I would have pegged insurance companies, then lawyers, but the write up directed my attention to the legal eagles’ role: Hedge language. Do I have disease X? The doctor responds, “Maybe, maybe not. Let’s wait 30 days and run more tests.” Fuzzy lingo, fuzzy images, perfect.
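The hedge-language problem is easy to demonstrate. Below is a minimal sketch, not anything from the article, that flags hedged sentences in report text. The phrase list is hypothetical, and a real clinical NLP system would need far more than string matching:

```python
# Minimal sketch: flag hedged sentences in a radiology-style report.
# The phrase list is a hypothetical starting point, not a clinical standard.
HEDGE_PHRASES = [
    "cannot rule out",
    "may represent",
    "follow-up recommended",
    "cannot exclude",
    "suggestive of",
]

def hedged_sentences(report: str) -> list[str]:
    """Return sentences containing at least one hedge phrase."""
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    return [
        s for s in sentences
        if any(p in s.lower() for p in HEDGE_PHRASES)
    ]

report = (
    "Small opacity in the right lower lobe may represent atelectasis. "
    "Cannot rule out early infiltrate. Heart size is normal."
)
for s in hedged_sentences(report):
    print("HEDGED:", s)
```

Every sentence a sketch like this flags is exactly the kind of training label an annotator cannot cleanly mark positive or negative, which is the article’s point.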

Second, the write up asks two questions:

  • How do we improve model coverage at the tail without incurring prohibitive annotation costs?
  • Can we combine automated systems with human-in-the-loop supervision to address the rare but dangerous edge cases?

The answers seem to be: You cannot afford to have humans do indexing and annotation. That’s why certain legal online services charge a lot for annotations. As for the second question: no, you cannot pull off automation with humans for events rarely covered in the training data. Why? Cost and finding enough humans who will do this work in a consistent way in a timely manner.

Here’s the third snippet:

Without direct billing mechanisms or CPT reimbursement codes, it was difficult to monetize the outcomes these tools enabled. Selling software alone meant capturing only a fraction of the value AI actually created. Ultimately, we were offering tools, not outcomes. And hospitals, rightly, were unwilling to pay for potential unless it came bundled with performance.

Finally, insurance procedures. Hospitals aren’t buying AI; they are buying ways to deliver “service” and “bill.” AI at this time does not sell what hospitals want to buy: A way to keep high rates and slash costs wherever possible.

It is unlikely, but perhaps some savvy AI outfit will create a system that can crack the issues the article identifies. Until then, no money, no AI.

Stephen E Arnold, August 26, 2025

Smart Software Fix: Cash, Lots and Lots of Cash

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way. But I asked ChatGPT one question. Believe it or not.

If you have some spare money, Sam Altman aka Sam AI-Man wants to talk with you. It is well past two years since OpenAI forced the 20-year-old Google to go back to the high school lab. Now OpenAI is dealing with the reviews of ChatGPT 5. The big news in my opinion is that quite a few people are annoyed with the new smart software from the money burning Bessemer furnace at 3180 18th Street in San Francisco. (I have heard that a satellite equipped with an infrared camera gets a snazzy image of the heat generated from the immolation of cash. There are also tiny red dots scattered around the SF financial district. Those, I believe, are the burning brain cells of the folks who have forked over dough to participate in Sam AI-Man’s next big thing.)

“As People Ridicule GPT-5, Sam Altman Says OpenAI Will Need ‘Trillions’ in Infrastructure” addresses the need for cash. The write up says:

Whether AI is a bubble or not, Altman still wants to spend a certifiably insane amount of money building out his company’s AI infrastructure. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman told reporters.

Trillions is a figure that most people cannot relate to in everyday life. Years ago when I was an indentured servant at a consulting firm, I worked on a project that sought to figure out what types of decisions consumed the most time for Boards of Directors of Fortune 1000 companies. The results surprised me then and still do.

Boards of directors spent the most time discussing relatively modest-scale projects; for example, expanding a parking lot or developing a list of companies for potential joint ventures. Really big deals like spending large sums to acquire a company were often handled in swift, decisive votes.

Why?

Boards of directors, like most average people, cannot relate to massive numbers. It is easier to think in terms of a couple hundred thousand dollars to lease a parking lot than to borrow billions and buy a giant, allegedly synergistic company.

When Mr. Altman uses the word “trillions,” I think he is unable to conceptualize the amount of money represented in his casual “you should expect OpenAI to spend trillions…”

Several observations:

  1. AI is useful in certain use cases. Will AI return the type of payoff that Google’s charge ‘em every which way from Sunday for advertising model does?
  2. AI appears to produce incorrect outputs. I liked the application for oncology docs, who reported losing diagnostic skills when relying on AI assistants.
  3. AI creates negative mental health effects. One old person, younger than I, believed a chat bot cared for him. On the way to meet his digital friend, he flopped over dead. Anticipatory anxiety or a use case for AI sparking nutso behavior?

What’s a trillion look like? Answer: 1,000,000,000,000.

How many railroad boxcars would it take to move $1 trillion from a collection point like Denver, Colorado, to downtown San Francisco? Answer from ChatGPT: you would need 10,000 standard railroad boxcars. This calculation is based on the weight and volume of the bills, as well as the carrying capacity of a typical 50-foot boxcar. The train would stretch over 113.6 miles—about the distance from New York City to Philadelphia!
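The arithmetic behind ChatGPT’s answer is easy to reproduce. Here is a back-of-the-envelope sketch; the constants ($1 bills weighing about one gram, a 100-ton boxcar payload, 60 feet of train length per car) are my assumptions, chosen because they land near the figures ChatGPT quoted:

```python
# Back-of-the-envelope: hauling $1 trillion in $1 bills by rail.
# All constants are assumptions for illustration, not official figures.
TOTAL_DOLLARS = 1_000_000_000_000   # $1 trillion
GRAMS_PER_BILL = 1.0                # a US bill weighs about 1 gram
PAYLOAD_TONNES_PER_CAR = 100.0      # rough boxcar payload
FEET_PER_CAR = 60.0                 # 50-ft boxcar plus couplers
FEET_PER_MILE = 5_280

bills = TOTAL_DOLLARS                        # one $1 bill per dollar
tonnes = bills * GRAMS_PER_BILL / 1_000_000  # grams to metric tons
boxcars = tonnes / PAYLOAD_TONNES_PER_CAR
miles = boxcars * FEET_PER_CAR / FEET_PER_MILE

print(f"{tonnes:,.0f} tonnes -> {boxcars:,.0f} boxcars, {miles:.1f} miles of train")
# Output: 1,000,000 tonnes -> 10,000 boxcars, 113.6 miles of train
```

Note that the quoted figures only work if the load is $1 bills; run the same sketch with $100 bills and the train shrinks to about 100 boxcars and a bit over a mile.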

Let’s talk about expanding the parking lot.

Stephen E Arnold, August 19, 2025

Party Time for Telegram?

August 14, 2025

No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

Let’s assume that the information in “The SEC Quietly Surrendered in Its Biggest Crypto Battle” is accurate. Now look at this decision from the point of view of Pavel Durov. The messaging service has about 1.35 billion users. Allegedly there are 50 million or so in the US. Mr. Durov was one of the early losers in the crypto wars in the United States. He has hired a couple of people to assist him in his effort to do the crypto version of “Coming to America.” Manny Stoltz and Max Crown are probably going to make their presence felt.

The cited article states:

This is a huge deal. It creates a crucial distinction that other crypto projects can now use in their own legal battles, potentially shielding them from the SEC’s claim of blanket authority over the market. By choosing to settle rather than risk having this ruling upheld by a higher court, the SEC has shown the limits of its “regulation by enforcement” playbook: its strategy of creating rules through individual lawsuits instead of issuing clear guidelines for the industry.

What will Telegram’s clever Mr. Durov do with its 13-year-old platform, hundreds of features, crypto plumbing, and hundreds of developers eager to generate “money”? It is possible it won’t be Pavel making trips to America. He may be under the watchful eye of the French judiciary.

But Manny, Max, and the developers?

Stephen E Arnold, August 14, 2025

Taylorism, 996, and Motivating Employees

August 6, 2025

No AI. Just a dinobaby being a dinobaby.

No more Foosball. No more Segways in the hallways (thank heaven!). No more ping pong (Wait. Scratch that. You must have ping pong.)

Fortune Magazine reported that Silicon Valley-type outfits want their workplaces to look more like those managed using Frederick Winslow Taylor’s methods. (Did you know that Mr. Taylor provided the oomph for many blue chip management consulting firms? If you did not, you may be one of the people suggesting that AI will kill off the blue chip outfits. Those puppies will survive.)

“Some Silicon Valley AI Startups Are Asking Employees to Adopt China’s Outlawed 996 Work Model” reports:

Some Silicon Valley startups are embracing China’s outlawed “996” work culture, expecting employees to work 12-hour days, six days a week, in pursuit of hyper-productivity and global AI dominance.

The reason, according to the write up, is:

The rise of the controversial work culture appears to have been born out of the current efficiency squeeze in Silicon Valley. Rounds of mass layoffs and the rise of AI have put pressure and turned up the heat on tech employees who managed to keep their jobs.

My response to this assertion is that it is a convenient explanation. My view is that one can trot out the China smart, US dumb arguments, point to the holes of burning AI cash, and cite the political idiosyncrasies of California and the US government.

These are factors, but the real reason is that Silicon Valley is starting to accept the reality that old-fashioned business methods are semi-useful; for example, the idea that employees should converge on a work location to do what is still called “work.”

What’s the cause of this change? Since hooking electrodes to a worker in a persistent employee-monitoring environment is a step too far for now, going back to the precepts of Freddy is a reasonable compromise.

But those electric shocks would work quite well, don’t you agree? (Sure, China’s work environment sparked a few suicides, but the efficiency is not significantly affected.)

Stephen E Arnold, August 6, 2025

The Cheapest AI Models Reveal a Critical Vulnerability

August 6, 2025

This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.

I read “Price Per Token,” a recent cost comparison for smart software processes. The compilation of data is interesting. Using the dead simple method of averaging input cost and output cost, the two lowest cost services were OpenAI GPT-4.1 nano and Gemini 2.0 Flash. To see how the “Price Per Token” data compare, I used “LLM Pricing Calculator.” The cheapest services there were the same: OpenAI – GPT-4.1-nano and Google – Gemini 2.0 Flash.
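For readers who want to replicate the comparison, here is a minimal sketch of that dead simple method. The prices below are placeholders for illustration, not current quotes, so plug in the figures from “Price Per Token” or the calculator:

```python
# Dead simple ranking: average of input and output price per million tokens.
# Prices are illustrative placeholders, not current vendor quotes.
prices_per_million_tokens = {            # (input $, output $)
    "OpenAI GPT-4.1 nano":     (0.10, 0.40),
    "Google Gemini 2.0 Flash": (0.10, 0.40),
    "Premium model X":         (3.00, 15.00),
}

def blended(input_price: float, output_price: float) -> float:
    """Average input and output cost: the 'dead simple' method."""
    return (input_price + output_price) / 2

ranked = sorted(
    prices_per_million_tokens.items(),
    key=lambda kv: blended(*kv[1]),
)
for model, (inp, out) in ranked:
    print(f"{model}: ${blended(inp, out):.2f} per million tokens (blended)")
```

A blended average ignores that most workloads consume far more input tokens than output tokens, but as a quick market-share signal it is good enough.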

I found the result predictable and illustrative of the “buying market share with low prices” approach to smart software. Google has signaled its desire to spend billions to deliver “Google quality” smart software.

OpenAI also intends to get and keep market share in the smart software arena. That company is not just writing checks to create a new class of hardware for persistent AI, but the firm is doing deals, including one with Google’s cloud operation.

Several observations:

  1. Google and OpenAI have real and professional capital on the table in the AI Casino
  2. Google and OpenAI are implementing similar tactics; namely, the good old cut prices in the hope of winning market share while putting the others in the game to go broke
  3. Google and OpenAI are likely to orbit one another until one AI black hole absorbs or annihilates the other.

What’s interesting is that neither firm has smart software delivering rock solid results without hallucination or massive costs, and both have management that allows, or is helpless to prevent, Meta from eroding them by hiring key staff.

Is there a fix for either firm? Nope, and Meta’s hiring tactic may be delivering near fatal wounds to both Google and OpenAI. Twins can share similar genetic weaknesses. Meta may have found one — paying lots for key staff from each firm — and is quite happy implementing it.

Stephen E Arnold, August 6, 2025

Bubble? What Bubble? News Bubble, Financial Bubble, Social Media Bubble?

August 5, 2025

We knew the AI hype was extreme. Now one economist offers a relatable benchmark to describe just how extreme it is. TechSpot reports, “The AI Boom is More Overhyped than the 1990s Dot-Com Bubble, Says Top Economist.” Writer Daniel Sims reveals:

“As tech giants pour more money into AI, some warn that a bubble may be forming. Drawing comparisons to the dot-com crash that wiped out trillions at the turn of the millennium, analysts caution that today’s market has become too reliant on still-unproven AI investments. Torsten Slok, chief economist at Apollo Global Management, recently argued that the stock market currently overvalues a handful of tech giants – including Nvidia and Microsoft – even more than it overvalued early internet companies on the eve of the 2000 dot-com crash. The warning suggests history could soon repeat itself, with the buzzword ‘dot-com’ replaced by ‘AI.’”

Paint us unsurprised. We are reminded:

“In the late 1990s, numerous companies attracted venture capital in hopes of profiting from the internet’s growing popularity, and the stock market vastly overvalued the sector before solid revenue could materialize. When returns failed to meet expectations, the bubble burst, wiping out countless startups. Slok says the stock market’s expectations are even more unrealistic today, with 12-month forward price-to-earnings ratios now exceeding the peak of the dot-com bubble.”

See the write-up for more about price-to-earnings ratios and their relationship to bubbles, complete with a handy bar chart. Sims notes the top 10 firms’ ratios far exceed the rest of the index, illustrating their wildly unrealistic expectations. Slok’s observations echo concerns raised by others, including Baidu CEO Robin Li. Last October, Li predicted only one percent of AI firms will survive the bubble’s burst. Will those top 10 firms be among them? On the plus side, Li expects a more realistic and stable market will follow. We are sure the failed 99 percent will take comfort in that.

Cynthia Murrell, August 5, 2025

Job Hunting. Yeah, About That …

August 4, 2025

It seems we older generations should think twice before criticizing younger adults’ employment status. MSN reports, ‘Gen Z Is Right About the Job Hunt—It Really Is Worse than It Was for Millennials, with Nearly 60% of Fresh-Faced Grads Frozen Out of the Workforce.’ A recent study from Kickresume shows that, while just 25% of millennials and Gen X graduates had trouble finding work right out of college, that figure is now at a whopping 58%. The tighter job market means young job-seekers must jump through hoops we elders would not recognize. Reporter Emma Burleigh observes:

“It’s no secret that landing a job in today’s labor market requires more than a fine-tuned résumé and cover letter. Employers are putting new hires through bizarre lunch tests and personality quizzes to even consider them for a role.”

To make matters worse, these demeaning tests are only for those whose applications have passed an opaque, human-free AI review process. Does that mean issues of racial, gender, age, and socio-economic biases in AI have been solved? Of course not. But companies are forging ahead with the tools anyway. In fact, companies jumping on the AI train may be responsible for narrowing the job market in the first place. Gee, who could have guessed? The write-up continues:

“It’s undeniably a tough job market for many white-collar workers—about 20% of job-seekers have been searching for work for at least 10 to 12 months, and last year around 40% of unemployed people said they didn’t land a single job interview in 2024. It’s become so bad that hunting for a role has become a nine-to-five gig for many, as the strategy has become a numbers game—with young professionals sending in as many as 1,700 applications to no avail.  And with the advent of AI, the hiring process has become an all-out tech battle between managers and applicants. Part of this issue may stem from technology whittling down the number of entry-level roles for Gen Z graduates; as chatbots and AI agents take over junior staffers’ mundane job tasks, companies need fewer staffers to meet their goals.”

Some job seekers are turning to novel approaches. We learn of one who slipped his resume into Silicon Valley firms by tucking it inside boxes of doughnuts. How many companies he approached is not revealed, but we are told he got at least 10 interviews that way. Then there is the German graduate who got her CV in front of a few dozen marketing executives by volunteering to bus tables at a prominent sales event. Shortly thereafter, she landed a job at LinkedIn.

Such imaginative tactics may reflect well on those going into marketing, but they may be less effective in other fields. And it should not take extreme measures like these, or sending out thousands of resumes, to launch one’s livelihood. Soldiering through higher education, often with overwhelming debt, is supposed to be enough. Or it was for us elders. Now, writes Burleigh:

“The age-old promise that a college degree will funnel new graduates into full-time roles has been broken. ‘Universities aren’t deliberately setting students up to fail, but the system is failing to deliver on its implicit promise,’ Lewis Maleh, CEO of staffing and recruitment agency Bentley Lewis, told Fortune.”

So let us cut the young folks in our lives some slack. And, if we can, help them land a job. After all, this may be required if we are to have any hope of getting grandchildren or great-niblings.

Cynthia Murrell, August 4, 2025

Private Equities and Libraries: Who Knew?

July 31, 2025

Public libraries are a benevolent part of local and federal government. They’re awesome places for entertainment, research, and more. Public libraries in the United States have a controversial history of dealing with banned books, accusations of wasting taxpayer dollars, and more. LitHub published an editorial about the Samuels Public Library in Front Royal, Virginia: “A Virginia Public Library Is Fighting Off A Takeover By Private Equity.”

In short, the Samuels Public Library refused to censor books, mostly those dealing with LGBTQ+ themes. The local county officials withheld funding, and the library might end up run by LS&S, a private equity firm that specializes in fields including government outsourcing and defense.

LS&S has a bad reputation, and its CEO said:

“ ‘There’s this American flag, apple pie thing about libraries,’ said Frank A. Pezzanite, the outsourcing company’s chief executive. He has pledged to save $1 million a year in Santa Clarita, mainly by cutting overhead and replacing unionized employees. ‘Somehow they have been put in the category of a sacred organization.’

‘A lot of libraries are atrocious,’ Mr. Pezzanite said. ‘Their policies are all about job security. That’s why the profession is nervous about us. You can go to a library for 35 years and never have to do anything and then have your retirement. We’re not running our company that way. You come to us, you’re going to have to work.’”

The author wrote in response to this quote:

“In their defense, I think some of these businesses think they’re doing the right thing. But the valorization of profit has blinded them to seeing the advantages of the public good as a worthy bottom line. Providing for a community might not be profitable, but that doesn’t make it wrong…Efficiency shouldn’t always be the goal, especially when used as a narrowly defined metonym for profitability. The Samuels Public Library, like so many public institutions around the country, works because it serves something other than money.”

Public libraries are the one institution that should never be ripped off. Maybe a private equity firm could work hand in hand with public libraries so they aren’t ripped off by bad actors? Or …?

Whitney Grace, July 31, 2025

The Difference between Financials Viewed by a PR Person and an MBA

July 23, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Another essay from a public relations professional. I just wrote a short blog post about “Clarity.” Now I am addressing an essay titled “The Hater’s Guide To The AI Bubble.” The write up is a fairly long one, about 14,000 words or so. The length makes clear that considerable work by the author or assistants went into compiling the numeric data about investments by big outfits in artificial intelligence.

The John Milton essay Areopagitica contains about 16,000 words. This puts the cited article squarely in the territory of an educated readership in 1644. Like Milton’s screed, “The Hater’s Guide to the AI Bubble” is a polemic.

I am not going to create an abstract of the write up. Instead I will feed the text into some smart software and ask for a 150 word summary just like the ones in the old-fashioned ABI/INFORM database. Here’s what the cheap version of ChatGPT output:

Edward Zitron critiques the generative AI boom, arguing it is an unsustainable financial and technological bubble driven by hype rather than profitability or utility. He contends that companies like Microsoft, Google, and Meta have spent over $560 billion on AI infrastructure with little to no return, while only NVIDIA profits from the GPU demand. Zitron dismantles comparisons to past innovations like AWS, noting that generative AI lacks infrastructure value, scalability, and viable business models. He criticizes AI “agents” as misleading marketing for underperforming chatbots and highlights that nearly all AI startups are unprofitable. The illusion of widespread AI adoption is, according to Zitron, a coordinated market fantasy supported by misleading press and executive spin. The industry’s fragility, he warns, rests entirely on continued GPU sales. Zitron concludes with a call for accountability, asserting that the current AI trade endangers economic stability and reflects a failure of both corporate vision and journalistic scrutiny. (Source: ChatGPT, cheap subscription, July 22, 2025)

I will assume that you, as I did, worked through the essay. You have firmly in mind that large technology outfits have a presumed choke-hold on smart software. The financial performance of the American high technology sector needs smart software to be “the next big thing.” My view is that offering negative views of the “big thing” is likely to be greeted with the same negative attitudes.

Consider John Milton, blind, assisted by a fellow who visualized peaches as female anatomy, working on a Latinate argument against censorship. He published Areopagitica as a pamphlet, and no one cared in 1644. Screeds don’t lead. If something bleeds, it gets the eyeballs.

My view of the write up is:

  1. PR expert analysis of numbers is different from MBA expert analysis of numbers. The gulf, as validated by the Hater’s Guide, is wide and deep
  2. PR professionals will not make AI succeed or fail. This is not a Dog the Bounty Hunter type of event. The palpable need to make probabilistic, hallucinating software “work” is truly important, not just to the companies burning cash in the AI crucibles, but to the US itself. AI is important.
  3. The fear of failure is creating a need to shovel more resources into the infrastructure and code of smart software. Haters may argue that the effort is not delivering; believers have too much skin in the game to quit. Not much shames the tech bros, but failure comes pretty close to making these wizards realize that they too put on pants the same way as other people do.

Net net: The cited write up is important as an example of 21st-century polemicism. Will Mr. Zuckerberg stop paying millions of dollars to import AI talent from China? Will evaluators of the AI systems deliver objective results? Will a big-time venture firm with a massive investment in AI say, “AI is a flop”?

The answer to these questions is, “No.”

AI is here. Whether it is any good or not is irrelevant. Too much money has been invested to face reality. PR professionals can do this; those people writing checks for AI are going to just go forward. Failure is not an option. Talking about failure is not an option. Thinking about failure is not an option.

Thus, there is a difference between how a PR professional and an MBA professional views the AI spending. Never the twain shall meet.

As Milton said in Areopagitica:

“A man may be a heretic in the truth; and if he believes things only because his pastor says so, or the assembly so determines, without knowing other reason, though his belief be true, yet the very truth he holds becomes his heresy. There is not any burden that some would gladlier post off to another, than the charge and care of their religion.”

And the religion for AI is money.

Stephen E Arnold, July 23, 2025
