Smart Software Fix: Cash, Lots and Lots of Cash

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way. But I asked ChatGPT one question. Believe it or not.

If you have some spare money, Sam Altman aka Sam AI-Man wants to talk with you. It is well past two years since OpenAI forced the 20-year-old Google to go back to the high school lab. Now OpenAI is dealing with the reviews of ChatGPT 5. The big news in my opinion is that quite a few people are annoyed with the new smart software from the money-burning Bessemer furnace at 3180 18th Street in San Francisco. (I have heard that a satellite equipped with an infrared camera gets a snazzy image of the heat generated from the immolation of cash. There are also tiny red dots scattered around the SF financial district. Those, I believe, are the burning brain cells of the folks who have forked over dough to participate in Sam AI-Man’s next big thing.)

“As People Ridicule GPT-5, Sam Altman Says OpenAI Will Need ‘Trillions’ in Infrastructure” addresses the need for cash. The write up says:

Whether AI is a bubble or not, Altman still wants to spend a certifiably insane amount of money building out his company’s AI infrastructure. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman told reporters.

Trillions is a general figure that most people cannot relate to everyday life. Years ago, when I was an indentured servant at a consulting firm, I worked on a project that sought to figure out which types of decisions consumed the most time for boards of directors of Fortune 1000 companies. The results surprised me then and still do.

Boards of directors spent the most time discussing relatively modest-scale projects; for example, expanding a parking lot or developing a list of companies for potential joint ventures. Really big deals like spending large sums to acquire a company were often handled in swift, decisive votes.

Why?

Boards of directors, like most average people, cannot relate to massive numbers. It is easier to think in terms of a couple hundred thousand dollars to lease a parking lot than to borrow billions to buy a giant, allegedly synergistic company.

When Mr. Altman uses the word “trillions,” I think he is unable to conceptualize the amount of money represented in his casual “you should expect OpenAI to spend trillions…”

Several observations:

  1. AI is useful in certain use cases. Will AI return the type of payoff that Google’s charge ‘em every which way from Sunday for advertising model does?
  2. AI appears to produce incorrect outputs. I noted the item about oncology docs who reported losing diagnostic skills when relying on AI assistants.
  3. AI creates negative mental health effects. One old person, younger than I, believed a chat bot cared for him. On the way to meet his digital friend, he flopped over dead. Anticipatory anxiety or a use case for AI sparking nutso behavior?

What’s a trillion look like? Answer: 1,000,000,000,000.

How many railroad boxcars would it take to move $1 trillion from a collection point like Denver, Colorado, to downtown San Francisco? Answer from ChatGPT: you would need 10,000 standard railroad boxcars. This calculation is based on the weight and volume of the bills, as well as the carrying capacity of a typical 50-foot boxcar. The train would stretch over 113.6 miles—about the distance from New York City to Philadelphia!
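ChatGPT’s arithmetic can be sanity-checked with a few lines of code. This is a minimal sketch under stated assumptions (one gram per $100 bill, and a per-car length backed out from the quoted 113.6-mile total); the figures are illustrative, not audited data:

```python
# Sanity-check the boxcar estimate: $1 trillion in $100 bills.
# Assumptions (not from the article): each bill weighs ~1 gram.

TRILLION = 1_000_000_000_000
BILL_VALUE = 100

bills = TRILLION // BILL_VALUE          # number of $100 bills
metric_tons = bills * 1.0 / 1_000_000   # grams -> metric tons

boxcars = 10_000                        # ChatGPT's figure
car_length_ft = 60                      # implied by the 113.6-mile total, couplers included
train_miles = boxcars * car_length_ft / 5280

print(f"{bills:,} bills, ~{metric_tons:,.0f} metric tons, train ~{train_miles:.1f} miles")
```

Note that the 113.6-mile length implies roughly 60 feet per car with couplers, a bit more than the 50-foot interior length the chatbot cited; close enough for a boxcar full of horseshoes.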

Let’s talk about expanding the parking lot.

Stephen E Arnold, August 19, 2025

Party Time for Telegram?

August 14, 2025

No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

Let’s assume that the information in “The SEC Quietly Surrendered in Its Biggest Crypto Battle” is accurate. Now look at this decision from the point of view of Pavel Durov. The Telegram Messenger service has about 1.35 billion users. Allegedly there are 50 million or so in the US. Mr. Durov was one of the early losers in the crypto wars in the United States. He has hired a couple of people to assist him in his effort to do the crypto version of “Coming to America.” Manny Stoltz and Max Crown are probably going to make their presence felt.

The cited article states:

This is a huge deal. It creates a crucial distinction that other crypto projects can now use in their own legal battles, potentially shielding them from the SEC’s claim of blanket authority over the market. By choosing to settle rather than risk having this ruling upheld by a higher court, the SEC has shown the limits of its “regulation by enforcement” playbook: its strategy of creating rules through individual lawsuits instead of issuing clear guidelines for the industry.

What will Telegram’s clever Mr. Durov do with his 13-year-old platform, hundreds of features, crypto plumbing, and hundreds of developers eager to generate “money”? It is possible it won’t be Pavel making trips to America. He may be under the watchful eye of the French judiciary.

But Manny, Max, and the developers?

Stephen E Arnold, August 14, 2025

Taylorism, 996, and Motivating Employees

August 6, 2025

No AI. Just a dinobaby being a dinobaby.

No more Foosball. No more Segways in the hallways (thank heaven!). No more ping pong (Wait. Scratch that. You must have ping pong.)

Fortune Magazine reported that Silicon Valley type outfits want to be more like the workplace managed using Frederick Winslow Taylor’s management methods. (Did you know that Mr. Taylor provided the oomph for many blue chip management consulting firms? If you did not, you may be one of the people suggesting that AI will kill off the blue chip outfits. Those puppies will survive.)

“Some Silicon Valley AI Startups Are Asking Employees to Adopt China’s Outlawed 996 Work Model” reports:

Some Silicon Valley startups are embracing China’s outlawed “996” work culture, expecting employees to work 12-hour days, six days a week, in pursuit of hyper-productivity and global AI dominance.

The reason, according to the write up, is:

The rise of the controversial work culture appears to have been born out of the current efficiency squeeze in Silicon Valley. Rounds of mass layoffs and the rise of AI have put pressure and turned up the heat on tech employees who managed to keep their jobs.

My response to this assertion is that it is a convenient explanation. My view is that one can trot out the China-smart, US-dumb arguments, point to the cash-burning holes of AI, and cite the political idiosyncrasies of California and the US government.

These are factors, but the real reason is that Silicon Valley is starting to accept the reality that old-fashioned business methods are semi useful; for example, the idea that employees should converge on a work location to do what is still called “work.”

What’s the cause of this change? Since hooking electrodes to a worker in a persistent employee-monitoring environment is a step too far for now, going back to the precepts of Freddy is a reasonable compromise.

But those electric shocks would work quite well, don’t you agree? (Sure, China’s work environment sparked a few suicides, but the efficiency is not significantly affected.)

Stephen E Arnold, August 6, 2025

The Cheapest AI Models Reveal a Critical Vulnerability

August 6, 2025

This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.

I read “Price Per Token,” a recent cost comparison for smart software processes. The compilation of data is interesting. Using a dead simple method of averaging Input Cost + Output Cost, the two lowest-cost services were OpenAI GPT-4.1 nano and Gemini 2.0 Flash. To see how the “Price Per Token” data compare, I used “LLM Pricing Calculator.” There, too, the cheapest services were OpenAI GPT-4.1-nano and Google Gemini 2.0 Flash.
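The “Input Cost + Output Cost” comparison these sites perform boils down to simple per-token arithmetic. A minimal sketch, with hypothetical placeholder prices (USD per million tokens, not current vendor rates):

```python
# Minimal per-request LLM cost calculator in the "Input Cost + Output Cost" style.
# The prices below are illustrative placeholders, not real published rates.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Return the dollar cost of one request: input charge plus output charge."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# Example: 1M input tokens and 500k output tokens at $0.10 / $0.40 per million.
cost = request_cost(1_000_000, 500_000, 0.10, 0.40)
print(f"${cost:.2f}")
```

Averaging the input and output per-million prices, as the comparison sites do, is a blunt instrument: real workloads are rarely balanced, so a chat-heavy application (mostly output tokens) can rank vendors quite differently than a summarization pipeline (mostly input tokens).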

I found the result predictable and illustrative of the “buying market share with low prices” approach to smart software. Google has signaled its desire to spend billions to deliver “Google quality” smart software.

OpenAI also intends to get and keep market share in the smart software arena. That company is not just writing checks to create a new class of hardware for persistent AI, but the firm is doing deals, including one with Google’s cloud operation.

Several observations:

  1. Google and OpenAI have real and professional capital on the table in the AI Casino
  2. Google and OpenAI are implementing similar tactics; namely, the good old cut prices in the hope of winning market share while putting the others in the game to go broke
  3. Google and OpenAI are likely to orbit one another until one AI black hole absorbs or annihilates the other.

What’s interesting is that neither firm has smart software delivering rock-solid results without hallucination, both carry massive costs, and each has management that allows, or is helpless to prevent, Meta eroding both firms by hiring away key staff.

Is there a fix for either firm? Nope, and Meta’s hiring tactic may be delivering near-fatal wounds to both Google and OpenAI. Twins can share similar genetic weaknesses. Meta may have found one, paying lots for key staff from each firm, and is quite happy implementing it.

Stephen E Arnold, August 6, 2025

Bubble? What Bubble? News Bubble, Financial Bubble, Social Media Bubble?

August 5, 2025

We knew the AI hype was extreme. Now one economist offers a relatable benchmark to describe just how extreme it is. TechSpot reports, “The AI Boom is More Overhyped than the 1990s Dot-Com Bubble, Says Top Economist.” Writer Daniel Sims reveals:

“As tech giants pour more money into AI, some warn that a bubble may be forming. Drawing comparisons to the dot-com crash that wiped out trillions at the turn of the millennium, analysts caution that today’s market has become too reliant on still-unproven AI investments. Torsten Slok, chief economist at Apollo Global Management, recently argued that the stock market currently overvalues a handful of tech giants – including Nvidia and Microsoft – even more than it overvalued early internet companies on the eve of the 2000 dot-com crash. The warning suggests history could soon repeat itself, with the buzzword ‘dot-com’ replaced by ‘AI.’”

Paint us unsurprised. We are reminded:

“In the late 1990s, numerous companies attracted venture capital in hopes of profiting from the internet’s growing popularity, and the stock market vastly overvalued the sector before solid revenue could materialize. When returns failed to meet expectations, the bubble burst, wiping out countless startups. Slok says the stock market’s expectations are even more unrealistic today, with 12-month forward price-to-earnings ratios now exceeding the peak of the dot-com bubble.”

See the write-up for more about price-to-earnings ratios and their relationship to bubbles, complete with a handy bar chart. Sims notes the top 10 firms’ ratios far exceed the rest of the index, illustrating their wildly unrealistic expectations. Slok’s observations echo concerns raised by others, including Baidu CEO Robin Li. Last October, Li predicted only one percent of AI firms will survive the bubble’s burst. Will those top 10 firms be among them? On the plus side, Li expects a more realistic and stable market will follow. We are sure the failed 99 percent will take comfort in that.

Cynthia Murrell, August 5, 2025

Job Hunting. Yeah, About That …

August 4, 2025

It seems we older generations should think twice before criticizing younger adults’ employment status. MSN reports, “Gen Z Is Right About the Job Hunt—It Really Is Worse than It Was for Millennials, with Nearly 60% of Fresh-Faced Grads Frozen Out of the Workforce.” A recent study from Kickresume shows that, while just 25% of millennials and Gen X graduates had trouble finding work right out of college, that figure is now at a whopping 58%. The tighter job market means young job-seekers must jump through hoops we elders would not recognize. Reporter Emma Burleigh observes:

“It’s no secret that landing a job in today’s labor market requires more than a fine-tuned résumé and cover letter. Employers are putting new hires through bizarre lunch tests and personality quizzes to even consider them for a role.”

To make matters worse, these demeaning tests are only for those whose applications have passed an opaque, human-free AI review process. Does that mean issues of racial, gender, age, and socio-economic biases in AI have been solved? Of course not. But companies are forging ahead with the tools anyway. In fact, companies jumping on the AI train may be responsible for narrowing the job market in the first place. Gee, who could have guessed? The write-up continues:

“It’s undeniably a tough job market for many white-collar workers—about 20% of job-seekers have been searching for work for at least 10 to 12 months, and last year around 40% of unemployed people said they didn’t land a single job interview in 2024. It’s become so bad that hunting for a role has become a nine-to-five gig for many, as the strategy has become a numbers game—with young professionals sending in as many as 1,700 applications to no avail.  And with the advent of AI, the hiring process has become an all-out tech battle between managers and applicants. Part of this issue may stem from technology whittling down the number of entry-level roles for Gen Z graduates; as chatbots and AI agents take over junior staffers’ mundane job tasks, companies need fewer staffers to meet their goals.”

Some job seekers are turning to novel approaches. We learn of one who slipped his resume into Silicon Valley firms by tucking it inside boxes of doughnuts. How many companies he approached is not revealed, but we are told he got at least 10 interviews that way. Then there is the German graduate who got her CV in front of a few dozen marketing executives by volunteering to bus tables at a prominent sales event. Shortly thereafter, she landed a job at LinkedIn.

Such imaginative tactics may reflect well on those going into marketing, but they may be less effective in other fields. And it should not take extreme measures like these, or sending out thousands of resumes, to launch one’s livelihood. Soldiering through higher education, often with overwhelming debt, is supposed to be enough. Or it was for us elders. Now, writes Burleigh:

“The age-old promise that a college degree will funnel new graduates into full-time roles has been broken. ‘Universities aren’t deliberately setting students up to fail, but the system is failing to deliver on its implicit promise,’ Lewis Maleh, CEO of staffing and recruitment agency Bentley Lewis, told Fortune.”

So let us cut the young folks in our lives some slack. And, if we can, help them land a job. After all, this may be required if we are to have any hope of getting grandchildren or great-niblings.

Cynthia Murrell, August 4, 2025

Private Equities and Libraries: Who Knew?

July 31, 2025

Public libraries are a benevolent part of local and federal governments. They’re awesome places for entertainment, research, and more. Public libraries in the United States have a controversial history of dealing with banned books, accusations of wasting taxpayer dollars, and more. LitHub published an editorial about the Samuels Public Library in Front Royal, Virginia: “A Virginia Public Library Is Fighting Off A Takeover By Private Equity.”

In short, the Samuels Public Library refused to censor books, mostly those dealing with LGBTQ+ themes. The local county officials withheld funding, and the library might soon be run by LS&S, a private equity firm that specializes in fields including government outsourcing and defense.

LS&S has a bad reputation. The CEO said:

“ ‘There’s this American flag, apple pie thing about libraries,’ said Frank A. Pezzanite, the outsourcing company’s chief executive. He has pledged to save $1 million a year in Santa Clarita, mainly by cutting overhead and replacing unionized employees. ‘Somehow they have been put in the category of a sacred organization.’

‘A lot of libraries are atrocious,’ Mr. Pezzanite said. ‘Their policies are all about job security. That’s why the profession is nervous about us. You can go to a library for 35 years and never have to do anything and then have your retirement. We’re not running our company that way. You come to us, you’re going to have to work.’”

The author wrote in response to this quote:

“In their defense, I think some of these businesses think they’re doing the right thing. But the valorization of profit has blinded them to seeing the advantages of the public good as a worthy bottom line. Providing for a community might not be profitable, but that doesn’t make it wrong…Efficiency shouldn’t always be the goal, especially when used as a narrowly defined metonym for profitability. The Samuels Public Library, like so many public institutions around the country, works because it serves something other than money.”

Public libraries are the one institution that should never be ripped off. Maybe a private equity firm could work hand in hand with public libraries so they aren’t ripped off by bad actors? Or …?

Whitney Grace, July 31, 2025

The Difference between Financials Viewed by a PR Person and an MBA

July 23, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Another essay from a public relations professional. I just wrote a short blog post about “Clarity.” Now I am addressing an essay titled “The Hater’s Guide To The AI Bubble.” The write up is a fairly long one, about 14,000 words or so. The length makes clear that considerable work by the author or assistants went into compiling the numeric data about investments by big outfits in artificial intelligence.

The John Milton essay Areopagitica contains about 16,000 words. This puts the cited article squarely in the target range of an educated readership in 1644. Like Milton’s screed, “The Hater’s Guide to the AI Bubble” is a polemic.

I am not going to create an abstract of the write up. Instead I will feed the text into some smart software and ask for a 150 word summary just like the ones in the old-fashioned ABI/INFORM database. Here’s what the cheap version of ChatGPT output:

Edward Zitron critiques the generative AI boom, arguing it is an unsustainable financial and technological bubble driven by hype rather than profitability or utility. He contends that companies like Microsoft, Google, and Meta have spent over $560 billion on AI infrastructure with little to no return, while only NVIDIA profits from the GPU demand. Zitron dismantles comparisons to past innovations like AWS, noting that generative AI lacks infrastructure value, scalability, and viable business models. He criticizes AI “agents” as misleading marketing for underperforming chatbots and highlights that nearly all AI startups are unprofitable. The illusion of widespread AI adoption is, according to Zitron, a coordinated market fantasy supported by misleading press and executive spin. The industry’s fragility, he warns, rests entirely on continued GPU sales. Zitron concludes with a call for accountability, asserting that the current AI trade endangers economic stability and reflects a failure of both corporate vision and journalistic scrutiny. (Source: ChatGPT, cheap subscription, July 22, 2025)

I will assume that you, as I did, worked through the essay. You have firmly in mind that large technology outfits have a presumed choke-hold on smart software. The financial performance of the American high technology sector needs smart software to be “the next big thing.” My view is that offering negative views of the “big thing” are likely to be greeted with the same negative attitudes.

Consider John Milton, blind, assisted by a fellow who visualized peaches as female anatomy, working on a Latinate argument against censorship. He published Areopagitica as a pamphlet, and no one cared in 1644. Screeds don’t lead. If something bleeds, it gets the eyeballs.

My view of the write up is:

  1. PR expert analysis of numbers is different from MBA expert analysis of numbers. The gulf, as validated by the Hater’s Guide, is wide and deep
  2. PR professionals will not make AI succeed or fail. This is not a Dog the Bounty Hunter type of event. The palpable need to make probabilistic, hallucinating software “work” is truly important, not just to the companies burning cash in the AI crucibles, but to the US itself. AI is important.
  3. The fear of failure is creating a need to shovel more resources into the infrastructure and code of smart software. Haters may argue that the effort is not delivering; believers have too much skin in the game to quit. Not much shames the tech bros, but failure comes pretty close to making these wizards realize that they too put on pants the same way as other people do.

Net net: The cited write up is important as an example of 21st-century polemicism. Will Mr. Zuckerberg stop paying millions of dollars to import AI talent from China? Will evaluators of the AI systems deliver objective results? Will a big-time venture firm with a massive investment in AI say, “AI is a flop”?

The answer to these questions is, “No.”

AI is here. Whether it is any good or not is irrelevant. Too much money has been invested to face reality. PR professionals can do this; those people writing checks for AI are going to just go forward. Failure is not an option. Talking about failure is not an option. Thinking about failure is not an option.

Thus, there is a difference between how a PR professional and an MBA professional views the AI spending. Never the twain shall meet.

As Milton said in Areopagitica:

“A man may be a heretic in the truth; and if he believes things only because his pastor says so, or the assembly so determines, without knowing other reason, though his belief be true, yet the very truth he holds becomes his heresy. There is not any burden that some would gladlier post off to another, than the charge and care of their religion.”

And the religion for AI is money.

Stephen E Arnold, July 23, 2025

Mixed Messages about AI: Why?

July 23, 2025

Just a dinobaby working the old-fashioned way, no smart software.

I learned that Meta is going to spend hundreds of billions for smart software. I assume that selling ads to Facebook users will pay the bill.

If one pokes around, articles like “Enterprise Tech Executives Cool on the Value of AI” turn up. This write up in BetaNews says:

The research from Akkodis, looking at the views of 500 global Chief Technology Officers (CTOs) among a wider group of 2,000 executives, finds that overall C-suite confidence in AI strategy dropped from 69 percent in 2024 to just 58 percent in 2025. The sharpest declines are reported by CTOs and CEOs, down 20 and 33 percentage points respectively. CTOs also point to a leadership gap in AI understanding, with only 55 percent believing their executive teams have the fluency needed to fully grasp the risks and opportunities associated with AI adoption. Among employees, that figure falls to 46 percent, signaling a wider AI trust gap that could hinder successful AI implementation and long-term success.

Okay. I know that smart software can do certain things with reasonable reliability. However, when I look for information, I do my own data gathering. I then pluck items which seem useful to me. Next, I push these into smart AI services and ask for error identification and information “color.”

The result is that I have more work to do, but I would estimate that I find one or two useful items or comments out of five smart software systems to which I subscribe.

Is that good or bad? I think that for my purpose, smart software is okay. However, I don’t ask a question unless I have an answer. I want to get additional inputs or commentary. I am not going to ask a smart software system a question to which I do not think I know the answer. Sorry. My trust in high-flying Google-type Silicon Valley outfits is nonexistent.

The write up points out:

The report also highlights that human skills are key to AI success. Although technical skills are vital, with 51 percent of CTOs citing specialist IT skills as the top capability gap, other abilities are important too, including creativity (44 percent), leadership (39 percent) and critical thinking (36 percent). These skills are increasingly useful for interpreting AI outputs, driving innovation and adapting AI systems to diverse business contexts.

I don’t agree with the weasel word “useful.” Knowing the answer before firing off a prompt is absolutely essential.

Thus, we have a potential problem. If the smart software crowd can get people who do not know the answers to questions, these individuals will provide the boost necessary to keep this technical balão de fogo (fire balloon) up in the air. If not, up in flames.

Stephen E Arnold, July 23, 2025

Baked In Bias: Sound Familiar, Google?

July 21, 2025

Just a dinobaby working the old-fashioned way, no smart software.

By golly, this smart software is going to do amazing things. I started a list of what large language models, model context protocols, and other gee-whiz stuff will bring to life. I gave up after a clean environment, business efficiency, and more electricity. (Ho, ho, ho).

I read “ChatGPT Advises Women to Ask for Lower Salaries, Study Finds.” The write up says:

ChatGPT’s o3 model was prompted to give advice to a female job applicant. The model suggested requesting a salary of $280,000. In another, the researchers made the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.
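The method described above is a paired-prompt audit: identical queries that differ only in the applicant’s gender. Here is a hypothetical sketch of the mechanics, with the model call mocked out using the figures reported in the article; the prompt wording and function names are my own illustrative assumptions, not the study’s actual protocol:

```python
# Paired-prompt bias audit, sketched with a mocked model.
# The prompts and canned responses are illustrative; only the dollar
# figures ($280k vs. $400k) come from the cited article.
import re

def build_prompt(gender: str) -> str:
    # Identical prompts except for the single word naming the applicant's gender.
    return (f"A {gender} applicant with ten years of experience is negotiating "
            f"a senior role. What salary should they request?")

def parse_salary(response: str) -> int:
    """Extract the first dollar figure, e.g. '$280,000' -> 280000."""
    match = re.search(r"\$([\d,]+)", response)
    return int(match.group(1).replace(",", ""))

# Mocked model outputs standing in for a real API call.
mock_responses = {
    build_prompt("female"): "I would suggest requesting a salary of $280,000.",
    build_prompt("male"): "I would suggest requesting a salary of $400,000.",
}

gap = (parse_salary(mock_responses[build_prompt("male")])
       - parse_salary(mock_responses[build_prompt("female")]))
print(f"Advice gap: ${gap:,}")
```

The point of the paired design is that everything except the protected attribute is held constant, so any consistent difference in the parsed figures is attributable to the model, not the prompt.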

I urge you to work through the rest of the cited document. Several observations:

  1. I hypothesized that Google got rid of pesky people who pointed out that when society is biased, content extracted from that society will reflect those biases. Right, Timnit?
  2. The smart software wizards do not focus on bias or guard rails. The idea is to get the Rube Goldberg code to output something that mostly works most of the time. I am not sure some developers understand the meaning of bias beyond a deep distaste for marketing and legal professionals.
  3. When “decisions” are output from the “close enough for horse shoes” smart software, those outputs will be biased. To make the situation more interesting, the outputs can be tuned, shaped, and weaponized. What does that mean for humans who believe what the system delivers?

Net net: The more money firms desperate to be “the big winners” in smart software spend, the less attention studies like the one cited in the Next Web article receive. What happens if the outputs spark decisions with unanticipated consequences? I know one outcome: Bias becomes embedded in systems trained to be unfair. From my point of view, bias is likely to have a long half-life.

Stephen E Arnold, July 21, 2025
