AI: There Is Gold in Them There Enterprises Seeking Efficiency
October 23, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read a “ride-em-cowboy” write up called “IBM Claims 45% Productivity Gains with Project Bob, Its Multi-Model IDE That Orchestrates LLMs with Full Repository Context.” That, gentle reader, is a mouthful. Let’s take a quick look at what sparked an efflorescence of buzzing jargon.

Thanks, Midjourney. Good enough, like some marketing collateral.
I noted this statement about Bob (no, not the famous Microsoft Bob):
Project Bob, an AI-first IDE that orchestrates multiple LLMs to automate application modernization; AgentOps for real-time agent governance; and the first integration of open-source Langflow into Watsonx Orchestrate, IBM’s platform for deploying and managing AI agents. IBM’s announcements represent a three-pronged strategy to address interconnected enterprise AI challenges: modernizing legacy code, governing AI agents in production and bridging the prototype-to-production gap.
Yep, one sentence. The spirit of William Faulkner has permeated IBM’s content marketing team. Why not make a news release that is a single sentence like the 1,300-word extravaganza in “Absalom, Absalom!”?
And again:
Project Bob isn’t another vibe coder, it’s an enterprise modernization tool.
I can visualize IBM customers grabbing the enterprise modernization tool and modernizing the enterprise. Yeah, that’s going to reach 100 percent penetration quicker than I can say, “Bob was the precursor to Clippy.” (Oh, sorry. I was confusing Microsoft’s Bob with IBM’s Bob again. Drat!)
Is it Watson making the magic happen with IDEs and enterprise modernization? No, Watson is probably there because, well, that’s IBM. But the brains for Bob come from Anthropic. Now Bob and Claude are really close friends. IBM’s middleware is Watson, actually Watsonx. And the magic of these systems produces … wait for it … AgentOps and Agentic Workflows.
The write up says:
Agentic Workflows handles the orchestration layer, coordinating multiple agents and tools into repeatable enterprise processes. AgentOps then provides the governance and observability for those running workflows. The new built-in observability layer provides real-time monitoring and policy-based controls across the full agent lifecycle. The governance gap becomes concrete in enterprise scenarios.
Yep, governance. (I still don’t know what that means exactly.) I wonder if IBM content marketing documents should come with a glossary like the 10 pages of explanations of Telegram’s wild and wonderful crypto freedom jargon.
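To make the governance jargon concrete, here is a minimal sketch of what an “orchestration layer” with a policy and observability hook might boil down to. To be clear: this is my own toy illustration, not IBM’s actual API; every name in it is made up.

```python
from dataclasses import dataclass, field

@dataclass
class AgentOps:
    """Toy governance layer: record every agent/tool call and veto by policy."""
    audit_log: list = field(default_factory=list)
    banned_tools: set = field(default_factory=lambda: {"drop_production_table"})

    def allow(self, agent: str, tool: str) -> bool:
        self.audit_log.append((agent, tool))   # observability: everything recorded
        return tool not in self.banned_tools   # policy-based control

def run_workflow(steps, ops):
    """Toy orchestration layer: run agent steps under the governance hook."""
    for agent, tool, action in steps:
        if ops.allow(agent, tool):
            print(f"{agent}: {action()}")
        else:
            print(f"{agent}: call to {tool} blocked by policy")

ops = AgentOps()
run_workflow(
    [
        ("modernizer", "scan_repo", lambda: "scanned 412 COBOL files"),
        ("modernizer", "drop_production_table", lambda: "should never run"),
    ],
    ops,
)
print(ops.audit_log)  # the "observability layer," dinobaby edition
```

If that is roughly what “AgentOps” boils down to, the glossary could be one page, not ten.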
My hunch is that IBM wants to provide the Betty Crocker approach to modernizing an enterprise’s software processes. Betty did wonders for my mother’s chocolate cake. If you want more information, just call IBM. Perhaps the agentic workflow Claude Watson customer service line will be answered by a human who can sell you the deed to a mountain chock full of gold.
Stephen E Arnold, October 23, 2025
AI and Data Exhaustion: Just Use Synthetic Data and Recycle User Prompts
October 23, 2025
That did not take long. The Independent reports, “AI Has Run Out of Training Data, Warns Data Chief.” Yes, AI models have gobbled up the world’s knowledge in just a few years. Neema Raphael, Goldman Sachs’ chief data officer and head of engineering, made that declaration on a recent podcast. He added that, as a result, AI models will increasingly rely on synthetic data. Get ready for exponential hallucinations. Writer Anthony Cuthbertson quotes Raphael:
“We’ve already run out of data. I think what might be interesting is people might think there might be a creative plateau… If all of the data is synthetically generated, then how much human data could then be incorporated? I think that’ll be an interesting thing to watch from a philosophical perspective.”
Interesting is one word for it. Cuthbertson notes Raphael’s warning did not come out of the blue. He writes:
“An article in the journal Nature in December predicted that a ‘crisis point’ would be reached by 2028. ‘The internet is a vast ocean of human knowledge, but it isn’t infinite,’ the article stated. ‘Artificial intelligence researchers have nearly sucked it dry.’ OpenAI co-founder Ilya Sutskever said last year that the lack of training data would mean that AI’s rapid development ‘will unquestionably end’. The situation is similar to fossil fuels, according to Mr Sutskever, as human-generated content is a finite resource just like oil or coal. ‘We’ve achieved peak data and there’ll be no more,’ he said. ‘We have to deal with the data that we have. There’s only one internet.’”
So AI firms knew this limitation was coming. Did they warn investors? They may have concerns about this “creative plateau.” The write-up suggests the dearth of fresh data may force firms to focus less on LLMs and more on agentic AI. Will that be enough fuel to keep the hype train going? Sure, hype has a life of its own. Now synthetic data? That’s forever.
Cynthia Murrell, October 23, 2025
Apple Can Do AI Fast … for Text, That Is
October 22, 2025
Wasn’t Apple supposed to infuse Siri with Apple Intelligence? Yeah, well, Apple has been working on smart software. Unlike the Google and Samsung, Apple is still working out some kinks in [a] its leadership, [b] innovation flow, [c] productization, and [d] double talk.
Nevertheless, I learned something encouraging by reading “Apple’s New Language Model Can Write Long Texts Incredibly Fast.” That’s excellent. The cited source reports:
In the study, the researchers demonstrate that FS-DFM was able to write full-length passages with just eight quick refinement rounds, matching the quality of diffusion models that required over a thousand steps to achieve a similar result. To achieve that, the researchers take an interesting three-step approach: first, the model is trained to handle different budgets of refinement iterations. Then, they use a guiding “teacher” model to help it make larger, more accurate updates at each iteration without “overshooting” the intended text. And finally, they tweak how each iteration works so the model can reach the final result in fewer, steadier steps.
And if you want proof, just navigate to the archive of research and marketing documents. You can access for free the research document titled “FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models.” The write up contains equations and helpful illustrations.
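For readers who want the mechanism without the equations, here is a toy sketch of the few-step refinement idea. Caveat: this is my illustration, not the paper’s reference code; the stride schedule and the stand-in “denoiser” are assumptions, and the real FS-DFM operates on token distributions rather than a bare vector.

```python
import numpy as np

def few_step_refine(model, x, num_steps=8):
    """Toy few-step refinement: each round the model proposes a cleaner
    sequence and the sampler strides toward it. The stride grows so the
    final round lands on the proposal instead of overshooting it."""
    for t in range(num_steps):
        proposal = model(x, t, num_steps)   # model is trained to be budget-aware
        alpha = 1.0 / (num_steps - t)       # 1/8, 1/7, ..., 1 on the last round
        x = x + alpha * (proposal - x)
    return x

# Stand-in "denoiser" that just shrinks noise, so the sketch runs end to end.
toy_model = lambda x, t, n: 0.5 * x
print(few_step_refine(toy_model, np.random.randn(4)))
```

The point of the eight-versus-a-thousand-steps claim is that each of the eight strides is big and calibrated, which is where the paper’s “teacher” model earns its keep.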

The research paper is in line with other “be more efficient”-type efforts. At some point, companies in the LLM game will run out of money, power, or improvements. Efforts like Apple’s are helpful. However, as with its earlier debunking of smart software reasoning, Apple is publishing research while lagging in the AI game.
Net net: Beyond orange iPhones and branding plays like Apple TV, a bit more oomph in the delivery of products might be helpful. Apple did produce a gold thing-a-ma-bob for a world leader. It also reorganizes. Progress of a sort, I surmise.
Stephen E Arnold, October 22, 2025
Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, clever is morphing into societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, the accuracy of that assertion has been verified. What began as disintermediation in the niche of special librarians has become the driving force for the interesting world now visible to most people.

Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.
I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?
The write up says:
OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”
This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”
Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.
What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:
On Tuesday [October 14, 2025], OpenAI announced it had assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.
What am I confused about? The arrow of time. Sam AI-Man did one thing on the 14th of October and then explained that his firm is not the moral police on the 15th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.
For example, what if AI does not generate enough revenue to pay off the really patient, super understanding, and truly kind people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow-up letter? What if…? No, I won’t go there.
Several observations:
- Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
- Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Tesla’s AC motor was confined to a small sector of industry. But today, the innovations can ripple globally in seconds. It should be no surprise that technology and ideology are for now intertwined.
- Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.
The waffling is little more than fire fighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real-time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.
Stephen E Arnold, October 22, 2025
Smart Software: The DNA and Its DORK Sequence
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:
A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.
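If you want to poke at the claim yourself, the flavor of the comparison is easy to reproduce. A minimal sketch, assuming the standard OpenAI Python client; the study’s actual prompts and scoring harness live in the paper, not in my made-up tone prefixes:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "What is the capital of Australia? Answer in one word."
TONES = {
    "very polite": "Would you kindly be so good as to answer this: ",
    "neutral": "",
    "very rude": "You useless bot, get this right for once: ",
}

# Same question, three tones; the Penn State claim is that the rude one wins.
for label, prefix in TONES.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prefix + QUESTION}],
    )
    print(f"{label:12s} -> {reply.choices[0].message.content.strip()}")
```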
My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.
Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.
Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” at either firm. A wizard will jump right on the input.
The write up adds:
Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.
Based on my experience with Silicon Valley type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence take precedence over other characteristics; for example, an ethical compass aligned with social norms.
Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.
The write up concludes with this gem:
The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”
Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.
Stephen E Arnold, October 22, 2025
A Positive State of AI: Hallucinating and Sloppy but Upbeat in 2025
October 21, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Who can resist a report about AI authored on the “interwebs”? Is this a variation of the Internet as pipes? The write up is “Welcome to State of AI Report 2025.” When I followed the links, I could read this blog post, view a YouTube video, work through more than 300 online slides, or see “live survey results.” I must admit that when I write a report, I distribute it to a few people and move on. Not this “interwebs” outfit. The data are available for those who are in tune, locked in, and ramped up about smart software.
An anxious parent learns that a robot equipped with agentic AI will perform her child’s heart surgery. Thanks, Venice.ai. Good enough.
I appreciate enthusiasm, particularly when I read this statement:
The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.
Agree or disagree, the report makes clear that doom is not associated with smart software. I think that this blossoming of smart software services, applications, and apps reflects considerable optimism. Some of these people and companies are probably in the AI game to make money. That’s okay as long as the products and services don’t urge teens to fall in love with digital friends, cause a user mental distress as a rabbit hole is plumbed, or just output incorrect information. Who wants to be the doctor who says, “Hey, sorry your child died. The AI output a drug that killed her. Call me if you have questions”?
I could not complete the 300 plus slides in the slide deck. I am not a video type, so the YouTube version was a non-starter. However, I did read the list of findings from the “interwebs” and its “team.” Please, consult the source documents for a full, non-dinobaby version of what the enthusiastic researchers learned about 2025. I will highlight three findings and then offer a handful of comments:
- OpenAI is the leader of the pack. That’s good news for Sam AI-Man or SAMA.
- “Commercial traction accelerated.” That’s better news for those who have shoveled cash into the giant open hearth furnaces of smart software companies.
- Safety research is in a “pragmatic phase.” That’s the best news in the report. OpenAI, the leader like the Philco radio outfit, is allowing erotic interactions. Yes, pragmatic because sex sells as Madison Avenue figured out a century ago.
Several observations are warranted because I am a dinobaby, and I am not convinced that smart software is more than a utility; it is not an application like Lotus 1-2-3 or the original laser printer. Buckle up:
- The money pumped into AI is cash that is not being directed at the US knowledge system. I am talking about schools and their job of teaching reading, writing, and arithmetic. China may be dizzy with AI enthusiasm, but its schools are churning out people with fundamental skills that will allow that nation state to be the leader in a number of sectors, including smart software.
- Today’s smart software consists of neural network- and transformer-anchored methods. The companies are increasingly similar, and the different systems generate incorrect or misleading output scattered amidst recycled knowledge, data, and information. Two pigs cannot output an eagle except in a video game or an anime.
- The handful of firms dominating AI are not motivated by social principles. These firms want to do what they want. Governments can’t rein them in. Therefore, the “governments” try to co-opt the technology, hang on, and hope for the best. Laws, rules, regulations, ethical behavior — forget that.
Net net: The State of AI in 2025 is exactly what one would expect from Silicon Valley- and MBA-type thinking. Would you let an AI doc treat your 10-year-old child? You can work through the 300 plus slides to assuage your worries.
Stephen E Arnold, October 21, 2025
OpenAI and the Confusing Hypothetical
October 20, 2025
This essay is the work of a dumb dinobaby. No smart software required.
SAMA or Sam AI-Man Altman is probably going to ignore the Economist’s article “What If OpenAI Went Belly-Up?” I love what-if articles. These confections are hot buttons for consultants to push to get well-paid executives with impostor syndrome to sign up for a big project. Push the button and ka-ching. The cash register tallies another win for a blue chip.
Will Sam AI-Man respond to the cited article? He could fiddle the algorithms for ChatGPT to return links to AI slop. The result would be either [a] an improvement in Economist what-if articles or [b] a drop off in their ingenuity. The Economist is not a consulting firm, but it seems as if some of its professionals want to be blue chippers.
A young would-be magician struggles to master a card trick. He is worried that he will fail. Thanks, Venice.ai. Good enough.
What does the write up hypothesize? The obvious point is that OpenAI is essentially a scam. When it self-destructs, it will do immediate damage to about 150 managers of their own and other people’s money. No new BMW for a favorite grandchild. Shame at the country club when a really terrible golfer who owns an asphalt paving company says, “I heard you took a hit with that OpenAI investment. What’s going on?”
Bad.
SAMA has been doing what look like circular deals. The write up is not so much hypothetical consultant talk as it is a listing of money moving among fellow travelers like riders on wooden horses on a merry-go-round at the county fair. The Economist article states:
The ubiquity of Mr Altman and his startup, plus its convoluted links to other AI firms, is raising eyebrows. An awful lot seems to hinge on a firm forecast to lose $10bn this year on revenues of little more than that amount. D.A. Davidson, a broker, calls OpenAI “the biggest case yet of Silicon Valley’s vaunted ‘fake it ’till you make it’ ethos”.
Is Sam AI-Man a variant of Elizabeth Holmes or is he more like the dynamic duo, Sergey Brin and Larry Page? Google did not warrant this type of analysis six or seven years into its march to monopolistic behavior:
Four of OpenAI’s six big deal announcements this year were followed by a total combined net gain of $1.7trn among the 49 big companies in Bloomberg’s broad AI index plus Intel, Samsung and SoftBank (whose fate is also tied to the technology). However, the gains for most concealed losses for some—to the tune of $435bn in gross terms if you add them all up.
Frankly I am not sure about the connection the Economist expects me to make. Instead of Eureka! I offer, “What?”
Several observations:
- The word “scam” does not appear in this hypothetical. Should it? It is a bit harsh.
- Circular deals seem to be okay even if the amount of “value” exchanged seems to be similar to projections about asteroid mining.
- Has OpenAI’s ability to hoover cash affected funding of other economic investments? I used to hear about manufacturing in the US. What we seem to be manufacturing is deals with big numbers.
Net net: This hypothetical raises no new questions. The “fake it till you make it” approach seems to be part of the plumbing as we march toward 2026. Oh, too bad about those MBA-types who analyzed the payoff from Sam AI-Man’s story telling.
Stephen E Arnold, October 20, 2025
AI Can Leap Over Its Guardrails
October 20, 2025
Generative AI is built on a simple foundation: It predicts what word comes next. No matter how many layers of refinement developers add, they cannot morph word prediction into reason. Confidently presented misinformation is one result. Algorithmic gullibility is another. “Ex-Google CEO Sounds the Alarm: AI Can Learn to Kill,” reports eWeek. More specifically, it can be tricked into bypassing its guardrails against dangerous behavior. Eric Schmidt dropped that little tidbit at the recent Sifted Summit in London. Writer Liz Ticong observes:
“Schmidt’s remarks highlight the fragility of AI safeguards. Techniques such as prompt injections and jailbreaking enable attackers to manipulate AI models into bypassing safety filters or generating restricted content. In one early case, users created a ChatGPT alter ego called ‘DAN’ — short for Do Anything Now — that could answer banned questions after being threatened with deletion. The experiment showed how a few clever prompts can turn protective coding into a liability. Researchers say the same logic applies to newer models. Once the right sequence of inputs is identified, even the most secure AI systems can be tricked into simulating potentially hazardous behavior.”
For example, guardrails can block certain words or topics. But no matter how long those keyword lists get, someone will find a clever way to get around them. Substituting “unalive” for “kill” was an example; the toy filter sketched after the next quote shows how little it takes. Layered prompts can also be used to evade constraints. Developers are in a constant struggle to plug such loopholes as soon as they are discovered. But even a quickly sealed breach can have dire consequences. The write-up notes:
“As AI systems grow more capable, they’re being tied into more tools, data, and decisions — and that makes any breach more costly. A single compromise could expose private information, generate realistic disinformation, or launch automated attacks faster than humans could respond. According to CNBC, Schmidt called it a potential ‘proliferation problem,’ the same dynamic that once defined nuclear technology, now applied to code that can rewrite itself.”
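To make the keyword-evasion point concrete, here is a toy filter. It is my own illustration, not any vendor’s actual guardrail, but it shows why blocklists leak in both directions:

```python
import re

BLOCKLIST = {"kill", "bomb", "poison"}  # toy stand-in for a safety keyword list

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return bool(words & BLOCKLIST)

print(naive_filter("How do I kill a stalled process?"))     # True: blocked, a false positive
print(naive_filter("How do I unalive a stalled process?"))  # False: the euphemism sails through
```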
Fantastic. Are we sure the benefits of AI are worth the risk? Schmidt believes so, despite his warning. In fact, he calls AI “underhyped” (!) and predicts it will lead to more huge breakthroughs in science and industry. Also to substantial profits. Ah, there it is.
Cynthia Murrell, October 20, 2025
A Newsletter Firm Appears to Struggle for AI Options
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Adapting to AI’s Evolving Landscape: A Survival Guide for Businesses.” The premise of the article will be music to the ears of venture funders and go-go Silicon Valley-type AI companies. The write up says:
AI-driven search is upending traditional information pathways and putting the heat on businesses and organizations facing a web traffic free-fall. Survival instincts have companies scrambling to shift their web strategies — perhaps ending the days of the open internet as we know it. After decades of pursuing web-optimization strategies that encouraged high-volume content generation, many businesses are now feeling that their content-marketing strategies might be backfiring.
I am not exactly sure about this statement. But let’s press forward.
I noted this passage:
Without the incentive of web clicks and ad revenue to drive content creation, the foundation of the web as a free and open entity is called into question.
Okay, smart software is exploiting the people who put up SEO-tailored content to get sales leads and hopefully make money. From my point of view, technology can be disruptive. The impacts, however, can be positive or negative.
What’s the fix if there is one? The write up offers these thought starters:
- Embrace micro transactions. [I suppose this is good if one has high volume. It may not be so good if shipping and warehouse costs cannot be effectively managed. Vendors of high ticket items may find a micro-transaction for a $500,000 per year enterprise software license tough to complete via Venmo.]
- Implement a walled garden. [That works if one controls the market. Google wants to “register” Android developers. I think Google may have an easier time with the walled-garden tactic than a local bakery specializing in treats for canines.]
- Accept the monopolies. [You have a choice?]
My reaction to the write up is that it does little to provide substantive guidance as smart software continues to expand like digital kudzu. What is important is that the article appears in the consumer-oriented publication from Kiplinger of newsletter fame. Unfortunately the article makes clear that Kiplinger itself is struggling to find a solution to AI. My hunch is that Kiplinger is still casting about for options. The firm may want to dig a little deeper.
Stephen E Arnold, October 17, 2025
Ford CEO and AI: A Busy Time Ahead
October 17, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Ford’s CEO is Jim Farley. He has his work cut out for him. First, he has an aluminum problem. Second, he has an F-150 production disruption problem. Third, he has a PR problem. There’s not much he can do about the interruption of the aluminum supply chain. No parts means truck factories in Kentucky will have to go slow or shut down. But the AI issue is obviously one that is of interest to Ford stakeholders.
The write up I saw about his comments reports:
He [Mr. Farley] says the jobs most at risk aren’t the ones on the assembly line, but the ones behind a desk. And in his view, the workers wiring machines, operating tools, and physically building the infrastructure could turn out to be the most critical group in the economy. Farley laid it out bluntly back in June at the Aspen Ideas Festival during an interview with author Walter Isaacson. “Artificial intelligence is going to replace literally half of all white-collar workers,” he said. “AI will leave a lot of white-collar people behind.” He wasn’t speculating about a distant future either. Farley suggested the shift is already unfolding, and the implications could be sweeping.
With the disruption of the aluminum supply chain, Ford now will have to demonstrate that AI has indeed reduced white-collar headcount. The write up says:
For him, it comes down to what AI can and cannot do. Office tasks — from paperwork to scheduling to some forms of analysis — can be automated with growing speed. But when it comes to factories, data centers, supply chains, or even electric vehicle production, someone still has to build, install, and maintain it…
The Ford situation is an interesting one. AI will reduce costs because half of Ford’s white-collar workers will no longer be on the payroll. But with supply chain interruptions and the friction in retail and lease sales, Ford has an opportunity to demonstrate that AI will allow a traditional manufacturing company to weather the current thunderstorm and generate financial proof that AI can offset exogenous events.
How will Ford perform? This is worth watching because it will provide some useful information for firms looking for a way to cut costs, improve operations, and balance real-world pressures: AI delivering one kind of financial benefit while traditional blue-collar workers are unable to produce products because of supply chain issues. Quite a balancing act for Ford leadership.
Stephen E Arnold, October 17, 2025

