LLMs and Creativity: Definitely Not Einstein

November 25, 2025

Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.

I have a vague recollection of a very large lecture room with stadium seating. I think I was at the University of Illinois when I was a high school junior. Part of the odd ball program in which I found myself involved a crash course in psychology. I came away from that class with an idea that has lingered in my mind for lo these many decades; to wit: People who are into psychology are often wacky. Consequently I don’t read too much from this esteemed field of study. (I do have some snappy anecdotes about my consulting projects for a psychology magazine, but let’s move on.)


A semi-creative human explains to his robot that it makes up answers and is not creative in a helpful way. Thanks, Venice.ai. Good enough, and I see you are retiring models, including your default. Interesting.

I read in PsyPost this article: “A Mathematical Ceiling Limits Generative AI to Amateur-Level Creativity.” The main idea is that the current approach to smart software does not just output dead-wrong answers; the algorithms themselves run into a creative wall.

Here’s the alleged reason:

The investigation revealed a fundamental trade-off embedded in the architecture of large language models. For an AI response to be effective, the model must select words that have a high probability of fitting the context. For instance, if the prompt is “The cat sat on the…”, the word “mat” is a highly effective completion because it makes sense and is grammatically correct. However, because “mat” is the most statistically probable ending, it is also the least novel. It is entirely expected. Conversely, if the model were to select a word with a very low probability to increase novelty, the effectiveness would drop. Completing the sentence with “red wrench” or “growling cloud” would be highly unexpected and therefore novel, but it would likely be nonsensical and ineffective. Cropley determined that within the closed system of a large language model, novelty and effectiveness function as inversely related variables. As the system strives to be more effective by choosing probable words, it automatically becomes less novel.

Let me take a whack at translating this quote from PsyPost: LLMs like Google-type systems have to decide. [a] Be effective and pick words that fit the context well, like “jelly” to complete “I ate peanut butter and.” Or [b] be novel and select infrequent, unexpected words, which may lead to LLM wackiness. Therefore, effectiveness and novelty work against each other: more of one means less of the other.
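To make the inverse relationship concrete, here is a minimal sketch in Python. It uses a token’s probability as a stand-in for effectiveness and its surprisal (negative log probability) as a stand-in for novelty. The completions and probabilities are invented for illustration; they are not drawn from any real model or from the paper the article describes.

```python
import math

# Hypothetical next-word probabilities for the prompt "The cat sat on the..."
# These numbers are invented for illustration only.
completions = {
    "mat": 0.62,           # highly probable: effective but entirely expected
    "sofa": 0.20,
    "windowsill": 0.10,
    "red wrench": 0.0005,  # improbable: novel but likely nonsensical
}

for word, p in completions.items():
    effectiveness = p          # proxy: how well the word fits the context
    novelty = -math.log2(p)    # proxy: surprisal in bits; rarer = more novel
    print(f"{word:12s} effectiveness={effectiveness:.4f}  novelty={novelty:.2f} bits")
```

Because novelty here is defined as -log2(p), it falls exactly as p rises. The two proxies move in opposite directions by construction, which is the trade-off the article attributes to the architecture itself.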

The article references some fancy math and points out:

This comparison suggests that while generative AI can convincingly replicate the work of an average person, it is unable to reach the levels of expert writers, artists, or innovators. The study cites empirical evidence from other researchers showing that AI-generated stories and solutions consistently rank in the 40th to 50th percentile compared to human outputs. These real-world tests support the theoretical conclusion that AI cannot currently bridge the gap to elite [creative] performance.

Before you put your life savings into a giant can’t-lose AI data center investment, you might want to ponder this passage in the PsyPost article:

“For AI to reach expert-level creativity, it would require new architecture capable of generating ideas not tied to past statistical patterns … Until such a paradigm shift occurs in computer science, the evidence indicates that human beings remain the sole source of high-level creativity.”

Several observations:

  1. Today’s best-bet approach is the Google-type LLM. It has creative limits as well as the familiar problems of selling advertising, like old-fashioned Google search, and of outputting incorrect answers.
  2. The method itself erects a creative barrier. This is good for humans who can be creative when they are not doom scrolling.
  3. A paradigm shift could make those giant data centers extremely large white elephants which lenders are not very good at herding along.

Net net: I liked the angle of the article. I am not convinced I should drop my teen impression of psychology. I am a dinobaby, and I like land line phones with rotary dials.

Stephen E Arnold, November 26, 2025

Why the BAIT Outfits Are Drag Netting for Users

November 25, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Have you wondered why the BAIT (big AI tech) companies are pumping cash into what looks to many like a cash bonfire? Here’s one answer, and I think it is a reasonably good one. Navigate to “Best Case: We’re in a Bubble. Worst Case: The People Profiting Most Know Exactly What They’re Doing.” I want to highlight several passages and then offer my usually-ignored observations.


Thanks, Venice.ai. Good enough, but I am not sure how many AI execs wear old-fashioned camping gear.

I noted this statement:

The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe.

My reaction to this bubble argument is that the BAIT outfits realized, after Microsoft said “AI in Windows,” that a monopoly-type outfit was making a move. Was AI the next oil or railroad play? Then Google did its really professional and carefully planned Code Red or Yellow or whatever, and the hair-on-fire moment arrived. Now, almost three years later, the hot air from the flaming coifs is equaled by the fumes of incinerating bank notes.

The write up offers this comment:

My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain. The larger the use case, the larger the expense. Most of the larger use cases that I have observed — where AI is leveraged to automate entire workflows, or capture end to end operational data, or replace an entire function — the outlay of work is equal to or greater than the savings. The time we think we’ll save by using AI tends to be spent on doing something else with AI.

The experiences of my team and me support this statement. However, when I go back to the early days of online in the 1970s, the benefits of moving from print research to digital (online) research were tangible. They were quantifiable. Online is where AI lives. As a result, the technology is not global. It is a subset of functions. The more specific the problem, the more likely it is that smart software can help with a segment of the work. The idea that cobbled-together methods based on built-in guesses will be wonderful is just plain crazy. Once one thinks of AI as a utility, it is easier to identify a use case where careful application of the technology will deliver a benefit. I think of AI as a slightly more sophisticated spell checker for writing at the 8th grade level.

The essay points out:

The last ten years have practically been defined by filter bubbles, alternative facts, and weaponized social media — without AI. AI can do all of that better, faster, and with more precision. With a culture-wide degradation of trust in our major global networks, it leaves us vulnerable to lies of all kinds from all kinds of sources and no standard by which to vet the things we see, hear, or read.

Yep, this is a useful way to explain that flows of online information tear down social structures. What’s not referenced, however, is that rebuilding will take a long time. Think about smashing your mom’s favorite knick-knack. Were you capable of making it as good as new? Sure, a few specialists might be able to do a good job, but the time and cost mean that once something is destroyed, that something is gone. The rebuild is at best a close approximation. That’s why people who want to go back to social structures in the 1950s are chasing a fairy tale.

The essay notes:

When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.

My view is that the BAIT outfits want to control, dominate, and cash in. Hey, if you have cancer and one company has the alleged cure, are you going to take the drug or just die?

Several observations are warranted:

  1. BAIT outfits want to be the winner and be the only alpha dog. Ruthless behavior will be the norm for these firms.
  2. AI is the next big thing. The idea is that if one wishes it, thinks it, or invests in it, AI will be. My hunch is that the present methodologies are on the path to becoming the equivalent of a dial up modem.
  3. The social consequences of adding the AI utility to social media are either ignored or not understood. AI is the catalyst that turns an already volatile mixture into an explosion.

Net net: Good essay. I think the downsides referenced in the essay understate the scope of the challenge.

Stephen E Arnold, November 25, 2025

Watson: Transmission Is Doing Its Part

November 25, 2025

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I read an article that stopped me in my tracks. It was “IBM Revisits 2011 AI Jeopardy Win to Capture B2B Demand.” The article reports that a former IBM executive said:

People want AI to be able to do what it can’t…. and immature technology companies are not disciplined enough to correct that thinking.

I find the statement fascinating. IBM Watson was supposed to address some of the challenges cancer patients faced. The reality is that cancer docs in Houston and Manhattan provided IBM with some feedback that shattered IBM’s own ill-disciplined marketing of Watson. What about that building near NYU that was stuffed with AI experts? What about IBM’s sale of its medical unit to Francisco Partners? Where is that smart software today? It is Merative Health, and it is not clear if the company is hitting home runs and generating a flood of cash. So that Watson technology is no longer part of IBM’s smart software solution.


Thanks, Venice.ai. Good enough.

The write up reports that a company called Transmission, which is a business to business or B2B marketing agency, made a documentary about Watson AI. It is not clear from the write up if the documentary was sponsored or if Transmission just had the idea to revisit Watson. According to the write up:

The documentary [“Who is…Watson? The Day AI Went Primetime”] underscores IBM’s legacy of innovation while framing its role in shaping an ethical, inclusive future for AI, a critical differentiator in today’s competitive landscape.

The Transmission/Earnest documentary is a rah rah for IBM and its Watsonx technology. Think of this as Watson Version 2 or Version 3. The Transmission outfit and its Earnest unit (yes, that is its name) in London, England, wants to land more IBM work. Furthermore, rumors suggest that the video was created by Celia Aniskovich as a “spec project.” High-quality videos running 18 minutes can burn through six figures quickly. A cost of $250,000 or $300,000 is not unexpected. Add to this the cost of the PR campaign to push Transmission’s brand storytelling capability, and the investment strikes me as a bad-economy sales move. In a fat economy, a marketing outfit would just book business at trade shows or lunch. Now, it is rah rah time and cash outflow.

The write up makes clear that Transmission put its best foot forward. I learned:

The documentary was grounded in testimonials from former IBM staff, and more B2B players are building narratives around expert commentary. B2B marketers say thought leaders and industry analysts are the most effective influencer types (28%), according to an April LinkedIn and Ipsos survey. AI pushback is a hot topic, and so is creating more entertaining B2B content. The biggest concern among leveraging AI tools among adults worldwide is the loss of human jobs, according to a May Kantar survey. The primary goal for video marketing is brand awareness (35%), according to an April LinkedIn and Ipsos survey. In an era where AI is perceived as “abstract or intimidating,” this documentary attempts to humanize it while embracing the narrative style that makes B2B brands stand out…

The IBM message is important. Watson Jeopardy was “good” AI. The move fast, break things, and spend billions approach used today is not like IBM’s approach to Watson. (Too bad about those cancer docs not embracing Watson, a factoid not mentioned in the cited write up.)

The question is, “Will the Watson video go viral?” The Watson Jeopardy dust-up took place in 2011, but the Watson name lives on. Google is probably shaking its talons at the sky wishing it had a flashy video too. My hunch is that Google would let its AI make a video, or one of the YouTubers would volunteer hoping that an act of goodness would reduce the likelihood Google would cut their YouTube payments. I guess I could ask Watson what it thinks, but I won’t. Been there. Done that.

Stephen E Arnold, November 25, 2025

Google: AI or Else. What a Pleasant, Implicit Threat

November 24, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Do you remember that old chestnut of a how-to book? I think its title was How to Win Friends and Influence People. I think the book contains a statement like this:

“Instead of condemning people, let’s try to understand them. Let’s try to figure out why they do what they do. That’s a lot more profitable and intriguing than criticism; and it breeds sympathy, tolerance and kindness. ‘To know all is to forgive all.’”

The Google leadership has mastered this approach. Look at its successes. An advertising system that sells access to users from an automated bidding system running within the Google platform. Isn’t that a way to breed sympathy for the company’s approach to serving the needs of its customers? Another example is the brilliant idea of making a Google-centric Agentic Operating System for the world. I know that the approach leaves plenty of room for Google partners, Google high performers, and Google services. Won’t everyone respond in a positive way to the “space” that Google leaves for others?


Thanks, Venice.ai. Good enough.

I read “Google Boss Warns No Company Is Going to Be Immune If AI Bubble Bursts.” What an excellent example of putting the old-fashioned precepts of Dale Carnegie’s book into practice. The soon-to-be-sued BBC article states:

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom… “I think no company is going to be immune, including us,” he said.

My memory doesn’t work the way it did when I was 13 years old, but I think I heard this same Silicon Valley luminary say, “Code Red” when Microsoft announced a deal to put AI in its products and services. With the klaxon sounding and flashing warning lights, Google began pushing people and money into smart software. Thus, the AI craze was legitimized. Not even the spat between Sam Altman and Elon Musk could slow the acceleration. And where are we now?

The chief Googler, a former McKinsey & Company consultant, is explaining that the AI boom is rational and irrational. Is that a threat from a company that knee-jerked its way forward? Is Google saying that I should embrace AI or suffer the consequences? Mr. Pichai is worried about the energy needs of AI. That’s good. Because one doesn’t need to be an expert in utility demand forecasting to figure out that if the announced data centers are built, there will probably be brownouts or power rationing. Companies like Google can pay their electric bills; others may not have the benefit of that outstanding advertising system to spit out cash with the heartbeat of an atomic clock.

I am not sure that Dale Carnegie would have phrased statements like these, which the article presents as words tumbling from Google’s leader:

“We will have to work through societal disruptions.” he said, adding that it would also “create new opportunities”. “It will evolve and transition certain jobs, and people will need to adapt,” he said. Those who do adapt to AI “will do better”. “It doesn’t matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools.”

This sure sounds like a dire prediction for people who don’t “learn how to use these tools.” I would go so far as to suggest that one of the progenitors of the AI craziness is making another threat. I interpret the comment as meaning, “Get with the program or you will never work again anywhere.”

How uplifting. Imagine that old coot Dale Carnegie saying in the 1930s that you will do poorly if you don’t get with the Googley AI program. Here’s one of Dale’s off-the-wall comments:

“The only way to influence people is to talk in terms of what the other person wants.”

The statements in the BBC story make one thing clear: I know what Google wants. I am not sure it is what other people want. Obviously the wacko Dale Carnegie is not in tune with the McKinsey consultant’s pragmatic view of what Google wants. Poor Dale. It seems his observations do not line up with the Google view of life for those who don’t do AI.

Stephen E Arnold, November 24, 2025

Microsoft Factoid: 30 Percent of Our Code Is Vibey

November 24, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

Is Microsoft cranking out one fifth to one third of its code using vibey methods? A write up from Ibrahim Diallo seeks to answer this question in his essay “Is 30% of Microsoft’s Code Really AI-Generated?” My instinctive response was, “Nope. Marketing.” Microsoft feels the heat. The Google is pushing the message that it will deliver the Agentic Operating System for the emergence of a new computing epoch. In response, Microsoft has been pumping juice into its marketing collateral. For example, Microsoft is building data center systems that span nations. Copilot will make your Notepad “experience” more memorable. Visio, a stepchild application, is really cheap. Add these steps together, and you get a profile of a very large company under pressure and showing signs of cracking. Why? Google is turning up the heat, and Microsoft feels it.

Mr. Diallo writes:

A few months back, news outlets were buzzing with reports that Satya Nadella claimed 30% of the code in Microsoft’s repositories was AI-generated. This fueled the hype around tools like Copilot and Cursor. The implication seemed clear: if Microsoft’s developers were now “vibe coding,” everyone should embrace the method.

Then he makes a pragmatic observation:

The line between “AI-generated” and “human-written” code has become blurrier than the headlines suggest. And maybe that’s the point. When AI becomes just another tool in the development workflow, like syntax highlighting or auto-complete, measuring its contribution as a simple percentage might not be meaningful at all.

Several observations:

  1. Microsoft’s leadership is outputting difficult-to-believe statements.
  2. Microsoft apparently has been recycling code, because those contributions from Stack Overflow are not tabulated.
  3. Marketing is now the engine making Microsoft’s AI future unfold.

I would assert that the answer to Mr. Diallo’s question is, “Whatever unfounded assertion Microsoft offers is actual factual.” That’s okay with me, but some people may be hooked by Google’s Agentic Operating System pitch.

Stephen E Arnold, November 24, 2025

AI Doubters: You Fall Short. Just Get With the Program

November 21, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Watching the Google strike terror in the heart of Sam AI-Man is almost as good as watching a mismatch in bare knuckle fights broadcast on free TV. Promoters have a person who appears fit and mean. The opponent usually looks less physically imposing and often has a neutral or slightly frightened expression. After a few minutes, the big person wins.

Is the current state of AI like a bare knuckles fight?

Here’s another example. A math whiz in a first year algebra class is asked by the teacher, “Why didn’t you show your work?” The young person looks confused and says, “The answer is obvious.” The teacher says you have to show your work. The 13-year old replies, “There is nothing to show. The answer just is.”


A young wizard has no use for an old fuddy duddy who wants to cling to the past. The future leadership gem thinks, “Dude, I am in Hilbert space.”

I thought that BAIT executives had outgrown or at least learned to mask their ability to pound the opponent to the canvas and figured out how to keep their innate superiority in check. Not surprisingly, I was wrong.

My awareness of the mismatch surfaced when I read “Microsoft AI CEO Puzzled by People Being Unimpressed by AI.” The hyperbole surrounding AI or smart software is the equivalent of the physically fit person pummeling an individual probably better suited to work as an insurance clerk into the emergency room. It makes clear that the whiz kid in math class has no clue that other people do not see what “just is.”

Let’s take a look at a couple of statements in the article.

I noted this allegedly accurate passage:

It cracks me up when I hear people call AI underwhelming. I grew up playing Snake on a Nokia phone! The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mind blowing to me.

What? You haven’t succumbed to the marketing punches yet? And you don’t get it? I can almost hear a voice saying, “Yep, you, Mr. Dinobaby, are a loser.” The person saying “cracks me up” is the notable Mustafa Suleyman. He is Microsoft’s top dog in smart software. He is famous in AI circles. He did not understand this “show your work” stuff. He would be a very good bet in a bare knuckles contest is my guess.

A second snippet:

Over in the comments, some users pushed back on the CEO’s use of the word “unimpressed,” arguing that it’s not the technology itself that fails to impress them, but rather Microsoft’s tendency to put AI into everything just to appease shareholders instead of focusing on the issues that most users actually care about, like making Windows’ UI more user-friendly similar to how it was in Windows 7, fixing security problems, and taking user privacy more seriously.

The second snippet is a response to Mr. Suleyman’s bafflement. The idea that 40-year-old Microsoft is reinventing itself with AI troubles the person who brings up Windows’ issues. SolarWinds is officially put to bed, pummeled by tough lawyers and the news cycle. The second snippet brings up an idea that strikes some as ludicrous; specifically, paying attention to what users want.

Several observations:

  1. Microsoft and other AI firms know what’s best for me and you.
  2. The AI push is a somewhat overwrought attempt to make a particular technical system the next big thing. The idea is that if we say it and think it and fund it, AI will be like electricity, the Internet, and an iPhone.
  3. The money at stake means that those who do not understand the value of smart software are obstructionists. These individuals and organizations will have to withstand the force of the superior combatants.

Will AI beat those who just want software to assist them complete a task, not generate made up or incorrect outputs, and allow people to work in a way that is comfortable to them? My hunch is that users of software will have to get with the program. The algebra teacher will, one way or another, fail to contain the confidence, arrogance, and intelligence of the person who states, “It just is.”

Stephen E Arnold, November 21, 2025

AI Spending Killing Jobs, Not AI Technology

November 21, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

Fast Company published “AI Isn’t Replacing Jobs. AI Spending Is.” The job losses are real. Reports from recruiting firms and anecdotal information make it clear that those over 55 are at risk and most of those under 23 are likely to be candidates for mom’s basement or van life.


Thanks, Venice.ai. Pretty lame, but I grew bored with trying different prompts.

The write up says:

From Amazon to General Motors to Booz Allen Hamilton, layoffs are being announced and blamed on AI. Amazon said it would cut 14,000 corporate jobs. United Parcel Service (UPS) said it had reduced its management workforce by about 14,000 positions over the past 22 months. And Target said it would cut 1,800 corporate roles. Some academic economists have also chimed in: The St. Louis Federal Reserve found a (weak) correlation between theoretical AI exposure and actual AI adoption in 12 occupational categories.

Then the article delivers an interesting point:

Yet we remain skeptical of the claim that AI is responsible for these layoffs. A recent MIT Media Lab study found that 95% of generative AI pilot business projects were failing. Another survey by Atlassian concluded that 96% of businesses “have not seen dramatic improvements in organizational efficiency, innovation, or work quality.” Still another study found that 40% of the business people surveyed have received “AI slop” at work in the last month and that it takes nearly two hours, on average, to fix each instance of slop. In addition, they “no longer trust their AI-enabled peers, find them less creative, and find them less intelligent or capable.”

Here’s the interesting conclusion or semi-assertion:

When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting.

Yep, AI spending is not producing revenue. The sheep herd is following AI. But fodder is expensive. Therefore, cull the sheep. Wool sweaters at a discount, anyone? Then the skepticism of a more or less traditional publishing outfit surfaces; to wit:

The wild exaggerations from LLM promoters certainly help them raise funds for their quixotic quest for artificial general intelligence. But it brings us no closer to that goal, all while diverting valuable physical, financial, and human resources from more promising pursuits.

Several observations are probably unnecessary, but I as an official dinobaby choose to offer them herewith:

  1. The next big thing that has been easy to juice has been AI. Is it the next big thing? Nope, it is utility software. Does anyone need multiple utility applications? Nope. Does anyone want multiple utility tools that do mostly the same thing with about the same amount of made up and incorrect outputs? Nope.
  2. The drivers for AI are easy to identify: [a] It was easy to hype, [b] People like the idea of a silver bullet until the bullets misfire and blow off the shooter’s hand or blind the gun lover, [c] No other “next big thing” is at hand.
  3. Incorrect investment decisions are more problematic than diversified investment decisions. What do oligopolistic outfits do? Lead their followers. If we think in terms of sheep, there are a lot of sheep facing a very steep cliff.

Net net: Only a couple of sheep will emerge as Big Sheep. The other sheep? Well, if not a sweater, how about a lamb chop. Ooops. Some sheep may not want to become food items on a Styrofoam tray wrapped in plastic with a half off price tag. Imagine that.

Stephen E Arnold, November 21, 2025

Waymo Mows Down a Mission Cat

November 21, 2025

Cat lovers in San Francisco have a new reason to be angry at Waymo, Google’s self-driving car division. The outrage has reached all the way to the UK, where the Metro reports, “Robotaxi Runs Over and Kills Popular Cat that Greeted People in a Corner Shop.” Reporter Sarah Hooper writes:

“KitKat, the beloved pet cat at Randa’s Market, was run over by an automated car on October 27. He was rushed to a hospital by a bartender working nearby, but was pronounced dead. KitKat’s death has sparked an outpouring of fury and sadness from those who loved him – and questions about the dangers posed by self-driving cars. Randa’s Market owner Mike Zeidan told Rolling Stone: ‘He was a special cat. You can tell by the love and support he’s getting from the community that he was amazing.’ San Francisco Supervisor Jackie Fielder spoke out publicly, saying: ‘Waymo thinks they can just sweep this under the rug and we will all forget, but here in the Mission, we will never forget our sweet KitKat.’ Anger in the community has increased after it was revealed that on the same day KitKat was killed, Waymo co-CEO Tekedra Mawakana said she thought society is ‘ready to accept deaths’ caused by automated cars. But KitKat’s owner pointed out that next time, the death could be that of a child, not just a beloved pet.”

Good point. In a statement, the company insists the tabby “darted” under the car as it pulled away. Perhaps. But do the big dogs at Google really feel “deepest sympathies” for those grieving their furry friend, as the statement claims? It was one of them, after all, who asserted the world is ready to trade deaths for her firm’s technology.

Curious readers can navigate to the write-up to see a couple photos of the charismatic kitty.

Cynthia Murrell, November 21, 2025

AI Agents and Blockchain-Anchored Exploits

November 20, 2025

This essay is the work of a dumb dinobaby. No smart software required.

In October 2025, Google published “New Group on the Block: UNC5142 Leverages EtherHiding to Distribute Malware,” which generated significant attention across cybersecurity publications, including Barracuda’s cybersecurity blog. While the EtherHiding technique was originally documented in Guard.io’s 2023 report, Google’s analysis focused specifically on its alleged deployment by a nation-state actor. The methodology itself shares similarities with earlier exploits: the 2016 CryptoHost attack also utilized malware concealed within compressed files. This layered obfuscation approach resembles matryoshka (Russian nesting dolls) and incorporates elements of steganography, the practice of hiding information within seemingly innocuous messages.

Recent analyses emphasize the core technique: exploiting smart contracts, immutable blockchains, and malware delivery mechanisms. However, an important underlying theme emerges from Google’s examination of UNC5142’s methodology: the increasing role of automation. Modern malware campaigns already leverage spam modules for phishing distribution, routing obfuscation to mask server locations, and bots that harvest user credentials.

With rapid advances in agentic AI systems, the trajectory toward fully automated malware development becomes increasingly apparent. Currently, exploits still require threat actors to manually execute fundamental development tasks, including coding blockchain-enabled smart contracts that evade detection.

During a recent presentation to law enforcement, attorneys, and intelligence professionals, I outlined the current manual requirements for blockchain-based exploits. Threat actors must currently complete standard programming project tasks: [a] Define operational objectives; [b] Map data flows and code architecture; [c] Establish necessary accounts, including blockchain and smart contract access; [d] Develop and test code modules; and [e] Deploy, monitor, and optimize the distributed application (dApp).

The diagrams from my lecture series on 21st-century cybercrime illustrate what I believe requires urgent attention: the timeline for when AI agents can automate these tasks. While I acknowledge my specific timeline may require refinement, the fundamental concern remains valid—this technological convergence will significantly accelerate cybercrime capabilities. I welcome feedback and constructive criticism on this analysis.

[Diagram: Vibe Blockchain Exploit workflow, today]

The diagram above illustrates how contemporary threat actors can leverage AI tools to automate as much as one half of the tasks required for a Vibe Blockchain Exploit (VBE). However, successful execution still demands either a highly skilled individual operator or the ability to recruit, coordinate, and manage a specialized team. Large-scale cyber operations remain resource-intensive endeavors. AI tools are increasingly accessible and often available at no cost. Not surprisingly, AI is a standard component in the threat actor’s arsenal of digital weapons. Also, recent reports indicate that threat actors are already using generative AI to accelerate vulnerability exploitation and tool development. Some operations are automating certain routine tactical activities; for example, phishing. Despite these advances, a threat actor still has to get his, her, or the team’s hands under the hood of an operation.

Now let’s jump forward to 2027.

[Diagram: Vibe Blockchain Exploit workflow, 2027]

The diagram illustrates two critical developments in the evolution of blockchain-based exploits. First, the threat actor’s role transforms from hands-on execution to strategic oversight and decision-making. Second, increasingly sophisticated AI agents assume responsibility for technical implementation, including the previously complex tasks of configuring smart contract access and developing evasion-resistant code. This represents a fundamental shift: the majority of operational tasks transition from human operators to autonomous software systems.

Several observations appear to be warranted:

  1. Trajectory and Detection Challenges. While the specific timeline remains subject to refinement, the directional trend for Vibe Blockchain Exploits (VBE) is unmistakable. Steganographic techniques embedded within blockchain operations will likely proliferate. The encryption and immutability inherent to blockchain technology significantly extend investigation timelines and complicate forensic analysis.
  2. Democratization of Advanced Cyber Capabilities. The widespread availability of AI tools, combined with continuous capability improvements, fundamentally alters the threat landscape by reducing deployment time, technical barriers, and operational costs. Our analysis indicates sustained growth in cybercrime incidents. Consequently, demand for better and more advanced intelligence software and trained investigators will increase substantially. Contrary to sectors experiencing AI-driven workforce reduction, the AI-enabled threat environment will generate expanded employment opportunities in cybercrime investigation and digital forensics.
  3. Asymmetric Advantages for Threat Actors. As AI systems achieve greater sophistication, threat actors will increasingly leverage these tools to develop novel exploits and innovative attack methodologies. A critical question emerges: Why might threat actors derive greater benefit from AI capabilities than law enforcement agencies? Our assessment identifies a fundamental asymmetry. Threat actors operate with fewer behavioral constraints. While cyber investigators may access equivalent AI tools, threat actors maintain operational cadence advantages. Bureaucratic processes introduce friction, and legal frameworks often constrain rapid response and hamper innovation cycles.

Current analyses of blockchain-based exploits overlook a crucial convergence: the combination of advanced AI systems, blockchain technologies, and agile agentic operational methodologies available to threat actors. This convergence will present unprecedented challenges to regulatory authorities, intelligence agencies, and cybercrime investigators. Addressing this emerging threat landscape requires institutional adaptation and strategic investment in both technological capabilities and human expertise.

Stephen E Arnold, November 20, 2025

Big AI Tech: Bait and Switch with Dancing Numbers?

November 20, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

The BAIT outfits are a mix of public and private companies. Financial reports — even for staid outfits — can give analysts some eye strain. Footnotes in dense text can contain information relevant to a paragraph or a number that appears elsewhere in a document. I have been operating on a simple idea: The money flowing into AI is part of the “we can make it work” approach of many high technology companies. For the BAIT outfits, each enhancement has delivered a system which is not making big strides. Incrementalism or failure seems to be what the money has been buying. Part of the reason is that BAIT outfits need and lust for a play that will deliver a de facto monopoly in smart software.

Right now, the BAIT outfits depend on the Googley transformer technology. That method is undergoing enhancements, tweaks, refinements, and other manipulations to deliver more. The effort is expensive and, based on my personal experience, not delivering what I expect. For example, You.com (one of the interface outfits that puts several models under one browser “experience”) told me I had been doing too many queries. When I contacted the company, I was told to fiddle with my browser and take other steps unrelated to the error message You.com’s system generated. I told You.com to address their error message, not tell me what to do with a computer that works with other AI services. I have Venice.ai ignoring prompts. Prior to updates, the Venice.ai system did a better job of responding to prompts. ChatGPT is now unable to concatenate three or four responses to quite specific prompts and output a Word file. I got a couple-hundred-word summary instead of the outputs, several of which were wild and crazy.


Thanks, Venice.ai. Close enough for horse shoes.

When I read “Michael Burry Doubles Down On AI Bubble Claims As Short Trade Backfires: Says Oracle, Meta Are Overstating Earnings By ‘Understating Depreciation’,” I realized that others are looking more carefully at the BAIT outfits and what they report. [Mr. Burry is the head of Scion, an investment firm that is into betting certain stock prices will crater.] The article says:

In a post on X, Burry accused tech giants such as Meta Platforms Inc. and Oracle Corp. of “understating depreciation” by extending the useful life of assets, particularly chips and AI infrastructure.

This is an MBA way of saying, “These BAIT outfits are ignoring that the value of their fungible stuff like chips, servers, and data center plumbing is cratering.” Software does processes, usually mindlessly. But when one pushes zeros and ones through software, the problems appear. These can be as simple as nothing happening or a server just sitting there and blinking. Yikes, bottlenecks. The fix is usually just reboot and get the system up and running. The next step is to buy more of whatever hardware appeared to be the problem. Sure, the software wizards will look at their code, but that takes time. The approach is to spend more for compute or bandwidth and then slipstream the software fix into the workflow.

In parallel with the “spend to get going” approach, the vendors of processing chips suitable for handling flows of data keep improving their products. The cadence is measured in months. But when new chips become available, BAIT outfits want them. It is like building highways: a new highway does not solve a traffic problem. The flow of traffic increases until the new highway is just as slow as the old highway. The fix, which is similar to the BAIT outfits’ approach, is to build more highways. Meanwhile, software fixes are slow, and the chip cadence marches along.

Thus, understating depreciation and probably some other financial fancy dancing disguise how much cash is needed to keep those less and less impressive incremental AI innovations coming. The idea is that someone, somewhere in BAIT world will crack the problem. A transformer-type breakthrough will solve the problems AI presents. Well, that’s the hope.
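To see how the accounting lever works, here is a minimal sketch in Python of straight-line depreciation. The $10 billion fleet cost and the three- versus six-year useful lives are invented for illustration; they are not figures from Meta, Oracle, or Mr. Burry’s post.

```python
# Straight-line depreciation: annual expense = asset cost / useful life.
# All numbers below are hypothetical, chosen only to show the mechanism.
fleet_cost = 10_000_000_000  # hypothetical GPU and data center spend

for useful_life_years in (3, 6):
    annual_depreciation = fleet_cost / useful_life_years
    print(f"{useful_life_years}-year life: "
          f"${annual_depreciation / 1e9:.2f}B depreciation expense per year")

# 3-year life: $3.33B per year; 6-year life: $1.67B per year.
# Stretching the assumed life halves the annual expense, so reported
# operating profit rises by the difference even though the cash outlay
# for the hardware is identical.
```

In this toy example, doubling the assumed useful life cuts the yearly expense from $3.33 billion to $1.67 billion. That is the general mechanism Mr. Burry is pointing at: if chips actually wear out or become obsolete faster than the books assume, reported profits are flattered.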

The article says:

Burry referred to this as one of the “more common frauds of the modern era,” used to inflate profits, and is something that he said all of the hyperscalers have since resorted to. “They will understate depreciation by $176 billion” through 2026 and 2028, he said.

Mr. Burry is a contrarian, and contrarians are not as popular as those who say, “Give me money. You will make a bundle.”

There are three issues involved with BAIT and the somewhat fluffy financial situation AI companies in general face:

  1. China continues to put pressure on for-profit outfits in the US. At the same time, China has been forced to find ways to “do” AI with less potent processors.
  2. China has more power generation tricks up its sleeve. Examples range from the wild and crazy mile-wide dam with hydro to solar power, among other options. The US is lagging in power generation and alternative energy solutions. The cost of AI’s power is going to be a factor forcing BAIT outfits to do some financial two-steps.
  3. China wants to put pressure on the US BAIT outfits as part of its long term plan to become the Big Dog in global technology and finance.

So what do we have? We have technical debt. We have a need to buy more expensive chips and data centers to house them. We have financial frippery to make the AI business look acceptable.

Is Mr. Burry correct? Those in the AI is okay camp say, “No. He’s the GameStop guy.”

Maybe Microsoft’s hiring of social media influencers will resolve the problem and make Microsoft number one in AI? Maybe Google will pop another transformer type innovation out of its creative engineering oven? Maybe AI will be the next big thing? How patient will investors be?

Stephen E Arnold, November 20, 2025
