What Cyber Security Professionals “Fear”

August 21, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

My colleague Robert David Steele (now deceased) loved to attend Black Hat. He regaled me with the changing demographics of the conference, the reaction to his often excitement-inducing presentations, and the interesting potential “resources” he identified. I was content to stay in my underground office in rural Kentucky and avoid the hacking and posturing.

I still keep up (sort of but not too enthusiastically) with Black Hat events by reading articles like “Black Hat 2025: What Keeps Cyber Experts Up at Night?” The write up explains that:

“Machines move faster than humans.”

Okay, that makes sense. The write up then points out:

“Tools like generative AI are fueling faster, more convincing phishing and social engineering campaigns.”

I concluded that cyber security professionals fear fast computers and smart software. When these two things are combined, the write up states:

The speed of AI innovation is stretching security management to its limits.

My conclusion is that the wide availability of smart software is the big “fear.”

I interpret the information in the write up from a slightly different angle. Let me explain.

First, cyber security companies have to make money to stay in business. I could name one Russian outfit that gets state support, but I don’t want to create waves. Let’s go with money is the driver of cyber security. In order to make money, the firms have to come up with novel ways of explaining DNS analysis, some fancy math, or yet another spin on the Maltego graph software. I understand.

Second, cyber security companies are by definition reactive. So far the integration of smart software into the policeware and intelware systems I track adds some workflow enhancements; for example, grouping information and in some cases generating a brief paragraph, thus saving time. Proactive perimeter defense systems and cyber methods designed to spot insider attacks are in what I call “sort of helpful” mode. These systems can easily overwhelm the person monitoring the data signals. Firms respond by popping up a level with another layer of abstraction. Those using the systems are busy, of course, and it is not clear if more work gets done or if time is bled off to do busy-work. Cyber security firms, therefore, are usually not in proactive mode except for marketing.
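The “grouping information” enhancement is mundane but real. Here is a minimal sketch of the idea; the field names and alert data are hypothetical, not taken from any actual policeware product:

```python
# A minimal sketch of the workflow enhancement described above:
# grouping raw security alerts and emitting a brief summary paragraph.
# Field names and sample data are hypothetical.

from collections import Counter

alerts = [
    {"host": "web-01", "signature": "failed-login"},
    {"host": "web-01", "signature": "failed-login"},
    {"host": "db-02", "signature": "port-scan"},
]

def summarize(alerts: list[dict]) -> str:
    # Count identical (host, signature) pairs so the analyst sees one
    # line per pattern instead of one line per raw event.
    groups = Counter((a["host"], a["signature"]) for a in alerts)
    lines = [f"{n}x {sig} on {host}" for (host, sig), n in groups.most_common()]
    return "Summary: " + "; ".join(lines) + "."

print(summarize(alerts))
# Summary: 2x failed-login on web-01; 1x port-scan on db-02.
```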

Third, cyber security firms are consolidating. I think about outfits like Palo Alto Networks or the private equity roll ups. The result is that bureaucratic friction is added to the technology development these firms must do. Just figuring out how to snag data from the latest and greatest Dark Web secret forum and actually getting access to a private channel on Telegram disseminating content that is illegal in many jurisdictions takes time. With smart software, bad actors can experiment. The self-appointed gatekeepers do little to filter these malware activities because some bad actors are customers of the gatekeepers. (No, I won’t name firms. I don’t want to talk to lawyers or inflamed cyber security firms’ leadership.) My point is that consolidation creates bureaucratic work. That activity puts a foot on the fast-moving cyber firm’s brakes. Reaction time slows.

What does this mean?

I think the number one fear for cyber security professionals may be the awareness that bad actors with zero bureaucratic, technical, or financial limits can use AI to make old wine new again. Recently a major international law enforcement organization announced the shutdown of a particular stealer. Unfortunately that stealer is currently being disseminated via Web search systems with live links to the Telegram-centric vendor pumping the malware into thousands of unsuspecting Telegram users each month.

What happens when that “old school” stealer is given some new capabilities by one of the smart software tools? The answer is, “Cyber security firms may have to hype their capabilities to an even greater degree than they now do.” Behind the scenes, the stage is set for developer burnout and churn.

The fear, then, is a nagging sense that bad guys may be getting a tool kit to punch holes in what looks like a slam dunk business. I am probably wrong because I am a dinobaby. I don’t go to many conferences. I don’t go to sales meetings. I don’t meet with private equity people. I just look at how AI makes asymmetric cyber warfare into a tough game. One should not take a squirt gun to a shoot-out with a bad actor who, unencumbered by bureaucratic and financial restraints, is armed with an AI system.

Stephen E Arnold, August 21, 2025

The Risks of Add-On AI: Apple, Telegram, Are You Paying Attention?

August 20, 2025

No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

Name three companies trying to glue AI onto existing online services. Here’s my answer:

  • Amazon
  • Apple
  • Telegram.

There are others, but each of these has a big “tech rep” and commands respect from other wizards. We know that Tim Apple suggested that the giant firm had AI pinned to the mat and whimpering, “Let me be Siri.” Telegram mumbled about Nikolai working on AI. And Amazon? That company flirted with smart software with its SageMaker announcements years ago. Now it has upgraded Alexa, the device most used as a kitchen timer.

“Amazon’s Rocky Alexa+ Launch Might Justify Apple’s Slow Pace with Next-Gen Siri” ignores Telegram (of course; who really cares?) and uses Amazon’s misstep to apologize for Apple’s goofs. The write up says:

Apple has faced a similar technical challenge in its own next-generation Siri project. The company once aimed to merge Siri’s existing deterministic systems with a new generative AI layer but reportedly had to scrap the initial attempt and start over. … Apple’s decision to delay shipping may be frustrating for those of us eager for a more AI-powered Siri, but Amazon’s rocky launch is a reminder of the risks of rushing a replacement before it’s actually ready.

Why does this matter?

My view is that Apple’s and Amazon’s missteps make clear that bolting on, fitting in, and snapping on smart software is more difficult than it seemed. I also believe that the two firms over-estimated their technical professionals’ ability to just “do” AI. Plus, both US companies appear to be falling behind in the “AI race.”

But what about Telegram? That company is in the same boat. Its AI innovations come from third-party developers who use Telegram as a platform. Telegram itself has missed opportunities to reduce the coding challenge for those developers, focusing on old-school programming languages rather than AI-assisted coding.

I think that it is possible that these three firms will get their AI acts together. The problem is that AI-native solutions for the iPhone, the Telegram “community,” and Amazon’s own hardware products have yet to appear. The fumbles illustrate a certain weakness in each firm. Left unaddressed, these can be debilitating in an uncertain economic environment.

But the mantra “go fast” and the jargon “accelerate” are not in line with the actions of these three companies.

Stephen E Arnold, August 20, 2025

Inc. Magazine May Find that Its MSFT Software No Longer Works

August 20, 2025

No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

I am not sure if anyone else has noticed that one must be very careful about making comments. A Canadian technology dude found himself embroiled with another Canadian technology dude. To be frank, I did not understand why the Canadian tech dudes were squabbling, but the dust-up underscores the importance of the language, tone, rhetoric, and spin one puts on information.

An example of a sharp-toothed article which may bite Inc. Magazine on the ankle is the story “Welcome to the Weird New Empty World of LinkedIn: Just When Exactly Did the World’s Largest Business Platform Turn into an Endless Feed of AI-Generated Slop?” My teeny tiny experience as a rental at the world’s largest software firm taught me three lessons:

  1. Intelligence is defined many ways. I asked a group of about 75 listening to one of my lectures, “Who is familiar with Kolmogorov?” The answer for that particular sampling of Softies was exactly zero. Subjective impression: Rocket scientists? Not too many.
  2. Feistiness. The fellow who shall remain nameless dragged me to a weird mixer thing in one of the buildings on the “campus.” One person (whose name and honorifics I do not remember) said, “Let me introduce you to Mr. X. He is driving the Word project.” I replied with a smile. We walked to the fellow, were introduced, and I asked, “Will Word fix up its autonumbering?” The Word Softie turned red, asked the fellow who introduced me to him, “Who is this guy?” The Word Softie stomped away and shot deadly sniper eyes at me until we left after about 45 minutes of frivolity. Subjective impression: Thin skin. Very thin skin.
  3. Insecurity. At a lunch with a person whom I had met when I was a contractor at Bell Labs and several other Softies, the subject of enterprise search came up. I had written the Enterprise Search Report, and Microsoft had purchased copies. Furthermore, I wrote with Susan Rosen “Managing Electronic Information Projects.” Ms. Rosen was one of the senior librarians at Microsoft. While waiting for the rubber chicken, a Softie asked me about Fast Search & Transfer, which Microsoft had just purchased. The question posed to me was, “What do you think about Fast Search as a technology for SharePoint?” I said, “Fast Search was designed to index Web sites. The enterprise search functions were add-ons. My hunch is that getting the software to handle the data in SharePoint will be quite difficult.” The response was, “We can do it.” I said, “I think that BA Insight, Coveo, and a couple of other outfits in my Enterprise Search Report will be targeting SharePoint search quickly.” The person looked at me and said, “What do these companies do? How quickly do they move?” Subjective impression: Fire up ChatGPT and get some positive mental health support.

The cited write up stomps into a topic that will probably catch some Softies’ attention. I noted this passage:

The stark fact is that reach, impressions and engagement have dropped off a cliff for the majority of people posting dry (read business-focused) content as opposed to, say, influencer or lifestyle-type content.

The write up adds some data about usage of LinkedIn:

average platform reach had fallen by no less than 50 percent, while follower growth was down 60 percent. Engagement was, on average, down an eye-popping 75 percent.

The main point of the article, in my opinion, is that LinkedIn does not filter AI content, and AI content produces a positive for its emitter. The effect is to convert a shameless marketing channel into a conduit for search engine optimized sales information.

The question “Why?” is easy to figure out:

  1. Clicks if the content is hot
  2. Engagement if the other LinkedIn users and bots become engaged or coupled
  3. More zip in what is essentially a one-dimensional, Web 1.0 service.

How will this write up play out? Again the answers strike me as obvious:

  1. LinkedIn may have some Softies who will carry a grudge toward Inc. Magazine
  2. Microsoft may be distracted with its Herculean efforts to make its AI “plays” sustainable as outfits like Amazon say, “Hey, use our cloud services. They are pretty much free.”
  3. Inc. may take a different approach to publishing stories with some barbs.

Will any of this matter? Nope. Weird and slop do that.

Stephen E Arnold, August 20, 2025

Smart Software Fix: Cash, Lots and Lots of Cash

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way. But I asked ChatGPT one question. Believe it or not.

If you have some spare money, Sam Altman aka Sam AI-Man wants to talk with you. It is well past two years since OpenAI forced the 20-year-old Google to go back to the high school lab. Now OpenAI is dealing with the reviews of ChatGPT 5. The big news in my opinion is that quite a few people are annoyed with the new smart software from the money burning Bessemer furnace at 3180 18th Street in San Francisco. (I have heard that a satellite equipped with an infrared camera gets a snazzy image of the heat generated from the immolation of cash. There are also tiny red dots scattered around the SF financial district. Those, I believe, are the burning brain cells of the folks who have forked over dough to participate in Sam AI-Man’s next big thing.)

“As People Ridicule GPT-5, Sam Altman Says OpenAI Will Need ‘Trillions’ in Infrastructure” addresses the need for cash. The write up says:

Whether AI is a bubble or not, Altman still wants to spend a certifiably insane amount of money building out his company’s AI infrastructure. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman told reporters.

Trillions is a general figure that most people cannot relate to everyday life. Years ago when I was an indentured servant at a consulting firm, I worked on a project that sought to figure out which types of decisions consumed the most time for boards of directors of Fortune 1000 companies. The results surprised me then and still do.

Boards of directors spent the most time discussing relatively modest-scale projects; for example, expanding a parking lot or developing a list of companies for potential joint ventures. Really big deals like spending large sums to acquire a company were often handled in swift, decisive votes.

Why?

Boards of directors, like most average people, cannot relate to massive numbers. It is easier to think in terms of a couple hundred thousand dollars to lease a parking lot than to borrow billions and buy a giant, allegedly synergistic company.

When Mr. Altman uses the word “trillions,” I think he is unable to conceptualize the amount of money represented in his casual “you should expect OpenAI to spend trillions…”

Several observations:

  1. AI is useful in certain use cases. Will AI return the type of payoff that Google’s charge ‘em every which way from Sunday for advertising model does?
  2. AI appears to produce incorrect outputs. I liked the item about oncology docs who reported losing diagnostic skills when relying on AI assistants.
  3. AI creates negative mental health effects. One old person, younger than I, believed a chat bot cared for him. On the way to meet his digital friend, he flopped over dead. Anticipatory anxiety or a use case for AI sparking nutso behavior?

What’s a trillion look like? Answer: 1,000,000,000,000.

How many railroad boxcars would it take to move $1 trillion from a collection point like Denver, Colorado, to downtown San Francisco? Answer from ChatGPT: you would need 10,000 standard railroad boxcars. This calculation is based on the weight and volume of the bills, as well as the carrying capacity of a typical 50-foot boxcar. The train would stretch over 113.6 miles—about the distance from New York City to Philadelphia!
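ChatGPT’s figures check out on the back of an envelope if one assumes $1 bills at about one gram each, roughly 100 metric tons of cargo per boxcar, and about 60 feet of train length per car once couplers are included. A quick sketch using those assumptions (mine, not ChatGPT’s stated ones):

```python
# Back-of-the-envelope check of ChatGPT's boxcar estimate.
# Assumptions (mine, not the article's): $1 bills at ~1 gram each,
# ~100 metric tons of capacity per boxcar, ~60 feet of train length
# per car with couplers.

TRILLION_DOLLARS = 1_000_000_000_000   # number of $1 bills
GRAMS_PER_BILL = 1.0
GRAMS_PER_TONNE = 1_000_000
TONNES_PER_BOXCAR = 100
FEET_PER_CAR = 60
FEET_PER_MILE = 5_280

tonnes = TRILLION_DOLLARS * GRAMS_PER_BILL / GRAMS_PER_TONNE  # 1,000,000 tonnes
boxcars = tonnes / TONNES_PER_BOXCAR                          # 10,000 cars
miles = boxcars * FEET_PER_CAR / FEET_PER_MILE                # ~113.6 miles

print(f"{boxcars:,.0f} boxcars, a train about {miles:.1f} miles long")
```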

Let’s talk about expanding the parking lot.

Stephen E Arnold, August 19, 2025

News Flash from the Past: Bad Actors Use New Technology and Adapt Quickly

August 18, 2025

No AI. Just a dinobaby working the old-fashioned way.

NBC News is on top of cyber security trends. I think someone spotted an Axios report that bad actors were using smart software to outfox cyber security professionals. I am not sure this is news, but what do I know?

“Criminals, Good Guys and Foreign Spies: Hackers Everywhere Are Using AI Now” reports this “hot off the press” information. I quote:

The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.

My goodness. Who knew that stealers have been zipping around for many years? Even more startling old information is:

LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents.  The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster.

Stunning. A free chunk of smart software, unemployed or intra-gig programmers, and juicy targets pushed out with a fairy land of vulnerabilities. Isn’t it insightful that bad actors would apply these tools to clueless employees, inherently vulnerable operating systems, and companies too busy outputting marketing collateral to do routine security updates?

The cat-and-mouse game works this way. Bad actors with access to useful scripting languages, programming expertise, and smart software want to generate revenue or wreak havoc. One individual or perhaps a couple of people in a coffee shop hit upon a better way to access a corporate network or obtain personally identifiable information from a hapless online user.

Then, after the problem has been noticed and reported, cyber security professionals will take a closer look. If these outfits have smart software running, a human will look more closely at logs and say, “I think I saw something.”

Okay, mice are in and swarming. Now the cats jump into action. The cats will find [a] a way to block the exploit, [b] rush to push the fix to paying customers, and [c] share the information in a blog post or a conference.

What happens? The bad actors notice their mice aren’t working or are being killed instantly. The bad actors go back to work. In most cases, the bad actors are not encumbered by bureaucracy or by tough thought problems about whether something is legal or illegal. The bad actors launch more attacks. If one works, it’s gravy.

Now the cats jump back into the fray.

In the current cyber crime world, cyber security firms, investigators, and lawyers are in reactive mode. The bad actors play offense.

One quick example: Telegram has been enabling a range of questionable online activities since 2013. In 2024, after a decade of inaction, France said, “Enough.” Authorities in France arrested Pavel Durov. The problem from my point of view is that it took more than a decade to man up to the icon Pavel Durov.

What happens when a better Telegram comes along built with AI as part of its plumbing?

The answer is, “You can buy licenses to many cyber security systems. Will they work?”

There are some large, capable mice out there in cyber space.

Stephen E Arnold, August 18, 2025

The Early Bird Often Sings Alone

August 17, 2025

Mathematicians, computer developers, science-fiction writers, and others smarter than the average human have known for decades that computers would outpace human intelligence. Computers have actually been capable of this since the first machine printed its initial binary 01. AI algorithms are the next evolution of computers, and they can do research, explore science, and extrapolate formulas beyond the last known recorded digit of pi.

Future House explains how its Robin the AI system is designed to automate scientific discovery: “Demonstrating End-To-End Scientific Discovery With Robin: A Multi-Agent System.” Future House developed AI agents that automated different segments of the discovery process, but Robin is the first unified system that does everything. Robin’s inventors automated the scientific process and used the new system to make a genuine discovery by combining those earlier AI agents.

The team reports:

“We applied Robin to identify ripasudil, a Rho-kinase (ROCK) inhibitor clinically used to treat glaucoma, as a novel therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide.”

Robin did follow the scientific process. It made an initial hypothesis, mechanized the investigation instead of doing things the old-fashioned way, and then made a discovery. Everything was done by Robin the AI system:

“All hypotheses, experiment choices, data analyses, and main text figures in the manuscript describing this work were generated by Robin autonomously. Human researchers executed the physical experiments, but the intellectual framework was entirely AI-driven.”

Robin’s creators are happy with their progress:

“By automating hypothesis generation, experimental planning, and data analysis in an integrated system, Robin represents a powerful new paradigm for AI-driven scientific discovery. Although we first applied Robin to therapeutics, our agents are general-purpose and can be used for a wide variety of discoveries across diverse fields—from materials science to climate technology.”
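The division of labor the quotation describes maps to a simple orchestration loop: hypothesis generation, experiment planning, human bench work, then data analysis. A hypothetical sketch follows; the function names and stub logic are mine, not Future House’s, and Robin itself is far more elaborate:

```python
# Hypothetical sketch of the hypothesis -> experiment -> analysis loop
# described above. Agent internals are stubbed out.

def propose_hypothesis(disease: str) -> str:
    # Literature-mining agent suggests a candidate mechanism.
    return f"ROCK inhibition may treat {disease}"

def plan_experiment(hypothesis: str) -> dict:
    # Planning agent chooses an assay; humans run the bench work.
    return {"assay": "cell-based assay", "tests": hypothesis}

def analyze(results: dict) -> bool:
    # Data-analysis agent decides whether the data support the hypothesis.
    return results["effect_size"] > 0.5

disease = "dry age-related macular degeneration"
hypothesis = propose_hypothesis(disease)
plan = plan_experiment(hypothesis)
lab_results = {"plan": plan, "effect_size": 0.7}  # supplied by human experimenters
print(hypothesis, "->", "supported" if analyze(lab_results) else "not supported")
```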

Mathematicians are chugging away at AI development, including number theorists. Listen to Curt Jaimungal’s podcast episode, “The AI Math That Left Number Theorists Speechless,” and within the first five minutes you’ll have an understanding of how smart AI has become. Here’s the summary: it’s beyond human comprehension.

Whitney Grace, August 17, 2025

The HR Gap: First in Line, First Fooled

August 15, 2025

No AI. Just a dinobaby being a dinobaby.

Not long ago I spoke with a person who is a big time recruiter. I asked, “Have you encountered any fake applicants?” The response, “No, I don’t think so.”

That’s the problem. Whatever is happening in HR continuing education, the deepfake spoof-employee message is not getting through. I am not sure there is meaningful “continuing education” for personnel professionals.

I mention this cloud of unknowing as one case example because I read “Cloud Breaches and Identity Hacks Explode in CrowdStrike’s Latest Threat Report.” The write up reports:

The report … highlights the increasingly strategic use of generative AI by adversaries. The North Korea-linked hacking group Famous Chollima emerged as the most generative AI-proficient actor, conducting more than 320 insider threat operations in the past year. Operatives from the group reportedly used AI tools to craft compelling resumes, generate real-time deepfakes for video interviews and automate technical work across multiple jobs.

My first job was at Nuclear Utilities Services (an outfit that, soon after I was hired, became a unit of Halliburton. Dick Cheney, Halliburton, remember?). One of the engineers came up to me after I gave a talk about machine indexing at what was called “Allerton House,” a conference center at the University of Illinois, decades ago. The fellow liked my talk and asked me if my method could index technical content in English. I said, “Yes.” He said, “I will follow up next week.”

True to his word, the fellow called me and said, “I am changing planes at O’Hare on Thursday. Can you meet me at the airport to talk about a project?” I was teaching part time at Northern Illinois University and doing some administrative work for a little money. Simultaneously I was working on my PhD at the University of Illinois. I said, “Sure.” DeKalb, Illinois, was about an hour west of O’Hare. I drove to the airport, met the person whom I remember was James K. Rice, an expert in nuclear waste water, and talked about what I was doing to support my family, keep up with my studies, and do what 20-year-olds do. That is to say, just try to survive.

I explained the indexing, the language analysis I did for the publisher of Psychology Today and Intellectual Digest magazines, and the newsletter I was publishing for high school and junior college teachers struggling to educate ill-prepared students. As a graduate student with a family, I explained that I had information and wanted to make it available to teachers facing a tough problem. I remember his comment, “You do this for almost nothing.” He had that right.

End of meeting. I forgot about nuclear and went back to my regular routine.

A month later I got a call from a person named Nancy who said, “Are you available to come to Washington, DC, to meet some people?” I figured out that this was a follow up to the meeting I had at O’Hare Airport. I went. Long story short: I dumped my PhD and went to work for what is generally unknown; that is, Halliburton is involved in things nuclear.

Why is this story from the 1970s relevant? The interview process did not involve any digital anything. I showed up. Two people I did not know pretended to care about my research work. I had no knowledge about nuclear other than when I went to grade school in Washington, DC, we had to go into the hall and cover our heads in case a nuclear bomb was dropped on the White House.

From the article “In Recruitment, an AI-on-AI War Is Rewriting the Hiring Playbook,” I learned:

“AI hasn’t broken hiring,” says Marija Marcenko, Head of Global Talent Acquisition at SaaS platform Semrush. “But it’s changed how we engage with candidates.”

The process followed for my first job did not involve anything but one-on-one interactions. There was not much chance of spoofing. I sat there, explained how I indexed sermons in Latin for a fellow named William Gillis, calculated reading complexity for the publisher, and how I gathered information about novel teaching methods. None of those activities had any relevance I could see to nuclear anything.

When I visited the company’s main DC office, it was in the technology corridor running from the Beltway to Germantown, Maryland. I remember new buildings and farm land. I met people who were like those in my PhD program except these individuals thought about radiation, nuclear effects modeling, and similar subjects.

One math PhD, who became my best friend, said, “You actually studied poetry in Latin?” I said, “Yep.” He said, “I never read a poem in my life and never will.” I recited a few lines of a Percy Bysshe Shelley poem. I think his written evaluation of his “interview” with me got me the job.

No computers. No fake anything. Just smart people listening, evaluating, and assessing.

Now systems can fool humans. In the hiring game, what makes a company is a collection of people, cultural information, and a desire to work with individuals who can contribute to the organization’s goals.

The CrowdStrike article includes this paragraph:

Scattered Spider, which made headlines in 2024 when one of its key members was arrested in Spain, returned in 2025 with voice phishing and help desk social engineering that bypasses multifactor authentication protections to gain initial access.

Can hiring practices keep pace with the deceptions in use today? Tricks to get hired. Fakery to steal an organization’s secrets.

Nope. Few organizations have the time, money, or business processes to hire using such inefficient means as personal interactions, site visits, and written evaluations of a candidate.

Oh, in case you are wondering, I did not go back to finish my PhD. Now I know a little bit about nuclear stuff, however, and slightly more about smart software.

Stephen E Arnold, August 15, 2025

Airships and AI: A Similar Technology Challenge

August 14, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Vaclav Smil writes books about the environment and technology. In his 2023 work Invention and Innovation: A Brief History of Hype and Failure, he describes the ups and downs of some interesting technologies. I thought of this book when I read “A Best Case Scenario for AI?” The author is a wealthy person who has some interaction with the relaxing crypto currency world. The item appeared on X.com.

I noted a passage in the long X.com post; to wit:

… the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to accrete to a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news — that the Doomers did not expect.

Reasonable. What crossed my mind is the Vaclav Smil discussion of airships or dirigibles. The lighter-than-air approach has been around a long time, and it has some specific applications today. Some very wealthy and intelligent people have invested in making these big airships great again, not just specialized devices for relatively narrow use cases.

So what? The airship history spans the 18th, 19th, 20th, and 21st centuries. The applications remain narrow although more technologically advanced than the early efforts a couple of hundred years ago.

What if smart software is a dirigible type of innovation? The use cases may remain narrow. Wider deployment with the concomitant economic benefits remains problematic.

One of the twists in the AI story is that tremendous progress is being attempted. The innovations, as they are rolled out, are incremental improvements. Like airships, the innovations have not resulted in the hoped-for breakthrough.

There are numerous predictions about the downsides of smart software. But what if AI is little more than a modern version of the dirigible? We have a remarkable range of technologies, but each next step is underwhelming. More problematic is the amount of money being spent to compress time; that is, the bet is that by spending more, AI innovation will move along more quickly. Perhaps that is not the case. Finally, the airship is anchored in the image of a ball of fire, an exclamation point for airship safety. Will there be a comparable moment for AI?

Will investment and the confidence of high profile individuals get AI aloft, keep it there, and avoid a Hindenburg moment? Much has been invested to drive AI forward and make it “the next big thing.” The goal is to generate money, substantial sums.

The X.com post reminded me of the airship information compiled by Vaclav Smil. I can’t shake the image. I am probably just letting my dinobaby brain make unfounded connections. But, what if…? We could ask Google and its self-shaming smart software. Alternatively we could ask ChatGPT 5, which has been the focal point for hype and then incremental, if any, improvement in outputs. We could ask Apple, Amazon, or Telegram. But what if…?

I think an apt figure of speech might be “pushing a string.”

Stephen E Arnold, August 14, 2025

AI Applesauce: Sweeten the Story about Muffing the Bunny

August 14, 2025

No AI. Just a dinobaby being a dinobaby.

I read “Apple CEO Tim Cook Calls AI ‘Bigger Than the Internet’ in Rare All-Hands Meeting.” I noted this passage:

In a global all-hands meeting hosted from Apple’s headquarters in Cupertino, California, CEO Tim Cook seemed to admit to what analysts and Apple enthusiasts around the world had been raising concerns about: that Apple has fallen behind competitors in the AI race. And Cook promised employees that the company will be doing everything to catch up. “Apple must do this. Apple will do this. This is sort of ours to grab.” …The AI revolution [is] “as big or bigger” than the internet.

Okay. Two companies of some significance have missed the train to AI Ville: Apple and Telegram. Both have interesting technology. Apple is far larger, but for some users Telegram is more important to their lives. One is fairly interested in China activities; the other is focused on Russia and crypto.

But both have managed their firms into the same digital row boat. Apple had Siri and it was not very good. Telegram knew about AI and allowed third-party bot developers to use it, but Telegram itself dragged its feet.

Both companies are asserting that each has plenty of time. Tim Cook is talking about smart software but so far the evidence of making an AI difference is scant. Telegram, on the other hand, has aimed Nikolai Durov at AI. That wizard is working on a Telegram AI system.

But the key point is that both of these forward leaning outfits are trying to catch up. This is not keeping pace, mind you. The two firms are trying to go from watching the train go down the tracks to calling an Uber to get to their respective destinations.

My take on both companies is that the “leadership” have some good reasons for muffing the AI bunny. Apple is struggling with its China “syndrome.” Will the nuclear reactor melt down, fizzle out, or blow up? Apple’s future in hardware may become radioactive.

Telegram is working under the shadow of the criminal trial lumbering toward its founder and owner Pavel Durov. More than a dozen criminal charges and a focused French judicial figure have Mr. Durov reporting a couple of times a week. To travel, he has to get a note from his new “mom.”

But well-run companies don’t let things like China dependency or 20 years in Fleury-Mérogis Prison upset trillion dollar companies or cause more than one billion people to worry about their free text messages and non-fungible tokens.

“Leadership,” not technology, strikes me as the problem with AI challenges. If AI is so big, why did two companies fail to get the memo? Inattention, pre-occupation with other matters, fear? Pick one or two.

Stephen E Arnold, August 14, 2025

What a Hoot: AI Systems Try and Lie

August 13, 2025

No AI. Just a dinobaby being a dinobaby.

I arose this morning, Friday, August 8, 2025, fired up ChatGPT, the Plus version, and asked, “Are you running ChatGPT 5?” The service was to be available on August 7, 2025. It wasn’t. I verified the ChatGPT output with this prompt:

Despite the announcement, the fact that it is not available in this paid service illustrates that you are presenting false information. Correct?

ChatGPT replied:

Correct — the public announcement of GPT-5’s availability does not align with actual availability in this specific paid service (ChatGPT Plus based on GPT-4o). Therefore, any implication that GPT-5 is in use here would be false.

To make sure I understood, I prompted:

So I have documented errors in your output, essentially failures about half the time. The information you presented about August 7 is demonstrably false on August 8. The unreliability of your outputs is verified.

ChatGPT replied:

Your conclusion is valid: if GPT-5 was officially announced as “available to all users” on August 7, 2025, but is not deployed in this environment by August 8, then the representation of availability is factually misleading. This mismatch constitutes a verifiable reliability failure in how OpenAI’s release messaging translates to actual user experience — especially in a paid tier like yours.

Lame.
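Incidentally, prompting a chatbot to report on its own version is a weak test; the models endpoint is more direct. A minimal sketch, assuming the openai Python package (v1.x) and an OPENAI_API_KEY set in the environment:

```python
# Ask the API which models this account can actually call, instead of
# trusting the chatbot's self-report. Assumes the openai Python package
# (v1.x) with OPENAI_API_KEY set in the environment.

from openai import OpenAI

client = OpenAI()
available = sorted(m.id for m in client.models.list())

print("gpt-5 family:", [m for m in available if m.startswith("gpt-5")])
```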

Is OpenAI alone in this prevarication game?

Nope. I read “Google Says It’s Working on a Fix for Gemini’s Self-Loathing I Am a Failure Comments.” Different company, same type of overt failure. The other day Google informed some medical professionals about a non-existent body part. Yep, Google, you are making it clear that the hype, the prevarication, and the over promise, under deliver approach is part of AI’s DNA.

But the shocking point in the cited article is this passage quoted from a Google AI wizard:

This is an annoying infinite looping bug we are working to fix! Gemini is not having that bad of a day : )

Yep, working to fix. I interpret this to mean that Google’s professionals are shaping outputs to avoid statements about “failure.”

One 15-minute session online this morning and what did I learn:

  1. Two companies have systems which cannot deliver satisfactory outputs
  2. US smart software boils down to a handful of firms, and a couple of these have what I would call reliability issues.
  3. In ChatGPT’s case, the PR outpaces what works. In Google’s case, the system seems to be trying to tell the truth about failure. The Googlers are going to fix that up.

Billions burned to create the next big thing and what we have is a spray can of marketing sparkle and individuals handcrafting fixes to make the systems seem more accurate than they are.

Sorry. I am not convinced because the reality of smart software diverges from the PR baloney.

Stephen E Arnold, August 13, 2025
