Deal Breakers in Medical AI
August 26, 2025
No AI. Just a dinobaby working the old-fashioned way.
My newsfeed thing spit out a link to “Why Radiology AI Didn’t Work and What Comes Next.” I have zero interest in radiology. I don’t get too excited about smart software. So what did I do? Answer: I read the article. I was delighted to uncover a couple of points that, in my opinion, warrant capturing in my digital notebook.
The set up is that a wizard worked at a start up trying to get AI to make sense of the consistently fuzzy, murky, and baffling images cranked out by radiology gizmos. Tip: Follow the instructions and don’t wear certain items of jewelry. The start up fizzled. AI was part of the problem, but the Jaws-type shark lurking in the murky image explains this type of AI implosion.
Let’s run through the points that struck me.
First, let’s look at this passage:
Unlike coding or mathematics, medicine rarely deals in absolutes. Clinical documentation, especially in radiology, is filled with hedge language — phrases like “cannot rule out,” “may represent,” or “follow-up recommended for correlation.” These aren’t careless ambiguities; they’re defensive signals, shaped by decades of legal precedent and diagnostic uncertainty.
Okay, lawyers play a significant role in establishing thought processes and normalizing language that appears purpose-built to vaporize a smart system’s certainty the way one of those nifty tattoo-removing gadgets zaps ink. I would have pegged insurance companies, then lawyers, but the write up directed my attention to the legal eagles’ role: Hedge language. Do I have disease X? The doctor responds, “Maybe, maybe not. Let’s wait 30 days and run more tests.” Fuzzy lingo, fuzzy images, perfect.
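The hedge problem is easy to see in code. Here is a minimal sketch, not anything from the start up in the article, of how a label-extraction pipeline might flag hedge phrases in report text. The phrase list and the function name are illustrative assumptions; a real clinical NLP stack uses far richer lexicons and negation rules.

```python
import re

# Illustrative only: a toy list of hedge phrases of the kind the article says
# pervade radiology reports. These four patterns are assumptions, not a
# production lexicon.
HEDGES = [
    r"cannot rule out",
    r"may represent",
    r"follow-?up recommended",
    r"correlation (is )?recommended",
]

def hedge_spans(report_text: str) -> list[str]:
    """Return every hedge phrase found in a report, lowercased."""
    text = report_text.lower()
    return [m.group(0) for pat in HEDGES for m in re.finditer(pat, text)]

report = "Opacity in the left lower lobe may represent atelectasis; cannot rule out pneumonia."
print(hedge_spans(report))  # ['cannot rule out', 'may represent']
```

Any study whose report trips one of these patterns is neither a clean positive nor a clean negative training label, which is exactly the annotation problem the article describes.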
Second, the write up asks two questions:
- How do we improve model coverage at the tail without incurring prohibitive annotation costs?
- Can we combine automated systems with human-in-the-loop supervision to address the rare but dangerous edge cases?
The answers seem to be: You cannot afford to have humans do indexing and annotation. That’s why certain legal online services charge a lot for annotations. As for the second question: no, you cannot pull off automation plus humans for events rarely covered in the training data. Why? Cost, and finding enough humans who will do this work in a consistent way in a timely manner.
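To make the human-in-the-loop trade-off concrete, here is a minimal sketch under stated assumptions: the confidence thresholds, class names, and the `triage` function are all hypothetical, not the start up’s design.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    study_id: str
    label: str
    confidence: float  # model score in [0, 1]

# Hypothetical thresholds; a real deployment would calibrate per finding.
AUTO_ACCEPT = 0.95
HUMAN_REVIEW = 0.60

def triage(preds: list[Prediction]) -> dict[str, list[Prediction]]:
    """Route each prediction: accept the confident head automatically,
    send the murky middle to a radiologist, and queue the low-confidence
    tail for annotation (the rare but dangerous edge cases)."""
    buckets: dict[str, list[Prediction]] = {"auto": [], "human": [], "tail": []}
    for p in preds:
        if p.confidence >= AUTO_ACCEPT:
            buckets["auto"].append(p)
        elif p.confidence >= HUMAN_REVIEW:
            buckets["human"].append(p)
        else:
            buckets["tail"].append(p)
    return buckets
```

Every study landing in the “human” or “tail” bucket costs clinician time, which is the economics the write up flags: the tail is rare, dangerous, and the most expensive to annotate.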
Here’s the third snippet:
Without direct billing mechanisms or CPT reimbursement codes, it was difficult to monetize the outcomes these tools enabled. Selling software alone meant capturing only a fraction of the value AI actually created. Ultimately, we were offering tools, not outcomes. And hospitals, rightly, were unwilling to pay for potential unless it came bundled with performance.
Finally, insurance procedures matter. Hospitals aren’t buying AI; they are buying ways to deliver “service” and “bill.” AI at this time does not offer what hospitals want to buy: a way to keep rates high and slash costs wherever possible.
It is unlikely, but perhaps some savvy AI outfit will create a system that can crack the issues the article identifies. Until then, no money, no AI.
Stephen E Arnold, August 26, 2025
Leave No Data Unslurped: A New Google T Shirt Slogan?
August 25, 2025
No AI. Just a dinobaby working the old-fashioned way.
That mobile phone is the A Number One surveillance device ever developed. Not surprisingly, companies have figured out how to monetize the data flowing through the device. Try explaining the machinations of those “Accept Defaults” to a clutch of 70-something bridge players. Then try explaining the same thing to the GenAI type of humanoid. One group looks at you with a baffled look on their faces. The other group stares into the distance and says, “Whatever.”
Now the Google wants more data, fresh information, easily updated. Because why not? “Google Expands AI-Based Age Verification System for Search Platform.” The write up says:
Google has begun implementing an artificial intelligence-based age verification system not only on YouTube but also on Google Search … Users in the US are reporting pop-ups on Google Search saying, “We’ve changed some of your settings because we couldn’t verify that you’re of legal age.” This is a sign of new rules in Google’s Terms of Service.
Why the scope creep from YouTube to “search” with its AI wonderfulness? The write up says:
The new restrictions could be another step in re-examining the balance between usability and privacy.
Wrong. The need for more data to stuff into the assorted AI “learning” services provides a reasonable rationale. Tossing in the “prevent harm” angle is just cover.
My view of the matter is:
- Mobile is a real time service. Capturing more highly specific information is an obvious benefit to the Google.
- Users have zero awareness of how the data interactions work, and most don’t want to know or try to understand cross correlation.
- Google’s goals are not particularized. This type of “fingerprint” just makes sense.
The motto could be “Leave no data unslurped.” What’s this mean? Every Google service will require verification. The more one verifies, the fresher the identity information and the items that tag along and can be extracted. I think of this as similar to the process of rendering slaughtered livestock. The animal is dead, so what’s the harm?
None, of course. Google is busy explaining how little energy its data centers use to provide those helpful AI overview things.
Stephen E Arnold, August 25, 2025
Learning Is Hard Work: AI Is Not Part of My Game Plan
August 25, 2025
No AI. Just a dinobaby working the old-fashioned way.
Dinobaby here—a lifetime of unusual education packed into a single childhood. I kicked off in a traditional Illinois kindergarten, then traded finger painting for experimental learning at a “new-idea” grade school in Maryland after a family move near DC. Soon, Brazil called: I landed in Campinas, but with zero English spoken, I lasted a month. Fifth through seventh grade became a solo mission—Calvert Course worksheets, a jungle missionary who mailed my work to Baltimore, and eventually, after the tutor died, pure self-guided study from thousands of miles away. I aced my assignments, but no one in Maryland had any idea of my world. My Portuguese tutor mixed French and German with local lingo; ironically, her English rocketed while my Portuguese crawled.
Back in the States, I dove into “advanced” classes and spent a high school semester at the University of Illinois—mainly reading, testing, and reading. A scholarship sent me to Bradley, a few weeks removed from a basketball cheating inquiry. A professor hooked me on coding in the library, building Latin sermon indexes using the school’s IBM. That led to a Duquesne fellowship; then the University of Arkansas wanted me for their PhD program. But I returned to Illinois, wrote code for Milton texts instead of Latin under Arthur Barker’s mentorship, and gave talks that landed me a job offer. One conference center chat brought me to DC and into the nuclear division at Halliburton. That’s my wild educational ride.
Notice that it did not involve much traditional go-to-class activity. I have done okay despite my somewhat odd educational journey. Most important: No smart software.
Now why did I provide this bit of biographical trivia? I read “AI in the Classroom Is Important for Real-World Skills, College Professors Say.” I did not have access to “regular” school through grade school, high school, and college. I am not sure how many high school students took classes at the U of I when they were 15 years old, but that experience was not typical among my high school class.
I did start working with computers and software in 1962, but there wasn’t much smart software floating around then. The trick for me has been my ability to read quickly, recognize what’s important, and remember information. Again there was no AI. Today, as I finish my Telegram Labyrinth monograph, AI has not been of any importance. Most of the source material is in Russian-language documents. The English information is not thoroughly indexed by Telegram or by the Web search engines. The LLM content suckers are not doing too much with information outside the English-speaking world. Maybe China is pushing forward, but my tests with Chinese-language Web search engines did not turn up much, if any, information beyond what my team and I had already reviewed.
Obviously I don’t think AI is something that fits into my “real world skills.” The write up says:
“If integrated well, AI in the classroom can strengthen the fit between what students learn and what students will see in the workforce and world around them,” argued Victor Lee, associate professor at Stanford’s Graduate School of Education. GenAI companies are certainly doing their part to lure students into using their tools by offering new learning and essay-writing features. Google has gone so far as to offer Gemini free for one year, and OpenAI late last month introduced “Study Mode” to help students “work through problems step by step instead of just getting an answer,” the company said in a blog post.
Maybe.
My personal approach to learning involves libraries, for-fee online databases, Web research, and more reading. I still take notes on 4×6 notecards just as I did when I was trying to index those Latin sermons. Once I process the “note,” I throw it away. I am lucky because once I read, write, and integrate a factoid into something I am writing, I remember the information. I don’t use digital calendars. I don’t use integrated to-do lists. I just do old-fashioned information acquisition work.
The computer is wonderful for writing, Web research, and cooking up PowerPoint pablum. But the idea of using a tool that generates incorrect information strikes me as plain crazy.
The write up says:
Longji Cuo, an associate professor at the University of Colorado, in Boulder, teaches a course on AI and machine learning to help mechanical engineering students learn to use the technology to solve real-world engineering problems. Cuo encourages students to use AI as an agent to help with teamwork, projects, coding, and presentations in class. “My expectation on the quality of the work is much higher,” Cuo said, adding that students need to “demonstrate creativity on the level of a senior-level doctoral student or equivalent.”
Maybe. I am not convinced. Engineering issues are cascading across current and new systems. AI doesn’t seem to stem the tide. What about AI cyber security? Yeah, it’s working great. What about coding assistants? Yeah, super. I just uninstalled another Microsoft Windows 11 update. This one can kill my data storage devices. Copilot? Yeah, wonderful.
The write up concludes with this assertion from an “expert”:
one day, AI agents will be able to work with students on their personalized education needs. “Rather than having one teacher for 30 students, you’ll have one AI agent personalized to each student that will guide them along.”
Learning is hard work. The silliness of computer-aided instruction, laptops, iPads, mobile phones, etc. makes one thing clear: learning is not easy. A human must focus, develop discipline, refine native talents, demonstrate motivation and curiosity, and have an ability to process information into something more useful than remembering the TikTok icon’s design.
I don’t buy this. I am glad I am old.
Stephen E Arnold, August 25, 2025
Is Reading Necessary, Easy, and Fun? Sure
August 25, 2025
No AI. Just a dinobaby working the old-fashioned way.
The GenAI service person answered my questions this way:
- Is reading necessary? Answer: Not really
- Is reading easy? No, not for me
- Is reading fun? For me, no.
Was I shocked? No. I almost understand. Note: I said “almost.” The idea of the mental involvement associated with reading is, for my sample of one, not on the radar.
“Reading for Pleasure in Freefall: Research Finds 40% Drop Over Two Decades” presents information that caught my attention for two reasons:
- The decline appears to be gradual; that is, not exactly a “freefall.” The time period, in terms of my dinobaby years, is wildly inaccurate.
- The inclusion of a fat round number like 40 percent strikes me as an understatement.
On what basis do I make these two observations about the headline? I have what I call a Barnes & Noble toy ratio. A bookstore is now filled with toys, knick-knacks, and Temu-type products. That’s it. Bookstores are tough to find. When one does locate a bookstore, it often is a toy store.
The write up is much more scientific than my toy algorithm. I noted this passage from a study conducted by two universities I view as anchors of opposite ends of the academic spectrum: The University of Florida and University College London. Here’s the passage:
the study analyzed data from over 236,000 Americans who participated in the American Time Use Survey between 2003 and 2023. The findings suggest a fundamental cultural shift: fewer people are carving out time in their day to read for enjoyment. This is not just a small dip—it’s a sustained, steady decline of about 3% per year…
I don’t want to be someone who criticizes the analysis of two esteemed institutions. I would suggest, though, that if the three percent erosion continues, the decrease will run its course in decades, not a couple of centuries. I acknowledge that in the US print book sales in 2024 reached about 700 million units (depending on whom one believes), an increase over 2023. These data do not reflect books generated by smart software.
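The compounding is easy to check. A back-of-the-envelope computation (my arithmetic, not the study’s) shows why a “gradual” three percent annual decline is in the neighborhood of the reported 40 percent drop over two decades, and why the erosion plays out in decades, not centuries:

```python
import math

rate = 0.03            # the study's roughly 3% annual decline
keep = 1.0 - rate      # fraction of reading-for-pleasure time retained each year

print(keep ** 20)                      # ~0.544: about 46% gone after two decades
print(math.log(0.5) / math.log(keep))  # ~22.8 years for the remaining time to halve
```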
But book sales do not mean more people are reading material that requires attention. Old people read more books than grade school students or young people who are not in what a dinobaby would call a school. Hats off to the missionary who teaches one young person in a tough spot to read and provides the individual with access to books.
I want to acknowledge this statement in the write up:
The researchers also noted some more promising findings, including that reading with children did not change over the last 20 years. However, reading with children was a lot less common than reading for pleasure, which is concerning given that this activity is tied to early literacy development, academic success and family bonding….
I interpreted the two stellar institutions as mostly getting at these points:
- Some people read and read voraciously. Pleasure or psychological problem? Who knows. It happens.
- Many people don’t value books, don’t read books, and, whether by choice or mental set up, won’t or can’t read books.
- The distractions of just existing today make reading a lower priority for some than other considerations; for example, not getting hit by a kinetic in a war zone, watching TikTok, creating YouTube videos about big thoughts, a mobile phone, etc.
Let’s go back to the book store and toys. The existence of toys in a book store makes clear that selling books is a very tough business. To stay open, Temu-type stuff has to be pushed. If reading were “fun,” the book stores would be as plentiful as unsold bourbon in Kentucky.
Net net: We have reached a point at which the number of readers is equivalent to the count of snow leopards. Reading for fun marks an individual as one at risk of becoming a fur coat.
Stephen E Arnold, August 25, 2025
Copilot, Can You Crash That Financial Analysis?
August 22, 2025
No AI. Just a dinobaby working the old-fashioned way.
The ever-insouciant online service The Verge published a story about Microsoft, smart software, and Excel. “Microsoft Excel Adds Copilot AI to Help Fill in Spreadsheet Cells” reports:
Microsoft Excel is testing a new AI-powered function that can automatically fill cells in your spreadsheets, which is similar to the feature that Google Sheets rolled out in June.
Okay, quite specific intentionality: Fill in cells. And a dash of me-too. I like it.
However, the key statement in my opinion is:
The COPILOT function comes with a couple of limitations, as it can’t access information outside your spreadsheet, and you can only use it to calculate 100 functions every 10 minutes. Microsoft also warns against using the AI function for numerical calculations or in “high-stakes scenarios” with legal, regulatory, and compliance implications, as COPILOT “can give incorrect responses.”
I don’t want to make a big deal out of this passage, but I will do it anyway. First, Microsoft makes clear that the outputs can be incorrect. Second, don’t use it too much because I assume one will have to pay to use a system that “can give incorrect responses.” In short, MSFT is throttling Excel’s Copilot. Doesn’t everyone want to explore numbers with an addled Copilot known to flub numbers in a jet aircraft at 0.8 Mach?
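For what it is worth, the quoted cap of 100 COPILOT calls every 10 minutes is a classic sliding-window throttle. The sketch below shows how such a limit is typically enforced; it is my illustration, not Microsoft’s implementation, and the class name and defaults are assumptions.

```python
import time
from collections import deque

class WindowLimiter:
    """At most `limit` calls in any `window`-second span. Illustrative only;
    the article does not describe how Excel actually enforces its cap."""

    def __init__(self, limit: int = 100, window: float = 600.0):
        self.limit = limit
        self.window = window
        self.calls: deque[float] = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()  # forget calls older than the window
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

limiter = WindowLimiter()  # 100 calls per 10 minutes, per the quoted limit
print(limiter.allow())     # True until the 101st call inside the window
```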
I want to quote from “It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes”:
Think of it. Forty-five hundred years ago, if you were a Sumerian scribe, while your calculations on the world’s first abacus might have been laborious, you could be assured they’d be correct. Four hundred years ago, if you were palling around with William Oughtred, his new slide rule may have been a bit intimidating at first, but you could know its output was correct. In the 1980s, you could have bought the cheapest, shittiest Casio-knockoff calculator you could find, and used it exclusively, for every day of the rest of your life, and never once would it give anything but a correct answer. You could use it today! But now we have Microsoft apparently determining that “unpredictability” was something that some number of its customers wanted in their calculators.
I know that I sure do. I want to use a tool that is likely to convert “high-stakes scenarios” into an embarrassing failure. I mean who does not want this type of digital Copilot?
Why do I find this Excel with Copilot software interesting?
- It illustrates that accuracy has given way to close enough for horseshoes. Impressive for a company that can issue an update that could kill one’s storage devices.
- Microsoft no longer dances around hallucinations. The company just says, “The outputs can be wrong.” But I wonder, “Does Microsoft really mean it?” What about Red Bull-fueled MBAs handling one’s retirement accounts? Yeah, those people will be really careful.
- The article does not come out and say, “Looks like the AI rocket ship is losing altitude.”
- I cannot imagine sitting in a meeting and observing the rationalizations offered to justify releasing a product known to make NUMERICAL errors.
Net net: We are learning about the quality of [a] managerial processes at Microsoft, [b] the judgment of employees, and [c] the sheer craziness of an attorney saying, “Sure, release the product; just include an upfront statement that it will make mistakes.” Nothing builds trust more than a company anchored in customer-centric values.
Stephen E Arnold, August 22, 2025
So Much AI and Now More Doom and Gloom
August 22, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Amidst the hype about OpenAI’s ChatGPT 5, I have found it difficult to identify some quiet but, to me, meaningful signals. One, in my opinion, appears in “Sam Altman Sounds Alarm on AI Crisis That Even He Finds Terrifying.” I was hoping that the article would provide some color on the present negotiations between Sam and Microsoft. For a moment, I envisioned Sam in a meeting with the principals of the five biggest backers of OpenAI. The agenda had one item: “When do we get our money back with a payoff, Mr. Altman?”
But no. The signal is that smart software will enable fast-moving, bureaucracy-free bad actors to apply smart software to online fraud. The write up says:
[Mr.] Altman fears that the current AI-fraud crisis will expand beyond voice cloning attacks, deepfake video call scams and phishing emails. He warns that in the future, FaceTime or video fakes may become indistinguishable from reality. The alarming abilities of current AI-technology in the hands of bad faith actors is already terrifying. Scammers can now use AI to create fake identification documents, explicit photos, and headshots for social media profiles.
Okay, he is on the money, but he overlooks one use case for smart software. A bad actor can use different smart software systems and equip existing malware with more interesting features. At some point, a clever bad actor will use AI to build a sophisticated money laundering mechanism that uses the numerous new crypto currencies and their attendant blockchain systems to make the wizards at Huione Guarantee look pretty pathetic.
Can this threat be neutralized? I don’t think it can be in the short term. The reason is that AI is here and has been available for more than a year. Code generation is getting easier. A skilled bad actor can, just like a Google-type engineer, become more productive. In the mid-term, the cyber security companies will roll out AI tools that, according to one outfit whose sales pitch I listened to last week, will “predict the future.” Yeah, sure. News flash: Once a breach has been discovered, the cyber security firms kick into action. If the predictive stuff were reliable, these outfits would be betting on horse races and investing in promising start ups, not trying to create such a company.
Mr. Altman captures significant media attention. His cyber fraud message is a faint signal amidst the cacophony of the AI marketing blasts. By the way, cyber fraud is booming, and our research into outfits like Telegram suggests that AI is a contributing factor.
With three new Telegram-type services in development at this time, the future for bad actors looks bright, and the future for cyber security firms looks increasingly reactive. For investors and those with retirement funds, the forecast is less cheery.
Stephen E Arnold, August 22, 2025
Another Google Apology Coming? Sure, It Is Just Medical Info. Meh
August 22, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Another day and more surprising Mad Magazine-type smart software stories. I filed this essay away as a cocktail party anecdote, particularly for when doctors are chatting with me: “Doctors Horrified After Google’s Healthcare AI Makes Up a Body Part That Does Not Exist in Humans.”
Okay, guys like Leonardo da Vinci and Michelangelo dissected cadavers in order to get a first-hand, hands-on and hands-in sense of what was in a human body. However, Google’s smart software does not require any of that visceral human input. The much hyped systems developed by Google’s wizards just use fancy math to predict, from what they know, what a human needs to answer a question. Simple, eh?
The cited write up says:
One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an “old left basilar ganglia infarct,” referring to a purported part of the brain — “basilar ganglia” — that simply doesn’t exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.
Big deal or not? The write up points out:
… in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google’s faux pas more than likely didn’t result in any danger to human patients, it sets a worrying precedent, experts argue.
Several observations:
- Smart software will just improve. Look at ChatGPT 5: it is doing wonders, even though rumor has it that OpenAI is going to make ChatGPT 4o available again. Progress.
- Google will apologize and rework the system so it does not make this specific medical misstep again. Yep, rules-based smart software. How tenable is that? Just consider how that worked for AskJeeves years ago.
- Ask yourself the question, “Do I want Google-infused smart software to replace my harried personal physician?”
Net net: Great anecdote for a cocktail party. I bet those doctors will find me very amusing.
Stephen E Arnold, August 22, 2025
News Flash: Google Does Not Care about Publishers
August 21, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read another Google-is-bad story. This one is titled “Google Might Not Believe It, But Its AI Summaries Are Bad News for Publishers.” The “news” service reports that a publishing industry group spokesperson said:
“We must ensure that the same AI ‘answers’ users see at the top of Google Search don’t become a free substitute for the original work they’re based on.”
When this sentence was spoken was the industry representative’s voice trembling? Were there tears in his or her eyes? Did the person sniff to avoid the embarrassment of a runny nose?
No idea.
The issue is that Google looks at its metrics, fiddles with the knobs and dials on its ad sales system, and launches AI summaries. Those clicks that used to go to individual sites now stay in the “summary space,” which is a great place for more expensive, big advertising accounts to slap their message. Yep, it is the return to the go-go days of television. Google is the only channel and one of the few places to offer a deal.
What does Google say? Here’s a snip from the “news” story:
"Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year," Liz Reid, VP and Head of Google Search, said earlier this month. "Additionally, average click quality has increased, and we’re actually sending slightly more quality clicks to websites than a year ago (by quality clicks, we mean those where users don’t quickly click back — typically a signal that a user is interested in the website). Reid suggested that reports like the ones from Pew and DCN are "often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in Search."
Translation: Haven’t you yokels figured out that after 20 years of responding to us, we are in control now? We don’t care about you. If we need content, we can [a] pay people to create it, [b] use our smart software to write it, and [c] offer a deal to non-profits, government agencies, and outfits with lots of writers desperate for recognition. TikTok has changed video, but TikTok just inspired us to do our own TikTok. Now publishers can either get with the program or get out.
PC News apparently does not know how to translate Googlese.
It’s been 20 plus years and Google has not changed. It is doing more of the game plan. Adapt or end up prowling LinkedIn for work.
Stephen E Arnold, August 21, 2025
What Cyber Security Professionals “Fear”
August 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
My colleague Robert David Steele (now deceased) loved to attend Black Hat. He regaled me with the changing demographics of the conference, the reaction to his often excitement-inducing presentations, and the interesting potential “resources” he identified. I was content to stay in my underground office in rural Kentucky and avoid the hacking and posturing.
I still keep up (sort of but not too enthusiastically) with Black Hat events by reading articles like “Black Hat 2025: What Keeps Cyber Experts Up at Night?” The write up explains that:
“Machines move faster than humans.”
Okay, that makes sense. The write up then points out:
“Tools like generative AI are fueling faster, more convincing phishing and social engineering campaigns.”
I concluded that cyber security professionals fear fast computers and smart software. When these two things are combined, the write up states:
The speed of AI innovation is stretching security management to its limits.
My conclusion is that the wide availability of smart software is the big “fear.”
I interpret the information in the write up from a slightly different angle. Let me explain.
First, cyber security companies have to make money to stay in business. I could name one Russian outfit that gets state support, but I don’t want to create waves. Let’s go with money is the driver of cyber security. In order to make money, the firms have to come up with fancy ways of explaining DNS analysis, some fancy math, or yet another spin on the Maltego graph software. I understand.
Second, cyber security companies are by definition reactive. So far the integration of smart software into the policeware and intelware systems I track adds some workflow enhancements; for example, grouping information and in some cases generating a brief paragraph, thus saving time. Proactive perimeter defense systems and cyber methods designed to spot insider attacks are in what I call “sort of helpful” mode. These systems can easily overwhelm the person monitoring the data signals. Firms respond by popping up a level with another layer of abstraction. Those using the systems are busy, of course, and it is not clear if more work gets done or if time is bled off to do busy-work. Cyber security firms, therefore, are usually not in proactive mode except for marketing.
Third, cyber security firms are consolidating. I think about outfits like Palo Alto or the private equity roll ups. The result is that bureaucratic friction is added to the technology development these firms must do. Just figuring out how to snag data from the latest and greatest Dark Web secret forum and actually getting access to a Private Channel on Telegram disseminating content that is illegal in many jurisdictions takes time. With smart software, bad actors can experiment. The self-appointed gatekeepers do little to filter these malware activities because some bad actors are customers of the gatekeepers. (No, I won’t name firms. I don’t want to talk to lawyers or inflamed cyber security firms’ leadership.) My point is that consolidation creates bureaucratic work. That activity puts a foot on the fast moving cyber firm’s brakes. Reaction time slows.
What does this mean?
I think the number one fear for cyber security professionals may be the awareness that bad actors with zero bureaucratic, technical, or financial limits can use AI to make old wine new again. Recently a major international law enforcement organization announced the shutdown of particular stealer software. Unfortunately that stealer is currently being disseminated via Web search systems with live links to the Telegram-centric vendor pumping the malware into thousands of unsuspecting Telegram users each month.
What happens when that “old school” stealer is given some new capabilities by one of the smart software tools? The answer is, “Cyber security firms may have to hype their capabilities to an even greater degree than they now do.” Behind the scenes, the stage is set for developer burnout and churn.
The fear, then, is a nagging sense that bad guys may be getting a tool kit to punch holes in what looks like a slam-dunk business. I am probably wrong because I am a dinobaby. I don’t go to many conferences. I don’t go to sales meetings. I don’t meet with private equity people. I just look at how AI turns asymmetric cyber warfare into a tough game. One should not take a squirt gun to a shootout with a bad actor who, free of bureaucratic and financial restraints, is armed with an AI system.
Stephen E Arnold, August 21, 2025
The Risks of Add-On AI: Apple, Telegram, Are You Paying Attention?
August 20, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Name three companies trying to glue AI onto existing online services. Here’s my answer:
- Amazon
- Apple
- Telegram
There are others, but each of these has a big “tech rep” and commands respect from other wizards. We know that Tim Apple suggested that the giant firm had AI pinned to the mat and whimpering, “Let me be Siri.” Telegram mumbled about Nikolai working on AI. And Amazon? That company flirted with smart software with its SageMaker announcements years ago. Now it has upgraded Alexa, the device most used as a kitchen timer.
“Amazon’s Rocky Alexa+ Launch Might Justify Apple’s Slow Pace with Next-Gen Siri” ignores Telegram (of course; who really cares?) and uses Amazon’s misstep to apologize for Apple’s goofs. The write up says:
Apple has faced a similar technical challenge in its own next-generation Siri project. The company once aimed to merge Siri’s existing deterministic systems with a new generative AI layer but reportedly had to scrap the initial attempt and start over. … Apple’s decision to delay shipping may be frustrating for those of us eager for a more AI-powered Siri, but Amazon’s rocky launch is a reminder of the risks of rushing a replacement before it’s actually ready.
Why does this matter?
My view is that Apple’s and Amazon’s missteps make clear that bolting on, fitting in, and snapping on smart software is more difficult than it seemed. I also believe that the two firms overestimated their technical professionals’ ability to just “do” AI. Plus, both US companies appear to be falling behind in the “AI race.”
But what about Telegram? That company is in the same boat. Its AI innovations are coming from its third-party developers, who have been using Telegram’s platform as a platform. Telegram itself has missed opportunities to reduce the coding challenge for its developers because of its focus on old-school programming languages, not AI-assisted coding.
I think it is possible that these three firms will get their AI acts together. The problem is that AI-native solutions for the iPhone, the Telegram “community,” and Amazon’s own hardware products have yet to materialize. The fumbles illustrate a certain weakness in each firm. Left unaddressed, these weaknesses can be debilitating in an uncertain economic environment.
But the mantra “go fast” and the jargon “accelerate” are not in line with the actions of these three companies.
Stephen E Arnold, August 20, 2025