Deal Breakers in Medical AI
August 26, 2025
No AI. Just a dinobaby working the old-fashioned way.
My newsfeed thing spit out a link to “Why Radiology AI Didn’t Work and What Comes Next.” I have zero interest in radiology. I don’t get too excited about smart software. So what did I do? Answer: I read the article. I was delighted to uncover a couple of points that, in my opinion, warrant capturing in my digital notebook.
The set up is that a wizard worked at a start up trying to get AI to make sense of the consistently fuzzy, murky, and baffling images cranked out by radiology gizmos. Tip: Follow the instructions and don’t wear certain items of jewelry. The start up fizzled. AI was part of the problem, but the Jaws-type shark lurking in the murky image explains this type of AI implosion.
Let’s run through the points that struck me.
First, let’s look at this passage:
Unlike coding or mathematics, medicine rarely deals in absolutes. Clinical documentation, especially in radiology, is filled with hedge language — phrases like “cannot rule out,” “may represent,” or “follow-up recommended for correlation.” These aren’t careless ambiguities; they’re defensive signals, shaped by decades of legal precedent and diagnostic uncertainty.
Okay, lawyers play a significant role in establishing thought processes and normalizing ideas that appear purpose-built to vaporize a smart system’s certainty the way one of those nifty tattoo-removing gadgets zaps ink. I would have pegged insurance companies first, then lawyers, but the write up directed my attention to the legal eagles’ role: Hedge language. Do I have disease X? The doctor responds, “Maybe, maybe not. Let’s wait 30 days and run more tests.” Fuzzy lingo, fuzzy images, perfect.
Second, the write up asks two questions:
- How do we improve model coverage at the tail without incurring prohibitive annotation costs?
- Can we combine automated systems with human-in-the-loop supervision to address the rare but dangerous edge cases?
The answers seem to be: No and no. You cannot afford to have humans do indexing and annotation. That’s why certain legal online services charge a lot for annotations. And, on the second question, you cannot pull off automation with humans for events rarely covered in the training data. Why? Cost, and finding enough humans who will do this work consistently and in a timely manner.
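The annotation-cost point is easy to see with a back-of-the-envelope calculation. Here is a minimal sketch; every number in it is my own assumption for illustration, not a figure from the article:

```python
# Why annotating the "tail" of rare findings gets expensive fast.
# All prevalence and cost figures below are illustrative assumptions.

def annotation_cost(cases_needed, cost_per_label, prevalence):
    """Total labeling cost to collect `cases_needed` positive examples of a
    finding present in `prevalence` fraction of scans, assuming every scan
    reviewed must be paid for whether or not it contains the finding."""
    scans_to_review = cases_needed / prevalence
    return scans_to_review * cost_per_label

# A common finding: 5% prevalence, 1,000 examples needed, $4 per labeled scan.
common = annotation_cost(1_000, 4.0, 0.05)     # 20,000 scans reviewed

# A rare edge case: 0.01% prevalence, same target and labeling rate.
rare = annotation_cost(1_000, 4.0, 0.0001)     # 10,000,000 scans reviewed

print(f"common finding: ${common:,.0f}")   # $80,000
print(f"rare finding:   ${rare:,.0f}")     # $40,000,000
```

Same model target, same labeling rate, a 500x difference in cost. That is the tail the write up is worried about.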
Here’s the third snippet:
Without direct billing mechanisms or CPT reimbursement codes, it was difficult to monetize the outcomes these tools enabled. Selling software alone meant capturing only a fraction of the value AI actually created. Ultimately, we were offering tools, not outcomes. And hospitals, rightly, were unwilling to pay for potential unless it came bundled with performance.
Finally, insurance procedures. Hospitals aren’t buying AI; they are buying ways to deliver “service” and “bill.” AI at this time does not sell what hospitals want to buy: A way to keep high rates and slash costs wherever possible.
Unlikely, but perhaps some savvy AI outfit will create a system that can crack the issues the article identifies. Until then, no money, no AI.
Stephen E Arnold, August 26, 2025
Leave No Data Unslurped: A New Google T Shirt Slogan?
August 25, 2025
No AI. Just a dinobaby working the old-fashioned way.
That mobile phone is the A Number One surveillance device ever developed. Not surprisingly, companies have figured out how to monetize the data flowing through the device. Try explaining the machinations of those “Accept Defaults” to a clutch of 70-something bridge players. Then try explaining the same thing to the GenAI type of humanoid. One group looks at you with a baffled look on their faces. The other group stares into the distance and says, “Whatever.”
Now the Google wants more data, fresh information, easily updated. Because why not? “Google Expands AI-Based Age Verification System for Search Platform.” The write up says:
Google has begun implementing an artificial intelligence-based age verification system not only on YouTube but also on Google Search … Users in the US are reporting pop-ups on Google Search saying, “We’ve changed some of your settings because we couldn’t verify that you’re of legal age.” This is a sign of new rules in Google’s Terms of Service.
Why the scope creep from YouTube to “search” with its AI wonderfulness? The write up says:
The new restrictions could be another step in re-examining the balance between usability and privacy.
Wrong. The need for more data to stuff into the assorted AI “learning” services provides a reasonable rationale. Tossing in the “prevent harm” angle is just cover.
My view of the matter is:
- Mobile is a real time service. Capturing more information of a highly-specific nature is something that is an obvious benefit to the Google.
- Users have zero awareness of how the data interactions work, and most don’t want to try to understand cross correlation.
- Google’s goals are not particularized. This type of “fingerprint” just makes sense.
The motto could be “Leave no data unslurped.” What’s this mean? Every Google service will require verification. The more one verifies, the fresher the identity information and the items that tag along and can be extracted. I think of this as similar to the process of rendering slaughtered livestock. The animal is dead, so what’s the harm?
None, of course. Google is busy explaining how little its data centers use to provide those helpful AI overview things.
Stephen E Arnold, August 25, 2025
Copilot, Can You Crash That Financial Analysis?
August 22, 2025
No AI. Just a dinobaby working the old-fashioned way.
The ever-insouciant online service The Verge published a story about Microsoft, smart software, and Excel. “Microsoft Excel Adds Copilot AI to Help Fill in Spreadsheet Cells” reports:
Microsoft Excel is testing a new AI-powered function that can automatically fill cells in your spreadsheets, which is similar to the feature that Google Sheets rolled out in June.
Okay, quite specific intentionality: Fill in cells. And a dash of me-too. I like it.
However, the key statement in my opinion is:
The COPILOT function comes with a couple of limitations, as it can’t access information outside your spreadsheet, and you can only use it to calculate 100 functions every 10 minutes. Microsoft also warns against using the AI function for numerical calculations or in “high-stakes scenarios” with legal, regulatory, and compliance implications, as COPILOT “can give incorrect responses.”
I don’t want to make a big deal out of this passage, but I will do it anyway. First, Microsoft makes clear that the outputs can be incorrect. Second, don’t use it too much because I assume one will have to pay to use a system that “can give incorrect results.” In short, MSFT is throttling Excel’s Copilot. Doesn’t everyone want to explore numbers with an addled Copilot known to flub numbers in a jet aircraft at 0.8 Mach?
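For the curious, a “100 functions every 10 minutes” cap is a classic sliding-window rate limiter. Here is a minimal sketch of the idea; this is my own construction for illustration, not Microsoft’s implementation:

```python
# A sliding-window throttle: at most `limit` calls per rolling window.
from collections import deque

class CopilotThrottle:
    def __init__(self, limit=100, window_seconds=600):
        self.limit = limit
        self.window = window_seconds
        self.calls = deque()  # timestamps of calls still inside the window

    def allow(self, now):
        # Drop timestamps that have aged out of the window, then check capacity.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

t = CopilotThrottle()
granted = sum(t.allow(now=i) for i in range(150))  # 150 calls in 150 seconds
print(granted)  # 100 -- the remaining 50 are refused until old calls age out
```

The practical effect for a spreadsheet user: recalculate a sheet with more than 100 COPILOT cells and some of them simply wait.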
I want to quote from “It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes”:
Think of it. Forty-five hundred years ago, if you were a Sumerian scribe, while your calculations on the world’s first abacus might have been laborious, you could be assured they’d be correct. Four hundred years ago, if you were palling around with William Oughtred, his new slide rule may have been a bit intimidating at first, but you could know its output was correct. In the 1980s, you could have bought the cheapest, shittiest Casio-knockoff calculator you could find, and used it exclusively, for every day of the rest of your life, and never once would it give anything but a correct answer. You could use it today! But now we have Microsoft apparently determining that “unpredictability” was something that some number of its customers wanted in their calculators.
I know that I sure do. I want to use a tool that is likely to convert “high-stakes scenarios” into an embarrassing failure. I mean who does not want this type of digital Copilot?
Why do I find this Excel with Copilot software interesting?
- It illustrates that accuracy has given way to close enough for horseshoes. Impressive for a company that can issue an update that could kill one’s storage devices.
- Microsoft no longer dances around hallucinations. The company just says, “The outputs can be wrong.” But I wonder, “Does Microsoft really mean it?” What about Red Bull-fueled MBAs handling one’s retirement accounts? Yeah, those people will be really careful.
- The article does not come out and say, “Looks like the AI rocket ship is losing altitude.”
- I cannot imagine sitting in a meeting and observing the rationalizations offered to justify releasing a product known to make NUMERICAL errors.
Net net: We are learning about the quality of [a] managerial processes at Microsoft, [b] the judgment of employees, and [c] the sheer craziness that an attorney said, “Sure, release the product just include an upfront statement that it will make mistakes.” Nothing builds trust more than a company anchored in customer-centric values.
Stephen E Arnold, August 22, 2025
News Flash: Google Does Not Care about Publishers
August 21, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read another Google is bad story. This one is titled “Google Might Not Believe It, But Its AI Summaries Are Bad News for Publishers.” The “news” service reports that a publishing industry group spokesperson said:
“We must ensure that the same AI ‘answers’ users see at the top of Google Search don’t become a free substitute for the original work they’re based on.”
When this sentence was spoken was the industry representative’s voice trembling? Were there tears in his or her eyes? Did the person sniff to avoid the embarrassment of a runny nose?
No idea.
The issue is that Google looks at its metrics, fiddles with the knobs and dials on its ad sales system, and launches AI summaries. Those clicks that used to go to individual sites now land in the “summary space,” which is a great place for more expensive, big advertising accounts to slap their message. Yep, it is the return to the go-go days of television. Google is the only channel and one of the few places to offer a deal.
What does Google say? Here’s a snip from the “news” story:
“Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year,” Liz Reid, VP and Head of Google Search, said earlier this month. “Additionally, average click quality has increased, and we’re actually sending slightly more quality clicks to websites than a year ago (by quality clicks, we mean those where users don’t quickly click back — typically a signal that a user is interested in the website).” Reid suggested that reports like the ones from Pew and DCN are “often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in Search.”
Translation: Haven’t you yokels figured out after 20 years of responding to us, we are in control now. We don’t care about you. If we need content, we can [a] pay people to create it, [b] use our smart software to write it, and [c] offer non profits, government agencies, and outfits with lots of writers desperate for recognition a deal. TikTok has changed video, but TikTok just inspired us to do our own TikTok. Now publishers can either get with the program or get out.
PC News apparently does not know how to translate Googlese.
It’s been 20 plus years and Google has not changed. It is just running more of the same game plan. Adapt or end up prowling LinkedIn for work.
Stephen E Arnold, August 21, 2025
The Risks of Add-On AI: Apple, Telegram, Are You Paying Attention?
August 20, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Name three companies trying to glue AI onto existing online services. Here’s my answer:
- Amazon
- Apple
- Telegram.
There are others, but each of these has a big “tech rep” and commands respect from other wizards. We know that Tim Apple suggested that the giant firm had AI pinned to the mat and whimpering, “Let me be Siri.” Telegram mumbled about Nikolai working on AI. And Amazon? That company flirted with smart software in its SageMaker announcements years ago. Now it has upgraded Alexa, the device most used as a kitchen timer.
“Amazon’s Rocky Alexa+ Launch Might Justify Apple’s Slow Pace with Next-Gen Siri” ignores Telegram (of course. Who really cares?) and uses Amazon’s misstep to apologize for Apple’s goofs. The write up says:
Apple has faced a similar technical challenge in its own next-generation Siri project. The company once aimed to merge Siri’s existing deterministic systems with a new generative AI layer but reportedly had to scrap the initial attempt and start over. … Apple’s decision to delay shipping may be frustrating for those of us eager for a more AI-powered Siri, but Amazon’s rocky launch is a reminder of the risks of rushing a replacement before it’s actually ready.
Why does this matter?
My view is that Apple’s and Amazon’s missteps make clear that bolting on, fitting in, and snapping on smart software is more difficult than it seemed. I also believe that the two firms over-estimated their technical professionals’ ability to just “do” AI. Plus, both US companies appear to be falling behind in the “AI race.”
But what about Telegram? That company is in the same boat. Its AI innovations are coming from third party developers who have been using Telegram’s platform as a platform. Telegram itself has missed opportunities to reduce the coding challenge for its developers with its focus on old-school programming languages, not AI assisted coding.
I think that it is possible that these three firms will get their AI acts together. The problem is that AI native solutions for the iPhone, the Telegram “community,” and Amazon’s own hardware products have yet to materialize. The fumbles illustrate a certain weakness in each firm. Left unaddressed, these can be debilitating in an uncertain economic environment.
But the mantra “go fast” and the jargon “accelerate” are not in line with the actions of these three companies.
Stephen E Arnold, August 20, 2025
Inc. Magazine May Find that Its MSFT Software No Longer Works
August 20, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
I am not sure if anyone else has noticed that one must be very careful about making comments. A Canadian technology dude found himself embroiled with another Canadian technology dude. To be frank, I did not understand why the Canadian tech dudes were squabbling, but the dust up underscores the importance of the language, tone, rhetoric, and spin one puts on information.
An example of a sharp-toothed article which may bite Inc. Magazine on the ankle is the story “Welcome to the Weird New Empty World of LinkedIn: Just When Exactly Did the World’s Largest Business Platform Turn into an Endless Feed of AI-Generated Slop?” My teeny tiny experience as a rental at the world’s largest software firm taught me three lessons:
- Intelligence is defined many ways. I asked a group of about 75 listening to one of my lectures, “Who is familiar with Kolmogorov?” For that particular sampling of Softies, the answer was exactly zero. Subjective impression: Rocket scientists? Not too many.
- Feistiness. The fellow who shall remain nameless dragged me to a weird mixer thing in one of the buildings on the “campus.” One person (whose name and honorifics I do not remember) said, “Let me introduce you to Mr. X. He is driving the Word project.” I replied with a smile. We walked to the fellow, were introduced, and I asked, “Will Word fix up its autonumbering?” The Word Softie turned red, asked the fellow who introduced me to him, “Who is this guy?” The Word Softie stomped away and shot deadly sniper eyes at me until we left after about 45 minutes of frivolity. Subjective impression: Thin skin. Very thin skin.
- Insecurity. At a lunch with a person whom I had met when I was a contractor at Bell Labs and several other Softies, the subject of enterprise search came up. I had written the Enterprise Search Report, and Microsoft had purchased copies. Furthermore, I wrote with Susan Rosen “Managing Electronic Information Projects.” Ms. Rosen was one of the senior librarians at Microsoft. While waiting for the rubber chicken, a Softie asked me about Fast Search & Transfer, which Microsoft had just purchased. The question posed to me was, “What do you think about Fast Search as a technology for SharePoint?” I said, “Fast Search was designed to index Web sites. The enterprise search functions were add ons. My hunch is that getting the software to handle the data in SharePoint will be quite difficult.” The response was, “We can do it.” I said, “I think that BA Insight, Coveo, and a couple of other outfits in my Enterprise Search Report will be targeting SharePoint search quickly.” The person looked at me and said, “What do these companies do? How quickly do they move?” Subjective impression: Fire up ChatGPT and get some positive mental health support.
The cited write up stomps into a topic that will probably catch some Softies’ attention. I noted this passage:
The stark fact is that reach, impressions and engagement have dropped off a cliff for the majority of people posting dry (read business-focused) content as opposed to, say, influencer or lifestyle-type content.
The write up adds some data about usage of LinkedIn:
average platform reach had fallen by no less than 50 percent, while follower growth was down 60 percent. Engagement was, on average, down an eye-popping 75 percent.
The main point of the article, in my opinion, is that LinkedIn does not filter AI content. The use of AI content produces a positive for the emitter of the AI content. The effect is to convert a shameless marketing channel into a conduit for search engine optimized sales information.
The question “Why?” is easy to figure out:
- Clicks if the content is hot
- Engagement if other LinkedIn users and bots become engaged or coupled
- More zip in what is essentially a one-dimensional, Web 1 service.
How will this write up play out? Again the answers strike me as obvious:
- LinkedIn may have some Softies who will carry a grudge toward Inc. Magazine
- Microsoft may be distracted with its Herculean efforts to make its AI “plays” sustainable as outfits like Amazon say, “Hey, use our cloud services. They are pretty much free.”
- Inc. may take a different approach to publishing stories with some barbs.
Will any of this matter? Nope. Weird and slop do that.
Stephen E Arnold, August 20, 2025
The Bubbling Pot of Toxic Mediocrity? Microsoft LinkedIn. Who Knew?
August 19, 2025
No AI. Just a dinobaby working the old-fashioned way.
Microsoft has a magic touch. The company gets into Open Source; the founder “gits” out. Microsoft hires an engineer away from Intel, asks some questions, and the new hire is whipped with a $34,000 fine and two years of mom looking in his drawers.
Now I read “Sunny Days Are Warm: Why LinkedIn Rewards Mediocrity.” The write up includes an outstanding metaphor in my opinion: Toxic Mediocrity. The write up says:
The vast majority of it falls into a category I would describe as Toxic Mediocrity. It’s soft, warm and hard to publicly call out but if you’re not deep in the bubble it reads like nonsense. Unlike it’s cousins ‘Toxic Positivity’ and ‘Toxic Masculinity’ it isn’t as immediately obvious. It’s content that spins itself as meaningful and insightful while providing very little of either. Underneath the one hundred and fifty words is, well, nothing. It’s a post that lets you know that sunny days are warm or its better not to be a total psychopath. What is anyone supposed to learn from that?
When I read a LinkedIn post it is usually referenced in an article I am reading. I like to follow these modern slippery footnotes. (If you want slippery, try finding interesting items about Pavel Durov in certain Russian sources.)
Here’s what I learn:
- A “member” makes clear that he or she has information of value. I must admit. Once in a while a useful post will turn up. Not often, but it has happened. I do know the person believes something about himself or herself. Try asking a GenAI about their personal “beliefs.” Let me know how that works.
- Members in a specific group with an active moderator often post items of interest. Instead of writing my unread blog, these individuals identify an item and use LinkedIn as a “digital bulletin board” for people who shop at the same sporting goods store in rural Kentucky. (One sells breakfast items and weapons.)
- I get a sense of the jargon people use to explain their expertise. I work alone. I am writing a book. I don’t travel to conferences or client locations now. I rely on LinkedIn as the equivalent of going to a conference mixer and listening to the conversations.
That’s useful. I have a person who interacts on LinkedIn for me. I suppose my “experience” is therefore different from someone who visits the site, posts, and follows the antics of LinkedIn’s marketers as they try to get the surrogate me to pay to do what I do. (Guess what? I don’t pay.)
I noted this statement in the essay:
Honestly, the best approach is to remember that LinkedIn is a website owned by Microsoft, trying to make money for Microsoft, based on time spent on the site. Nothing you post there is going to change your career. Doing work that matters might. Drawing attention to that might. Go for depth over frequency.
I know that many people rely on LinkedIn to boost their self confidence. One of the people who worked for me moved to another city. I suggested that she give LinkedIn a whirl. She wrote interesting short items about her interests. She got good feedback. Her self confidence ticked up, and she landed a successful job. So there’s a use case for you.
You should be able to find a short item on LinkedIn when a new post appears on my blog. Write me, and my surrogate will write you back with instructions about how to contact me. Why don’t I conduct conversations on LinkedIn? Have you checked out the telemetry functions in Microsoft software?
Stephen E Arnold, August 19, 2025
A Baloney Blizzard: What Is Missing? Oh, Nothing, Just Security
August 19, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I do not know what a CVP is. I do know a baloney blizzard when I see one. How about these terms: Ambient, pervasive, and multi-modal. I interpret ambient as meaning temperature or music like the tunes honked in Manhattan elevators. Pervasive I view as surveillance; that is, one cannot escape the monitoring. What a clever idea. Who doesn’t want Microsoft Windows to be inescapable? And multi-modal sparks in me thoughts of a cave painting and a shaman. I like the idea of Windows intermediating for me.
Where did I get these three odd ball words? I read “Microsoft’s Windows Lead Says the Next Version of Windows Will Be More Ambient, Pervasive, and Multi-Modal As AI Redefines the Desktop Interface.” The source of this write up is an organization that absolutely loves Microsoft products and services.
Here’s a passage I noted:
Davuluri confirms that in the wake of AI, Windows is going to change significantly. The OS is going to become more ambient and multi-modal, capable of understanding the content on your screen at all times to enable context-aware capabilities that previously weren’t possible. Davuluri continues, “you’ll be able to speak to your computer while you’re writing, inking, or interacting with another person. You should be able to have a computer semantically understand your intent to interact with it.”
Very sci-fi. However, I don’t want to speak to my computer. I work in silence. My office is set up so I don’t have people interrupting, chattering, or asking me to go get donuts. My view is, “Send me an email or a text. Don’t bother me.” Is that why in many high-tech companies people wear earbuds? It is. They don’t want to talk, interact, or discuss Netflix. These people want to “work” or what they think is “work.”
Does Microsoft care? Of course not. Here’s a reasonably clear statement of what Microsoft is going to try and force upon me:
It’s clear that whatever is coming next for Windows, it’s going to promote voice as a first class input method on the platform. In addition to mouse and keyboard, you will be able to ambiently talk to Windows using natural language while you work, and have the OS understand your intent based on what’s currently on your screen.
Several observations:
- AI is not reliable
- Microsoft is running a surveillance operation in my opinion
- This is the outfit which created Bob and Clippy.
But the real message in this PR marketing content essay: Security is not mentioned. Does a secure operation want people talking about their work?
Stephen E Arnold, August 19, 2025
Remember the Metaverse
August 17, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
The “Metaverse” was Mark Zuckerberg’s swing and a miss at a virtual world video game. Meta is rebooting the failed world, says Ars Technica in “Meta’s ‘AI Superintelligence’ Effort Sounds Just Like Its Failed ‘Metaverse.’” Zuckerberg released a memo in which he hyped the new Meta Superintelligence Labs. He described it as “the beginning of a new era for humanity.” It sounds like Zuckerberg is describing his Metaverse from a 2021 keynote address.
The Metaverse exists but not many people use it outside of Meta employees, who actively avoid using certain features. It’s possible that the public hasn’t given Zuckerberg enough time to develop the virtual world. But when augmented reality requires a pair of ugly coke bottle prototype glasses that cost $10,000, the average person isn’t going to log in. To quote the article:
“Today, those kinds of voices of internal skepticism seem in short supply as Meta sets itself up to push AI in the same way it once backed the metaverse. Don’t be surprised, though, if today’s promise that we’re at ‘the beginning of a new era for humanity’ ages about as well as Meta’s former promises about a metaverse where ‘you’re gonna be able to do almost anything you can imagine.’”
Zuckerberg is blah blah-ing and yada yada-ing about the future of AI and how it will change society. Society either won’t adapt, can’t afford the changes, or will find the technology too advanced to replicate on a large scale. But there is Apple with its outstanding goggle-headset thing.
One trick ponies do one trick. Yep. Big glasses.
Whitney Grace, August 17, 2025
Google! Manipulating Search Results? No Kidding
August 15, 2025
The Federal Trade Commission has just determined something the EU has been saying (and litigating) for years. The International Business Times tells us, “Google Manipulated Search Results to Bolster Own Products, FTC Report Finds.” Writer Luke Villapaz reports:
“For Internet searches over the past few years, if you typed ‘Google’ into Google, you probably got the exact result you wanted, but if you were searching for products or services offered by Google’s competitors, chances are those offerings were found further down the page, beneath those offered by Google. That’s what the U.S. Federal Trade Commission disclosed on Thursday, in an extensive 160-page report, which was obtained by the Wall Street Journal as part of a Freedom of Information Act request. FTC staffers found evidence that Google’s algorithm was demoting the search results of competing services while placing its own higher on the search results page, according to excerpts from the report. Among the websites affected: shopping comparison, restaurant review and travel.”
Villapaz notes Yelp has made similar allegations, estimating Google’s manipulation of search results may have captured some 20% of its potential users. So, after catching the big tech firm red-handed, what will the FTC do about it? Nothing, apparently. We learn:
“Despite the findings, the FTC staffers tasked with investigating Google did not recommend that the commission issue a formal complaint against the company. However, Google agreed to some changes to its search result practices when the commission ended its investigation in 2013.”
Well OK then. We suppose that will have to suffice.
Cynthia Murrell, August 15, 2025