First, Let Us Kill Relevance for Once and For All. Second, Just Use Google
September 9, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
In the long distant past, Danny Sullivan was a search engine optimization-oriented journalist. I think we was involved with an outfit called Search Engine Land. He gave talks and had an animated dinosaur as his cursor. I recall liking the dinosaur. On August 29, 2025, Search Engine Land published a story unthinkable years ago when Google was the one and only game in town.
The article “ChatGPT, AI Tools Gain Traction as Google Search Slips: Survey” says:
“AI tool use is accelerating in everyday search, with ChatGPT use nearly tripling while Google’s share slips, survey of US users finds.”
But Google just sold the US government at $0.47 per head the Gemini system. How can these procurement people have gone off track? The write up says:
Google’s role in everyday information seeking is shrinking, while AI tools – particularly ChatGPT – are quickly gaining ground. That’s according to a new Higher Visibility survey of 1,500 U.S. users.
And here’s another statement that caught my eye:
Search behavior is fractured, which means SEOs cannot rely on Google Search alone (though, to be clear, SEO for Google remains as critical as ever). Therefore, SEO/GEO strategies now must account for visibility across multiple AI platforms.
I wonder if relevant search results will return? Of course not, one must optimize content for the new world of multiple AI platforms.
A couple of questions:
- If AI is getting uptake, won’t that uptake help out Google too?
- Who are the “users” in the survey sample? Is the sample valid? Are the data reliable?
- Is the need for SEO an accurate statement? SEO helped destroy relevance in search results. Aren’t these folks satisfied with their achievement to date?
I think I know the answers to these questions. But I am content to just believe everything Search Engine Land says. I mean marketing SEO and eliminating relevance when seeking answers online is undergoing change. Change means many things. Some of these issues are beyond the ken of the big thinkers at Search Engine Land in my opinion. But that’s irrelevant and definitely not SEO.
Stephen E Arnold, September 10, 2025
Google and Its Reality Dictating Machine: What Is a Fact?
September 9, 2025
I’m not surprised by this. I don’t understand why anyone would be surprised by this story from Neoscope: “Doctors Horrified After Google’s Healthcare AI Makes Up A Body Part That Does Not Exist In Humans.” Healthcare professional are worried about their industry’s over the widespread use of AI tools. These tools are error prone and chock full of bugs. In other words, these bots are creating up facts and lies and making them seem convincing.
It’s called hallucinating.
A recent example of an AI error involves Google’s Med-Gemini and it took an entire year before anyone discovered it. The false information was published in a May 2024 research paper from Google that ironically discussed the promises of AI Med-Gemini analyzing brain scans. The AI “identified” the “old left basilar ganglia infarct” in the scans, but that doesn’t exist in the human body. Google never fixed its research paper.
Hallucinations are dangerous in humans but they’re much worse in AI because they won’t be confined to a single source.
“It’s not just Med-Gemini. Google’s more advanced healthcare model, dubbed MedGemma, also led to varying answers depending on the way questions were phrased, leading to errors some of the time. ‘Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine,’ Judy Gichoya, Emory University associate professor of radiology and informatics, told The Verge.
Other experts say we’re rushing into adapting AI in clinical settings — from AI therapists, radiologists, and nurses to patient interaction transcription services — warranting a far more careful approach.”
A wise fictional character once said, “Take risks! Make mistakes! Get messy! In other words, say “I don’t know!” Could this quick kill people? Duh.
Whitney Grace, September 9, 2025
Dr. Bob Clippy Will See You Now
September 8, 2025
I cannot wait for AI to replace my trusted human physician whom I’ve been seeing for years. “Microsoft Claims its AI Tool Can Diagnose Complex Medical Cases Four Times More Accurately than Doctors,” Fortune reports. The company made this incredible claim in a recent blog post. How did it determine this statistic? By taking the usual resources away from human doctors it pitted against its AI. Senior Reporter Alexa Mikhail tells us:
“The team at Microsoft noted the limitations of this research. For one, the physicians in the study had between five and 20 years of experience, but were unable to use textbooks, coworkers, or—ironically—generative AI for their answers. It could have limited their performance, as these resources may typically be available during a complex medical situation.”
You don’t say? Additionally, the study did not include everyday cases. You know, the sort doctors do not need to consult books or coworkers to diagnose. Seems legit. Microsoft says it sees the tool as a complement to doctors, not a replacement for them. That sounds familiar.
Mikahil notes AI already permeates healthcare: Most of us have looked up symptoms with AI-assisted Web searches. ChatGPT is actively being used as a psychotherapist (sometimes for better, often for worse). Many healthcare executives are eager to take this much, much further. So are about half of US patients and 63% of clinicians, according to the 2025 Philips Future Health Index (FHI), who expect AI to improve health outcomes. We hope they are correct, because there may be no turning back now.
Cynthia Murrell, September 8, 2025
AI Can Be Your Food Coach… Well, Perhaps Not
September 5, 2025
Is this better or worse than putting glue on pizza? TechSpot reveals yet another severe consequence of trusting AI: “Man Develops Rare 19th-Century Psychiatric Disorder After Following ChatGPT’s Diet Advice.” Writer Rob Thubron tells us:
“The case involved a 60-year-old man who, after reading reports on the negative impact excessive amounts of sodium chloride (common table salt) can have on the body, decided to remove it from his diet. There were plenty of articles on reducing salt intake, but he wanted it removed completely. So, he asked ChatGPT for advice, which he followed. After being on his new diet for three months, the man admitted himself to hospital over claims that his neighbor was poisoning him. His symptoms included new-onset facial acne and cherry angiomas, fatigue, insomnia, excessive thirst, poor coordination, and a rash. He also expressed increasing paranoia and auditory and visual hallucinations, which, after he attempted to escape, ‘resulted in an involuntary psychiatric hold for grave disability.’”
Yikes! It was later learned ChatGPT suggested he replace table salt with sodium bromide. That resulted, unsurprisingly, in this severe case of bromism. That malady has not been common since the 1930s. Maybe ChatGPT confused the user with a spa/hot tub or an oil and gas drill. Or perhaps its medical knowledge is just a bit out of date. Either way, this sad incident illustrates what a mistake it is to rely on generative AI for important answers. This patient was not the only one here with hallucinations.
Cynthia Murrell, September 5, 2025
Fabulous Fakes Pollute Publishing: That AI Stuff Is Fatuous
September 4, 2025
New York Times best selling author David Baldacci testified before the US Congress about regulating AI. Medical professionals are worried about false information infiltrating medical knowledge like the scandal involving Med-Gemini and an imaginary body part. It’s getting worse says ZME Science: “A Massive Fraud Ring Is Publishing Thousands of Fake Studies and the Problem is Exploding. ‘These Networks Are Essentially Criminal Organizations.’”
Bad actors in scientific publishing used to be a small group, but now it’s a big posse:
“What we are seeing is large networks of editors and authors cooperating to publish fraudulent research at scale. They are exploiting cracks in the system to launder reputations, secure funding, and climb academic ranks. This isn’t just about the occasional plagiarized paragraph or data fudged to fool reviewers. This is about a vast and resilient system that, in some cases, mimics organized crime. And it’s infiltrating the very core of science.”
Luís Amaral discovered in a study he conducted that analyzed five million papers across 70,000 scientific journals that there is a fraudulent paper mill for publishing. You’ve heard of paper mill colleges where students can buy so-called degrees. This is similar except the products are authorship slots and journal placements from artificial research and compromised editors.
Outstanding, AI champions!
This is a way for bad actors to pad their resumes and gain undeserved creditability.
Fake science has always been a problem but it’s outpacing fact-based science. It’s cheaper to produce fake science than legitimate truth. The article then waxes poetic about the need for respectability, the dangerous consequences of false science, and how the current tools aren’t enough. It’s devastating but the expected cultural shift needed to be more respectful of truth and hard facts is not equipped to deal with the new world. Thanks, AI.
Whitney Grace, September 4, 2025
Derailing Smart Software with Invisible Prompts
September 3, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.
The write up states:
Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.
The write up includes examples like these:
… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….
Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.
Stephen E Arnold, September 3, 2025
Bending Reality or Creating a Question of Ownership and Responsibility for Errors
September 3, 2025
No AI. Just a dinobaby working the old-fashioned way.
The Google has may busy digital beavers working in the superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: Ownership of substantially altered content and responsibility for errors introduced into digital content.
“YouTube secretly used AI to Edit People’s Videos. The Results Could Bend Reality” reports:
In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.
The BBC ignores a couple of issues that struck me as significant if — please, note the “if” — the assertion about YouTube altering content belonging to another entity. I will address these after some more BBC goodness.
I noted this statement:
the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.
Okay, the Google digital beavers are beavering away.
I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:
“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”
What about those issues I thought about after reading the BBC’s write up:
- If Google’s changes (improvements, enhancements, AI additions, whatever), will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out if Google’s right or the individual content creator is the copyright holder?
- When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
- What if a content creator hits a home run and Google’s AI “learns” then outputs content via its assorted AI processes? Will Google be able to deplatform the original creator and just use it as a way to make money without paying the home-run hitting YouTube creator?
Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe one reason is that BBC doesn’t think these types of thoughts. The Google, based on my experience, is indeed thinking these types of “what if” talks in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.
Stephen E Arnold, September 3, 2025
Deadbots. Many Use Cases, Including Advertising
September 2, 2025
No AI. Just a dinobaby working the old-fashioned way.
I like the idea of deadbots, a concept explained by the ever-authoritative NPR in “AI Deadbots Are Persuasive — and Researchers Say, They’re Primed for Monetization.” The write up reports in what I imagine as a resonant, somewhat breathy voice:
AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.
Here’s a passage I thought was interesting:
Researchers are now warning that commercial use is the next frontier for deadbots. “Of course it will be monetized,” said Lindenwood University AI researcher James Hutson. Hutson co-authored several studies about deadbots, including one exploring the ethics of using AI to reanimate the dead. Hutson’s work, along with other recent studies such as one from Cambridge University, which explores the likelihood of companies using deadbots to advertise products to users, point to the potential harms of such uses. “The problem is if it is perceived as exploitative, right?” Hutson said.
Not surprisingly, some sticks in the mud see a downside to deadbots:
Quinn [a wizard a Authetic Interactions Inc.] said companies are going to try to make as much money out of AI avatars of both the dead and the living as possible, and he acknowledges there could be some bad actors. “Companies are already testing things out internally for these use cases,” Quinn said, with reference to such uses cases as endorsements featuring living celebrities created with generative AI that people can interactive with. “We just haven’t seen a lot of the implementations yet.”
I wonder if any philosophical types will consider how an interaction with a dead person’s avatar can be an “authetic interaction.”
I started thinking of deadbots I would enjoy coming to life on my digital devices; for example:
- My first boss at a blue chip consulting firm who encouraged rumors that his previous wives accidently met with boating accidents
- My high school English teacher who took me to the assistant principal’s office for writing a poem about the spirit of nature who looked to me like a Playboy bunny
- The union steward who told me that I was working too fast and making other workers look like they were not working hard
- The airline professional who told me our flight would be delayed when a passenger died during push back from the gate. (The fellow was sitting next to me. Airport food did it I think.)
- The owner of an enterprise search company who insisted, “Our enterprise information retrieval puts all your company’s information at an employee’s fingertips.”
You may have other ideas for deadbots. How would you monetize a deadbot, Google- and Meta-type companies? Will Hollywood do deadbot motion pictures? (I know the answer to that question.)
Stephen E Arnold, September 2, 2025
AI Will Not Have a Negative Impact on Jobs. Knock Off the Negativity Now
September 2, 2025
No AI. Just a dinobaby working the old-fashioned way.
The word from Goldman Sachs is parental and well it should be. After all, Goldman Sachs is the big dog. PC Week’s story “Goldman Sachs: AI’s Job Hit Will Be Brief as Productivity Rises” makes this crystal clear or almost. In an era of PR and smart software, I am never sure who is creating what.
The write up says:
AI will cause significant, but ultimately temporary, disruption. The headline figure from the report is that widespread adoption of AI could displace 6-7% of the US workforce. While that number sounds alarming, the firm’s economists, Joseph Briggs and Sarah Dong, argue against the narrative of a permanent “jobpocalypse.” They remain “skeptical that AI will lead to large employment reductions over the next decade.”
Knock of the complaining already. College graduates with zero job offers. Just do the van life thing for a decade or become an influencer.
The write up explains history just like the good old days:
“Predictions that technology will reduce the need for human labor have a long history but a poor track record,” they write. The report highlights a stunning fact: Approximately 60% of US workers today are employed in occupations that didn’t even exist in 1940. This suggests that over 85% of all employment growth in the last 80 years has been fueled by the creation of new jobs driven by new technologies. From the steam engine to the internet, innovation has consistently eliminated some roles while creating entirely new industries and professions.
Technology and brilliant management like that at Goldman Sachs makes the economy hum along. And the write up proves it, and I quote:
Goldman Sachs expects AI to follow this pattern.
For those TikTok- and YouTube-type videos revealing that jobs are hard to obtain or the fathers whining about sending 200 job applications each month for six months, knock it off. The sun will come up tomorrow. The financial engines will churn and charge a service fee, of course. The flowers will bloom because that baloney about global warming is dead wrong. The birds will sing (well, maybe not in Manhattan) but elsewhere because windmills creating power are going to be shut down so the birds won’t be decapitated any more.
Everything is great. Goldman Sachs says this. In Goldman we trust or is it Goldman wants your trust… fund that is.
Stephen E Arnold, September 2, 2025
More about AI and Peasants from a Xoogler Too
September 1, 2025
A former Googler predicts a rough ride ahead for workers. And would-be workers. Yahoo News shares “Ex-Google Exec’s Shocking Warning: AI Will Create 15 Years of ‘Hell’—Starting Sooner than We Think.” Only 15 years? Seems optimistic. Mo Gawdat issued his prophesy on the “Diary of a CEO” podcast. He expects “the end of white-collar work” to begin by the end of this decade. Indeed, the job losses have already begun. But the cascading effects could go well beyond high unemployment. Reporter Ariel Zilber writes:
“Without proper government oversight, AI technology will channel unprecedented wealth and influence to those who own or control these systems, while leaving millions of workers struggling to find their place in the new economy, according to Gawdat. Beyond economic concerns, Gawdat anticipates serious social consequences from this rapid transformation. Gawdat said AI will trigger significant ‘social unrest’ as people grapple with losing their livelihoods and sense of purpose — resulting in rising rates of mental health problems, increased loneliness and deepening social divisions. ‘Unless you’re in the top 0.1%, you’re a peasant,’ Gawdat said. ‘There is no middle class.’”
That is ominous. But, to hear Gawdat tell it, there is a bright future on the other side of those hellish 15 years. He believes those who survive past 2040 can look forward to a “utopian” era free from tedious, mundane tasks. This will free us up to focus on “love, community, and spiritual development.” Sure. But to get there, he warns, we must take certain steps:
“Gawdat said that it is incumbent on governments, individuals and businesses to take proactive measures such as the adoption of universal basic income to help people navigate the transition. ‘We are headed into a short-term dystopia, but we can still decide what comes after that,’ Gawdat told the podcast, emphasizing that the future remains malleable based on choices society makes today. He argued that outcomes will depend heavily on decisions regarding regulation, equitable access to technology, and what he calls the ‘moral programming’ of AI algorithms.”
We are sure government and Big Tech will get right on that. Totally doable in our current political and business climates. Meanwhile, Mo Gawdat is working on an “AI love coach.” I am not sure Mr. Gawdat is connected to the bureaucratic and management ethos of 2025. Is that why he is a Xoogler?
Cynthia Murrell, September 1, 2025

