Traditional Publishers Hallucinate More Than AI Systems

May 28, 2025


Just the dinobaby operating without Copilot or its ilk.

I sincerely hope that the information presented in “Major Papers Publish AI-Hallucinated Summer Reading List Of Nonexistent Books” is accurate. The components of this “real” news story are:

  1. A big time newspaper syndicator
  2. A “real” journalist / writer allegedly named Marco Buscaglia
  3. Smart software bubbling with the type of AI goodness output by Google-type outfits desperate to make their big bets on smart software pay off
  4. Humans who check “facts,” real or hallucinated.

Blend these together in an information process like the one operated at the Sun-Times in the city with big shoulders, and what do you get?

In an embarrassing episode that will help aggravate society’s uneasy relationship with artificial intelligence, the Chicago Sun-Times, Philadelphia Inquirer and other newspapers around the country published a summer-reading list where most of the books were entirely made up by ChatGPT. The article was licensed content provided by King Features Syndicate, a subsidiary of Hearst Newspapers. Initial reporting of the bogus list focused on the Sun-Times, which two months earlier announced that 20% of its staff had accepted buyouts as the paper staggers under a dying business model. However, several other newspapers also ran the syndicated article, which was part of a package of summer-themed content called "Heat Index." 

What happened? The editorial process and the “real” journalist did their work. The editorial process involved using smart software to create a list of must-read books. The real journalist converted the raw list into a formatted presentation of books you, gentle reader, must consume whilst reclining in a beach lounger or crunched into a customer-first airplane seat.

The cited write up explains the slip twixt cup and lip or lips:

As the scandal quickly made waves across traditional and social media, the Sun-Times — which not-so-accurately bills itself as "The Hardest-Working Paper in America" — raced to apologize while also trying to distance itself from the work. “This is licensed content that was not created by, or approved by, the Sun-Times newsroom, but it is unacceptable for any content we provide to our readers to be inaccurate,” a spokesperson said. In a separate post to its website, the paper said, "This should be a learning moment for all of journalism.” Meanwhile, the Inquirer’s CEO Lisa Hughes told The Atlantic, "Using artificial intelligence to produce content, as was apparently the case with some of the Heat Index material, is a violation of our own internal policies and a serious breach.”

The kindergarten smush up inspires me to offer several observations:

  1. Editorial processes require editors who pay attention, know or check facts, and think about how to serve their readers
  2. Writers need to do old-fashioned work like read books, check with sources likely to be sort of correct, and invest time in their efforts
  3. Readers need to recognize that this type of information baloney can be weaponized. Shaping will do far more harm than simply giving me a good laugh.

Outstanding. My sources tell me that the “real” news about this hallucinating shirk off is mostly accurate.

Stephen E Arnold, May 28, 2025

China Slated To Overtake US In AI Development. How about Bypass?

May 28, 2025

China was scheduled to become the world’s top performing economy by now. This was predicted in the early 2000s, but the Middle Kingdom has experienced some roadblocks. Going through all of them would require an entire class on world history and economics. We don’t have time for that because SCMP says, “China To Harness Nation’s Resources To AI Self-Reliance Ambitions."

Winnie the Pooh a.k.a. President Xi Jinping told the Communist Party’s inner circle that he plans to stimulate AI theory and core technologies. Xi wants to leverage his country’s “new whole national system” to repair bottlenecks like high-end chips. The “new whole national system” is how the Communist Party describes directing resources towards national strategic goals.

Xi is desperate for China to overtake the US in AI development. This pipe dream was crushed when the US placed tariffs on Chinese goods. While the tariff war is on hiatus for a few months, the pause doesn’t give China much of a leg up.

Xi said:

“‘We must acknowledge the technological gap, redouble our efforts to comprehensively push forward technological innovation, industrial development and applications, and the AI regulatory system,’ state news agency Xinhua quoted Xi as saying. ‘[China should] continue to strengthen basic research, and concentrate on conquering core technologies such as high-end chips and basic software, so as to build an independent, controllable, and collaborative AI basic software and hardware system. ‘[We should then] use AI to lead the paradigm shift in scientific research and accelerate scientific and technological innovation breakthroughs in various fields.’”

So said Winnie the Pooh. He’s searching for that irresistible pot of honey while dealing with US and Trump bumblebees. Maybe if he disguises himself as a little black raincloud instead of a “weather balloon” he might advance further in AI? However, some tension in the military may lead to a bit of choppy weather in what is supposed to be a smooth, calm sea of agreement.

Let’s ask DeepSeek.

Whitney Grace, May 28, 2025

SEO Dead? Nope, Just Wounded But Will Survive Unfortunately

May 27, 2025

SEO, or search engine optimization, is one of the forces that killed old-fashioned precision and recall. Precision morphed from presenting on-point sources to smashing a client’s baloney content into a searcher’s face. Recall went from a metric indicating how much of the available processed content a query actually surfaced to something else entirely. Now it means, “Buy, believe, and baloney information.”
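For readers who never lived through the precision-and-recall era, here is a minimal sketch of the two metrics the SEO crowd helped bury. The toy result set and relevance judgments are invented for illustration; no real engine is being measured.

```python
# Toy illustration of classic precision and recall. The document sets below are
# invented for illustration only; they do not describe any real search engine.

relevant = {"doc1", "doc2", "doc3", "doc4"}          # sources actually on point for the query
retrieved = {"doc1", "doc2", "ad_page", "seo_spam"}  # what the engine returned

true_hits = relevant & retrieved                     # on-point results that actually came back

precision = len(true_hits) / len(retrieved)  # share of returned results that are on point
recall = len(true_hits) / len(relevant)      # share of on-point sources that were returned

print(f"precision = {precision:.2f}")  # 0.50: half of what came back is baloney
print(f"recall    = {recall:.2f}")     # 0.50: half of the good sources never surfaced
```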

The write up “The Future of SEO As the Future Google Search Rolls Out” explains:

“Google isn’t going to keep its search engine the way it was for the past two decades. Google knows it has to change, despite making an absolute fortune from search ads. Google is worried about TikTok, worried about ChatGPT, worried about searchers going to something new and better.”

These paragraphs make clear that SEO is not going to its grave without causing some harm to the grave diggers:

“There are a lot of concerned people in the search marketing industry right now. The bottom line is while many of us like to complain and we honestly have good reason to be upset, complaining won’t help. We need to adapt and change and experiment. Experiment with these new experiences, keep on top of these changes happening in Google and at other AI and search companies. Then try new things and keep testing. If you do not adapt, you will die. SEO won’t die, but you will become irrelevant. The good news, SEOs are some of the best at adapting, embracing change and testing new strategies out. So you are all ready and equipped for the future of search.”

Let me share some observations about this statement from the cited write up:

First, the SEO professionals are concerned. About relevance and returning precise, on-point information to the user? Are you kidding me? SEO professionals are worried about making money. After Google used SEO experts as part of its push to sell ads, the SEO crowd is now wracked with uncertainty.

Second, adaptation is important. A failure to adapt means no money. Now the SEO professionals must embrace anxiety. Is stress good for SEO professionals? Probably not.

Third, SEO professionals with 20 years of experience must experiment. Are these individuals equipped to head to the innovation space and crank out new ways to generate money? A few will be able to be the old dog that learns to roll over on late-night television. Most — well — will struggle to get up or die trying.

What’s my prediction for the future of SEO? Snake oil vendors are part of the data carnival. Ladies and gentlemen, get your cure for no traffic here. Step right up.

Stephen E Arnold, May 27, 2025

Coincidence or No Big Deal for the Google: User Data and Suicide

May 27, 2025

Just the dinobaby operating without Copilot or its ilk.

I have ignored most of the carnival noise about smart software. Google continues its bug spray approach to thwarting the equally publicity-crazed Microsoft and OpenAI. (Is Copilot useful? Is Sam Altman the heir to Steve Jobs?)

Two stories caught my attention. The first is almost routine. Armed with the Chrome Hoover, long-lived cookies, and the permission-hungry Android play, Google has the data, and The Verge published “Google Has a Big AI Advantage: It Already Knows Everything about You.” Sigh. Another categorical affirmative: “everything.” Is that accurate, or is it just a scare tactic to draw readers? Old news.

But the sub title is more interesting; to wit:

Google is slowly giving Gemini more and more access to user data to ‘personalize’ your responses.

Slowly. Really? More access? More than what? And “your responses?” Whose?

The write up says:

As an example, Google says if you’re chatting with a friend about road trip advice, Gemini can search through your emails and files, allowing it to find hotel reservations and an itinerary you put together. It can then suggest a response that incorporates relevant information. That, Google CEO Sundar Pichai said during the keynote, may even help you “be a better friend.” It seems Google plans on bringing personal context outside Gemini, too, as its blog post announcing the feature says, “You can imagine how helpful personal context will be across Search, Gemini and more.” Google said in March that it will eventually let users connect their YouTube history and Photos library to Gemini, too.

No kidding. How does one know that Google has not been processing personal data for decades? There’s a patent with a cute machine-generated profile of Michael Jackson. This report generated by Google appeared in the 2007 patent application US2007/0198481:


The machine generated bubble gum card about Michael Jackson, including last known address, nicknames, and other details. See US2007/0198481 A1, “Automatic Object Reference Identification and Linking in a Browsable Fact Repository.”

The inventors Andrew W. Hogue (Ho Ho Kus, NJ) and Jonathan T. Betz (Summit, NJ) appear on the “final” version of their invention. The name of the patent was the same, but there was an important difference between the patent application and the actual patent. The machine-generated personal profile was replaced with a much less informative screen capture; to wit:


From Google Patent 7774328, granted in 2010 as “Browsable Fact Repository.”

Google wasn’t done “inventing” enhancements to its profile engine capable of outputting bubble gum cards for either authorized users or Google systems. Check out Extension US9760570 B2, “Finding and Disambiguating References to Entities on Web Pages.” The idea is that items like “aliases” and similarly opaque factoids can be made concrete for linking to cross-correlated content objects.
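Neither patent publishes code, but the general mechanism, resolving aliases to a single canonical entity and pulling its cross-correlated facts, can be sketched in a few lines. The alias table, the fields, and the matching logic below are my own simplified assumptions, not the patented method.

```python
# Simplified sketch of alias resolution feeding a "bubble gum card." The alias
# table, fields, and matching logic are illustrative assumptions, not Google's method.

ENTITY_ALIASES = {
    "michael_jackson": {"michael jackson", "the king of pop", "mj"},
}

FACT_REPOSITORY = {
    "michael_jackson": {
        "type": "person",
        "nicknames": ["MJ", "King of Pop"],
        "occupation": "entertainer",
    },
}

def resolve_entity(mention: str):
    """Map a free-text mention to a canonical entity ID if any alias matches."""
    normalized = mention.lower().strip()
    for entity_id, aliases in ENTITY_ALIASES.items():
        if normalized in aliases:
            return entity_id
    return None

def bubble_gum_card(mention: str):
    """Return the cross-correlated facts for a mention, or None if unresolved."""
    entity_id = resolve_entity(mention)
    return FACT_REPOSITORY.get(entity_id) if entity_id else None

print(bubble_gum_card("The King of Pop"))  # the profile reached via the alias
```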

Thus, the “everything” assertion, while a categorical affirmative, reveals a certain innocence on the part of the Verge “real news” story.

Now what about the information in “Google, AI Firm Must Face Lawsuit Filed by a Mother over Suicide of Son, US Court Says”? The write up is from the trusted outfit Thomson Reuters (I know it is trusted because it says so on the Web page). The write up, dated May 21, 2025, reports:

The lawsuit is one of the first in the U.S. against an AI company for allegedly failing to protect children from psychological harms. It alleges that the teenager killed himself after becoming obsessed with an AI-powered chatbot. A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm." Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage Character.AI’s app or any component part of it."

Absent from the Reuters report and the allegedly accurate Google and semi-Google statements is any indication that the company takes steps to protect users, especially children. With the profiling and bubble gum card technology Google invented, does it seem prudent for Google to identify a child, cross-correlate the child’s queries with the bubble gum card, and dynamically [a] flag an issue, [b] alert a parent or guardian, or [c] use the “everything” information to present suggestions for mental health support? I want to point out that if one searches for words on a stop list, the Dark Web search engine Ahmia.fi presents a page providing links to Clear Web resources to assist the person with counseling. Imagine: a Dark Web search engine performing a function specifically intended to help users.
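Ahmia.fi does not publish its implementation, but the behavior described above, checking a query against a stop list and surfacing counseling resources, amounts to a few lines of logic. The terms and the notice text below are placeholders of my own, not Ahmia’s actual list or anything Google has deployed.

```python
# Sketch of a stop-list intercept: if a query contains a flagged term, surface
# support resources. Terms and notice text are placeholders, not Ahmia.fi's list.

SELF_HARM_TERMS = {"suicide", "kill myself", "self harm"}

SUPPORT_NOTICE = (
    "You are not alone. Confidential help is available; in the US you can call "
    "or text the 988 Suicide & Crisis Lifeline."
)

def intercept(query: str):
    """Return a support notice when the query contains a flagged term, else None."""
    lowered = query.lower()
    if any(term in lowered for term in SELF_HARM_TERMS):
        return SUPPORT_NOTICE
    return None

notice = intercept("thinking about suicide")
if notice:
    print(notice)  # show the help page before, or instead of, ordinary results
```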

Google, is Ahmia.fi more sophisticated than you and your quasi-Googles? Are the statements made about Google’s AI capabilities in line with reality? My hunch is that statements like “Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are ‘entirely separate’ and that Google ‘did not create, design, or manage Character.AI’s app or any component part of it’” made after the presentation of evidence were not compelling. (Compelling is a popular word in some AI-generated content. Yeah, compelling: a kid’s death. Inventions by Googlers specifically designed to profile a user, disambiguate disparate content objects, and make available a bubble gum card. Yeah, compelling.)

I am optimistic that Google knowing “everything,” the death of a child, a Dark Web search engine that can intervene, and the semi-Google lawyers add up to comfort and support.

Yeah, compelling. Google’s been chugging along in the profiling vineyard since 2007. Let’s see: that works out to longer than the 14-year-old had been alive.

Compelling? Nah. Googley.

Stephen E Arnold, May 27, 2025

AI Search: Go Retro

May 27, 2025

CIO’s article, “Invest In AI Search As An Enterprise Business Asset,” reads like a blast from the past, circa the early 2000s. Back then it was harder to find decent information; ergo, the invention of Google. However, it was also a tad easier to get ranked. With the advent of AI search, the entire game has shifted, so these tips are questionable.

CIO shares helpful stats about AI: 90% of AI projects never develop beyond proof of concept and 97% of organizations have trouble demonstrating the business value of generative AI. Then this apt paragraph is tossed at readers:

“A major reason is that many cautious business leaders treat AI as a source of incremental improvements to existing processes rather than a tool to reshape core business functions. Too often, business leaders underestimate the people, behavior, and organizational changes entailed by strategically using AI.”

Generative AI is still a new technology, so it is rational that not everyone understands its implications and potential. The article then transitions into the difficulties employees have finding information. Another apt observation is made:

“They have become accustomed to instant gratification on the web, but the lack of investment many organizations make in relevance and content curation makes searching inside the corporate firewall maddeningly unproductive.”

Then readers are treated to a sales pitch that’s been heard every time a new search technology has emerged (well before Google):

“AI search not only incrementally improves productivity but can radically reshape core business capabilities. It replaces simple keyword searches with advanced semantic techniques that understand the intent and context behind a query. Semantic search combines technologies including natural language processing, vector data stores, and machine learning to deliver results that more closely match what users need than keywords without requiring major investments in content curation.”
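Strip away the sales language and “semantic search” usually means comparing embedding vectors instead of matching literal keywords. Here is a minimal sketch using the open source sentence-transformers library; the model name, the chapter titles, and the query are my own illustrative choices, not Elastic’s implementation.

```python
# Minimal sketch of embedding-based ("semantic") retrieval. Library, model, and
# sample texts are illustrative choices, not Elastic's stack or configuration.
from sentence_transformers import SentenceTransformer, util

chapters = [
    "Chapter 3: the history of the printing press",
    "Chapter 7: budgeting a family road trip",
    "Chapter 12: negotiating hotel rates for group travel",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
chapter_vectors = model.encode(chapters, convert_to_tensor=True)

query = "cheap places to stay on vacation"               # shares no keywords with chapter 12
query_vector = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vector, chapter_vectors)[0]  # similarity of meaning, not keywords
best = int(scores.argmax())
print(chapters[best], float(scores[best]))
```

A keyword match on that query finds nothing in those chapter titles; the vector comparison should still surface the hotel chapter, which is roughly what “searching across an entire book instead of relying on the index” means in practice.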

There is something new in what Steve Mayzak, the global managing director of Search at Elastic, said: “With semantic search, you can search across an entire book instead of relying on the index alone.”

Now that has my attention. Indices are great but limited. When I’m doing research, I love having both a digital copy and a physical copy of a book. The physical copy is easier to maneuver and read, while the digital version gives me search, copy/paste, and a notes tool.

Helpful? Sort of.

Whitney Grace, May 27, 2025

Real News Outfit Finds a Study Proving That AI Has No Impact in the Workplace

May 27, 2025

Just the dinobaby operating without Copilot or its ilk.

The “real news” outfit is the wonderful business magazine Fortune, now only $1 a month. Subscribe now!

The title of the write up catching my attention was “Study Looking at AI Chatbots in 7,000 Workplaces Finds ‘No Significant Impact on Earnings or Recorded Hours in Any Occupation.’” Out of the blocks this story caused me to say to myself, “This is another ‘you-can’t-fire-human-writers’ proxy.”

Was I correct? Here are three snips. I not only urge you to subscribe to Fortune but also to read the original article and form your own opinion. Another option is to feed it into an LLM that can pull in Web content and ask it to tell you about the story. If you are reading my essay, you know that a dinobaby plucks the examples, no smart software required, although as I creep toward 81, I probably should let a free AI do the thinking for me.

Here’s the first snip I captured:

Their [smart software or large language models] popularity has created and destroyed entire job descriptions and sent company valuations into the stratosphere—then back down to earth. And yet, one of the first studies to look at AI use in conjunction with employment data finds the technology’s effect on time and money to be negligible.

You thought you could destroy humans, you high technology snake oil peddlers (not the contraband Snake Oil popular in Hong Kong at this time). Think old-time carnival barkers.

Here’s the second snip about the sample:

focusing on occupations believed to be susceptible to disruption by AI

Okay, “believed” is the operative word. Who does the believing? A University of Chicago assistant professor of economics (Yay, Adam Smith. Yay, yay, Friedrich Hayek) and a graduate student. Yep, a time-honored method: a graduate student.

Now the third snip, which presents the rock-solid proof:

On average, users of AI at work had a time savings of 3%, the researchers found. Some saved more time, but didn’t see better pay, with just 3%-7% of productivity gains being passed on to paychecks. In other words, while they found no mass displacement of human workers, neither did they see transformed productivity or hefty raises for AI-wielding super workers.
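To put the study’s percentages in concrete terms, here is a back-of-the-envelope calculation. The 3% time savings and the 3%-7% pass-through come from the cited article; the $60,000 salary is an invented round number, and treating the value of saved time as the productivity gain is my simplifying assumption.

```python
# Back-of-the-envelope math on the study's figures. Salary is invented; the
# percentages come from the cited article; equating "time saved" with the
# "productivity gain" is a simplifying assumption for illustration.

annual_salary = 60_000                              # hypothetical worker
time_saved = 0.03                                   # 3% of working time saved with AI
pass_through_low, pass_through_high = 0.03, 0.07    # share of the gain reaching paychecks

gain = annual_salary * time_saved                   # about $1,800 of labor time freed per year
raise_low = gain * pass_through_low                 # roughly $54
raise_high = gain * pass_through_high               # roughly $126

print(f"Value of time freed: ${gain:,.0f} per year")
print(f"Pay increase:        ${raise_low:,.0f} to ${raise_high:,.0f} per year")
```

In other words, on these assumptions the “AI dividend” for the worker is lunch money, which is the article’s point.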

Okay, not much payoff from time savings. Okay, not much of a financial reward for the users. Okay, nobody got fired. I thought it was hard to terminate workers in some European countries.

After reading the article, I like the penultimate paragraph’s reminder that outfits like Duolingo and Shopify have begun rethinking the use of chatbots. Translation: You cannot get rid of human writers and real journalists.

Net net: A temporary reprieve will not stop the push to shift away from expensive humans who want health care and vacations. That’s the news.

Stephen E Arnold, May 27, 2025

Ten Directories of AI Tools

May 26, 2025

Just the dinobaby operating without Copilot or its ilk.

I scan DailyHunt, an India-based news summarizer powered, I think, by AI. The link I followed landed me on a story titled “Best 10 AI Directories to Promote.” I looked for a primary source, an author, and links to each service. Zippo. Therefore, I assembled the list below, provided links, and formatted it with my dinobaby paws and claws. Enjoy or ignore. I am weary of AI, but many others are not. I am not sure why, but that is our current reality, replete with alternative facts, cheating college professors, and oodles of crypto activity. Remember: the list is not my “best of”; I am simply presenting incomplete information in a slightly more useful format.

AIxploria https://www.aixploria.com/en/ [Another actual directory. Its promotional language says “largest list”. Yeah, I believe that]

AllAITool.ai at https://allaitool.ai/

FamousAITools.ai https://famousaitools.ai/ [Another marketing outfit sucking up AI tool submissions]

Futurepedia.io https://www.futurepedia.io/ 

TheMangoAI.co https://themangoai.co/ [Not a directory, an advertisement of sorts for an AI-powered marketing firm]

NeonRev https://www.neonrev.com/ [Another actual directory. It looks like a number of Telegram bot directories]

Spiff Store https://spiff.store/ [Another directory. I have no idea how many tools are included]

StackViv https://stackviv.ai/ [An actual directory with 10,000 tools. No I did not count them. Are you kidding me?]

TheresanAIforThat https://theresanaiforthat.com/ [You have to register to look at the listings. A turn off for me]

Toolify.ai https://www.toolify.ai/ [An actual listing of more than 25,000 AI tools organized into categories probably by AI, not a professional indexing specialist]

When I looked at each of these “directories,” it was clear that marketing is something the AI crowd finds important. A bit more effort in the naming of some of these services might help. Just a thought. Enjoy.

Stephen E Arnold, May 26, 2025

Censorship Gains Traction at an Individual Point

May 23, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

I read a somewhat sad biographical essay titled “The Great Displacement Is Already Well Underway: It’s Not a Hypothetical, I’ve Already Lost My Job to AI For The Last Year.” The essay explains that a 40-something software engineer lost his job. Despite what strike me as heroic efforts, no offers ensued. I urge you to take a look at this essay because the push to remove humans from “work” is accelerating. I think, with my 80-year-old neuro-structures, that the lack of “work” will create some tricky social problems.

I spotted one passage in the essay which struck me as significant. The idea of censorship is a popular topic in central Kentucky. Quite a few groups and individuals have quite specific ideas about what books should be available for students and others to read. Here is the quote about censorship from the cited “Great Displacement” essay:

I [the author of the essay] have gone back and deleted 95% of those articles and vlogs, because although many of the ideas they presented were very forward-thinking and insightful at the time, they may now be viewed as pedestrian to AI insiders merely months later due to the pace of AI progress. I don’t want the wrong person with a job lead to see a take like that as their first exposure to me and think that I’m behind the last 24 hours of advancements on my AI takes.

Self-censorship was used to create a more timely version of the author. I have been writing articles with titles like “The Red Light on the Green Board” for years. This particular gem points out that public school teachers sell themselves and their ideas out. The prostitution analogy was intentional. I caught a bit of criticism from an educator in the public high school in which I “taught” for 18 months. Now people just ignore what I write. Thankfully my lectures about online fraud evoke a tiny bit of praise because the law enforcement professionals, crime analysts, and cyber attorneys don’t throw conference snacks at me when I offer one of my personal observations about bad actors.

The cited essay presents a person who is deleting content in order to present an “improved” or “shaped” version of himself. I think it is important to have essays, poems, technical reports, and fiction, indeed any human-produced artifact, available in original form. These materials, I think, will provide future students and researchers with useful material to mine for insights and knowledge.

Deletion means that information is lost. I am not sure that is a good thing. What’s notable is that censorship is taking place by the author for the express purpose of erasing the past and shaping an impression of the present individual. Will that work? Based on the information in the essay, it had not when I read the write up.

Censorship may be one facet of what the author calls a “displacement.” I am not too keen on censorship regardless of the decider or the rationalization. But I am a real dinobaby, not a 40-something dinobaby like the author of the essay.

Stephen E Arnold, May 23, 2025

AI: Improving Spam Quality, Reach, and Effectiveness

May 22, 2025

It is time to update our hoax detectors. The Register warns, “Generative AI Makes Fraud Fluent—from Phishing Lures to Fake Lovers.” What a great phrase: “fluent fraud.” We can see it on a line of hats and t-shirts. Reporter Iain Thomson consulted security pros Chester Wisniewski of Sophos and Kevin Brown at NCC Group. We learn:

“One of the red flags that traditionally identified spam, including phishing attempts, was poor spelling and syntax, but the use of generative AI has changed that by taking humans out of the loop. … AI has also widened the geographical scope of spam and phishing. When humans were the primary crafters of such content, the crooks stuck to common languages to target the largest audience with the least amount of work. But, Wisniewski explained, AI makes it much easier to craft emails in different languages.”

For example, residents of Quebec used to spot spam by its use of European French instead of the Québécois dialect. Similarly, folks in Portugal learned to dismiss messages written in Brazilian Portuguese. Now, though, AI makes it easy to replicate regional dialects. Perhaps more eerily, it also makes it easier to replicate human empathy. Thomson writes:

“AI chatbots have proven highly effective at seducing victims into thinking they are being wooed by an attractive partner, at least during the initial phases. Wisniewski said that AI chatbots can easily handle the opening phases of the scams, registering interest and appearing to be empathetic. Then a human operator takes over and begins removing funds from the mark by asking for financial help, or encouraging them to invest in Ponzi schemes.”

Great. To make matters worse, much of this is now taking place with realistic audio fakes. For example:

“Scammers might call everybody on the support team with an AI-generated voice that duplicates somebody in the IT department, asking for a password until one victim succumbs.”

Chances are good someone eventually will. Whether video bots are a threat (yet) is up for debate. Wisniewski, for one, believes convincing, real-time video deepfakes are not quite there. But Brown reports the experienced pros at his firm have successfully created them for specific use cases. Both believe it is only a matter of time before video deepfakes become not only possible but easy to create and deploy. It seems we must soon learn to approach every interaction that is not in-person with great vigilance and suspicion. How refreshing.

Cynthia Murrell, May 22, 2025

IBM CEO Replaces Human HR Workers with AskHR AI

May 21, 2025

An IBM professional asks the smart AI system, “Have I been terminated?” What if the smart software hallucinates? Yeah, surprise!

Which employees are the best to replace with AI? For IBM, ironically, it is the ones with “Human” in their title. Entrepreneur reports, “IBM Replaced Hundreds of HR Workers with AI, According to Its CEO.” But not to worry, the firm actually hired workers in other areas. We learn:

“IBM CEO Arvind Krishna told The Wall Street Journal … that the tech giant had tapped into AI to take over the work of several hundred human resources employees. However, IBM’s workforce expanded instead of shrinking—the company used the resources freed up by the layoffs to hire more programmers and salespeople. ‘Our total employment has actually gone up, because what [AI] does is it gives you more investment to put into other areas,’ Krishna told The Journal. Krishna specified that those ‘other areas’ included software engineering, marketing, and sales or roles focused on ‘critical thinking,’ where employees ‘face up or against other humans, as opposed to just doing rote process work.’”

Yes, the tech giant decided to dump those touchy-feely types in personnel. Who needs human sensitivity with issues like vacations, medical benefits, discrimination claims, or potential lawsuits? That is all just rote process work, right? The AskHR agent can handle it.

According to Wedbush analyst Dan Ives, IBM is just getting started on its metamorphosis into an AI company. What does that mean for humans in other departments? Will their jobs begin to go the way of their former colleagues’ in HR? If so, who would they complain to? Watson, are you on the job?

Cynthia Murrell, May 21, 2025
