The China Smart, US Dumb Push Is Working

August 7, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I read “The US Should Run Faster on AI Instead of Trying to Trip Up China.” In a casual way, I keep an eye open for variations on the “China smart, US dumb” theme. The idea is that China is not just keeping pace with US innovation; the Middle Kingdom is either even or leading. The context is that the star burning bright for the American era has begun collapsing into a black hole or maybe into a brown dwarf. Avoidance of the US may be the best policy. As one of Brazil’s leaders noted: “America is not bullying our country [Brazil]. America is bullying the world.”

Right or wrong? I have zero idea.

The cited essay suggests that certain technology and economic policies have given China an advantage. The idea is that the disruptive kid in high school sits in the back of the room and thinks up a better Facebook-type system and then implements it.

The write up states:

The ostensible reason for the [technology and economic] controls was to cripple China’s AI progress. If that was the goal, it has been a failure.

As I zipped through the essay, I noted that its premise is that the US has goofed. The proof offered is data about China’s capabilities in smart software. I think that any large language model will evidence bias. Bias is encapsulated in many human-created utterances. I, for example, have written critically about search and retrieval for decades. Am I biased about enterprise search? Absolutely. I know from experience that software that attempts to index content in an organization inevitably disappoints a user of that system. Why? No system to which I have been exposed has access to the totality of “information” generated by an organization. Maybe someday? But for the last 40 years, systems simply could not deliver what the marketers promised. Therefore, I am biased against claims that an enterprise search system can answer employees’ questions.

China is a slippery fish. I had a brief and somewhat weird encounter with a person deeply steeped in China’s nefarious effort to gain access to US pharma-related data. I have encountered a similar effort afoot in the technical disciplines related to nuclear power. These initiatives illustrate that China wants to be a serious contender for the title of world leader in bio-science and nuclear technology. Awareness of this type of information access is low even today.

I am, as a dinobaby, concerned that the lack of awareness issue creates more opportunities for information exfiltration from a proprietary source to an “open source” concept. To be frank, I am in favor of a closed approach to technology.

The reason I am making sure I have this source document and my comments is that it is a very good example of how the “China smart, US dumb” information is migrating into what might be termed a more objective-looking channel.

Net net: China’s weaponizing of information is working reasonably well. We are no longer in TikTok territory.

Stephen E Arnold, August 6, 2025

Can Clarity Confuse? No, It Is Just Zeitgeist

August 1, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

This article popped into my newsreader this morning: “Why Navigating Ongoing Uncertainty Requires Living in the Now, Near, and Next.” I was not familiar with Clarity Global. I think it is a public relations firm. The CEO of the firm is a former actress. I have minimal knowledge of PR and even less about acting.

I plunged into the essay. The purpose of the write up, in my opinion, was to present some key points from a conference called “TNW2025.” Conferences often touch upon many subjects. One event at which I spoke this year had a program that needed six pages to list the speakers. I think 90 percent of the people attending the conference were speakers.

The first ideas in the write up touch upon innovation, technology adoption, funding, and the zeitgeist. Yep, zeitgeist.

As if these topics were not of sufficient scope, the write up identifies three themes. These are:

  1. “Regulation is a core business competency”
  2. “Partnership is the engine of progress”
  3. “Culture is critical”

Notably absent was making money and generating a profit.

What about the near, now, and next?

The near means having enough cash on hand to pay the bills at the end of the month. The now means having enough credit or money to cover the costs of being in business. Recently a former CIA operative invited me to lunch. When the bill arrived, he said, “Oh, I left my billfold at home.” I paid the bill and decided to delete him from my memory bank. He stiffed me for $11, and he told me quite a bit about his “now.” I wondered, “Was this ‘professional’ careless, dumb, or unprofessional?” (Maybe all three?) And the next means that without funding there is a greatly reduced chance of having a meaningful future.

Now what about these themes? First, regulation means following the rules. I am not sure this is a competency. To me, it is what one does. Second, partnership is a nice word, not as slick as zeitgeist but good. The idea of doing something alone seems untoward. Partnerships have a legal meaning. I am not sure that a pharmaceutical company with a new drug is going to partner up. The company is going to keep a low profile, file paperwork, and get the product out. Paying people and companies to help is not a partnership. It is a fee-for-service relationship. These are good. Partnerships can be “interesting.” Third, culture is critical. One has to identify a market, and each market has a profile. It is common sense to match the product or service to each market’s profile. Apple cannot sell an iPhone to a person who cannot afford to pay for connectivity, buy apps or music, or plug the gizmo in. (I am aware that some iPhone users steal them and just pretend, but those are potential customers, not “real” customers.)

Where does technology fit into this conference? It is the problem organizations face. It is also the 10th word in the essay. I learned “… the technology landscape continues to evolve at an accelerating pace.” Where’s smart software? Where’s non-democratic innovation? Where’s informed resolution of conflict?

What about smart software, AI, or artificial intelligence? Two mentions: one expert at the conference invests in AI, and the term appears in this sentence:

As AI, regulation and societal expectations evolve, the winners will be those who anticipate change and act with conviction.

I am not sure regulation, partnership, and coping with culture can do the job. As for AI, I think funding and pushing out products and services capture the zeitgeist.

Stephen E Arnold, August 1, 2025

AI, Math, and Cognitive Dissonance

July 28, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

AI marketers will have to spend some time positioning their smart software as great tools for solving mathematical problems. “Not Even Bronze: Evaluating LLMs on 2025 International Math Olympiad” reports that words about prowess are disconnected from performance. The write up says:

The best-performing model is Gemini 2.5 Pro, achieving a score of 31% (13 points), which is well below the 19/42 score necessary for a bronze medal. Other models lagged significantly behind, with Grok-4 and Deepseek-R1 in particular underperforming relative to their earlier results on other MathArena benchmarks.

The write up points out, possibly to call attention to the slight disconnect between the marketing of Google AI and its performance in this contest:

As mentioned above, Gemini 2.5 Pro achieved the highest score with an average of 31% (13 points). While this may seem low, especially considering the $400 spent on generating just 24 answers, it nonetheless represents a strong performance given the extreme difficulty of the IMO. However, these 13 points are not enough for a bronze medal (19/42). In contrast, other models trail significantly behind and we can already safely say that none of them will achieve the bronze medal. Full results are available on our leaderboard, where everyone can explore and analyze individual responses and judge feedback in detail.
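For readers who want to check the arithmetic behind those figures, here is a quick back-of-the-envelope sketch in Python. It is my own, not MathArena’s, and it assumes the standard IMO format of six problems worth seven points each:

    # Back-of-the-envelope check of the cited MathArena figures
    TOTAL_POINTS = 6 * 7   # six IMO problems, seven points each = 42
    BRONZE_CUTOFF = 19     # the 2025 bronze threshold cited above
    gemini_points = 13

    print(f"Gemini 2.5 Pro: {gemini_points / TOTAL_POINTS:.0%}")  # ~31%
    print(f"Bronze cutoff:  {BRONZE_CUTOFF / TOTAL_POINTS:.0%}")  # ~45%
    print(f"Shortfall: {BRONZE_CUTOFF - gemini_points} points")   # 6 points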

This is one “competition.” That fact, the lousy performance of the high-profile models, and the complex process required to assess performance make it easy to ignore this result.

Let’s just assume that it is close enough for horseshoes and good enough. With that assumption in mind, do you want smart software making decisions about what information you can access, the medical prognosis for your nine-year-old child, or decisions about your driver’s license renewal?

Now, let’s consider this write up fragmented across Tweets: [Thread] An OpenAI researcher says the company’s latest experimental reasoning LLM achieved gold medal-level performance on the 2025 International Math Olympiad. The little posts are perfect for a person familiar with TikTok-type and Twitter-like content. Not me. The main idea is that in the same competition, OpenAI earned “gold medal-level performance.”

The $64 question is, “Who is correct?” The answer is, “It depends.”

Is this an example of what I learned in 1962 in my freshman year at a so-so university? I think the term was cognitive dissonance.

Stephen E Arnold, July 28, 2025

AI Content Marketing: Claims about Savings Are Pipe Dreams

July 24, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

My tiny team and I sign up for interesting smart software “innovations.” We plopped down $40 to access 1min.ai. Some alarm bells went off. These were not the panic-inducing Code Red buzzers at the Google. But we noticed. First, registration was wonky. After several attempts, we had an opportunity to log in. After several tries, we gained access to the cornucopia of smart software goodies. We ran one query and were surprised to see Hamster Kombat-style points. However, the 1min.ai crowd flipped the winning click-to-earn model on its head. Every click consumed points. When the points were gone, the user had to buy more. This is an interesting variation of taxi meter pricing, a method reviled in the 1980s when commercial databases were the rage.
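To make the taxi meter model concrete, here is a minimal sketch of the pattern as we experienced it. The class name, point balance, and per-query cost are hypothetical illustrations, not 1min.ai’s actual code or numbers:

    # Hypothetical sketch of taxi meter pricing; numbers are illustrative only
    class MeteredAccount:
        def __init__(self, points: int):
            self.points = points  # prepaid balance, like a taxi meter running down

        def run_query(self, cost: int) -> str:
            if self.points < cost:
                raise RuntimeError("Out of points. Buy more to continue.")
            self.points -= cost  # every click consumes points
            return f"answer ({self.points} points left)"

    account = MeteredAccount(points=100)
    print(account.run_query(cost=40))  # answer (60 points left)
    print(account.run_query(cost=40))  # answer (20 points left)
    print(account.run_query(cost=40))  # raises: the meter has run out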

I thought about my team’s experience with 1min.ai and figured that an objective person would present some of these wobbles. Was I wrong? Yes.

“Your New AI-Powered Team Costs Less Than $80. Meet 1min.ai” is one of the wildest advertorial or content marketing smoke screens I have encountered in the last week or so. The write up asserts as actual factual, hard-hitting, old-fashioned technology reporting:

If ChatGPT’s your sidekick, think of 1min.AI as your entire productivity squad. This AI-powered tool lets you automate all kinds of content and business tasks—including emails, social media posts, blog drafts, reports, and even ad copy—without ever opening a blank doc.

I would suggest that one might tap 1min.ai to write an article for a hard-working, logic-charged professional at Macworld.

How about this descriptive paragraph which may have been written by an entity or construct:

Built for speed and scale, 1min.AI gives you access to over 80 AI tools designed to handle everything from content generation to data analysis, customer support replies, and more. You can even build your own tools inside the platform using its AI builder—no coding required.

And what about this statement:

The UI is slick and works in any browser on macOS.

What’s going on?

First, this information consists of PR assertions without factual substance.

Second, the author did not try to explain the taxi meter business model. It is important if one uses one account for a “team.”

Third, the functionality of the system is less useful than You.com, based on our tests. Comparing 1min.ai to ChatGPT is a keyword play. ChatGPT has some big-time flaws. These include system crashes and delivering totally incorrect information. But 1min.ai lags behind. When ChatGPT stumbles over the prompt finish line, 1min.ai is still lacing its sneakers.

Here’s the final line of this online advertorial:

Act now while plans are still in stock!

How does a digital subscription go out of stock? Isn’t the offer simply removed?

I think more of this type of AI play acting will appear in the months ahead.

Stephen E Arnold, July 24, 2025

AI Forces Stack Exchange to Try a Rebranding Play

June 19, 2025

Stack Exchange is a popular question-and-answer Web site. Devclass reports it will soon be rebranding: “Stack Overflow Seeks Rebrand As Traffic Continues To Plummet – Which Is Bad News For Developers.”

According to Stack Overflow’s data explorer, the number of questions and answers posted in April 2025 is down 64% compared to April 2024 and down 90% from 2020. The company will need to rebrand because AI is changing how users learn, build, and resolve problems. Some users don’t think a rebrand is necessary, but Stack Exchange thinks differently:

“Nevertheless, community SVP Philippe Beaudette and marketing SVP Eric Martin stated that the company’s “brand identity” is causing “daily confusion, inconsistency, and inefficiency both inside and outside the business.”

Among other things, Beaudette and Martin feel that Stack Overflow, dedicated to developer Q&A, is too prominent and that “most decisions are developer-focused, often alienating the wider network.”

CEO Prashanth Chandrasekar wants his company’s focus to expand from a question-and-answer platform alone to include community and career pillars. The company needs to do a lot to maintain its relevancy, but Stack Overflow is still important to AI:

“The company’s search for a new direction though confirms that the fast-disappearing developer engagement with Stack Overflow poses an existential challenge to the organization. Those who have found the site unfriendly or too ready to close carefully-worded questions as duplicate or off-topic may not be sad; but it is also true that the service has delivered high value to developers over many years. Although AI may seem to provide a better replacement, some proportion of those AI answers will be based on the human-curated information posted by the community to Stack Overflow. The decline in traffic is not good news for developers, nor for the AI which is replacing it.”

Stack Overflow is an important information fount, but the human side of it is its most important resource. Why not let gentle OpenAI suggest some options?

Whitney Grace, June 19, 2025

Professor Marcus, You Missed One Point about the Apple Reasoning Paper

June 16, 2025

An opinion essay written by a dinobaby who did not rely on smart software except for the so-so cartoon.

The intern-fueled Apple academic paper titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” has caused a stir. An interesting analysis of the responses to this tour de force is “Seven Replies to the Viral Apple Reasoning Paper – and Why They Fall Short.” Professor Gary Marcus in his analysis identifies categories of reactions to the Apple document.

In my opinion, these are, and I paraphrase with abandon:

  1. Human struggle with complex problems; software does too
  2. Smart software needs lots of computation to deliver a good enough output that doesn’t cost too much
  3. The paper includes an intern’s work because recycling and cheap labor are useful to busy people
  4. Bigger models are better because that’s what people do in Texas
  5. System can solve some types of problems and fail at others
  6. Limited examples because the examples require real effort
  7. The paper tells a reader what is already known: Smart software can be problematic because it is probabilistic, not intelligent.

I look at the Apple paper from a different point of view.

The challenge for Apple for more than a year has been to make smart software, with its current limitations, work reasonably well. Apple’s innovation in smart software has been the somewhat flawed Siri (sort of long in the tooth) and the formulation of a snappy slogan, “Apple Intelligence.”


This individual is holding a “cover your a**” document. Thanks, You.com. Good enough given your constraints, guard rails, and internal scripts.

The job of a commercial enterprise is to create something useful and reasonably clever to pull users to a product. Apple failed. Other companies have rolled out products making use of smart software as it currently is. One of the companies with a reasonably good product is OpenAI’s ChatGPT. Another is Perplexity.

Apple is not in this part of the smart software game. Apple has failed to use “as is” software in a way that adds some zing to the firm’s existing products. Apple has failed, just as it failed with the weird goggles, its push into streaming video, and the innovations for the “new” iPhone. Changing case colors and altering an interface to look sort of like Microsoft’s see-through approach are not game changers. Labeling software by the year of release does not make me want to upgrade.

What is missing from the analysis of the really important paper that says, “Hey, this smart software has big problems. The whole house of LLM cards is wobbling in the wind”?

The answer is, “The paper is a marketing play.” The best way to explain why Apple has not rolled out AI is to assert that the current technology is terrible. Therefore, the implied message runs, we need more time to figure out how to do AI well with crappy tools and methods not invented at Apple.

I see the paper as pure marketing. The timing of the paper’s release is marketing. The weird colors of the charts are marketing. The hype about the paper itself is marketing.

Anyone who has used some of the smart software tools knows one thing: The systems make up stuff. Everyone wants the “next big thing.” I think some of the LLM capabilities can be quite useful. In the coming months and years, smart software will enable useful functions beyond giving students a painless way to cheat, consultants a quick way to appear smart in a very short time, and entrepreneurs a way to vibe code their way into a job.

Apple has had one job: Find a way to use the available technology to deliver something novel and useful to its customers. It has failed. The academic paper is a “cover your a**” memo more suitable for a scared 35-year-old middle manager in an advertising agency. Keep in mind that I am no professor. I am a dinobaby. In my world, an “F” is an “F.” Apple’s viral paper is an excuse for failing to deliver something useful with Apple Intelligence. The company has delivered an illustration of why there is no Apple smart TV or Apple smart vehicle.

The paper is marketing, and it is just okay marketing.

Stephen E Arnold, June 16, 2025

A 30-Page Explanation from Tim Apple: AI Is Not Too Good

June 9, 2025

I suppose I should use smart software. But, no, I prefer the inept, flawed, humanoid way. Go figure. Then say to yourself, “He’s a dinobaby.”

Gary Marcus, like other experts, is putting Apple through an old-fashioned peeler. You can get his insights in “A Knock Out Blow for LLMs.” I have a different angle on the Apple LLM explainer. Here we go:

Many years ago I had some minor role to play in the commercial online database sector. One of our products seemed to be reasonably good at summarizing business and technical journal articles, academic flights of fancy, and some just straight out crazy write ups from Harvard Business Review-type publications.

I read a 30-page “research” paper authored by what appear to be some of the “aw, shucks” folks at Apple. The write up is located on Apple’s content delivery network, of course. No run-of-the-mill service is up to those high Apple standards of Tim and his orchard keepers. The paper is authored by Parshin Shojaee (who is identified as an intern who made an equal contribution to the write up), Imam Mirzadeh (Apple), Keivan Alizadeh (Apple), Maxwell Horton (Apple), Samy Bengio (Apple), and Mehrdad Farajtabar (Apple). Okay, this seems to be a very academic line up with an intern who was doing “equal contribution” along with the high-powered horticulturists laboring on the write up.

The title is interesting: “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.” In a nutshell, the paper tries to make clear that current large language models deliver inconsistent results and cannot reason in a reliable manner. When I read this title, my mind conjured up an image of AI products and services delivering on-point outputs to iPhone users. That was the “illusion” of a large, ageing company trying to keep pace with technology and applications from its competitors, the upstarts, and the nation-states doing interesting things with the admittedly flawed large language models. But those outside the Apple orchard have been delivering something.

My reaction to this document and its easy-to-read pastel charts like this one from page 30:


One of my addled professors told me, “Also, end on a strong point. Be clear, concise, and issue a call to action.” Apple obviously believes that these charts deliver exactly what my professor told me.

I interpreted the paper differently; to wit:

  1. Apple announced “Apple Intelligence” and failed to ship what had been previously announced for a year or more
  2. Siri still sucks from my point of view
  3. Apple reorganized its smart software team in a significant way. Why? See items 1 and 2.
  4. Apple runs the risk of having its many iPhone users just skip “Apple intelligence” and maybe not upgrade due to the dalliance with China, the tariff issue, and the reality of assuming that what worked in the past will be just super duper in the future.

Sorry, gardeners. A 30-page paper is not going to change reality. Apple is a big outfit. It seems to be struggling. No Apple car. An increasingly wonky browser. An approach to “settings” almost as bad as Microsoft’s. And much, much more. Coming soon will be a new iOS numbering system and more!

That’s what happens when interns contribute as much as full-time equivalents and employees. The result is a paper. Okay, good enough.

But, sorry, Tim Apple: Papers, pastel charts, and complaining about smart software will not change a failure to match marketing with what users can access.

Stephen E Arnold, June 9, 2025

Is AI Experiencing an Enough Already Moment?

June 4, 2025

Consumers are fatigued by AI even though implementation of the technology is still new. Why are they tired? The Soatok blog digs into the answer in the post “Tech Companies Apparently Do Not Understand Why We Dislike AI – Dhole Moments.” Big Tech and other businesses don’t understand that their customers hate AI.

Soatok took a survey that asked for opinions about AI, including questions about a “potential AI uprising.” Soatok is abundantly clear that he’s not afraid of a robot uprising or the “Singularity.” He has other reasons to worry about AI:

“I’m concerned about the kind of antisocial behaviors that AI will enable.

• Coordinated inauthentic behavior

• Misinformation

• Nonconsensual pornography

• Displacing entire industries without a viable replacement for their income

In aggregate, people’s behavior are largely the result of the incentive structures they live within.

But there is a feedback loop: If you change the incentive structures, people’s behaviors will certainly change, but subsequently so, too, will those incentive structures. If you do not understand people, you will fail to understand the harms that AI will unleash on the world. Distressingly, the people most passionate about AI often express a not-so-subtle disdain for humanity.”

Soatok is describing toxic human behaviors. These include toxic masculinity and femininity, though mostly the former. He aptly describes them:

"I’m talking about the kind of X users that dislike experts so much that they will ask Grok to fact check every statement a person makes. I’m also talking about the kind of “generative AI” fanboys that treat artists like garbage while claiming that AI has finally “democratized” the creative process.”

Insert a shudder here.

Soatok goes on to explain how AI can be implemented in encrypted software that would collect user information. He paints a scenario where LLMs collect user data that is not protected by the Fourth and Fifth Amendments. AI could also create psychological profiles of users that incorrectly identify them as psychotic terrorists.

Insert even more shuddering.

Soatok advises Big Tech to make AI optional and not the first out-of-the-box solution. He wants users to have the choice of engaging with AI, even if it means lower user metrics and less data fed back to Big Tech. Is Soatok hallucinating like everyone’s favorite over-hyped technology? Let’s ask IBM Watson. Oh, wait.

Whitney Grace, June 4, 2025

NSO Group: When Marketing and Confidence Mix with Specialized Software

May 13, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

Some specialized software must remain known only to a small number of professionals specifically involved in work related to national security. This is a dinobaby view, and I am not going to be swayed with “information wants to be free” arguments or assertions about the need to generate revenue to make the investors “whole.” Abandoning secrecy and common sense for glittering generalities and MBA mumbo jumbo is ill advised.

I read “Meta Wins $168 Million in Damages from Israeli Cyberintel Firm in Whatsapp Spyware Scandal.” The write up reports:

Meta won nearly $168 million in damages Tuesday from Israeli cyberintelligence company NSO Group, capping more than five years of litigation over a May 2019 attack that downloaded spyware on more than 1,400 WhatsApp users’ phones.

The decision is likely to be appealed, so the “won” is not accurate. What is interesting is this paragraph:

[Yaron] Shohat [NSO’s CEO] declined an interview outside the Ron V. Dellums Federal Courthouse, where the court proceedings were held.

From my point of view, fewer trade shows, less marketing, and a lower profile should be action items for Mr. Shohat, the NSO Group’s founders, and the firm’s lobbyists.

I watched as NSO Group became the poster child for specialized software. I was not happy as the firm’s systems and methods found their way into publicly accessible Web sites. I reacted negatively as other specialized software firms (these I will not identify) began describing their technology as similar to NSO Group’s.

The desperation of cyber intelligence, specialized software firms, and — yes — trade show operators is behind the crazed idea of making certain information widely available. I worked in the nuclear industry in the early 1970s. From Day One on the job, the message was, “Don’t talk.” I then shifted to a blue chip consulting firm working on a wide range of projects. From Day One on that job, the message was, “Don’t talk.” When I set up my own specialized research firm, the message I conveyed to my team members was, “Don’t talk.”

Then it seemed that everyone wanted to “talk.” Marketing, speeches, brochures, even YouTube videos distributed information that was never intended to be made widely available. Jazzy pitches that used terms like “zero day vulnerability” and other sales-oriented marketing lingo turned many people without operating context and quite specific knowledge into “experts.”

I see this leakage of specialized software information in the OSINT blurbs on LinkedIn. I see it in social media posts by people with weird online handles like those used in Top Gun films. I see it when I go to a general purpose knowledge management meeting.

Now the specialized software industry is visible. In my opinion, that is not a good thing. I hope Mr. Shohat and others in the specialized software field continue the “decline to comment” approach. Knock off the PR. Focus on the entities authorized to use specialized software. The field is not for computer whiz kids, eGame players, and wannabe intelligence officers.

Do your job. Don’t talk. Do I think these marketing oriented 21st century specialized software companies will change their behavior? Answer: Oh, sure.

PS. I hope the backstory for Facebook / Meta’s interest in specialized software becomes part of a public court record. I am curious whether what I have learned matches up with the court statements. My hunch is that some social media executives have selective memories. That’s a useful skill, I have heard.

Stephen E Arnold, May 13, 2025

Big Numbers and Bad Output: Is This the Google AI Story?

May 13, 2025

No AI. Just a dinobaby who gets revved up with buzzwords and baloney.

Alphabet Google reported financials that made stakeholders happy. Big numbers were thrown about. I did not know that 1.5 billion people used Google’s AI Overviews. Well, “use” might be misleading. I think the word might be “see” or “were shown” AI Overviews. The key point is that Google is making money despite its legal hassles and its ongoing battle with infrastructure costs.

I was, therefore, very surprised to read “Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense.” If the information in the write up is accurate, the factoid suggests that a lot of people may be getting bogus information. If true, what does this suggest about Alphabet Google?

The Cnet article says:

…the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

Those Nobel prize winners, brilliant Googlers, and long-time wizards like Jeff Dean seem to struggle with simple things. Remember the suggestion to use glue to keep cheese on pizza, before Google’s AI improved.

The article adds by quoting a non-Google wizard:

“They [large language models] are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”
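Professor Li’s point can be illustrated with a toy example of my own devising. A language model turns scores for candidate next words into probabilities; nothing in that arithmetic checks truth, only fluency. The words and scores below are invented:

    import math

    # Toy next-word scores for a prompt like "peanut butter under pressure makes ..."
    # The scores are invented; a real model produces them from training data.
    logits = {"diamonds": 2.0, "sandwiches": 1.2, "xylophones": -1.0}

    total = sum(math.exp(v) for v in logits.values())
    for word, v in logits.items():
        print(f"P({word}) = {math.exp(v) / total:.2f}")
    # "diamonds" wins (~0.67) because it is fluent, not because it is true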

Turning in a lousy essay and just showing up should be enough for a C grade. Is that enough for smart software with 1.5 billion users every three or four weeks?

The article reminds its readers:

This phenomenon is an entertaining example of LLMs’ tendency to make stuff up — what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn’t rooted in reality.

The outputs can be amusing for a person able to identify goofiness. But a grade school kid? Cnet wants users to craft better prompts.

I want to be 17 years old again and be a movie star. The reality is that I am 80 and look like a very old toad.

AI has to make money for Google. Other services are looking more appealing without the weight of legal judgments and hassles in numerous jurisdictions. But Google has already won the AI race. Its DeepMind unit is curing disease and crushing computational problems. I know these facts because Google’s PR and marketing machine is running at or near its red line.

But the thought of 1.5 billion users potentially receiving made-up, wrong, or hallucinatory information is less than amusing to me.

Stephen E Arnold, May 13, 2025

