AI Search Is Great. Believe It. Now!

September 18, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Cheerleaders are necessary. The idea is that energetic people lead other people to chant: Stand Up, Sit Down, Fight! Fight! Fight! If you get with the program, you stand up. You sit down. You shout, of course, fight, fight, fight. Does it help? I don’t know because I don’t cheer at sports events. I say, “And again” or some other statement designed to avoid getting dirty looks or caught up in standing, sitting, and chanting.

Others are different. “GPT-5 Thinking in ChatGPT (aka Research Goblin) Is Shockingly Good at Search” states:

“Don’t use chatbots as search engines” was great advice for several years… until it wasn’t. I wrote about how good OpenAI’s o3 was at using its Bing-backed search tool back in April. GPT-5 feels even better.

The idea is that instead of working with a skilled special librarian and participating in a reference interview, people started using online Web indexes. Now we have moved from entering a query to asking a smart software system for an answer.

Consider the trajectory. A person seeking information works with a professional with knowledge of commercial databases, traditional (book) reference tools, and specific ways of tracking down and locating information needed to answer the user’s question. When the user was not sure, the special librarian would ask, “What specific information do you need?” Some users would reply, “Get me everything about subject X.” The special librarian would ask other questions until a particular item could be identified. In the good old days, special librarians would seek the information and provide selected items to the person with the question. Ellen Shedlarz at Booz, Allen & Hamilton did this type of work when I was a lowly peon, as did Dominique Doré at Halliburton NUS (a nuclear outfit).

We then moved to the era of PCs and do-it-yourself research. Everyone became an expert. Google just worked. Then mobile phones arrived so research on the go was a thing. But keying words into a search box and fiddling with links was a drag. Now just tell the smart software your problem. The solution is just there like instant oatmeal.

The Stone Age process was knowledge work. Most people seeking information did not ask, preferring, as one study found, to look through trade publications in an old-fashioned in-box or pick up the telephone and ask a person whom one assumed knew something about a particular subject. The process was slow, inefficient, and fraught with delays. Let’s be efficient. Let’s let software do everything.

Flash forward to the era of smart software or seemingly smart software. The write up reports:

I’ve been trying out hints like “go deep” which seem to trigger a more thorough research job. I enjoy throwing those at shallow and unimportant questions like the UK Starbucks cake pops one just to see what happens! You can throw questions at it which have a single, unambiguous answer—but I think questions which are broader and don’t have a “correct” answer can be a lot more fun. The UK supermarket rankings above are a great example of that. Since I love a questionable analogy for LLMs Research Goblin is… well, it’s a goblin. It’s very industrious, not quite human and not entirely trustworthy. You have to be able to outwit it if you want to keep it gainfully employed.

The reference / special librarians are an endangered species. The people seeking information use smart software. Instead of a back-and-forth and human-intermediated interaction between a trained professional and a person with a question, we get “trying out” and “accepting the output.”

I think there are three issues inherent in this cheerleading:

  1. Knowledge work is short circuited. Instead of information-centric discussion, users accept the output. What if the output is incorrect, biased, incomplete, or made up? Cheerleaders shout more enthusiastically until a really big problem occurs.
  2. The conditioning process of accepting outputs makes even intelligent people susceptible to mental shortcuts. Shortcuts are handy, but accuracy, nuance, and a sense of understanding the information may be pushed to the side of the information highway. Sometimes those backroads deliver unexpected and valuable insights. Forget that. Grab a burger and go.
  3. The purpose of knowledge work is to make certain that an idea, diagnosis, research study can be trusted. The mechanisms of large language models are probabilistic. Think close enough for horseshoes. Cheering loudly does not deliver accuracy of output, just volume.

Net net: Inside each large language model lurks a system capable of suggesting glue to hold the cheese on pizza, declaring a gray mass cancerous, and recommending that people eat rocks.

What’s been lost? Knowledge value from the process of obtaining information the Stone Age way. Let’s work in caves with fire provided by burning books. Sounds like a plan, Sam AI-Man. Use GPT-5, use GPT-5, use GPT-5.

Stephen E Arnold, September 18, 2025

AI Maggots: Are These Creatures Killing the Web?

September 18, 2025

The short answer is, “Yep.”

The early days of the free, open Web held such promise. Alas, AI is changing the Internet and there is, apparently, nothing we can do about it. The Register laments, “AI Web Crawlers Are Destroying Websites in their Never-Ending Hunger for Any and All Content: But the Cure May Ruin The Web…” Writer Steven J. Vaughan-Nichols tells us a whopping 30% of traffic is now bots, according to Cloudflare. And 80% of that, reports Fastly, comes from AI data-fetcher bots. Web crawlers have been around since 1993, of course, but this volume is something new. And destructive. Vaughan-Nichols writes:

“Fastly warns that [today’s AI crawlers are] causing ‘performance degradation, service disruption, and increased operational costs.’ Why? Because they’re hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes. Moreover, AI crawlers are much more aggressive than standard crawlers. As the InMotionhosting web hosting company notes, they also tend to disregard crawl delays or bandwidth-saving guidelines and extract full page text, and sometimes attempt to follow dynamic links or scripts. The result? If you’re using a shared server for your website, as many small businesses do, even if your site isn’t being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site’s performance drops through the floor even if an AI crawler isn’t raiding your website. Smaller sites, like my own Practical Tech, get slammed to the point where they’re simply knocked out of service. Thanks to Cloudflare Distributed Denial of Service (DDoS) protection, my microsite can shrug off DDoS attacks. AI bot attacks – and let’s face it, they are attacks – not so much.”

Even big websites are shelling out for more processor, memory, and network resources to counter the slowdown. And no wonder: according to Web hosting firms, most visitors abandon a site that takes more than three seconds to load. Site owners have some tools for mounting a defense, like paywalls, logins, and annoying CAPTCHA games. Unfortunately, AI is good at getting around all of them. As for the tried-and-true, honor-system-based robots.txt files, most AI crawlers breeze right on by. Hey, you have to love maggots.
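The honor-system nature of robots.txt is easy to see in code. A well-behaved crawler checks the file before fetching; nothing forces it to. Here is a minimal sketch using Python’s standard-library parser (the rules and bot names are hypothetical examples, not any site’s actual file):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that asks one AI crawler to stay out entirely
# and politely throttles everyone else.
rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Crawl-delay: 10
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A compliant crawler asks before fetching...
print(parser.can_fetch("GPTBot", "https://example.com/article.html"))   # False
print(parser.can_fetch("SomeBot", "https://example.com/article.html"))  # True
print(parser.can_fetch("SomeBot", "https://example.com/private/x"))     # False

# ...but compliance is voluntary. A crawler that never calls can_fetch()
# simply downloads whatever it likes; robots.txt cannot stop it.
```

The file is a request, not an access control: the parser only tells a crawler what the site owner would prefer, which is exactly why aggressive AI bots can ignore it.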

Cynthia Murrell, September 18, 2025

AI and Security? What? Huh?

September 18, 2025

As technology advances, so do bad actors and their devious actions. Bad actors are so up to date with the latest technology that it takes white-hat hackers and cyber security engineers a while to catch up to them. AI has made bad actors smarter, and EWeek explains that we are facing a banking security crisis: “Altman Warns Of AI-Powered Fraud Crisis in Banking, Urges Stronger Security Measures.”

OpenAI CEO Sam Altman warned that AI voice technology is a danger to society. He told Federal Reserve Vice Chair for Supervision Michelle Bowman that US banks are lagging behind on AI voice security because many financial institutions still rely on voiceprint technology to verify customers’ identities.

Altman warned that AI voice technology can easily mimic humans, and deepfake videos are even scarier as they become indistinguishable from reality. Bowman mentioned potentially partnering with tech companies to create solutions.

Despite sounding the warning bells, Altman didn’t offer much help:

“Despite OpenAI’s prominence in the AI industry, Altman clarified that the company is not creating tools for impersonation. Still, he stressed that the broader AI community must take responsibility for developing new verification systems, such as “proof of human” solutions.

Altman is supporting tools like The Orb, developed by Tools for Humanity. The device aims to provide “proof of personhood” in a digital world flooded with fakes. His concerns go beyond financial fraud, extending to the potential for AI superintelligence to be misused in areas such as cyberwarfare or biological threats.”

Proof of personhood? It’s like the blue check on verified X/Twitter accounts. Altman might be helping make the future but he’s definitely also part of the problem.

Whitney Grace, September 18, 2025

Qwen: Better, Faster, Cheaper. Sure, All Three

September 17, 2025

No smart software involved. Just a dinobaby’s work.

I spotted another China Smart, US Dumb write up. Analytics India published “Alibaba Introduces Qwen3-Next as a More Efficient LLM Architecture.” The story caught my attention because it was a high five to the China-linked Alibaba outfit and because it is a signal that India and China are on the path to BFF bliss.

The write up says:

Alibaba’s Qwen team has introduced Qwen3-Next, a new large language model architecture designed to improve efficiency in both training and inference for ultra-long context and large-parameter settings.

The sentence reinforces the better, faster, cheaper sales mantra, one beloved by Crazy Eddie.

Here’s another sentence catching my attention:

At its core, Qwen3-Next combines a hybrid attention mechanism with a highly sparse mixture-of-experts (MoE) design, activating just three billion of its 80 billion parameters during inference.  The announcement blog explains that the new mechanism allows the base model to match, and in some cases outperform, the dense Qwen3-32B, while using less than 10% of its training compute. In inference, throughput surpasses 10x at context lengths beyond 32,000 tokens.

This passage emphasizes the value of the mixture of experts approach in the faster and cheaper assertions.
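The economics of sparse activation can be sketched in a few lines. In a mixture-of-experts layer, a router sends each token to only a few experts, so most parameters sit idle on any given forward pass. A toy illustration follows; the sizes, routing, and weights are illustrative assumptions, not Qwen3-Next’s actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8   # total experts in the layer (toy number)
top_k = 2       # experts activated per token
d_model = 16    # hidden size (toy number)

# Each "expert" is a tiny feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                  # score every expert for this token
    top = np.argsort(logits)[-top_k:]    # keep only the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the chosen experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_layer(token)

# Only top_k / n_experts of the expert parameters did any work on this token.
active_fraction = top_k / n_experts
print(f"active expert fraction: {active_fraction:.2f}")
```

Scale the same ratio up: three billion active out of 80 billion total parameters is roughly four percent of the weights doing work on each token, which is where the faster and cheaper claims come from. The idle weights still cost memory, just not compute.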

Do I believe the data?

Sure, I believe every factoid presented in the better, faster, cheaper marketing of large language models. Personally, I find that these models, regardless of development group, are useful for some specific functions. The hallucination issue is the deal breaker. Who wants to kill a person because a smart medical system calls a malignancy benign? Who wants an autonomous AI underwater drone to take out those college students and not the adversary’s stealth surveillance boat?

Where can you get access to this better, faster, cheaper winner? The write up says, “Hugging Face, ModelScope, Alibaba Cloud Model Studio and NVIDIA API Catalog, with support from inference frameworks like SGLang and vLLM.”

Stephen E Arnold, September 17, 2025

Professor Goes Against the AI Flow

September 17, 2025

One thing has Cornell professor Kate Manne dreading the upcoming school year: AI. On her Substack, “More to Hate,” the academic insists, “Yes, It Is Our Job as Professors to Stop our Students Using ChatGPT.” Good luck with that.

Manne knows even her students who genuinely love to learn may give in to temptation when faced with an unrelenting academic schedule. She cites the observations of sociologist Tressie McMillan Cottom as she asserts young, stressed-out students should not bear that burden. The responsibility belongs, she says, to her and her colleagues. How? For one thing, she plans to devote precious class time to having students hand-write essays. See the write-up for her other ideas. It will not be easy, she admits, but it is important. After all, writing assignments are about developing one’s thought processes, not the finished product. Turning to ChatGPT circumvents the important part. And it is sneaky. She writes:

“Again, McMillan Cottom crystallized this perfectly in the aforementioned conversation: learning is relational, and ChatGPT fools you into thinking that you have a relationship with the software. You ask it a question, and it answers; you ask it to summarize a text, and it offers to draft an essay; you request it respond to a prompt, using increasingly sophisticated constraints, and it spits out a response that can feel like your own achievement. But it’s a fake relationship, and a fake achievement, and a faulty simulacrum of learning. It’s not going to office hours, and having a meeting of the minds with your professor; it’s not asking a peer to help you work through a problem set, and realizing that if you do it this way it makes sense after all; it’s not consulting a librarian and having them help you find a resource you didn’t know you needed yet. Your mind does not come away more stimulated or enriched or nourished by the endeavor. You yourself are not forging new connections; and it makes a demonstrable difference to what we’ve come to call ‘learning outcomes.’”

Is it even possible to keep harried students from handing in AI-generated work? Manne knows she is embarking on an uphill battle. But to her, it is a fight worth having. Saddle up, Donna Quixote.

Cynthia Murrell, September 17, 2025

Who Needs Middle Managers? AI Outfits. MBAs Rejoice

September 16, 2025

No smart software involved. Just a dinobaby’s work.

I enjoy learning about new management trends. In most cases, these hip approaches to reaching a goal using people are funnier than old Saturday Night Live skits with John Belushi dressed as a bee. Here’s a good one if you enjoy the blindingly obvious insights of modern management thinkers.

Navigate to “Middle Managers Are Essential for AI Success.” That’s a title for you!

The write up reports without a trace of SNL snarkiness:

31% of employees say they’re actively working against their company’s AI initiatives. Middle managers can bridge the gap.

Whoa, Nellie. I thought companies were pushing forward with AI because AI is everywhere: Microsoft Word, Google “search” (I use the term as a reminder that relevance is long gone), and cloud providers like Salesforce.com. (Yeah, I know Salesforce is working hard to get the AI thing to go, and it is doing what big companies like to do: cut costs by terminating humanoids.)

But the guts of the modern management method is a list (possibly assisted by AI?). The article explains, without a bit of tongue-in-cheek élan, “ways managers can turn anxious employees into AI champions.”

Here’s the list:

  1. Communicate the AI vision. [My observation: Isn’t that what AI is supposed to deliver? Fewer employees, no health care costs, no retirement costs, and no excess personnel because AI is so darned effective?]
  2. Say, “I understand” and “Let’s talk about it.” [My observation: How long do psychological- and attitudinal-centric interactions take when there are fires to put out over an unhappy, really big customer’s complaint about your firm’s product or service?]
  3. Explain to the employee how AI will pay off for the employee who fears AI won’t work or will cost the person his/her job. [My observation: A middle manager can definitely talk around, rationalize, and lie to make the person’s fear go away. Then the middle manager will write up the issue and forward it to HR or a superior. We don’t need a weak person on our team, right?]
  4. “Walk the talk.” [My observation: That’s a variant of fake it until you make it. The modern middle manager will use AI, probably realize that an AI system can output a good enough response so the “walk the talk” person can do the “walk the walk” to the parking lot to drive home after being replaced by an AI agent.]
  5. Give employees training and a test. [My observation: Adults love going to online training sessions and filling in the on-screen form to capture trainee responses. Get the answers wrong, and there is an automated agent pounding emails to the failing employee to report to security, turn in his/her badge, and get escorted out of the building.]

These five modern management tips or insights are LinkedIn-grade output. Who will be the first to implement these at an AI company or a firm working hard to AI-ify its operations? Millions, I would wager.

Stephen E Arnold, September 16, 2025

Google Is Going to Race Penske in Court!

September 15, 2025

Dino 5 18 25Written by an unteachable dinobaby. Live with it.

How has smart software affected the Google? On the surface, we have the Code Red klaxons. Google presents big time financial results, so the sirens are drowned out by the cheers for big bucks. We have Google dodging problems with the Android and Chrome snares, so the sounds are like little chicks peeping in the eventide.

---

FYI: The Penske Outfits

  • Penske Corporation itself focuses on transportation, truck leasing, automotive retail, logistics, and motorsports.
  • Penske Media Corporation (PMC), a separate entity led by Jay Penske, owns major media brands like Rolling Stone and Billboard.

---

What’s actually going on is different, if the information in “Rolling Stone Publisher Sues Google Over AI Overview Summaries” is accurate. [Editor’s note: I love the over-over lingo, don’t you?] The write up states:

Google has insisted that its AI-generated search result overviews and summaries have not actually hurt traffic for publishers. The publishers disagree, and at least one is willing to go to court to prove the harm they claim Google has caused. Penske Media Corporation, the parent company of Rolling Stone and The Hollywood Reporter, sued Google on Friday over allegations that the search giant has used its work without permission to generate summaries and ultimately reduced traffic to its publications.

Site traffic metrics are an interesting discipline. What exactly are the log files counting? Automated pings, clicks, views, downloads, etc.? Google is the big gun in traffic, and it has legions of SEO people who are more like cheerleaders for making sites Googley, doing the things that Google wants, and pitching Google advertising to get sort of reliable traffic to a Web site.
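The counting ambiguity is real: a raw access log does not distinguish a reader from a bot unless you inspect the user-agent string, and even that is self-reported. A toy sketch of the kind of triage a log analyst does (the log lines and bot markers here are made up for illustration):

```python
import re
from collections import Counter

# Made-up access-log lines; the user agent is the last quoted field.
log_lines = [
    '1.2.3.4 - - [15/Sep/2025:10:00:01] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '5.6.7.8 - - [15/Sep/2025:10:00:02] "GET /feed HTTP/1.1" 200 2048 "-" "GPTBot/1.0"',
    '9.9.9.9 - - [15/Sep/2025:10:00:03] "GET /a HTTP/1.1" 200 128 "-" "ClaudeBot/1.0"',
    '1.2.3.4 - - [15/Sep/2025:10:00:04] "GET /b HTTP/1.1" 200 256 "-" "Mozilla/5.0"',
]

BOT_MARKERS = ("bot", "crawler", "spider")  # crude, self-reported signal

def classify(line):
    """Label a log line by its declared user agent; a bot can simply lie."""
    agent = re.findall(r'"([^"]*)"', line)[-1]
    return "bot" if any(m in agent.lower() for m in BOT_MARKERS) else "human?"

counts = Counter(classify(line) for line in log_lines)
print(counts)
```

Note the question mark on “human?”: a crawler that sends a browser-like user agent lands in that bucket, which is one reason traffic and referral numbers are so easy to argue about in court.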

The SEO crowd is busy inventing new types of SEO. Now one wants one’s weaponized content to turn up as a link, snippet, or footnote in an AI output. Heck, some outfits are pitching to put ads on the AI output page because money is the name of the game. Pay enough and the snippet or summary of the answer to the user’s prompt may contain a pitch for that item of clothing or electronic gadget one really wants to acquire. Psychographic ad matching is marvelous.

The write up points out that an outfit I thought was into auto racing and truck rentals but is now a triple threat in publishing has a different take on the traffic referral game. The write up says:

Penske claims that in recent years, Google has basically given publishers no choice but to give up access to its content. The lawsuit claims that Google now only indexes a website, making it available to appear in search, if the publisher agrees to give Google permission to use that content for other purposes, like its AI summaries. If you think you lose traffic by not getting clickthroughs on Google, just imagine how bad it would be to not appear at all.

Google takes a different position, probably baffled why a race car outfit is grousing. The write up reports:

A spokesperson for Google, unsurprisingly, said that the company doesn’t agree with the claims. “With AI Overviews, people find Search more helpful and use it more, creating new opportunities for content to be discovered. We will defend against these meritless claims,” Google spokesperson Jose Castaneda told Reuters.

Gizmodo, the source for the cited article about the truck rental outfit, has done some original research into traffic. I quote from the cited article:

Just for kicks, if you ask Google Gemini if Google’s AI Overviews are resulting in less traffic for publishers, it says, “Yes, Google’s AI Overview in search results appears to be resulting in less traffic for many websites and publishers. While Google has stated that AI Overviews create new opportunities for content discovery, several studies and anecdotal reports from publishers suggest a negative impact on traffic.”

I have some views on this situation, and I herewith present them to you:

  1. Google is calm on the outside but in crazy mode internally. The Googlers are trying to figure out how to keep revenues growing as referral traffic and online advertising undergo some modest change. Is the glacier calving? Yep, but it is modest because a glacier is big and the calf is small.
  2. The SEO intermediaries at the Google are communicating like Chatty Cathies to the SEO innovators. The result will be a series of shotgun marriages among the lucrative ménage à trois of Google’s ad machine, search engine optimization professional, and advertising services firms in order to lure advertisers to a special private island.
  3. The bean counters at Google are looking at their MBA course materials, exam notes for CPAs, and reading books about forensic accounting in order to make the money furnaces at Google hot using less cash as fuel. This, gentle reader, is a very, very difficult task. At another time, a government agency might be curious about the financial engineering methods, but at this time, attention is directed elsewhere I presume.

Net net: This is a troublesome point. Google has lots of lawyers and probably more cash to spend on fighting the race car outfit and its news publications. Did you know that the race outfit owns the definitive publication about heavy metal as well as Billboard magazine?

Stephen E Arnold, September 15, 2025

Shame, Stress, and Longer Hours: AI’s Gifts to the Corporate Worker

September 15, 2025

Office workers from the executive suites to entry-level positions have a new reason to feel bad about themselves. Fortune reports, “ ‘AI Shame’ Is Running Rampant in the Corporate Sector—and C-Suite Leaders Are Most Worried About Getting Caught, Survey Says.” Writer Nick Lichtenberg cites a survey of over 1,000 workers by SAP subsidiary WalkMe. We learn almost half (48.8%) of the respondents said they hide their use of AI at work to avoid judgement. The number was higher at 53.4% for those at the top—even though they use AI most often. But what about the generation that has entered the job force amid AI hype? We learn:

“Gen Z approaches AI with both enthusiasm and anxiousness. A striking 62.6% have completed work using AI but pretended it was all their own effort—the highest rate among any generation. More than half (55.4%) have feigned understanding of AI in meetings. … But only 6.8% report receiving extensive, time-consuming AI training, and 13.5% received none at all. This is the lowest of any age group.”

In fact, the study found, only 3.7% of entry-level workers received substantial AI training, compared to 17.1% of C-suite executives. The write-up continues:

“Despite this, an overwhelming 89.2% [of Gen Z workers] use AI at work—and just as many (89.2%) use tools that weren’t provided or sanctioned by their employer. Only 7.5% reported receiving extensive training with AI tools.”

So younger employees use AI more but receive less training. And, apparently, are receiving little guidance on how and whether to use these tools in their work. What could go wrong?

From executives to fresh hires and those in between, the survey suggests everyone is feeling the impact of AI in the workplace. Lichtenberg writes:

“AI is changing work, and the survey suggests not always for the better. Most employees (80%) say AI has improved their productivity, but 59% confess to spending more time wrestling with AI tools than if they’d just done the work themselves. Gen Z again leads the struggle, with 65.3% saying AI slows them down (the highest amount of any group), and 68% feeling pressure to produce more work because of it.”

In addition, more than half the respondents said AI training initiatives amounted to a second, stressful job. But doesn’t all that hard work pay off? Um, no. At least, not according to this report from MIT that found 95% of AI pilot programs at large companies fail. So why are we doing this again? Ask the investor class.

Cynthia Murrell, September 15, 2025

How Much Is That AI in the Window? A Lot

September 15, 2025

AI technology is expensive. Big Tech companies are aware of the rising costs, but the average organization is unaware of how much AI will make its budget skyrocket. The Kilo Code blog shares insights into AI’s soaring costs in “Future AI Bills Of $100K/YR Per Dev.”

Kilo recently broke the one-trillion-tokens-a-month barrier on OpenRouter for the first time. Other open source AI coding tools experienced serious growth too. Claude and Cursor “throttled” their users and encouraged them to use open source tools. These services had to be throttled because their developers didn’t anticipate that application inference costs would rise. Why did this happen?

“Application inference costs increased for two reasons: the frontier model costs per token stayed constant and the token consumption per application grew a lot. We’ll first dive into the reasons for the constant token price for frontier models and end with explaining the token consumption per application. The price per token for the frontier model stayed constant because of the increasing size of models and more test-time scaling. Test time scaling, also called long thinking, is the third way to scale AI…While the pre- and post-training scaling influenced only the training costs of models. But this test-time scaling increases the cost of inference. Thinking models like OpenAI’s o1 series allocate massive computational effort during inference itself. These models can require over 100x compute for challenging queries compared to traditional single-pass inference.”
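The arithmetic behind headline numbers like these is easy to reproduce. A back-of-envelope sketch follows; every figure in it is an assumption chosen for illustration, not a number from Kilo Code or any provider’s price list:

```python
# Back-of-envelope inference cost per developer. All inputs are assumptions.

price_per_million_tokens = 10.00   # blended $/1M tokens, frontier model (assumed)
tokens_per_day = 2_000_000         # heavy agentic coding use per developer (assumed)
thinking_multiplier = 5            # extra test-time "long thinking" compute (assumed)
work_days_per_year = 250

daily_cost = tokens_per_day / 1_000_000 * price_per_million_tokens * thinking_multiplier
annual_cost = daily_cost * work_days_per_year

print(f"per dev per day:  ${daily_cost:,.0f}")    # $100
print(f"per dev per year: ${annual_cost:,.0f}")   # $25,000
```

Swap the thinking multiplier from 5 to 20, closer to the 100x the quote warns challenging queries can require, and the annual figure crosses the $100K-per-developer mark in the post’s title.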

If organizations don’t want to be hit with expensive AI costs, they should consider using open source models. Open source models are designed to assist users instead of throttling them on the back end. That doesn’t even account for people expenses such as salaries and training.

Costs and customers’ willingness to pay escalating and unpredictable fees for AI may be a problem that the AI wizards cannot explain away. Those free and heavily discounted deals may deflate some AI balloons.

Whitney Grace, September 15, 2025

China Smart, US Dumb: The Baidu AI Service

September 12, 2025

It seems smart software is good for something. CNBC reports, “AI Avatars in China Just Proved They Are Ace Influencers: It Only Took a Duo 7 Hours to Rake in More than $7 Million.” Chinese tech firm Baidu collaborated with two human influencers on the project. Reporter Evelyn Cheng tells us:

“Luo Yonghao, one of China’s earliest and most popular live streamers, and his co-host Xiao Mu both used digital versions of themselves to interact with viewers in real time for well over six hours on Sunday on Baidu’s e-commerce livestreaming platform ‘Youxuan’, the Chinese tech company said. The session raked in 55 million yuan ($7.65 million). In comparison, Luo’s first livestream attempt on Youxuan last month, which lasted just over four hours, saw fewer orders for consumer electronics, food and other key products, Baidu said.”

The experiment highlights Baidu’s avatar technology, which can save marketing departments a lot of money. We learn:

“Luo’s and his co-host’s avatars were built using Baidu’s generative AI model, which learned from five years’ worth of videos to mimic their jokes and style, Wu Jialu, head of research at Luo’s other company, Be Friends Holding, told CNBC on Wednesday. … AI avatars can sharply reduce costs since companies don’t need to hire a large production team or a studio to livestream. The digital avatars can also stream nonstop without needing breaks. … [Wu] said that Baidu now offers the best digital human product currently available, compared to the early days of livestreaming e-commerce five or six years ago.”

Yes, the “early” days of five or six years ago, when the pandemic forced companies and workers to explore their online options. Both landed on livestreaming to generate sales and commissions. Now, it seems, companies can cut the human talent out of the equation. How efficient.

Cynthia Murrell, September 12, 2025
