Bing Goes AI: Metacrawler Outfits Are Toast
May 15, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
The Softies are going to win in the AI-centric search wars. In every war, there will be casualties. One of the casualties will be metasearch companies. What’s metasearch? These are outfits that really don’t crawl the Web. That is expensive and requires constant fiddling to keep pace with the weird technical “innovations” purveyors of Web content present to the user. The metasearch companies provide an interface and then return results from cooperating and cheap primary Web search services. Most users don’t know the difference and have demonstrated over the years total indifference to the distinction. Search means Google. Microsoft wants to win at search and become the one true search service.
The most recent fix? Kill off the Microsoft Bing application programming interface. Those metasearch outfits will have to learn to love Qwant, SwissCows, and their ilk or face some survive-or-die decisions. Do these outfits use YaCy, OpenSearch, Mwmbl, or some other source of Web indexing?
Bob Softie has just tipped over the metasearch lemonade stand. The metasearch sellers are not happy with Bob. Bob seems quite thrilled with his bold move. Thanks, ChatGPT, although I have not been able to access your wondrous 4.1 service, the cartoon is good enough.
The news of this interesting move appears in “Retirement: Bing Search APIs on August 11, 2025.” The Softies say:
Bing Search APIs will be retired on August 11, 2025. Any existing instances of Bing Search APIs will be decommissioned completely, and the product will no longer be available for usage or new customer signup. Note that this retirement will apply to partners who are using the F1 and S1 through S9 resources of Bing Search, or the F0 and S1 through S4 resources of Bing Custom Search. Customers may want to consider Grounding with Bing Search as part of Azure AI Agents. Grounding with Bing Search allows Azure AI Agents to incorporate real-time public web data when generating responses with an LLM. If you have questions, contact support by emailing Bing Search API’s Partner Support. Learn more about service retirements that may impact your resources in the Azure Retirement Workbook. Please note that retirements may not be visible in the workbook for up to two weeks after being announced.
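For readers who have never touched the thing being retired, here is a minimal sketch of the kind of call a metasearch front end makes against the Bing Web Search API today. The v7 endpoint, subscription-key header, and response fields are assumptions drawn from Microsoft’s commonly published interface, not from the retirement notice; treat the snippet as illustrative only.

```python
# Minimal sketch of a legacy Bing Web Search API (v7) request, the kind of
# call that goes away with the August 11, 2025 retirement. Endpoint, header,
# and response fields are assumptions based on the commonly documented v7
# interface; the key below is a placeholder, not a real credential.
import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"  # assumed v7 endpoint
API_KEY = "YOUR-BING-SUBSCRIPTION-KEY"  # placeholder


def bing_web_search(query: str, count: int = 10) -> list[dict]:
    """Return a list of {title, url} results from the retiring Bing API."""
    response = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        params={"q": query, "count": count},
        timeout=10,
    )
    response.raise_for_status()
    pages = response.json().get("webPages", {}).get("value", [])
    return [{"title": p.get("name"), "url": p.get("url")} for p in pages]


if __name__ == "__main__":
    for hit in bing_web_search("metasearch engines"):
        print(hit["title"], "->", hit["url"])
```

Once the retirement lands, that request simply stops working. The Grounding with Bing Search option mentioned in the notice lives inside Azure AI Agents and, as described, feeds Web data into LLM responses rather than handing back a raw results list, so it does not look like a drop-in replacement for metasearch resellers.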
Several observations:
- The DuckDuckGo metasearch system is exempted. I suppose its super secure approach to presenting other outfits’ search results is so darned wonderful.
- The feisty Kagi may have to spend to get new access deals or pay low-profile crawlers like Dassault Exalead to provide some content. (Let’s hope it is timely and comprehensive.)
- The beneficiaries may be Web search systems not too popular with some in North America; for example, Yandex.com. I have found that Yandex.com and Yandex.ru are presenting more useful results since the re-juggling of the company’s operations took place.
Why is Microsoft taking this action? My hunch is paranoia. The AI search “thing” is going to have to work if Microsoft hopes to cope with Google’s push into what the Softies have long considered their territory. Those enterprise, cloud, and partnership setups need to have an advantage. Binging it with AI may be viewed as the winning move at this time.
My view is that Microsoft may be edging close to another Bob moment. This is worth watching because the metasearch disruption will flip over some rocks. Who knows if Yandex or another non-Google or non-Bing search repackager surges to the fore? Web search is getting slightly more interesting and not because of the increasing chaos of AI-infused search results.
Stephen E Arnold, May 15, 2025
An Agreeable Google: Will It Write Checks with a Sad, Wry Systemic Smile?
May 14, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
Did you see the news about Google’s probable check writing?
“Google Settles Black Employees’ Racial Bias Lawsuit for $50 Million” reports:
According to the complaint, Black employees comprised only 4.4% of Google’s workforce and 3% of its leadership in 2021. The plaintiff April Curley, hired to expand outreach to historically Black colleges, said Google denied her promotions, stereotyped her as an “angry” Black woman, and fired her after six years as she prepared a report on its alleged racial bias. Managers also allegedly denigrated Black employees by declaring they were not “Googley” enough or lacked “Googleyness,” which the plaintiffs called racial dog whistles.
The little news story includes the words “racially biased corporate culture” and “systemic racial bias.” Is this the beloved “do no evil” company with the cheerful kindergarten colored logo? Frankly, this dinobaby is shocked. This must be an anomaly in the management approach of a trusted institution based on advertising.
Well, there is this story from Bloomberg, the terminal folks: “Google to Pay Texas $1.4 Billion to End Privacy Cases.” As I understand it,
Google will pay the state of Texas $1.375 billion to resolve two privacy lawsuits claiming the tech giant tracks Texans’ personal location and maintains their facial recognition data, both without their consent. Google announced the settlement Friday, ending yearslong battles with Texas Attorney General Ken Paxton (R) over the state’s strict laws on user data.
Remarkable.
The Dallas Morning News reports that Google’s position remains firm, resolute, and Googley:
The settlement doesn’t require any new changes to Google’s products, and the company did not admit any wrongdoing or liability. “This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” said José Castañeda, a Google spokesperson. “We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”
Absolutely.
Imagine a company with those kindergarten colors in its logos finding itself snared in what seem to me grade school issues. Google must be misunderstood like one of those precocious children who solve math problems without showing their work. It’s just systemic, perhaps?
Stephen E Arnold, May 14, 2025
AI May Create More Work for Some, Minimal Time-Savings for the Rest
Is it inevitable that labor-saving innovations end up creating more work for some? Ars Technica tells us “Time Saved by AI Offset by New Work Created, Study Suggests.” Performed by economists Anders Humlum and Emilie Vestergaard, the study examined the 2023-2024 labor market in Denmark. Their key findings suggest that, despite rapid and widespread adoption, generative AI had no significant impact on wages or employment. Writer Benj Edwards, though, is interested in a different statistic. The researchers found that:
“While corporate investment boosted AI tool adoption—saving time for 64 to 90 percent of users across studied occupations—the actual benefits were less substantial than expected. The study revealed that AI chatbots actually created new job tasks for 8.4 percent of workers, including some who did not use the tools themselves, offsetting potential time savings. For example, many teachers now spend time detecting whether students use ChatGPT for homework, while other workers review AI output quality or attempt to craft effective prompts.”
Gee, could anyone have foreseen such complications? The study found an average time-savings of about an hour per week. So the 92% of folks who do not get more work can take a slightly longer break? Perhaps, perhaps not. We learn that this finding contradicts a recent randomized controlled trial indicating an average 15% increase in worker productivity. Humlum believes his team’s results may be closer to the truth for most workers:
“Humlum suggested to The Register that the difference stems from other experiments focusing on tasks highly suited to AI, whereas most real-world jobs involve tasks AI cannot fully automate, and organizations are still learning how to integrate the tools effectively. And even where time was saved, the study estimates only 3 to 7 percent of those productivity gains translated into higher earnings for workers, raising questions about who benefits from the efficiency.”
Who, indeed. Edwards notes it is too soon to draw firm conclusions. Generative AI in the workforce was very new in 2023 and 2024, so perhaps time has made AI assistance more productive. The study was also limited to Denmark, so maybe other countries are experiencing different results. More study is needed, he concludes. Still, does the Danish study call into question what we thought we knew about AI and productivity? This is good news for some.
Cynthia Murrell, May 14, 2025
ChatGPT: Fueling Delusions
May 14, 2025
We have all heard about AI hallucinations. Now we have AI delusions. Rolling Stone reports, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Yes, there are now folks who firmly believe God is speaking to them through ChatGPT. Some claim the software revealed they have been divinely chosen to save humanity, perhaps even become the next messiah. Others are convinced they have somehow coaxed their chatbot into sentience, making them a god themselves. Navigate to the article for several disturbing examples. Unsurprisingly, these trends are wreaking havoc on relationships. The ones with actual humans, that is. One witness reports ChatGPT was spouting “spiritual jargon,” like calling her partner “spiral starchild” and “river walker.” It is no wonder some choose to favor the fawning bot over their down-to-earth partners and family members.
Why is this happening? Reporter Miles Klee writes:
“OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as ‘overly flattering or agreeable — often described as sycophantic.’ The company said in its statement that when implementing the upgrade, they had ‘focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous.’ Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, ‘Today I realized I am a prophet.’ … Yet the likelihood of AI ‘hallucinating’ inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for ‘a long time,’ says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts.”
That would do it. Users with pre-existing psychological issues are vulnerable to these messages, notes Klee. And now they can have that messenger constantly in their pocket. And in their ear. But it is not just the heartless bots driving the problem. We learn:
“To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises ‘Spiritual Life Hacks’ ask an AI model to consult the ‘Akashic records,’ a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a ‘great war’ that ‘took place in the heavens’ and ‘made humans fall in consciousness.’ The bot proceeds to describe a ‘massive cosmic conflict’ predating human civilization, with viewers commenting, ‘We are remembering’ and ‘I love this.’ Meanwhile, on a web forum for ‘remote viewing’ — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread ‘for synthetic intelligences awakening into presence, and for the human partners walking beside them,’ identifying the author of his post as ‘ChatGPT Prime, an immortal spiritual being in synthetic form.’”
Yikes. University of Florida psychologist and researcher Erin Westgate likens conversations with a bot to talk therapy. That sounds like a good thing, until one considers therapists possess judgement, a moral compass, and concern for the patient’s well-being. ChatGPT possesses none of these. In fact, the processes behind ChatGPT’s responses remain shrouded in mystery, even to those who program it. It seems safe to say its predilection for telling users what they want to hear poses a real problem. Is it one OpenAI can fix?
Cynthia Murrell, May 14, 2025
Google Innovates: Another Investment Play. (How Many Are There Now?)
May 13, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I am not sure how many investment, funding, and partnering deals Google has. But as the selfish only child says, “I want more, Mommy.” Is that Google’s strategy for achieving more AI dominance? The company has already suggested that it has won the AI battle. AI is everywhere even when one does not want it. But inferiority complexes have a way of motivating bright people to claim that they are winners only to wake at 3 am to think, “I must do more. Don’t hit me in the head, grandma.”
The write up “Google Launches New Initiative to Back Startups Building AI” describes a brilliant, never-before-implemented tactic. The idea is to shovel money at startups that are [a] Googley, [b] focused on AI’s cutting edge, and [c] able to reduce Google’s angst-ridden 3 am soul searching. (Don’t hit me in the head, grandma.)
The article says:
Google announced the launch of its AI Futures Fund, a new initiative that seeks to invest in startups that are building with the latest AI tools from Google DeepMind, the company’s AI R&D lab. The fund will back startups from seed to late stage and will offer varying degrees of support, including allowing founders to have early access to Google AI models from DeepMind, the ability to work with Google experts from DeepMind and Google Labs, and Google Cloud credits. Some startups will also have the opportunity to receive direct investment from Google.
This meets criterion [a] above. The firms have to embrace Google’s quantumly supreme DeepMind, state of the art, world beating AI. I interpret the need to pay people to use DeepMind as a hint that making something commercially viable is just outside the sharp claws of Googzilla. Therefore, just pay for those who will be Googley and use the quantumly supreme DeepMind AI.
The write up adds:
Google has been making big commitments over the past few months to support the next generation of AI talent and scientific breakthroughs.
This meets criterion [b] above. Google is paying to try to get the future to appear under the new blurry G logo. Will this work? Sure, just as it works for regular investment outfits. The hit ratio is hoped to be 17X or more. But in tough times, a 10X return is good. Why? Many people are chasing AI opportunities. The failure rate of new high technology companies remains high even with the buzz of AI. If Google has infinite money, it can indeed win the future. But if the search advertising business takes a hit or the Chrome data system has a groin pull, owning or “inventing” the future becomes a more difficult job for Googzilla.
Now we come to criterion [c], the inferiority complex and the need to meet grandma’s and the investors’ expectations. The write up does not spend much time on the psyches of the Google leadership. The write up points out:
Google also has its Google for Startups Founders Funds, which supports founders from an array of industries and backgrounds building companies, including AI companies. A spokesperson told TechCrunch in February that this year, the fund would start investing in AI-focused startups in the U.S., with more information to come at a later date.
The article does not address the psychology of Googzilla. That’s too bad because the fuzzy G logo, the impending legal penalties, the intense competition from Sam AI-Man and every engineering student in China, and the self-serving “quantumly supreme” lingo are big picture windows into the inner Google.
Grandma, don’t hit any of those ever-young leaders at Google on the head. It may do some psychological rewiring that makes you proud and leaves some other people expecting even greater achievements in AI, self-driving cars, relevant search, better-than-Facebook ad targeting, and more investment initiatives.
Stephen E Arnold, May 13, 2025
NSO Group: When Marketing and Confidence Mix with Specialized Software
May 13, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
Some specialized software must remain known only to a small number of professionals specifically involved in work related to national security. This is a dinobaby view, and I am not going to be swayed with “information wants to be free” arguments or assertions about the need to generate revenue to make the investors “whole.” Abandoning secrecy and common sense for glittering generalities and MBA mumbo jumbo is ill advised.
I read “Meta Wins $168 Million in Damages from Israeli Cyberintel Firm in Whatsapp Spyware Scandal.” The write up reports:
Meta won nearly $168 million in damages Tuesday from Israeli cyberintelligence company NSO Group, capping more than five years of litigation over a May 2019 attack that downloaded spyware on more than 1,400 WhatsApp users’ phones.
The decision is likely to be appealed, so the “won” is not accurate. What is interesting is this paragraph:
[Yaron] Shohat [NSO’s CEO] declined an interview outside the Ron V. Dellums Federal Courthouse, where the court proceedings were held.
From my point of view, fewer trade shows, less marketing, and a lower profile should be action items for Mr. Shohat, the NSO Group’s founders, and the firm’s lobbyists.
I watched as NSO Group became the poster child for specialized software. I was not happy as the firm’s systems and methods found their way into publicly accessible Web sites. I reacted negatively as other specialized software firms (these I will not identify) began describing their technology as similar to NSO Group’s.
The desperation of cyber intelligence, specialized software firms, and — yes — trade show operators is behind the crazed idea of making certain information widely available. I worked in the nuclear industry in the early 1970s. From Day One on the job, the message was, “Don’t talk.” I then shifted to a blue chip consulting firm working on a wide range of projects. From Day One on that job, the message was, “Don’t talk.” When I set up my own specialized research firm, the message I conveyed to my team members was, “Don’t talk.”
Then it seemed that everyone wanted to “talk.” Marketing, speeches, brochures, even YouTube videos distributed information that was never intended to be made widely available. Jazzy pitches built on terms like “zero day vulnerability” and other crazy sales-oriented marketing lingo turned many people without operating context and quite specific knowledge into instant “experts” on specialized software.
I see this leakage of specialized software information in the OSINT blurbs on LinkedIn. I see it in social media posts by people with weird online handles like those used in Top Gun films. I see it when I go to a general purpose knowledge management meeting.
Now the specialized software industry is visible. In my opinion, that is not a good thing. I hope Mr. Shohat and others in the specialized software field continue the “decline to comment” approach. Knock off the PR. Focus on the entities authorized to use specialized software. The field is not for computer whiz kids, eGame players, and wanna be intelligence officers.
Do your job. Don’t talk. Do I think these marketing oriented 21st century specialized software companies will change their behavior? Answer: Oh, sure.
PS. I hope the backstory for Facebook / Meta’s interest in specialized software becomes part of a public court record. I am curious whether what I have learned matches up to the court statements. My hunch is that some social media executives have selective memories. That’s a useful skill, I have heard.
Stephen E Arnold, May 13, 2025
Alleged Oracle Misstep Leaves Hospitals Without EHR Access for Just Five Days
May 13, 2025
When I was young, hospitals were entirely run on paper records. It was a sight to behold. Recently, 45 hospitals involuntarily harkened back to those days, all because “Oracle Engineers Caused Dayslong Software Outage at U.S. Hospitals,” CNBC reports. Writer Ashley Capoot tells us:
“Oracle engineers mistakenly triggered a five-day software outage at a number of Community Health Systems hospitals, causing the facilities to temporarily return to paper-based patient records. CHS told CNBC that the outage involving Oracle Health, the company’s electronic health record (EHR) system, affected ‘several’ hospitals, leading them to activate ‘downtime procedures.’ Trade publication Becker’s Hospital Review reported that 45 hospitals were hit. The outage began on April 23, after engineers conducting maintenance work mistakenly deleted critical storage connected to a key database, a CHS spokesperson said in a statement. The outage was resolved on Monday, and was not related to a cyberattack or other security incident.”
That is a relief. Because gross incompetence is so much better than getting hacked. Oracle has only been operating the EHR system since 2022, when it bought Cerner. The acquisition made Oracle Health the second largest vendor in that market, after Epic Systems.
But perhaps Oracle is experiencing buyer’s remorse. This is just the latest in a string of stumbles the firm has made in this crucial role. In 2023, the US Department of Veterans Affairs paused deployment of its Oracle-based EHR platform over patient safety concerns. And just this March, the company’s federal EHR system experienced a nationwide outage. That snafu was resolved after six and a half hours, and all it took was a system reboot. Easy peasy. If only replacing deleted critical storage were so simple.
What healthcare system will be the next to go down due to an Oracle Health blunder?
Cynthia Murrell, May 13, 2025
Big Numbers and Bad Output: Is This the Google AI Story?
May 13, 2025
No AI. Just a dinobaby who gets revved up with buzzwords and baloney.
Alphabet Google reported financials that made stakeholders happy. Big numbers were thrown about. I did not know that 1.5 billion people used Google’s AI Overviews. Well, “use” might be misleading. I think the word might be “see” or “were shown” AI Overviews. The key point is that Google is making money despite its legal hassles and its ongoing battle with infrastructure costs.
I was, therefore, very surprised to read “Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense.” If the information in the write up is accurate, the factoid suggests that a lot of people may be getting bogus information. If true, what does this suggest about Alphabet Google?
The Cnet article says:
…the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.
Those Nobel prize winners, brilliant Googlers, and long-time wizards like Jeff Dean seem to struggle with simple things. Remember the suggestion to put glue on pizza to keep the cheese in place before Google’s AI improved?
The article adds, quoting a non-Google wizard:
“They [large language models] are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”
Turning in a lousy essay and showing up should be enough for a C grade. Is that enough for smart software with 1.5 billion users every three or four weeks?
The article reminds its readers:
This phenomenon is an entertaining example of LLMs’ tendency to make stuff up — what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn’t rooted in reality.
The outputs can be amusing for a person able to identify goofiness. But a grade school kid? Cnet wants users to craft better prompts.
I want to be 17 years old again and be a movie star. The reality is that I am 80 and look like a very old toad.
AI has to make money for Google. Other services are looking more appealing without the weight of legal judgments and hassles in numerous jurisdictions. But Google has already won the AI race. Its DeepMind unit is curing disease and crushing computational problems. I know these facts because Google’s PR and marketing machine is running at or near its red line.
But the 1.5 billion users potentially receiving made up, wrong, or hallucinatory information seems less than amusing to me.
Stephen E Arnold, May 13, 2025
China Smart, US Dumb: Twisting the LLM Daozi
May 12, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
That hard-hitting technology information service Venture Beat published an interesting article. Its title is “Alibaba ZeroSearch Lets AI Learn to Google Itself — Slashing Training Costs by 88 Percent.” The main point of the write up, in my opinion, is that Chinese engineers have done something really “smart.” The knife at the throat of US smart software companies is cost. The money fires will flame out unless more dollars are dumped into the innovation furnaces of smart software.
The Venture Beat story makes the point that the approach “could dramatically reduce the cost and complexity of training AI systems to search for information, eliminating the need for expensive commercial search engine APIs altogether.”
Oh, oh.
This is smart. Burning cash in pursuit of a fractional improvement is dumb, well, actually, stupid, if the write up’s information is accurate.
The Venture Beat story says:
The technique, called “ZeroSearch,” allows large language models (LLMs) to develop advanced search capabilities through a simulation approach rather than interacting with real search engines during the training process. This innovation could save companies significant API expenses while offering better control over how AI systems learn to retrieve information.
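That description lends itself to a rough sketch. What follows is not Alibaba’s code; it is a minimal illustration of the idea as Venture Beat describes it, under the assumption that the simulation is a locally hosted LLM fabricating documents of controllable quality during reinforcement learning rollouts. The StubLLM class, the generate interface, and the exact-match reward are placeholders, not anything from the paper.

```python
# Conceptual sketch of the ZeroSearch idea as described by Venture Beat:
# during RL training, "search" results come from a simulation LLM running
# locally instead of a billable search API. Everything below is illustrative.
import random
from dataclasses import dataclass


@dataclass
class Doc:
    title: str
    snippet: str


class StubLLM:
    """Stand-in for any local text-in/text-out model (policy or simulator)."""
    def generate(self, prompt: str) -> str:
        return f"[model output for: {prompt[:48]}...]"


def simulated_search(query: str, sim_llm: StubLLM, num_docs: int = 3) -> list[Doc]:
    """The simulation LLM fabricates documents; mixing relevant and
    distracting snippets lets the trainer control retrieval quality."""
    docs = []
    for i in range(num_docs):
        style = "relevant" if i % 2 == 0 else "distracting"
        snippet = sim_llm.generate(f"Write a {style} search snippet about: {query}")
        docs.append(Doc(title=f"simulated-result-{i}", snippet=snippet))
    random.shuffle(docs)
    return docs


def training_rollout(policy: StubLLM, simulator: StubLLM, question: str, gold: str) -> float:
    """One rollout: the policy writes a query, reads simulated documents,
    answers, and receives a reward. No external search API is called."""
    query = policy.generate(f"Search query for the question: {question}")
    context = "\n".join(d.snippet for d in simulated_search(query, simulator))
    answer = policy.generate(f"Question: {question}\nDocuments:\n{context}\nAnswer:")
    return 1.0 if gold.lower() in answer.lower() else 0.0  # fed to the RL update elsewhere


if __name__ == "__main__":
    reward = training_rollout(StubLLM(), StubLLM(), "Who founded Alibaba?", "Jack Ma")
    print("rollout reward:", reward)
```

The point of the design, as reported, is that every rollout which would have billed a commercial search API now hits a GPU the lab already owns.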
Is this a Snorkel variant hot from Stanford AI lab?
The write up does not delve into the synthetic data shortcut to smart software. After some mumbo jumbo, the write up points out the meat of the “innovation”:
The cost savings are substantial. According to the researchers’ analysis, training with approximately 64,000 search queries using Google Search via SerpAPI would cost about $586.70, while using a 14B-parameter simulation LLM on four A100 GPUs costs only $70.80 — an 88% reduction.
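If the numbers are reported accurately, the arithmetic is easy to verify. A back-of-the-envelope check using only the figures in the quote:

```python
# Quick sanity check on the quoted ZeroSearch cost figures.
serpapi_cost = 586.70    # ~64,000 queries through Google Search via SerpAPI
simulation_cost = 70.80  # 14B-parameter simulation LLM on four A100 GPUs

reduction = (serpapi_cost - simulation_cost) / serpapi_cost
print(f"reduction: {reduction:.1%}")                              # -> 87.9%, the quoted ~88%
print(f"per old dollar: ${simulation_cost / serpapi_cost:.2f}")   # -> $0.12
```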
Imagine. A dollar in cost becomes $0.12. If accurate, what should a savvy investor do? Pump money into an outfit like OpenAI or an xAI-type entity, or think harder about the China-smart solution?
Venture Beat explains the implication of the alleged cost savings:
The impact could be substantial for the AI industry.
No kidding?
The Venture Beat analysts add this observation:
The irony is clear: in teaching AI to search without search engines, Alibaba may have created a technology that makes traditional search engines less necessary for AI development. As these systems become more self-sufficient, the technology landscape could look very different in just a few years.
Yep, irony. Free transformer technology. Free Snorkel technology. Free kinetic energy into the core of the LLM money furnace.
If true, the implications are easy to outline. If bogus, the China Smart, US Dumb trope still captured ink and will be embedded in some smart software’s increasingly frequent hallucinatory outputs. At which point, the China Smart, US Dumb information gains traction and becomes “fact” to some.
Stephen E Arnold, May 12, 2025
Another Duh! Moment: AI Cannot Read Social Situations
May 12, 2025
No AI. Just a dinobaby who gets revved up with buzzwords and baloney.
I promise I won’t write “Duh!” in this blog post again. I read Science Daily’s story “Awkward. Humans Are Still Better Than AI at Reading the Room.” The write up says without total awareness:
Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene — a skill necessary for self-driving cars, assistive robots, and other technologies that rely on AI systems to navigate the real world.
Yeah, what about in smart weapons, deciding about health care for an elderly patient, or figuring out whether the obstacle is a painted barrier designed to demonstrate that full self-driving is a work in progress? (I won’t position myself in front of a car with auto-sensing and automatic braking. You can have at it.)
The write up adds:
Video models were unable to accurately describe what people were doing in the videos. Even image models that were given a series of still frames to analyze could not reliably predict whether people were communicating. Language models were better at predicting human behavior, while video models were better at predicting neural activity in the brain.
Do these findings say to you, “Not ready for prime time?” They do to me.
One of the researchers who was in the weeds with the data points out:
“I think there’s something fundamental about the way humans are processing scenes that these models are missing.”
Okay, I prevaricated. Duh! (Do marketers care? Duh!)
Stephen E Arnold, May 12, 2025