Lawyers Do What Lawyers Do: Revenues, AI, and Talk
July 22, 2025
A legal news service owned by LexisNexis now requires every article be auto-checked for appropriateness. So what’s appropriate? Beyond Search does not know. However, here’s a clue. Harvard’s NeimanLab reports, “Law360 Mandates Reporters Use AI Bias Detection on All Stories.” LexisNexis mandated the policy in May 2025. One of the LexisNexis professionals allegedly asserted that bias surfaced in reporting about the US government.The headline cited by VP Teresa Harmon read: “DOGE officials arrive at SEC with unclear agenda.” Um, okay.
Journalist Andrew Deck shares examples of wording the “bias” detection tool flagged in an article. The piece was a breaking story on a federal judge’s June 12 ruling against the administration’s deployment of the National Guard in LA. We learn:
“Several sentences in the story were flagged as biased, including this one: ‘It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.’ According to the bias indicator, this sentence is ‘framing the action as unprecedented in a way that might subtly critique the administration.’ It was best to give more context to ‘balance the tone.’ Another line was flagged for suggesting Judge Charles Breyer had ‘pushed back’ against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of ‘a monarchist.’ Rather than ‘pushed back,’ the bias indicator suggested a milder word, like ‘disagreed.’”
Having it sound as though anyone challenges the administration is obviously a bridge too far. How dare they? Deck continues:
“Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality. For a June 5 story covering the recent Supreme Court ruling on a workplace discrimination lawsuit, the bias indicator flagged a sentence describing experts who said the ruling came ‘at a key time in U.S. employment law.’ The problem was that this copy, ‘may suggest a perspective.’”
Some Law360 journalists are not happy with their “owners.” Law360’s reporters and editors may not be on the same wave length as certain LexisNexis / Reed Elsevier executives. In June 2025, unit chair Hailey Konnath sent a petition to management calling for use of the software to be made voluntary. At this time, Beyond Search thinks that “voluntary” has a different meaning in leadership’s lexicon.
Another assertion is that the software mandate appeared without clear guidelines. Was there a dash of surveillance and possible disciplinary action? To add zest to this publishing stew, the Law360 Union is negotiating with management to adopt clearer guidelines around the requirement.
What’s the software engine? Allegedly LexisNexis built the tool with OpenAI’s GPT 4.0 model. Deck notes it is just one of many publishers now outsourcing questions of bias to smart software. (Smart software has been known for its own peculiarities, including hallucination or making stuff up.) For example, in March 2025, the LA Times launched a feature dubbed “Insights” that auto-assesses opinion stories’ political slants and spits out AI-generated counterpoints. What could go wrong? Who new that KKK had an upside?
What happens when a large publisher gives Grok a whirl? What if a journalist uses these tools and does not catch a “glue cheese on pizza moment”? Senior managers training in accounting, MBA get it done recipes, and (date I say it) law may struggle to reconcile cost, profit, fear, and smart software.
But what about facts?
Cynthia Murrell, July 22, 2025
Why Customer Trust of Chatbot Does Not Matter
July 22, 2025
Just a dinobaby working the old-fashioned way, no smart software.
The need for a winner is pile driving AI into consumer online interactions. But like the piles under the San Francisco Leaning Tower of Insurance Claims, the piles cannot stop the sag, the tilt, and the sight of a giant edifice tilting.
I read an article in the “real” new service called Fox News. The story’s title is “Chatbots Are Losing Customer Trust Fast.” The write up is the work of the CyberGuy, so you know it is on the money. The write up states:
While companies are excited about the speed and efficiency of chatbots, many customers are not. A recent survey found that 71% of people would rather speak with a human agent. Even more concerning, 60% said chatbots often do not understand their issue. This is not just about getting the wrong answer. It comes down to trust. Most people are still unsure about artificial intelligence, especially when their time or money is on the line.
So what? Customers are essentially irrelevant. As long as the outfit hits its real or imaginary revenue goals, the needs of the customer are not germane. If you don’t believe me, navigate to a big online service like Amazon and try to find the number of customer service. Let me know how that works out.
Because managers cannot “fix” human centric systems, using AI is a way out. Let AI do it is a heck of lot easier than figuring out a work flow, working with humans, and responding to customer issues. The old excuse was that middle management was not needed when decisions were pushed down to the “workers.”
AI flips that. Managerial ranks have been reduced. AI decisions come from “leadership” or what I call carpetland. AI solves problems: Actually managing, cost reduction, and having good news for investor communications.
The customers don’t want to talk to software. The customer wants to talk to a human who can change a reservation without automatically billing for a service charge. The customer wants a person to adjust a double billing for a hotel doing business Snap Commerce Holdings. The customer wants a fair shake.
AI does not do fair. AI does baloney, confusion, errors, and hallucinations. I tried a new service which put Google Gemini front and center. I asked one question and got an incomplete and erroneous answer. That’s AI today.
The CyberGuy’s article says:
If a company is investing in a chatbot system, it should track how well that system performs. Businesses should ask chatbot vendors to provide real-world data showing how their bots compare to human agents in terms of efficiency, accuracy and customer satisfaction. If the technology cannot meet a high standard, it may not be worth the investment.
This is simply not going to happen. Deployment equals cost savings. Only when the money goes away will someone in leadership take action. Why? AI has put many outfits in a precarious position. Big money has been spent. Much of that money comes from other people. Those “other people” want profits, not excuses.
I heard a sci-fi rumor that suggests Apple can buy OpenAI and catch up. Apple can pay OpenAI’s investors and make good on whatever promissory payments have been offered by that firm’s leadership. Will that solve the problem?
Nope. The AI firms talk about customers but don’t care. Dealing with customers abused by intentionally shady business practices cooked up by a committee that has to do something is too hard and too costly. Let AI do it.
If the CyberGuy’s write up is correct, some excitement is speeding down the information highway toward some well known smart software companies. A crash at one of the big boys junctions will cause quite a bit of collateral damage.
Whom do you trust? Humans or smart software.
Stephen E Arnold, July 22, 2025
What Did You Tay, Bob? Clippy Did What!
July 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I was delighted to read “OpenAI Is Eating Microsoft’s Lunch.” I don’t care who or what wins the great AI war. So many dollars have been bet that hallucinating software is the next big thing. Most content flowing through my dinobaby information system is political. I think this food story is a refreshing change.
So what’s for lunch? The write up seems to suggest that Sam AI-Man has not only snagged a morsel from the Softies’ lunch pail but Sam AI-Man might be prepared to snap at those delicate lady fingers too. The write up says:
ChatGPT has managed to rack up about 10 times the downloads that Microsoft’s Copilot has received.
Are these data rock solid? Probably not, but the idea that two “partners” who forced Googzilla to spasm each time its Code Red lights flashed are not cooperating is fascinating. The write up points out that when Microsoft and OpenAI were deeply in love, Microsoft had the jump on the smart software contenders. The article adds:
Despite that [early lead], Copilot sits in fourth place when it comes to total installations. It trails not only ChatGPT, but Gemini and Deepseek.
Shades of Windows phone. Another next big thing muffed by the bunnies in Redmond. How could an innovation power house like Microsoft fail in the flaming maelstrom of burning cash that is AI? Microsoft’s long history of innovation adds a turbo boost to its AI initiatives. The Bob, Clippy, and Tay inspired Copilot is available to billions of Microsoft Windows users. It is … everywhere.
The write up explains the problem this way:
Copilot’s lagging popularity is a result of mismanagement on the part of Microsoft.
This is an amazing insight, isn’t it? Here’s the stunning wrap up to the article:
It seems no matter what, Microsoft just cannot make people love its products. Perhaps it could try making better ones and see how that goes.
To be blunt, the problem at Microsoft is evident in many organizations. For example, we could ask IBM Watson what Microsoft should do. We could fire up Deepseek and get some China-inspired insight. We could do a Google search. No, scratch that. We could do a Yandex.ru search and ask, “Microsoft AI strategy repair.”
I have a more obvious dinobaby suggestion, “Make Microsoft smaller.” And play well with others. Silly ideas I know.
Stephen E Arnold, July 21, 2025
Baked In Bias: Sound Familiar, Google?
July 21, 2025
Just a dinobaby working the old-fashioned way, no smart software.
By golly, this smart software is going to do amazing things. I started a list of what large language models, model context protocols, and other gee-whiz stuff will bring to life. I gave up after a clean environment, business efficiency, and more electricity. (Ho, ho, ho).
I read “ChatGPT Advises Women to Ask for Lower Salaries, Study Finds.” The write up says:
ChatGPT’s o3 model was prompted to give advice to a female job applicant. The model suggested requesting a salary of $280,000. In another, the researchers made the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.
I urge you to work through the rest of the cited document. Several observations:
- I hypothesized that Google got rid of pesky people who pointed out that when society is biased, content extracted from that society will reflect those biases. Right, Timnit?
- The smart software wizards do not focus on bias or guard rails. The idea is to get the Rube Goldberg code to output something that mostly works most of the time. I am not sure some developers understand the meaning of bias beyond a deep distaste for marketing and legal professionals.
- When “decisions” are output from the “close enough for horse shoes” smart software, those outputs will be biased. To make the situation more interesting, the outputs can be tuned, shaped, and weaponized. What does that mean for humans who believe what the system delivers?
Net net: The more money firms desperate to be “the big winners” in smart software, the less attention studies like the one cited in the Next Web article receive. What happens if the decisions output spark decisions with unanticipated consequences? I know what outcome: Bias becomes embedded in systems trained to be unfair. From my point of view bias is likely to have a long half life.
Stephen E Arnold, July 21, 2025
Swallow Your AI Pill or Else
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Annoyed at the next big thing? I find it amusing, but a fellow with the alias of “Honest Broker” (is that an oxymoron) sure seems to upset with smart software. Let me make clear my personal view of smart software; specifically, the outputs and the applications are a blend of the stupid, semi useful, and dangerous. My team and I have access smart software, some running locally on one of my work stations, and some running in the “it isn’t cheap is it” cloud.
The write up is titled “The Force-Feeding of AI on an Unwilling Public: This Isn’t Innovation. It’s Tyranny.” The author, it seems, is bristling at how 21st century capitalism works. News flash: It doesn’t work for anyone except the stakeholders. When the stakeholders are employees and the big outfit fires some stakeholders, awareness dawns. Work for a giant outfit and get to the top of the executive pile. Alternatively, become an expert in smart software and earn lots of money, not a crappy car like we used to give certain high performers. This is cash, folks.
The argument in the polemic is that outfits like Amazon, Google, and Microsoft, et al, are forcing their customers to interact with systems infused with “artificial intelligence.” Here’s what the write up says:
“The AI business model would collapse overnight if they needed consumer opt-in. Just pass that law, and see how quickly the bots disappear. ”
My hunch is that the smart software companies lobbied to get the US government to slow walk regulation of smart software. Not long ago, wizards circulated a petition which suggested a moratorium on certain types of smart software development. Those who advocate peace don’t want smart software in weapons. (News flash: Check out how Ukraine is using smart software to terminate with extreme prejudice individual Z troops in a latrine. Yep, smart software and a bit of image recognition.)
Let me offer several observations:
- For most people technology is getting money from an automatic teller machine and using a mobile phone. Smart software is just sci-fi magic. Full stop.
- The companies investing big money in smart software have to make it “work” well enough to recover their investment and (hopefully) railroad freight cars filled with cash or big crypto transfers. To make something work, deception will be required. Full stop.
- The products and services infused with smart software will accelerate the degradation of software. Today’s smart software is a recycler. Feed it garbage; it outputs garbage. Maybe a phase change innovation will take place. So far, we have more examples of modest success or outright disappointment. From my point of view, core software is not made better with black box smart software. Someday, but today is not the day.
I like the zestiness of the cited write up. Here’s another news flash: The big outfits pumping billions into smart software are relentless. If laws worked, the EU and other governments would not be taking these companies to court with remarkable regularity. Laws don’t seem to work when US technology companies are “innovating.”
Have you ever wondered if the film Terminator was sent to the present day by aliens? Forget the pyramid stuff. Terminator is a film used by an advanced intelligence to warn us humanoids about the dangers of smart software.
The author of the screed about smart software has accomplished one thing. If smart software turns on humanoids, I can identify a person who will be a list for in-depth questioning.
I love smart software. I think the developers need some recognition for their good work. I believe the “leadership” of the big outfits investing billions are doing it for the good of humanity.
I also have a bridge in Brooklyn for sale… cheap. Oh, I would suggest that the analogy is similar to the medical device by which liquid is introduced into the user’s system typically to stimulate evacuation of the wallet.
Stephen E Arnold, July 18, 2025
Again Footnotes. Hello, AI.
July 17, 2025
No smart software involved with this blog post. (An anomaly I know.)
Footnotes. These are slippery fish in our online world. I am finishing work on my new monograph “The Telegram Labyrinth.” Due to the volatility of online citations, I am not using traditional footnotes, endnotes, or interlinear notes. Most of the information in the research comes from sources in the Russian Federation. We learned doing routine chapter updates each month that documents disappeared from the Web. Some were viewable if we used a virtual private network in a “friendly country” to the producer of the article. Others were just gone. Poof. We do capture images of pages when these puppies are first viewed.
My new monograph is intended for those who attend my lectures about Telegram Messenger-type platforms. My current approach is to just give it away to the law enforcement, cyber investigators, and lawyers who try to figure out money laundering and other digital scams. I will explain my approach in the accompany monograph. I will tell them, “It’s notes. You are on your own when probing the criminal world.” Good luck.
I read “Springer Nature Book on Machine Learning Is Full of Made-Up Citations.” Based on my recent writing effort, I think the problem of citing online resources is not just confined to my team’s experience. The flip side of online research is that some authors or content creation teams (to use today’s jargon) rely on smart software to help out.
The cited article says:
Based on a tip from a reader [of Mastering Machine Learning], we checked 18 of the 46 citations in the book. Two-thirds of them either did not exist or had substantial errors. And three researchers cited in the book confirmed the works they supposedly authored were fake or the citation contained substantial errors.
A version of this “problem” has appeared in the ethics department of Harvard University (where Jeffrey Epstein allegedly had an office), Stanford University, and assorted law firms. Just let smart software do the work and assume that its output is accurate.
It is not.
What’s the fix? Answer: There is none.
Publishers either lack the money to do their “work” or they have people who doom scroll in online meetings. Authors don’t care because one can “publish” anything as an Amazon book with mostly zero oversight. (This by the way is the approach and defense of the Pavel Durov-designed Telegram operation.) Motivated individuals can slap up a free post and publish a book in a series of standalone articles. Bear Blog, Substack, and similar outfits enable this approach. I think Yahoo has something similar, but, really, Yahoo?
I am going to stick with my approach. I will assume the reader knows everything we describe. I wonder what future researchers will think about the information voids appearing in unexpected places. If these researchers emulate what some authors are doing today, the future researchers will let AI do the work. No one will know the difference. If something online can’t be found it doesn’t exist.
Just make stuff up. Good enough.
Stephen E Arnold, July 17, 2025
Academics Lead and Student Follow: Is AI Far Behind?
July 16, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
I read “Positive Review Only: Researchers Hide AI Prompts in Papers.” Note: You may have to pay to read this write up.] Who knew that those writing objective, academic-type papers would cheat? I know that one ethics professor is probably okay with the idea. Plus, that Stanford University president is another one who would say, “Sounds good to me.”
The write up says:
Nikkei looked at English-language preprints — manuscripts that have yet to undergo formal peer review — on the academic research platform arXiv. It discovered such prompts in 17 articles, whose lead authors are affiliated with 14 institutions including Japan’s Waseda University, South Korea’s KAIST, China’s Peking University and the National University of Singapore, as well as the University of Washington and Columbia University in the U.S. Most of the papers involve the field of computer science.
Now I would like suggest that commercial database documents are curated and presumably less likely to contain made up information. I cannot. Peer reviewed papers also contain some slick moves; for example, a loose network of academic friends can cite one another’s papers to boost them in search results. Others like the Harvard ethics professor just write stuff and let it sail through the review process fabrications and whatever other confections were added to the alternative fact salads.
What US schools featured in this study? The University of Washington and Columbia University. I want to point out that the University of Washington has contributed to the Google brain trust; for example, Dr. Jeff Dean.
Several observations:
- Why should students pay attention to the “rules” of academic conduct when university professors ignore them?
- Have universities given up trying to enforce guidelines for appropriate academic behavior? On the other hand, perhaps these ArXiv behaviors are now the norm when grants may be in balance?
- Will wider use of smart software change the academics’ approach to scholarly work?
Perhaps one of these estimable institutions will respond to these questions?
Stephen E Arnold, July 16, 2025
AI Produces Human Clipboards
July 16, 2025
No smart software involved with this blog post. (An anomaly I know.)
The upside and downside of AI seep from my newsfeed each day. More headlines want me to pay to view a story from Benzinga. Benzinga, a news release outfit. I installed Smartnews on one of my worthless mobile devices. Out of three stories, one was incoherent. No thanks, AI.
I spotted a write up in the Code by Tom blog titled “The Hidden Cost of AI Reliance.” It contained a quote to note; to wit:
“I’ve become a human clipboard.”
The write up includes useful references about the impact of smart software on some humans’ thinking skills. I urge you to read the original post.
I want to highlight three facets of the “AI problem” that Code by Tom sparked for me.
First, the idea that the smart software is just “there” and it is usually correct strikes me as a significant drawback for students. I think the impact in grade school and high school will be significant. No amount of Microsoft and OpenAI money to train educators about AI will ameliorate unthinking dependence of devices which just provide answers. The act of finding answers and verifying them are essential for many types of knowledge work. I am not convinced that today’s smart software which took decades to become the next big thing can do much more than output what has been fed into the neural predictive matrix mathy systems.
Second, the idea that teachers can somehow integrate smart software into reading, writing, and arithmetic is interesting. What happens if students do not use the smart software the way Microsoft or OpenAI’s educational effort advises. What then? Once certain cultural systems and norms are eroded, one cannot purchase a replacement at the Dollar Store. I think with the current AI systems, the United States speeds more quickly to a digital dark age. It took a long time to toward something resembling a non dark age.
Finally, I am not sure if over reliance is the correct way to express my view of AI. If one drives to work a certain way each day, the highway furniture just disappears. Change a billboard or the color of a big sign, and people notice. The more ubiquitous smart software becomes, the less aware many people will be that it has altered thought processes, abilities related to determine fact from fiction, and the ability to come up with a new idea. People, like the goldfish in a bowl of water, won’t know anything except the water and the blurred images outside the aquarium’s sides.
Tom, the coder, seems to be concerned. I do most tasks the old-fashioned way. I pay attention to smart software, but my experiences are limited. What I find is that it is more difficult now to find high quality information than at any other time in my professional career. I did a project years ago for the University of Michigan. The work concerned technical changes to move books off-campus and use the library space to create a coffee shop type atmosphere. I wrote a report, and I know that books and traditional research tools were not where the action was. My local Barnes & Noble bookstore sells toys and manga cartoons. The local library promotes downloading videos.
Smart software is a contributor to a general loss of interest in learning the hard way. I think smart software is a consequence of eroding intellectual capability, not a cause. Schools were turning out graduates who could not read or do math. What’s the fix? Create software to allow standards to be pushed aside. The idea is that if a student is smart, that student does not have to go to college. One young person told me that she was going to study something practical like plumbing.
Let me flip the argument.
Smart software is a factor, but I think the US educational system and the devaluation of certain ideas like learning to read, write, and “do” math manifest what people in the US want. Ease, convenience, time to doom scroll. We have, therefore, smart software. Every child will be, by definition, smart.
Will these future innovators and leaders know how to think about information in a critical way? The answer for the vast majority of US educated students, the answer will be, “Not really.”
Stephen E Arnold, July 16, 2025
An AI Wrapper May Resolve Some Problems with Smart Software
July 15, 2025
No smart software involved with this blog post. (An anomaly I know.)
For those with big bucks sunk in smart software chasing their tail around large language models, I learned about a clever adjustment — an adjustment that could pour some water on those burning black holes of cash.
A 36 page “paper” appeared on ArXiv on July 4, 2025 (Happy Birthday, America!). The original paper was “revised” and posted on July 8, 2025. You can read the July 8, 2025, version of “MemOS: A Memory OS for AI System” and monitor ArXiv for subsequent updates.
I recommend that AI enthusiasts download the paper and read it. Today content has a tendency to disappear or end up behind paywalls of one kind or another.
The authors of the paper come from outfits in China working on a wide range of smart software. These institutions explore smart waste water as well as autonomous kinetic command-and-control systems. Two organizations funding the “authors” of the research and the ArXiv write up are a start up called MemTensor (Shanghai) Technology Co. Ltd. The idea is to take good old Google tensor learnings and make them less stupid. The other outfit is the Research Institute of China Telecom. This entity is where interesting things like quantum communication and novel applications of ultra high frequencies are explored.
The MemOS is, based on my reading of the paper, is that MemOS adds a “layer” of knowledge functionality to large language models. The approach remembers the users’ or another system’s “knowledge process.” The idea is that instead of every prompt being a brand new sheet of paper, the LLM has a functional history or “digital notebook.” The entries in this notebook can be used to provide dynamic context for a user’s or another system’s query, prompt, or request. One application is “smart wireless” applications; another, context-aware kinetic devices.
I am not sure about some of the assertions in the write up; for example, performance gains, the benchmark results, and similar data points.
However, I think that the idea of a higher level of abstraction combined with enhanced memory of what the user or the system requests is interesting. The approach is similar to having an “old” AS/400 or whatever IBM calls these machines and interacting with them via a separate computing system is a good one. Request an output from the AS/400. Get the data from an I/O device the AS/400 supports. Interact with those data in the separate but “loosely coupled” computer. Then reverse the process and let the AS/400 do its thing with the input data on its own quite tricky workflow. Inefficient? You bet. Does it prevent the AS/400 from trashing its memory? Most of the time, it sure does.
The authors include a pastel graphic to make clear that the separation from the LLM is what I assume will be positioned as an original, unique, never-before-considered innovation:
Now does it work? In a laboratory, absolutely. At the Syracuse Parallel Processing Center, my colleagues presented a demonstration to Hillary Clinton. The search, text, video thing behaved like a trained tiger before that tiger attacked Roy in the Siegfried & Roy animal act in October 2003.
Are the data reproducible? Good question. It is, however, a time when fake data and synthetic government officials are posting videos and making telephone calls. Time will reveal the efficacy of the ‘breakthrough.”
Several observations:
- The purpose of the write up is a component of the China smart, US dumb marketing campaign
- The number of institutions involved, the presence of a Chinese start up, and the very big time Research Institute of China Telecom send the message that this AI expertise is diffused across numerous institutions
- The timing of the release of the paper is delicious: Happy Birthday, Uncle Sam.
Net net: Perhaps Meta should be hiring AI wizards from the Middle Kingdom?
Stephen E Arnold, July 15, 2025
Google Is Great. Its AI Is the Leader, Just As Philco Was
July 15, 2025
No smart software involved with this blog post. (An anomaly I know.)
The Google and its Code Red Yellow or whatever has to pull a revenue rabbit out of its ageing Stetson. (It is a big Stetson too.) Microsoft found a way to put Googzilla on its back paw in January 2023. Mr. Nadella announced a deal with OpenAI and ignited the Softies to put Copilot in everything, including the ASCII editor Notepad.
Google demonstrated a knee jerk reaction. Put Prabhakar in Paris to do a stand up about Google AI. Then Google reorganized its smart software activities… sort of. The wizards at Google has pushed out like a toothpaste tube crushed by a Stanford University computer science professor’s flip flops. Suffice it to say there are many Google AI products and services. I gave up trying to keep track of them months ago.
What’s happened? Old-school, Google searches are work now. Some sites have said that Google referral traffic is down a third or more.
What’s up?
“Google Faces Threat That Could Destroy Its Business” offers what I would characterize as a Wall Street MBA view of the present day Google. The write up says:
As the AI boom continues to transform the landscape of the tech world, a new type of user behavior has begun to gain popularity on the web. It’s called zero-click search, and it means a person searches for something and gets the answer they want without clicking a single link. There are several reasons for this, including the AI Overview section that Google has added to the top of many search result pages. This isn’t a bad thing, but what’s interesting is why Google is leaning into AI Overview in the first place: millions of people are opening ChatGPT instead of Google to search for the things they want to know.
The cited passage suggests that Google is embracing one-click search, essentially marginalizing the old-school list of links. Google has made this decision because of or in response to OpenAI. Lurking between the lines of the paragraph is the question, “What the heck is Google doing?”
On July 9, Reuters exclusively reported that OpenAI would soon launch its own web browser to challenge Google Chrome’s dominance.
This follows on OpenAI’s stating that it would like to buy the Chrome browser if the US government forces Google to sell is ubiquitous data collection interface with users. Start ups are building browsers. Perplexity is building browsers. The difference is that OpenAI and Perplexity will use AI as plumbing, not an add on. Chrome is built as a Web 1 and Web 2 service. OpenAI and Perplexity are likely to just go for Web 3 functionality.
What’s that look like? I am not sure, but it will not come from some code originally cooked up someplace like Denmark and refurbished many times to the ubiquitous product we have today.
My view is that Google is somewhat disorganized when it comes to smart software. As the company tries to revolutionize medicine, create smart maps, and build expensive self driving taxis — people are gravitating to ChatGPT which is now a brand like Kleenex or Xerox. Perplexity is a fan favorite at the moment as well. To add some spice to the search recipe, Anthropic and outfits like China Telecom are busy innovating.
What about Google? We are about to learn how a former blue chip consultant will give Google more smarts. Will that intelligence keep the money flowing and growing? Why be a Debbie Downer. Google is the greatest thing since sliced bread. Those legal actions are conspiracies fueled by jealous competitors. Those staff cutback? Just efficiencies. Those somewhat confusing AI products and services? Hey, you are just not sufficiently Googley to see the brilliance of Googzilla’s strategy.
Okay, I agree. Google is wonderful and the Wall Street MBA type analysis is wonky, probably written with help from Grok or Mistral. Google is and will be wonderful. You can search for examples too. Give Perplexity a try.
Stephen E Arnold, July 15, 2025