Publishers Are Not Googley about AI
June 2, 2025
“Google’s AI Mode Is the Definition of Theft, Publishers Say, Opt-Out Was Considered” reports that Google is a criminal stealing content from its rightful owners. This is not a Googley statement. Criticism of the Google is likely to be filtered from search results because it is a false statement and likely to cause harm. If this were not enough, the article states:
“The AI takeover of Search is in full swing, especially as Google’s new AI Mode is going live for all US users. But for publishers, this continues the existential crisis around how Google Search is changing, with a new statement calling AI Mode “the definition of theft” while legal documents reveal that Google did consider opt out controls that ultimately weren’t implemented.”
Quick question: Is this a surprise action by the Google? Answer: Yes, if one ignores Google’s approach to information. No, if one pays a modicum of attention to how the company has approached “publishing” in the last 20 years. Google is a publisher, probably the largest generator of outputs in history. It protects its information, and others should too. If those others are non-Googley, that information is to Google what Jurassic Park’s velociraptors were to soft, juicy humanoids — lunch.
The write up says:
“As it stands today, publishers are unable to opt out of Google’s AI tools without effectively opting out of Search as a whole.”
I am a dinobaby, old, dumb, but smart enough to understand the value of a de facto monopoly. Most of the open source intelligence industry is built on Google dorks. Publishers may be the original “dorks” when it comes to understanding what happens when one controls access, distribution, and monetization of online content.
“Giving publishers the ability to opt out of AI products while still benefiting from Search would ultimately make Google’s flashy new tools useless if enough sites made the switch. It was very much a move in the interest of building a better product.”
I think this means that Google cares about the users and search quality. There is no hint of revenue, copyright issues, or raw power. Google just … cares.
The article and by extension the publisher “9 to 5 Google” gently suggests that Google is just being Google:
“Google’s tools continue to serve the company and its users (mostly) well, but as they continue to bleed publishers dry, those publishers are on the verge of vanishing or, arguably worse, turning to cheap and poorly produced content just to get enough views to survive. This is a problem Google needs to address, as it’s making the internet as a whole worse for everyone.”
Yep, continuing to serve the company, its users, and fresh double talk. Enjoy.
Stephen E Arnold, June 2, 2025
News Flash: US Losing AI Development Talent (Duh?)
June 2, 2025
The United States is the leading country in technology development. It has been at the cutting edge of AI since the field’s inception, but according to Semafor that is changing: “Reports: US Losing Edge In AI Talent Pool.” Semafor’s article summarizes the current state of the AI development industry. Apparently the top companies want to concentrate on mobile and monetization, while the US government is cutting federal science funding (among other things) and doing some performative activity.
Meanwhile in China:
“China’s ascendency has played a role. A recent paper from the Hoover Institution, a policy think tank, flags that some of the industry’s most exciting recent advancements — namely DeepSeek — were built by Chinese researchers who stayed put. In fact, more than half of the researchers listed on DeepSeek’s papers never left China for school or work — evidence that the country doesn’t need Western influence to develop some of the smartest AI minds, the report says.”
India is bolstering its own tech talent as its people and businesses consume AI. It is also not exporting its top tech talent, due to the US crackdowns. The Gulf countries and Europe are likewise expanding talent retention and their own AI projects. London is the center for AI safety work with Google DeepMind. The UAE and Saudi Arabia are developing their own AI infrastructure and the energy sector to support it.
Will the US lose AI talent, code, and some innovative oomph? Semafor seems to think that greener pastures lie just over the sea.
Whitney Grace, June 2, 2025
A SundAI Special: Who Will Get RIFed? Answer: News Presenters for Sure
June 1, 2025
Just a dinobaby and some AI: How horrible an approach?
Why would “real” news outfits dump humanoids for AI-generated personalities? For my money, there are three good reasons:
- Cost reduction
- Cost reduction
- Cost reduction.
The bean counter has donned his Ivy League super-smart financial accoutrements: Meta smart glasses, an OpenAI smart device, and an Apple iPhone with the vaunted AI inside (sorry, Intel, you missed this trend). Unfortunately the “good enough” approach, like a gradient descent, does not deal in reality. Sum those near misses and what do you get? Dead organic things. The method applies to flora and fauna, including humanoids with automatable jobs. Thanks, You.com, you beat the pants off Venice.ai, which simply does not follow prompts. A perfect solution for some applications, right?
My hunch is that many people (humanoids) will disagree. The counter arguments are:
- Human quantum behavior; that is, flubbing lines, getting into on-air spats, displaying annoyance while standing in a rain storm saying, “The wind velocity is picking up.”
- The cost of recruitment, training, health care, vacations, and pension plans (ho ho ho)
- The management hassle of having to attend meetings, talk about decisions, become deciders, and — oh, no — accept responsibility for those decisions.
I read “‘The White-Collar Bloodbath’ Is All Part of the AI Hype Machine.” I am not sure how fear creates an appetite for smart software. The push for smart software boils down to generating revenues. To achieve revenues, one can create a new product or service like the iPhone or the original Google search advertising machine. But how often do those inventions toddle down the Information Highway? Not too often, because most of the innovative new new next big things are smashed by a Meta-type tractor trailer.
The write up explains that layoff fears are not operable in the CNN dataspace:
If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality. Yet when tech CEOs do the same thing, people tend to perk up. ICYMI: The 42-year-old billionaire Dario Amodei, who runs the AI firm Anthropic, told Axios this week that the technology he and other companies are building could wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years, he said.
First, the killing-jobs angle is probably easily understood and accepted by individuals responsible for “cost reduction.” Second, the ICYMI reference means “in case you missed it,” a bit of shorthand popular with those who are not yet 80-year-old dinobabies like me. Third, the source is a member of the AI leadership class. Listen up!
Several observations:
- AI hype is marketing. Money is at stake. Do stakeholders want their investments to sit mute and wait for the old “build it and they will come” pipedream to manifest?
- Smart software does not have to be perfect; it needs to be good enough. Once it is good enough, the cost reductionists take the stage and employees are ushered out of specific functions. One does not implement cost reductions at random. Consultants set priorities, develop scorecards, and make some charts with red numbers and arrows pointing up. Employees are expensive in general, so some work is needed to determine which can be replaced with good enough AI.
- News, journalism, and certain types of writing along with customer “support”, and some jobs suitable for automation like reviewing financial data for anomalies are likely to be among the first to be subject to a reduction in force or RIF.
So where does that leave the neutral observer? On one hand, the owners of the money dumpster fires are promoting like crazy. These wizards have to pull rabbit after rabbit out of a hat. How does that get handled? Think P.T. Barnum.
Some AI bean counters, CFOs, and financial advisors dream about dumpsters filled with money burning. This was supposed to be an icon, but Venice.ai happily ignores prompt instructions and includes fruit next to a burning something against a wooden wall. Perfect for the good enough approach to news, customer service, and MBA analyses.
On the other hand, you have the endangered species: the “real” news people and others in the knowledge business, specifically the automatable knowledge business. These folks are doing what they can to impede the hyperbole machine of the smart software people.
Who or what will win? Keep in mind that I am a dinobaby. I am going extinct, so smart software has zero impact on me other than making devices less predictable and resistant to my approach to “work.” Here’s what I see happening:
- Increasing unemployment for those lower on the “knowledge work” food chain. Sorry, junior MBAs at blue chip consulting firms. Make sure you have lots of money, influential parents, or a former partner at a prestigious firm as a mom or dad. Too bad for those studying to purvey “real” news. Junior college graduates working in customer support. Yikes.
- “Good enough” will replace excellence in work. This means that the air traffic controller situation is a glimpse of what deteriorating systems will deliver. Smart software will probably come to the rescue, but those antacid gobblers will be history.
- Increasing social discontent will manifest itself. To get a glimpse of the future, take an Uber from Cape Town to the airport. Check out the low income housing.
Net net: The cited write up is essentially anti-AI marketing. Good luck with that until people realize the current path is unlikely to deliver the pot of gold for most AI implementations. But cost reduction only has to show payoffs. Balance sheets do not reflect a healthy, functioning datasphere.
Stephen E Arnold, June 1, 2025
2025 Is a Triangular Number: Tim Apple May Have No Way Out
May 30, 2025
Just a dinobaby and no AI: How horrible an approach?
Macworld in my mind is associated with happy Macs, not sad Macs. I just read “Tim Cook’s Year Is Doomed and It’s Not Even June Yet.” That’s definitely a sad Mac headline, and it suggests that Tim Apple will morph into a well-compensated human in a little box, something like this:
The write up says:
Cook’s bad, awful 2025 is pretty much on the record…
Why, pray tell? How about:
- The failure of Apple’s engineers to deliver smart software
- A donation to a certain political figure’s campaign only to be rewarded with tariffs
- Threats of an Apple “tax”
- Fancy dancing with China and pumping up manufacturing in India only to be told by a person of authority, “That’s not a good idea, Tim Apple.”
I think I have touched on the main downers. The write up concludes with:
For Apple, this may be a case of too much success being a bad thing. It is unlikely that Cook could have avoided Trump’s attention, given its inherent gravimetric field. The question is, now that a moderate show of obsequiousness has proven insufficiently mollifying, what will Cook do next?
Imagine a high flying US technology company not getting its way in the US and a couple of other countries to boot. And what about the European Union?
Several observations are warranted:
- Tim Cook should be paranoid. Lots of people are out to get Apple and he will be collateral damage.
- What happens if the iPhone craters? Will Apple TV blossom or blow?
- How many pro-Apple humans will suffer bouts of depression? My guess? Lots.
Net net: Numerologists will perceive 2025 as a year for Apple to reflect and prepare for new cycles. I just see 2025 as a triangular number with Tim Apple in its perimeter and no way out evident.
Stephen E Arnold, May 30, 2025
Copilot Disappointments: You Are to Blame
May 30, 2025
No AI, just a dinobaby and his itty bitty computer.
Another interesting Microsoft story from a pro-Microsoft online information service. Windows Central published “Microsoft Won’t Take Bigger Copilot Risks — Due to ‘a Post-Traumatic Stress Disorder from Embarrassments,’ Tracing Back to Clippy.” Why not invoke Bob, the US government suggesting Microsoft security was needy, or the software of the Surface Duo?
The write up reports:
Microsoft claims Copilot and ChatGPT are synonymous, but three-quarters of its AI division pay out of pocket for OpenAI’s superior offering because the Redmond giant won’t allow them to expense it.
Is Microsoft saving money, or is Microsoft’s cultural momentum maintaining the velocity of Steve Ballmer, who took an Apple iPhone from an employee and allegedly stomped on the device? That episode helped make Microsoft’s management approach clear to some observers.
The Windows Central article adds:
… a separate report suggested that the top complaint about Copilot to Microsoft’s AI division is that “Copilot isn’t as good as ChatGPT.” Microsoft dismissed the claim, attributing it to poor prompt engineering skills.
This statement suggests that Microsoft is blaming users for the alleged negative reaction to Copilot. Those pesky users again. Users, not Microsoft, are at fault. But what about the Microsoft employees who seem to prefer ChatGPT?
Windows Central stated:
According to some Microsoft insiders, the report details that Satya Nadella’s vision for Microsoft Copilot wasn’t clear. Following the hype surrounding ChatGPT’s launch, Microsoft wanted to hop on the AI train, too.
I thought the problem was the users and their flawed prompts. Could the issue be Microsoft’s management “vision”? I have an idea. Why not delegate product decisions to Copilot? That will show the users that Microsoft has the right approach to smart software: cutting back on data centers, acquiring other smart software and AI visionaries, and putting Copilot in Notepad.
Stephen E Arnold, May 30, 2025
AI Can Do Your Knowledge Work But You Will Not Lose Your Job. Never!
May 30, 2025
The dinobaby wrote this without smart software. How stupid is that?
Ravical is going to preserve jobs for knowledge workers. Nevertheless, the company’s AI may complete 80% of the work in these types of organizations. No bean counter on earth would figure out that reducing humanoid workers would cut costs, eliminate the useless vacation scam, and chop the totally unnecessary health care plan. None.
The write up “Belgian AI Startup Says It Can Automate 80% of Work at Expert Firms” reports:
Joris Van Der Gucht, Ravical’s CEO and co-founder, said the “virtual employees” could do 80% of the work in these firms. “Ravical’s agents take on the repetitive, time-consuming tasks that slow experts down,” he told TNW, citing examples such as retrieving data from internal systems, checking the latest regulations, or reading long policies. Despite doing up to 80% of the work in these firms, Van Der Gucht downplayed concerns about the agents supplanting humans.
I believe this statement is 100 percent accurate. AI firms do not use excessive statements to explain their systems and methods. The article provides more concrete evidence that this replacement of humans is spot on:
Enrico Mellis, partner at Lakestar, the lead investor in the round, said he was excited to support the company in bringing its “proven” experience in automation to the booming agentic AI market. “Agentic AI is moving from buzzword to board-level priority,” Mellis said.
Several observations:
- Humans absolutely will be replaced, particularly those who cannot sell
- Bean counters will be among the first to point out that software, as long as it is good enough, will reduce costs
- Executives are judged on financial performance, not the quality of the work as long as revenues and profits result.
Will Ravical become the go-to solution for outfits engaged in knowledge work? No, but it will become a company that other agentic AI firms will watch closely. As long as the AI is good enough, humanoids without the ability to close deals will have plenty of time to ponder opportunities in the world of good enough, hallucinating smart software.
Stephen E Arnold, May 30, 2025
Information Filtering with Mango Chutney, Please
May 30, 2025
Censorship is having a moment. And not just in the US. For example, India’s The Wire laments, “Academic Censorship Has Become the Norm in Indian Universities.” Writer Apoorvanand, who teaches at Delhi University, describes his experience when a seminar he was to speak at was “postponed.” See the article for the details, like the importance and difficulty of bringing together a diverse panel. Or the college principal who informed speakers the event was off without notifying its organizer, Apoorvanand’s colleague. He writes:
“It was a breach of trust and a personal humiliation, my colleague fumed. Of course the problematic speaker would not know the story but he knew what was the real reason. He said that principals today only want one type of speaker to be invited. The non-problematic ones. Was it only about an individual? No. My friend felt that it went beyond that. There is an attempt to disallow discussion on topics which can make students think. Any seminars which would expose the students to different ways of looking at a problem and making their own decision are not permitted. For the last 10 years we see only one kind of meets being held in the colleges. They cannot be called academic and intellectual fora. They are platforms created for propaganda for the regime and one kind of ‘Indianness’ or ‘nationalism.’ If you do a survey of the topics across colleges, you would find a monotonous similarity. It is a campaign to indoctrinate young people. For it to succeed, the authorities keep other voices and ideas out of the reach of the students.”
Despite the organizer’s intent not to single out the “problematic” participant, the individual knew. Apoorvanand spoke to him and learned cancellations are now a common occurrence for him and, he added, for a growing list of his colleagues. Neither is this pattern limited to Delhi University. We learn:
“When I told [other teachers] about this, they opened up. Some of them were from ‘elite’ universities like Ashoka or Krea and Azim Premji University. There too the authorities have become very cautious. Names of the speakers have to be cleared by the authorities. There is an order in one university to share the slides the speakers would use three days before the event. The teachers are also cautioned against going to places that could upset the regime or accepting invitations from people who are considered to be its critics.”
At Indian universities, both public and private, Apoorvanand writes, censorship is now the norm, a bit like mango chutney.
Cynthia Murrell, May 30, 2025
It Takes a Village Idiot to Run an AI Outfit
May 29, 2025
The dinobaby wrote this without smart software. How stupid is that?
I liked the write up “The Era Of The Business Idiot.” I am not sure the term “idiot” is 100 percent accurate. According to the Oxford English Dictionary, the word “idiot” is a variant of the phrase “the village idget.” Good enough for me.
The AI marketing baloney is a big thick sausage indeed. Here’s a pretty good explanation of a high-technology company executive today:
We live in the era of the symbolic executive, when "being good at stuff" matters far less than the appearance of doing stuff, where "what’s useful" is dictated not by outputs or metrics that one can measure but rather the vibes passed between managers and executives that have worked their entire careers to escape the world of work. Our economy is run by people that don’t participate in it and our tech companies are directed by people that don’t experience the problems they allege to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.
The essay contains a number of observations which match well to my experiences as an officer in companies and as a consultant to a wide range of organizations. Here’s an example:
In simpler terms, modern business theory trains executives not to be good at something, or to make a company based on their particular skills, but to "find a market opportunity" and exploit it. The Chief Executive — who makes over 300 times more than their average worker — is no longer a leadership position, but a kind of figurehead measured on their ability to continually grow the market capitalization of their company. It is a position inherently defined by its lack of labor, the amorphousness of its purpose and its lack of any clear responsibility.
I urge you to read the complete write up.
I want to highlight some assertions (possibly factoids) which I found interesting. I shall, of course, offer a handful of observations.
First, I noted this statement:
When the leader of a company doesn’t participate in or respect the production of the goods that enriches them, it creates a culture that enables similarly vacuous leaders on all levels.
Second, this statement:
Management has, over the course of the past few decades, eroded the very fabric of corporate America, and I’d argue it’s done the same in multiple other western economies, too.
Third, this quote from a “legendary” marketer:
As the legendary advertiser Stanley Pollitt once said, “bullshit baffles brains.”
Fourth, this statement about large language models, the next big thing after quantum, of course:
A generative output is a kind of generic, soulless version of production, one that resembles exactly how a know-nothing executive or manager would summarise your work.
And, fifth, this comment:
By chasing out the people that actually build things in favour of the people that sell them, our economy is built on production puppetry — just like generative AI, and especially like ChatGPT.
More little nuggets nestle in the write up; it is about 13,000 words. (No, I did not ask Copilot to count the words. I am a good estimator of text length.) It is now time for my observations:
- I am not sure the leadership is vacuous. The leadership does what it learned, knows how to do, and obtained promotions for: just being “authentic.” One leader at the blue chip consulting firm at which I learned to sell scope changes built pianos in his spare time. He knew how to do that: build a piano. He also knew how to sell scope changes. The process is one that requires a modicum of knowledge and skill.
- I am not sure management has eroded the “fabric.” My personal view is that accelerated flows of information have blasted certain vulnerable types of constructs. The result is leadership that does many of the things spelled out in the write up. With no buffer between thinking big thoughts and doing work, the construct erodes. Rebuilding is not possible.
- Mr. Pollitt was a marketer. He is correct, and that marketing mindset is in the cat-bird seat.
- Generative AI outputs what is probably an okay answer. Those who were happy with a “C” in school will find the LLM a wonderful invention. That alone may make further erosion take place more rapidly. If I am right about information flows, the future is easy to predict, and it is good for a few and quite unpleasant for many.
- Being able to sell is the top skill. Learn to embrace it.
Stephen E Arnold, May 29, 2025
A Grok Crock: That Dog Ate My Homework
May 29, 2025
Just the dinobaby operating without Copilot or its ilk.
I think I have heard Grok (a unit of xAI, I think) explain that outputs have been the result of a dog eating the code or whatever. I want to document these Grok Crocks. Perhaps I will put them in a Grok Pot and produce a list of recipes suitable for middle school and high school students.
The most recent example of “something just happened” appears in “Grok Says It’s ‘Skeptical’ about Holocaust Death Toll, Then Blames Programming Error.” Does this mean that smart software is programming Grok? If so, the explanation should be worded, “Grok hallucinates.” If a human wizard made a programming error, then the appropriate statement is that quality control will become Job One. That worked for Microsoft until Copilot became the go-to task.
The cited article stated:
Grok said this response was “not intentional denial” and instead blamed it on “a May 14, 2025, programming error.” “An unauthorized change caused Grok to question mainstream narratives, including the Holocaust’s 6 million death toll, sparking controversy,” the chatbot said. Grok said it “now aligns with historical consensus” but continued to insist there was “academic debate on exact figures, which is true but was misinterpreted.” The “unauthorized change” that Grok referred to was presumably the one xAI had already blamed earlier in the week for the chatbot’s repeated insistence on mentioning “white genocide” (a conspiracy theory promoted by X and xAI owner Elon Musk), even when asked about completely unrelated subjects.
I am going to steer clear of the legality of these statements and the political shadows these Grok outputs cast. Instead, let me offer a few observations:
- I use a number of large language models. I have used Grok exactly twice. The outputs had nothing of interest for me. I asked, “Can you cite X.com messages?” The system said, “Nope.” I tried again after Grok 3 became available. Same answer. Hasta la vista, Grok.
- The training data, the fancy math, and the algorithms determine the output. Since current LLMs rely on Google’s big idea, one would expect the outputs to be similar. Outlier outputs like these alleged Grokings are a bit of a surprise. Perhaps someone at Grok could explain exactly why these outputs are happening. I know dogs could eat homework. The event is highly unlikely in my experience, although I had a dog which threw up on the typewriter I used to write a thesis.
- I am a suspicious person. Grok makes me suspicious. I am not sure marketing and smarmy talk can reduce my anxiety about Grok providing outlier content to middle school, high school, college, and “I don’t care” adults. Weaponized information, in my opinion, is just that: a weapon. Dangerous stuff.
Net net: Is the dog eating homework one of the Tesla robots? If so, speak with the developers, please. An alternative would be to use Claude 3.7 or Gemini to double check Grok’s programming.
Stephen E Arnold, May 29, 2025
Telegram and xAI: Deal? What Deal?
May 29, 2025
Just a dinobaby and no AI: How horrible an approach?
What happens when two people with a penchant for spawning babies seem to sort of, mostly, well, generally want a deal? On May 28, 2025, one of the super humans suggested a deal existed between the Telegram company and the xAI outfit. Money and equity would change hands. The two parties were in sync. I woke this morning to an email that said, “No deal signed.”
The Kyiv Independent, a news outfit that pays close attention to Telegram because of the “special operation”, published “Durov Announces Telegram’s Partnership with Musk’s xAI, Who Says No Deal Signed Yet.” The story reports:
Telegram and Elon Musk’s xAI will enter a one-year partnership, integrating the Grok chatbot into the messaging app, Telegram CEO Pavel Durov announced on May 28. Musk, the world’s richest man who also owns Tesla and SpaceX, commented that "no deal has been signed," prompting Durov to clarify that the deal has been agreed in "principle" with "formalities pending." "This summer, Telegram users will gain access to the best AI technology on the market," Durov said.
The write up included an interesting item of information; to wit:
Durov has claimed he is a pariah and has been effectively exiled from Russia, but it was reported last year that he had visited Russia over 60 times since leaving the country, according to Kremlingram, a Ukrainian group that campaigns against the use of Telegram in Ukraine.
Mr. Musk, the mastermind behind a large exploding space vehicle, and Mr. Durov have much to gain from a linkage. Telegram, like Apple, is not known for its smart software. Third party bots have made AI services available to Telegram’s more enterprising users. xAI, part of Mr. Musk’s push toward an “everything” app, might benefit from getting front and center with the Telegram user base.
Both individuals are somewhat idiosyncratic. Both have interesting technology. Both present themselves as bright, engaging, and often extremely confident professionals.
What’s likely to happen? With two leaders with much in common, Grok or another smart software will make its way to the Telegram faithful. When that happens is unknown. The terms of the “deal” (if one exists) are marketing or jockeying as of May 29, 2025. The timeline for action is fuzzy.
What’s obvious is that volatility and questionable information shine the spotlight on both forward-leaning companies. The Telegram information distracts a bit from the failed rocket. Good for Mr. Musk. The Grok deal distracts a bit from the French-styled dog collar around Mr. Durov’s neck. Good for Mr. Durov.
When elephants fight, grope, and deal, the grass may take a beating. When the dust settles, what are these elephants doing? The grass has been stomped upon, but the beasties?
Stephen E Arnold, May 29, 2025