Google Does Its Thing: Courts Vary in their Views of the Outfit
September 10, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I am not sure I understand how the US legal system, or any other legal system, works. A legal procedure headed by a somewhat critical judge has allowed the Google to keep on doing what it is doing: selling ads, collecting personal data, and building walled gardens even if they encroach on a kiddie playground.
However, at the same time, the Google was found to be a bit too frisky in its elephantine approach to business.
The first example is that Google was found liable for collecting user data even when users had disabled data collection. The details of this gross misunderstanding of how the superior thinkers at Google interpreted assorted guidelines and user settings appear in “Jury Slams Google Over App Data Collection to Tune of $425 Million.” Now to me that sounds like a lot of money. To the Google, it is a cash flow issue which can be addressed by negotiation, slow administrative response, and consulting firm speak. The write up says:
Google attorney Benedict Hur of Cooley LLP told jurors Google “certainly thought” it had permission to access the data. He added that Google lets users know it will continue to collect certain types of data, even if they toggle off web activity.
Quite an argument.
The other write up with some news about Google behavior is “France Fines Google, Shein Record Sums over Cookie Law Violations.” I found this passage in the write up interesting:
France’s data protection watchdog CNIL on Wednesday fined Google €325 million ($380 million) and fast-fashion retailer Shein €150 million ($175 million) for violating cookie rules. The record penalties target two platforms with tens of millions of French users, marking among the heaviest sanctions the regulator has imposed.
Several observations are warranted:
- Google is manifesting behavior similar to the China-linked outfit Shein. Who is learning from whom?
- Some courts find Google problematic; other courts think that Google is just doing okay Googley things.
- A showdown may occur from outside the United States if a nation-state just gets fed up with Google doing exactly whatever it wants.
I wonder if anyone at Google is thinking about hassling the French judiciary in the remainder of 2025 and into 2026. If so, it may be instructive to recall how the French judiciary addressed a 13-year-old case of digital Toxic Epidermal Necrolysis. Pavel Durov was arrested, interrogated for four days, and must report to French authorities every couple of weeks. His legal matter is moving through a judicial system noted for its methodical and red-tape choked processes.
Fancy a nice dinner in Paris, Google?
Stephen E Arnold, September 10, 2025
Google Monopoly: A Circle Still Unbroken
September 9, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
I am no lawyer, and I am not sure how the legal journey will unfold for the Google. I assume Google is still a monopoly. Google, however, is not happy with the recent court decision that appears to be a light tap on Googzilla’s snout. The snow is not falling and no errant piece of space junk has collided with the Mountain View campus.
I did notice a post on the Google blog with a cute URL. The words “outreach-initiatives,” “public policy,” and “DOJ search decision” speak volumes to me.
The post carries this Google title, well, a Googley command:
Read our statement on today’s decision in the case involving Google Search
Okay, snap to it. The write up instructs:
Competition is intense and people can easily choose the services they want. That’s why we disagree so strongly with the Court’s initial decision in August 2024 on liability.
Okay, no em dashes, so Gemini did not write the sentence, although it may contain some words rarely associated with Googley things. These are words like “easily choose.” Hey, I thought Google was a monopoly. The purpose of the construct is to take steps to narrow choice. The Chicago stockyards used fences, guides, and designated killing areas. But the cows did not have a choice. The path was followed and the hammer dropped. Thonk.
The write up adds:
Now the Court has imposed limits on how we distribute Google services, and will require us to share Search data with rivals. We have concerns about how these requirements will impact our users and their privacy, and we’re reviewing the decision closely.
The logic is pure blue chip consultant with a headache. I like the use of the word “imposed.” Does Google impose on its users, for instance, irrelevant search results, filtered YouTube videos, or roll-ups of user-generated information in Google services? Of course not. A Google user can easily choose which videos to view on YouTube. A person looking for information can easily choose to access Web content on another Web search system. Just use Bing, Ecosia, or Phind. I like “easily.”
What strikes me is the command language and the huffiness about the decision.
Wow, I love Google. Is it a monopoly? Definitely not Android or Chrome. Ads? I don’t know. Probably not.
Stephen E Arnold, September 9, 2025
Bending Reality or Creating a Question of Ownership and Responsibility for Errors
September 3, 2025
No AI. Just a dinobaby working the old-fashioned way.
The Google has many busy digital beavers working in the superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: ownership of substantially altered content and responsibility for errors introduced into digital content.
“YouTube Secretly Used AI to Edit People’s Videos. The Results Could Bend Reality” reports:
In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.
The BBC ignores a couple of issues that struck me as significant if (please note the “if”) the assertion that YouTube alters content belonging to another entity is accurate. I will address these after some more BBC goodness.
I noted this statement:
the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.
Okay, the Google digital beavers are beavering away.
I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:
“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”
What about those issues I thought about after reading the BBC’s write up:
- If Google changes (improves, enhances, adds AI flourishes to, whatever) a creator’s video, will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out whether Google or the individual content creator is the copyright holder?
- When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
- What if a content creator hits a home run and Google’s AI “learns” from it, then outputs similar content via its assorted AI processes? Will Google be able to deplatform the original creator and just use the AI output as a way to make money without paying the home-run-hitting YouTube creator?
Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe the BBC simply doesn’t think these types of thoughts. The Google, based on my experience, is indeed thinking through these types of “what if” scenarios in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.
Stephen E Arnold, September 3, 2025
Google Anti-Competitive? No. No. No!
August 28, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read another anti-Google news release from a mere country. When I encounter statements that Google is anti-competitive, I am flabbergasted. Google is search. Google is the Web. Google is great. Google is America. What’s with countries that don’t get with the program? The agenda has been crystal clear for more than 20 years. Is there a dumb drug in your water or your wheat?
“Google Admits Anti-Competitive Conduct Involving Google Search in Australia” reports that Google has been browbeaten, subjected to psychological pressure, and pelted with outrageous claims. Consequently, the wonderful Google has just said, “Okay, you are right. Whatever. How much?”
The write up from a nation state says:
Google has co-operated with the ACCC, admitted liability and agreed to jointly submit to the Court that Google should pay a total penalty of $55 million. It is a matter for the Court to determine whether the penalty and other orders are appropriate.
Happy now?
The write up crows about forcing Google to falter emotionally and make further statements to buttress the alleged anti-competitive behavior; to wit:
Google and its US parent company, Google LLC, have also signed a court-enforceable undertaking which the ACCC has accepted to address the ACCC’s broader competition concerns relating to contractual arrangements between Google, Android phone manufacturers and Australian telcos since 2017. Google does not agree with all of the ACCC’s concerns but has acknowledged them and offered the undertaking to address these concerns.
And there is ample evidence that Google abandons any alleged improper behavior. Sure, there have been minor dust-ups about accidental WiFi interception in Germany, some trivial issues with regard to the UK outfit Foundem, and the current misunderstanding in America’s judicial system. But in each of these alleged “issues,” Google has instantly and in good faith corrected any problem caused by a contractor, a junior employee, or a smart “robot.” Managing Google is tough, even for former McKinsey consultants.
Mistakes happen.
The nation state issues word salad that does little to assuage the mental and financial harm Google has suffered. Here are the painful words which hang like a scimitar over the fair Google’s neck:
The ACCC remains committed to addressing anti-competitive conduct like this, as well as cartel conduct. Competition issues in the digital economy are a current priority area.
Google is America. America is good. Therefore, that which Google does is a benefit to America and anyone who uses its services.
How can countries not figure out who’s on first, what’s on second, and I Don’t Know’s on third?
Stephen E Arnold, August 28, 2025
Google! Manipulating Search Results? No Kidding
August 15, 2025
The Federal Trade Commission has just determined something the EU has been saying (and litigating) for years. The International Business Times tells us, “Google Manipulated Search Results to Bolster Own Products, FTC Report Finds.” Writer Luke Villapaz reports:
“For Internet searches over the past few years, if you typed ‘Google’ into Google, you probably got the exact result you wanted, but if you were searching for products or services offered by Google’s competitors, chances are those offerings were found further down the page, beneath those offered by Google. That’s what the U.S. Federal Trade Commission disclosed on Thursday, in an extensive 160-page report, which was obtained by the Wall Street Journal as part of a Freedom of Information Act request. FTC staffers found evidence that Google’s algorithm was demoting the search results of competing services while placing its own higher on the search results page, according to excerpts from the report. Among the websites affected: shopping comparison, restaurant review and travel.”
Villapaz notes Yelp has made similar allegations, estimating Google’s manipulation of search results may have captured some 20% of its potential users. So, after catching the big tech firm red-handed, what will the FTC do about it? Nothing, apparently. We learn:
“Despite the findings, the FTC staffers tasked with investigating Google did not recommend that the commission issue a formal complaint against the company. However, Google agreed to some changes to its search result practices when the commission ended its investigation in 2013.”
Well OK then. We suppose that will have to suffice.
Cynthia Murrell, August 15, 2025
Lawyers Do What Lawyers Do: Revenues, AI, and Talk
July 22, 2025
A legal news service owned by LexisNexis now requires every article to be auto-checked for appropriateness. So what’s appropriate? Beyond Search does not know. However, here’s a clue. Harvard’s Nieman Lab reports, “Law360 Mandates Reporters Use AI Bias Detection on All Stories.” LexisNexis mandated the policy in May 2025. One of the LexisNexis professionals allegedly asserted that bias surfaced in reporting about the US government. The headline cited by VP Teresa Harmon read: “DOGE officials arrive at SEC with unclear agenda.” Um, okay.
Journalist Andrew Deck shares examples of wording the “bias” detection tool flagged in an article. The piece was a breaking story on a federal judge’s June 12 ruling against the administration’s deployment of the National Guard in LA. We learn:
“Several sentences in the story were flagged as biased, including this one: ‘It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.’ According to the bias indicator, this sentence is ‘framing the action as unprecedented in a way that might subtly critique the administration.’ It was best to give more context to ‘balance the tone.’ Another line was flagged for suggesting Judge Charles Breyer had ‘pushed back’ against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of ‘a monarchist.’ Rather than ‘pushed back,’ the bias indicator suggested a milder word, like ‘disagreed.’”
Having it sound as though anyone challenges the administration is obviously a bridge too far. How dare they? Deck continues:
“Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality. For a June 5 story covering the recent Supreme Court ruling on a workplace discrimination lawsuit, the bias indicator flagged a sentence describing experts who said the ruling came ‘at a key time in U.S. employment law.’ The problem was that this copy, ‘may suggest a perspective.’”
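The pattern Deck describes, phrase-level flags paired with milder substitutes, can be caricatured in a few lines. To be clear, this is a crude sketch for illustration only, not the LexisNexis tool (which reportedly sits on a GPT model rather than a word list); the flagged phrases below are lifted from the article’s own examples, and the function name is invented:

```python
# A crude, purely illustrative sketch of how a phrase-level "bias
# indicator" might flag news copy. This is NOT the Law360 tool; the
# phrase list and substitutions echo examples quoted in the article.
FLAGGED_PHRASES = {
    "pushed back": "disagreed",   # substitution cited in the article
    "unprecedented": "unusual",   # invented milder alternative
    "slams": "criticizes",        # invented milder alternative
}

def flag_bias(sentence: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, milder suggestion) pairs found in the sentence."""
    lowered = sentence.lower()
    return [(p, s) for p, s in FLAGGED_PHRASES.items() if p in lowered]

hits = flag_bias("Judge Breyer pushed back against the federal government.")
print(hits)  # [('pushed back', 'disagreed')]
```

Note that a flat substitution table like this, or an LLM prompted to behave like one, cannot tell a reporter’s characterization from a quoted expert’s, which is exactly the failure mode Deck documents.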
Some Law360 journalists are not happy with their “owners.” Law360’s reporters and editors may not be on the same wavelength as certain LexisNexis / Reed Elsevier executives. In June 2025, unit chair Hailey Konnath sent a petition to management calling for use of the software to be made voluntary. At this time, Beyond Search thinks that “voluntary” has a different meaning in leadership’s lexicon.
Another assertion is that the software mandate appeared without clear guidelines. Was there a dash of surveillance and possible disciplinary action? To add zest to this publishing stew, the Law360 Union is negotiating with management to adopt clearer guidelines around the requirement.
What’s the software engine? Allegedly LexisNexis built the tool with OpenAI’s GPT-4.0 model. Deck notes Law360 is just one of many publications now outsourcing questions of bias to smart software. (Smart software has been known for its own peculiarities, including hallucination, or making stuff up.) For example, in March 2025, the LA Times launched a feature dubbed “Insights” that auto-assesses opinion stories’ political slants and spits out AI-generated counterpoints. What could go wrong? Who knew that the KKK had an upside?
What happens when a large publisher gives Grok a whirl? What if a journalist uses these tools and does not catch a “glue cheese on pizza” moment? Senior managers trained in accounting, MBA get-it-done recipes, and (dare I say it) law may struggle to reconcile cost, profit, fear, and smart software.
But what about facts?
Cynthia Murrell, July 22, 2025
BBC Warns Perplexity That the Beeb Lawyers Are Not Happy
July 10, 2025
The BBC has had enough of Perplexity AI gobbling up and spitting out its content. Sometimes with errors. The news site declares, “BBC Threatened AI Firm with Legal Action over Unauthorised Content Use.” Well, less a threat and more a strongly worded letter. Tech reporter Liv McMahon writes:
“The BBC is threatening to take legal action against an artificial intelligence (AI) firm whose chatbot the corporation says is reproducing BBC content ‘verbatim’ without its permission. The BBC has written to Perplexity, which is based in the US, demanding it immediately stops using BBC content, deletes any it holds, and proposes financial compensation for the material it has already used. … The BBC also cited its research published earlier this year that found four popular AI chatbots – including Perplexity AI – were inaccurately summarising news stories, including some BBC content. Pointing to findings of significant issues with representation of BBC content in some Perplexity AI responses analysed, it said such output fell short of BBC Editorial Guidelines around the provision of impartial and accurate news.”
Perplexity answered the BBC’s charges with an odd reference to a third party:
“In a statement, Perplexity said: ‘The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.’ It did not explain what it believed the relevance of Google was to the BBC’s position, or offer any further comment.”
Huh? Of course, Perplexity is not the only AI firm facing such complaints, nor is the BBC the only publisher complaining. The Professional Publishers Association, which represents over 300 media brands, seconds the BBC’s allegations. In fact, the organization charges, Web-scraping AI platforms constantly violate UK copyrights. Though sites can attempt to block models with the Robots Exclusion Protocol (robots.txt), compliance is voluntary. Perplexity, the BBC claims, has not respected the protocol on its site. Perplexity denies that accusation.
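The “compliance is voluntary” point is worth underlining: robots.txt only answers a question; nothing makes a crawler honor the answer. A minimal sketch with Python’s standard library shows the mechanics. The rules and the “PerplexityBot” user-agent token here are assumptions for illustration, not the BBC’s actual file:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt of the kind a publisher might serve.
# The user-agent token "PerplexityBot" is assumed for illustration.
rules = [
    "User-agent: PerplexityBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
]

parser = RobotFileParser()
parser.parse(rules)

# The protocol only reports what the site requests; honoring it is
# entirely up to the crawler's operator.
print(parser.can_fetch("PerplexityBot", "https://example.com/news/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/news/article"))   # True
```

A scraper that ignores the `False`, or never fetches robots.txt at all, faces no technical barrier, only a legal and reputational one, which is precisely the gap the BBC’s letter tries to close.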
Cynthia Murrell, July 10, 2025
Scattered Spider: Operating Freely Despite OSINT and Specialized Investigative Tools. Why?
July 7, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I don’t want to create a dust-up in the specialized software sector. I noted the July 2, 2025, article “A Group of Young Cybercriminals Poses the Most Imminent Threat of Cyberattacks Right Now.” That story surprised me. First, the Scattered Spider group was documented (more or less) by Trellix, a specialized software and services firm. You can read the article “Scattered Spider: The Modus Operandi” and get a sense of what Trellix reported. The outfit even has a Wikipedia article about its activities.
Last week I was asked a direct question, “Which of the specialized services firms can provide me with specific information about Telegram Groups and Channels, both public and private?” My answer, “None yet.”
Scattered Spider uses Telegram for some messaging functions, and if you want to get a sense of what the outfit does, just fire up your OSINT tools or, better yet, use one of the very expensive specialized services available to government agencies. The young cybercriminals appear to use the alias @ScatteredSpiderERC.
So what? Let’s go back to the question addressed directly to me about firms that have content about Telegram. If we assume the Wikipedia write up is sort of correct, the Scattered Spider entity popped up in 2022 and its activities caught the attention of Trellix. The time between the Trellix post and the Wired story is about two years.
Why hasn’t a specialized services firm provided actionable data to the US government, the Europol investigators, and the dozens of other law enforcement operations around the world? Isn’t it a responsible act to use that access to Telegram data to take down outfits that endanger casinos and other organizations?
Apparently the answer is, “No.”
My hunch is that these specialized software firms talk about having tools to access Telegram. That talk is a heck of a lot easier than finding a reliable way to access private Groups and Channels and trace a handle back to a real live human being possibly operating in the EU or the US. I would suggest that France tried to use OSINT and the often nine-figure systems to crack Telegram. Will other law enforcement groups realize that the specialized software vendors’ tools fall short of the mark and think about a France-type response?
France seems to have made a dent in Telegram. I would hypothesize that the failure of OSINT and the specialized software tool vendors contributed to France’s decision to just arrest Pavel Durov. Mr. Durov is now ensnared in France’s judicial bureaucracy. To make the arrest more complex for Mr. Durov, he is a citizen of France and a handful of other countries, including Russia and the United Arab Emirates.
I mention this lack of Telegram cracking capability for three reasons:
- Telegram is in decline and the company is showing some signs of strain
- The changing attitude toward crypto in the US means that Telegram absolutely has to play in that market or face either erosion or decimation of its seven-year push to create alternative financial services based on TONcoin and Pavel Durov’s partners’ systems
- Telegram is facing a new generation of messaging competitors. Like Apple, Telegram is late to the AI party.
One would think that at a critical point like this, the Shadow Server account would be a slam dunk for any licensee of specialized software advertising, “Telegram content.”
Where are those vendors’ webinars, email blasts, and trade show demonstrations? Where are the testimonials that Company Nuco’s specialized software really did work? “Here’s what we used in court because the specialized vendor’s software generated this data for us” is what I want to hear. I would suggest that Telegram remains a bit of a challenge to specialized software vendors. Will I identify these “big hat, no cattle” outfits? Nope.
Just a reminder: marketing and saying what government professionals want to hear are easier than actually delivering.
Stephen E Arnold, July 7, 2025
Paper Tiger Management
June 24, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
I learned that Apple and Meta (formerly Facebook) found themselves on the wrong side of the law in the EU. On June 19, 2025, I learned that “the European Commission will opt not to impose immediate financial penalties” on the firms. In April 2025, the EU hit Apple with a 500 million euro fine and Meta with a 200 million euro fine for non-compliance with the EU’s Digital Markets Act. Here’s an interesting statement in the cited EuroNews report: the “grace period ends on June 26, 2025.” Well, not any longer.
What’s the rationale?
- Time for more negotiations
- A desire to appear fair
- Paper tiger enforcement.
I am not interested in items one and two. The winner is “paper tiger enforcement.” In my opinion, we have entered an era in management, regulation, and governmental resolve that resembles the GenX approach to lunch: “Hey, let’s have lunch.” The lunch never happens. But the mental process follows these lanes in the bowling alley of life: [a] Be positive, [b] Say something that sounds good, [c] Check the box that says, “Okay, mission accomplished. Move on,” [d] Forget about the lunch thing.
When this approach is applied on a large scale to high-visibility issues, what happens? In my opinion, the credibility of the legal decision and the penalty is diminished. Instead of inhibiting improper actions, those who are on the receiving end of the punishment learn one thing: It doesn’t matter what we do. The regulators don’t follow through. Therefore, let’s just keep on moving down the road.
Another example of this type of management can be found in the return to the office battles. A certain percentage of employees are just going to work from home. The management of the company doesn’t do “anything”. Therefore, management is feckless.
I think we have entered the era of paper tiger enforcement. Make noise, show teeth, growl, and then go back into the den and catch some ZZZZs.
Stephen E Arnold, June 24, 2025
Hey, Creatives, You Are Marginalized. Embrace It
June 20, 2025
Considerations of right and wrong or legality are outdated, apparently. Now, it is about what is practical and expedient. The Times of London reports, “Nick Clegg: Artists’ Demands Over Copyright are Unworkable.” Clegg is both a former British deputy prime minister and former Meta executive. He spoke as the UK’s parliament voted down measures that would have allowed copyright holders to see when their work had been used and by whom (or what). But even that failed initiative falls short of artists’ demands. Writer Lucy Bannerman tells us:
“Leading figures across the creative industries, including Sir Elton John and Sir Paul McCartney, have urged the government not to ‘give our work away’ at the behest of big tech, warning that the plans risk destroying the livelihoods of 2.5 million people who work in the UK’s creative sector. However, Clegg said that their demands to make technology companies ask permission before using copyrighted work were unworkable and ‘implausible’ because AI systems are already training on vast amounts of data. He said: ‘It’s out there already.’”
How convenient. Clegg did say artists should be able to opt out of AI being trained on their works, but insists making that the default option is just too onerous. Naturally, that outweighs the interests of a mere 2.5 million UK creatives. Just how should artists go about tracking down each AI model that might be training on their work and ask them to please not? Clegg does not address that little detail. He does state:
“‘I just don’t know how you go around, asking everyone first. I just don’t see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight. … I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training — I just don’t see. I’m afraid that just collides with the physics of the technology itself.’”
The large technology outfits with the DNA of Silicon Valley have carried the day. So output and be quiet. (And don’t think anyone can use Mickey Mouse art. Different rules apply there.)
Cynthia Murrell, June 20, 2025

