Lawyers Do What Lawyers Do: Revenues, AI, and Talk

July 22, 2025

A legal news service owned by LexisNexis now requires that every article be auto-checked for appropriateness. So what’s appropriate? Beyond Search does not know. However, here’s a clue. Harvard’s Nieman Lab reports, “Law360 Mandates Reporters Use AI Bias Detection on All Stories.” LexisNexis mandated the policy in May 2025. One of the LexisNexis professionals allegedly asserted that bias surfaced in reporting about the US government. The headline cited by VP Teresa Harmon read: “DOGE officials arrive at SEC with unclear agenda.” Um, okay.

Journalist Andrew Deck shares examples of wording the “bias” detection tool flagged in an article. The piece was a breaking story on a federal judge’s June 12 ruling against the administration’s deployment of the National Guard in LA. We learn:

“Several sentences in the story were flagged as biased, including this one: ‘It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.’ According to the bias indicator, this sentence is ‘framing the action as unprecedented in a way that might subtly critique the administration.’ It was best to give more context to ‘balance the tone.’ Another line was flagged for suggesting Judge Charles Breyer had ‘pushed back’ against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of ‘a monarchist.’ Rather than ‘pushed back,’ the bias indicator suggested a milder word, like ‘disagreed.’”

Having it sound as though anyone challenges the administration is obviously a bridge too far. How dare they? Deck continues:

“Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality. For a June 5 story covering the recent Supreme Court ruling on a workplace discrimination lawsuit, the bias indicator flagged a sentence describing experts who said the ruling came ‘at a key time in U.S. employment law.’ The problem was that this copy, ‘may suggest a perspective.’”

Some Law360 journalists are not happy with their “owners.” Law360’s reporters and editors may not be on the same wavelength as certain LexisNexis / Reed Elsevier executives. In June 2025, unit chair Hailey Konnath sent a petition to management calling for use of the software to be made voluntary. At this time, Beyond Search thinks that “voluntary” has a different meaning in leadership’s lexicon.

Another assertion is that the software mandate appeared without clear guidelines. Was there a dash of surveillance and possible disciplinary action? To add zest to this publishing stew, the Law360 Union is negotiating with management to adopt clearer guidelines around the requirement.

What’s the software engine? Allegedly LexisNexis built the tool with OpenAI’s GPT-4.0 model. Deck notes it is just one of many publishers now outsourcing questions of bias to smart software. (Smart software has been known for its own peculiarities, including hallucination, or making stuff up.) For example, in March 2025, the LA Times launched a feature dubbed “Insights” that auto-assesses opinion stories’ political slants and spits out AI-generated counterpoints. What could go wrong? Who knew that the KKK had an upside?
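The mechanics Deck describes, an LLM reading news copy and flagging sentences, can be sketched in a few lines. This is a hypothetical wiring, not Law360’s actual prompt or pipeline: the prompt wording, function name, and model choice below are my assumptions.

```python
# Hypothetical sketch of an LLM-based "bias indicator" front end.
# The system prompt and message structure are assumptions, not Law360's tool.

BIAS_PROMPT = (
    "You review news copy for loaded or partial language. "
    "For each sentence, answer 'flag' or 'ok' and give a one-line reason."
)

def build_messages(article_text: str) -> list[dict]:
    # This list is what would be handed to a chat-completion call
    # (e.g., client.chat.completions.create(model=..., messages=...)).
    return [
        {"role": "system", "content": BIAS_PROMPT},
        {"role": "user", "content": article_text},
    ]

msgs = build_messages("The judge pushed back against the federal government.")
print(msgs[0]["role"], len(msgs))  # → system 2
```

The interesting editorial question is not the wiring, which is trivial, but who writes the system prompt and what counts as “bias” in it.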

What happens when a large publisher gives Grok a whirl? What if a journalist uses these tools and does not catch a “glue cheese on pizza” moment? Senior managers trained in accounting, MBA get-it-done recipes, and (dare I say it) law may struggle to reconcile cost, profit, fear, and smart software.

But what about facts?

Cynthia Murrell, July 22, 2025

BBC Warns Perplexity That the Beeb Lawyers Are Not Happy

July 10, 2025

The BBC has had enough of Perplexity AI gobbling up and spitting out its content. Sometimes with errors. The news site declares, “BBC Threatened AI Firm with Legal Action over Unauthorised Content Use.” Well, less a threat and more a strongly worded letter. Tech reporter Liv McMahon writes:

“The BBC is threatening to take legal action against an artificial intelligence (AI) firm whose chatbot the corporation says is reproducing BBC content ‘verbatim’ without its permission. The BBC has written to Perplexity, which is based in the US, demanding it immediately stops using BBC content, deletes any it holds, and proposes financial compensation for the material it has already used. … The BBC also cited its research published earlier this year that found four popular AI chatbots – including Perplexity AI – were inaccurately summarising news stories, including some BBC content. Pointing to findings of significant issues with representation of BBC content in some Perplexity AI responses analysed, it said such output fell short of BBC Editorial Guidelines around the provision of impartial and accurate news.”

Perplexity answered the BBC’s charges with an odd reference to a third party:

“In a statement, Perplexity said: ‘The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.’ It did not explain what it believed the relevance of Google was to the BBC’s position, or offer any further comment.”

Huh? Of course, Perplexity is not the only AI firm facing such complaints, nor is the BBC the only publisher complaining. The Professional Publishers Association, which represents over 300 media brands, seconds the BBC’s allegations. In fact, the organization charges, Web-scraping AI platforms constantly violate UK copyrights. Though sites can attempt to block models with the Robots Exclusion Protocol (robots.txt), compliance is voluntary. Perplexity, the BBC claims, has not respected the protocol on its site. Perplexity denies that accusation.
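The Robots Exclusion Protocol mentioned above is just a plain-text file a site publishes; nothing enforces it. A minimal sketch, using Python’s standard-library parser and a hypothetical robots.txt (the bot and site names are illustrative, not the BBC’s actual file):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt such as a publisher might serve.
robots_txt = """
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# The file only declares the publisher's wishes; compliance is voluntary.
print(parser.can_fetch("PerplexityBot", "https://example.com/news/story"))  # → False
print(parser.can_fetch("SomeOtherBot", "https://example.com/news/story"))   # → True
```

A crawler that ignores the `False` answer faces no technical barrier, which is precisely the publishers’ complaint.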

Cynthia Murrell, July 10, 2025

Scattered Spider: Operating Freely Despite OSINT and Specialized Investigative Tools. Why?

July 7, 2025

No smart software to write this essay. This dinobaby is somewhat old fashioned.

I don’t want to create a dust-up in the specialized software sector. I noted the July 2, 2025, article “A Group of Young Cybercriminals Poses the Most Imminent Threat of Cyberattacks Right Now.” That story surprised me. First, the Scattered Spider group was documented (more or less) by Trellix, a specialized software and services firm. You can read the article “Scattered Spider: The Modus Operandi” and get a sense of what Trellix reported. The outfit even has a Wikipedia article about its activities.

Last week I was asked a direct question, “Which of the specialized services firms can provide me with specific information about Telegram Groups and Channels, both public and private?” My answer, “None yet.”

Scattered Spider uses Telegram for some messaging functions. If you want to get a sense of what the outfit does, just fire up your OSINT tools or, better yet, use one of the very expensive specialized services available to government agencies. The young cybercriminals appear to use the alias “@ScatteredSpiderERC.”

So what? Let’s go back to the question addressed directly to me about firms that have content about Telegram. If we assume the Wikipedia write up is sort of correct, the Scattered Spider entity popped up in 2022 and its activities caught the attention of Trellix. The time between the Trellix post and the Wired story is about two years.

Why hasn’t a specialized services firm provided actionable data to the US government, the Europol investigators, and the dozens of other law enforcement operations around the world? Isn’t it a responsible act to use that access to Telegram data to take down outfits that endanger casinos and other organizations?

Apparently the answer is, “No.”

My hunch is that these specialized software firms talk about having tools to access Telegram. That talk is a heck of a lot easier than finding a reliable way to access private Groups and Channels or to trace a handle back to a real live human being possibly operating in the EU or the US. I would suggest that France tried to use OSINT and the often nine-figure systems to crack Telegram. Will other law enforcement groups realize that the specialized software vendors’ tools fall short of the mark and think about a France-type response?

France seems to have made a dent in Telegram. I would hypothesize that the failure of OSINT and the specialized software tool vendors contributed to France’s decision to just arrest Pavel Durov. Mr. Durov is now ensnared in France’s judicial bureaucracy. To make the arrest more complex for Mr. Durov, he is a citizen of France and a handful of other countries, including Russia and the United Arab Emirates.

I mention this lack of Telegram cracking capability for three reasons:

  1. Telegram is in decline and the company is showing some signs of strain
  2. The changing attitude toward crypto in the US means that Telegram absolutely has to play in that market or face either erosion or decimation of its seven-year push to create alternative financial services based on TONcoin and Pavel Durov’s partners’ systems
  3. Telegram is facing a new generation of messaging competitors. Like Apple, Telegram is late to the AI party.

One would think that at a critical point like this, the Shadow Server account would be a slam dunk for any licensee of specialized software advertising, “Telegram content.”

Where are those vendors’ webinars, email blasts, and trade show demonstrations? Where are the testimonials that Company Nuco’s specialized software really did work? “Here’s what we used in court because the specialized vendor’s software generated this data for us” is what I want to hear. I would suggest that Telegram remains a bit of a challenge to specialized software vendors. Will I identify these “big hat, no cattle” outfits? Nope.

Just a reminder: marketing and saying what government professionals want to hear are easier than actually delivering.

Stephen E Arnold, July 2025

Paper Tiger Management

June 24, 2025

An opinion essay written by a dinobaby who did not rely on smart software.

I learned that Apple and Meta (formerly Facebook) found themselves on the wrong side of the law in the EU. On June 19, 2025, I learned that “the European Commission will opt not to impose immediate financial penalties” on the firms. In April 2025, the EU hit Apple with a 500 million euro fine and Meta with a 200 million euro fine for non-compliance with the EU’s Digital Markets Act. Here’s an interesting statement in the cited EuroNews report: the “grace period ends on June 26, 2025.” Well, not any longer.

What’s the rationale?

  1. Time for more negotiations
  2. A desire to appear fair
  3. Paper tiger enforcement.

I am not interested in items one and two. The winner is “paper tiger enforcement.” In my opinion, we have entered an era in management, regulation, and governmental resolve defined by the GenX approach to lunch: “Hey, let’s have lunch.” The lunch never happens. But the mental process follows these lanes in the bowling alley of life: [a] Be positive, [b] Say something that sounds good, [c] Check the box that says, “Okay, mission accomplished. Move on,” [d] Forget about the lunch thing.

When this approach is applied to large-scale, high-visibility issues, what happens? In my opinion, the credibility of the legal decision and the penalty is diminished. Instead of inhibiting improper actions, those on the receiving end of the punishment learn one thing: It doesn’t matter what we do. The regulators don’t follow through. Therefore, let’s just keep on moving down the road.

Another example of this type of management can be found in the return-to-the-office battles. A certain percentage of employees are just going to work from home. The management of the company doesn’t do “anything.” Therefore, management is feckless.

I think we have entered the era of paper tiger enforcement. Make noise, show teeth, growl, and then go back into the den and catch some ZZZZs.

Stephen E Arnold, June 24, 2025

Hey, Creatives, You Are Marginalized. Embrace It

June 20, 2025

Considerations of right and wrong or legality are outdated, apparently. Now, it is about what is practical and expedient. The Times of London reports, “Nick Clegg: Artists’ Demands Over Copyright are Unworkable.” Clegg is both a former British deputy prime minister and former Meta executive. He spoke as the UK’s parliament voted down measures that would have allowed copyright holders to see when their work had been used and by whom (or what). But even that failed initiative falls short of artists’ demands. Writer Lucy Bannerman tells us:

“Leading figures across the creative industries, including Sir Elton John and Sir Paul McCartney, have urged the government not to ‘give our work away’ at the behest of big tech, warning that the plans risk destroying the livelihoods of 2.5 million people who work in the UK’s creative sector. However, Clegg said that their demands to make technology companies ask permission before using copyrighted work were unworkable and ‘implausible’ because AI systems are already training on vast amounts of data. He said: ‘It’s out there already.’”

How convenient. Clegg did say artists should be able to opt out of AI being trained on their works, but insists making that the default option is just too onerous. Naturally, that outweighs the interests of a mere 2.5 million UK creatives. Just how should artists go about tracking down each AI model that might be training on their work and ask them to please not? Clegg does not address that little detail. He does state:

“‘I just don’t know how you go around, asking everyone first. I just don’t see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight. … I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training — I just don’t see. I’m afraid that just collides with the physics of the technology itself.’”

The large technology outfits with the DNA of Silicon Valley have carried the day. So, creatives, output and be quiet. (And don’t think anyone can use Mickey Mouse art. Different rules are okay.)

Cynthia Murrell, June 20, 2025

Will the EU Use an AI Agent to Automate Fines?

June 10, 2025

Just a dinobaby and no AI: How horrible an approach?

Apple, at least to date, has not demonstrated adeptness in lashing smart software to its super secure and really user friendly system. How many times do I have to dismiss “log in to iCloud” and “log in to Facetime”? How frequently will Siri wander in dataspace? How often do I have to dismiss “two factor authentication” for the old iPad I use to read Kindle books? How often? The answer is, “As many times as the European Union will fine the company for failure to follow its rules, guidelines, laws, and special directives.”

I read “EU Ruling: Apple’s App Store Still in Violation of DMA, 30 Days to Comply,” and I really don’t know what Apple has blown off. I vaguely recall that the company ignored a court order in the US. However, the EU is not the US, and the EU can make life quite miserable for the company, its employees residing in the EU, and its contractors with primary offices in member countries. The tools can be trivial: a bit of friction at international airports. The machinery can also be quite Byzantine: financial or certification proceedings can be quite entertaining to an observer.

The write up says:

Following its initial €500 million fine in April, the European Commission is now giving Apple 30 days to fully align its App Store rules with the Digital Markets Act (DMA). If it fails to comply, the EU says it will start imposing “periodic penalty payments” until Apple [follows the rules]…

For me, the operative word is “periodic.” I think it means a phenomenon that repeats at regular intervals of time. Okay, a fine like the most recent €500 million would just recur in heartbeat fashion. One example would be every month. After one year, the fines total €6,000,000,000. What happens if the EU gets frisky after a bottle of French burgundy from a very good year? The fine could be levied for each day in a calendar year and amount to €182,500,000,000, or one hundred eighty-two and a half billion euros. Even for a high flier like Apple and its pilot Tim Apple, stakeholders might suggest, “Just obey the law, please.”
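The back-of-the-envelope arithmetic can be checked directly, assuming a flat €500 million per period (the April 2025 fine reused as the recurring amount, which is my assumption, not the Commission’s stated schedule):

```python
# Totals if a flat EUR 500 million fine recurred on hypothetical schedules.
FINE_EUR = 500_000_000

schedules = {
    "monthly": 12,   # periods per year
    "daily": 365,
}

totals = {label: FINE_EUR * n for label, n in schedules.items()}
for label, total in totals.items():
    print(f"{label:>8}: EUR {total:,}")
# → monthly: EUR 6,000,000,000
# →   daily: EUR 182,500,000,000
```

Monthly recurrence yields €6 billion a year; daily recurrence yields €182.5 billion.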

I wonder if the EU might consider using Telegram bots to automate the periodic fines. The system developed by France’s favorite citizen Pavel Durov is robust, easily extensible, and essentially free. The “FineApple_bot” could fire on a schedule and message Tim Apple, his Board of Directors, the other “leadership” of Apple, and assorted news outlets. The free service operates quickly enough for most users, but by paying a nominal monthly fee, the FineApple_bot could issue 1,000 instructions a second. But that’s probably overkill unless the EU decides to fine Apple by the minute. In case you were wondering, the annual fine would then be in the neighborhood of €262,800,000,000,000 (or two hundred sixty-two trillion eight hundred billion euros).
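For the curious, the plumbing is genuinely simple. The `sendMessage` method is part of the public Telegram Bot API; the bot name, token, and channel below are hypothetical, and this sketch only assembles the payload rather than sending it:

```python
import json

# Hypothetical credentials; a real deployment would POST the payload to
# https://api.telegram.org/bot<token>/sendMessage (public Telegram Bot API).
BOT_TOKEN = "0000000000:PLACEHOLDER"
CHAT_ID = "@EUFineWatch"  # hypothetical channel for the notices

def build_fine_notice(day: int, fine_eur: int) -> dict:
    """Assemble the JSON payload a scheduled FineApple_bot run would send."""
    return {
        "chat_id": CHAT_ID,
        "text": f"Day {day}: periodic penalty of EUR {fine_eur:,} assessed.",
    }

payload = build_fine_notice(1, 500_000_000)
print(json.dumps(payload))
```

A cron job or the bot platform’s own scheduling would handle the “periodic” part.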

My hunch is that despite Apple’s cavalier approach to court orders, some less intransigent professional in the core of Apple would find a way to resolve the problem. But I personally quite like the Telegram bot approach.

Stephen E Arnold, June 10, 2025

Lawyers Versus Lawyers: We Need a Spy Versus Spy Cartoon Now

June 5, 2025

Just the dinobaby operating without Copilot or its ilk.

Rupert Murdoch, a media tycoon with some alleged telephone intercept activity, owns a number of “real” news outfits. One of these published “What Is Big Tech Trying to Hide? Amazon, Apple, Google Are All Being Accused of Abusing Legal Privilege in Battles to Strip Away Their Power.” As a dinobaby in rural Kentucky, I have absolutely no idea if the information in the write up is spot on, close enough for horseshoes, or dead solid slam dunk in the information game.

What’s interesting is that the US legal system is getting quite a bit of coverage. Recently a judge in a flyover state found herself in handcuffs. Grousing about biased and unfair judges pops up in social media posts. One of my contacts in Manhattan told me that some judges have been receiving communications implying kinetic action.

Yep, lawyers.

Now the story about US big technology companies using the US legal system in a way that directly benefits these firms reveals “news” that I found mildly amusing. In rural Kentucky, when one gets in trouble or receives a call from law enforcement about a wayward sibling, the first action is to call one of the outstanding legal professionals who advertise in direct mail blasts, on the six pm news, and with memorable telephone numbers on the sides of the mostly empty buses that slowly prowl the pot-holed streets.

The purpose of the legal system is to get paid to represent the client. The client pays money or here in rural Kentucky a working pinball machine was accepted as payment by my former, deceased, and dearly beloved attorney. You get the idea: Pay money, get professional services. The understanding in my dealings with legal professionals is that the lawyers listen to their paying customers, discuss options among themselves or here in rural Kentucky with a horse in their barn, and formulate arguments to present their clients’ sides of cases or matters.

Obviously a person with money wants attorneys who [a] want compensation, [b] want to allow the client to prevail in a legal dust up, and [c] push back but come to accept their clients’ positions.

So now the Wall Street Journal reveals that the US legal system works in a transparent, predictable, and straightforward way.

My view of the legal problems the US technology firms face is that these innovative firms rode the wave their products and services created among millions of people. As a person who has been involved in successful start ups, I know how the surprise, thrill, and opportunities become the drivers of business decisions. Most of the high technology start ups fail. The survivors believe their intelligence, decision making, and charisma made success happen. That’s a cultural characteristic of what I call the Sillycon Valley way. (I know about this first hand because I lived in Berkeley and experienced the carnival ride of a technological winner.)

Without exposure to how technologies like “online” work, it was, and to some extent still is, difficult to comprehend the potential impacts of the shift from media anchored in non-digital ecosystems to the there-is-no-there-there hot house of a successful technology. Therefore, neither the “users” of the technology nor the regulators recognized the impact of consumerizing the most successful technologies or understood what was changing on a daily and sometimes hourly cadence. Even those involved at a fast-growing high technology company had no idea that the fizz of winning would override ethical and moral considerations.

Therefore:

  1. Not really news
  2. Standard operating procedure for big technology trials since the MSFT anti-trust matter
  3. The US ethical fabric plus the invincibility and super hero mindsets map the future of legal dust-ups in my opinion.

Net net: Sigh. William James’s quantum energy is definitely not buzzing.

Stephen E Arnold, June 5, 2025

India: Fair Use Can Squeeze YouTubers

June 5, 2025

Asian News International (ANI) seems to be leveraging the vagueness of India’s fair-use definition and YouTube’s draconian policies to hold content creators over a barrel. The Reporters’ Collective declares, “ANI Finds Business Niche in Copyright Claims Against YouTubers.” Writer Ayushi Kar recounts the story of Sumit, a content creator ANI accused of copyright infringement. The news agency reported more than three violations at once, a move that triggered an automatic takedown of those videos. Worse, it gave Sumit just a week to make good with ANI or lose his channel for good. To save his livelihood, he forked over between 1,500,000 and 1,800,000 rupees (about $17,600 – $21,100) for a one-year access license. We learn:

“Sumit isn’t the lone guy facing the aggressive copyright claims of ANI, which has adopted a new strategy to punitively leverage YouTube’s copyright policies in India to generate revenue. Using the death clause in YouTube policy and India’s vague provisions for fair use of copyrighted material, ANI is effectively forcing YouTube creators to buy expensive year-long licenses. The agency’s approach is to negotiate pricey licensing deals with YouTubers, including several who are strong critics of the BJP, even as YouTube holds a sword over the content producer’s channel for multiple claims of copyright violation.”

See the write-up for more examples of content creators who went through an ANI shake down. Kar continues:

“While ANI might be following a business it understands to be legal and fair, the episode has raised larger concern about copyright laws and the fair use rights in India by content producers who are worried about being squeezed out of their livelihoods – sometimes wiping out years of labor to build a community – between YouTube’s policies and copyright owners willingness to play hardball.”

What a cute tactic. Will it come to the US? Is it already here? YouTubers, feel free to comment. There is something special about India’s laws, though, that might make this scheme especially profitable there. Kar tells us:

“India’s Copyright Act 1957 allows … use of copyrighted material without the copyright owner’s permission for purposes such as criticism, comment, news, reporting and many more. In practice, there is a severe lack of specificity in law and regulations about how fair use doctrine is to be practiced.”

That means the courts decide what fair use means case by case. Bringing one’s case to court is, of course, expensive and time consuming, and victory is far from assured. It is no wonder content creators feel they must pay up. It would be a shame if something happened to that channel.

Cynthia Murrell, June 5, 2025

Telegram, a Stylish French Dog Collar, and Mom Saying, “Pavel Clean Up Your Room!”

June 4, 2025

Just a dinobaby operating without AI. What do you expect? A free newsletter and an old geezer. Do those statements sound like dorky detritus?

Pavel Durov has a problem with France. The country’s judiciary let him go back home after an eight-month staycation. However, Mr. Durov is not the type of person to enjoy having a ring in his nose and a long strand of red tape connecting him to his new mom back in Paris. Pavel wants to live an Airbnb life, but he has to find a way to get his French mom to say, “Okay, Pavel, you can go out with your friends, but you have to be home by 9 pm Paris time.” If he does not comply, Mr. Durov is learning that the French government can make life miserable: There’s the monitoring. There’s the red tape. There’s the reminder that France has some wonderful prison facilities in France, North Africa, and Guiana (like where’s that, Pavel?). But worst of all, Mr. Durov does not have his beloved freedom.

He learned this when he blew off a French request to block certain content from Telegram into Romania. For details, click here. What happened?

The first reminder was a jerk on his stylish French collar when the 40-year-old was told, “Pavel, you cannot go to the US.” The write up “France Denies Telegram Founder Pavel Durov’s Request to Visit US” reported on May 22, 2025:

France has denied a request by Telegram founder Pavel Durov to travel to the United States for talks with investment funds, prosecutors…

For an advocate of “freedom,” Mr. Durov has just been told, “Pavel, go to your room.”

Mr. Durov, a young-at-heart 40-year-old with oodles of loving children, wanted to travel from Dubai to Oslo, Norway, to attend a conference about freedom. The French, those often viewed as people who certify chickens for quality, told Mr. Durov, “Pavel, you are grounded. Go back to your room and clean it up.”

Then came another sharp pull, in public, causing the digital poodle to yelp. The Human Rights Foundation’s PR team published “French Courts Block Telegram Founder Pavel from Attending Oslo Freedom Forum.” That write up explained:

A French court has denied Telegram founder Pavel Durov’s request to travel to Norway in order to speak at the Oslo Freedom Forum on Tuesday, May 27. Durov had been invited to speak at the global gathering of activists, hosted annually by the Human Rights Foundation (HRF), on the topic of free speech, surveillance, and digital rights.

I interpret this decision by the French judiciary as making clear to Pavel Durov that he is not “free” and that he may be at risk of being sent to a summer camp in one of France’s salubrious facilities for those who don’t like to follow the rules. He is a French citizen, and I assume that he is learning that being allowed to leave France is not a get-out-of-jail-free card. I would suggest that not even his brother, the fellow with two PhDs, nor his colleagues on his “core” engineering team can come up with a fix for what I call the “French problem.” My hunch is that these very intelligent people have considered that the French might expand their scope of interest to include the legal entities for Telegram and the “gee, it is not part of our operation” TON Foundation, its executives, and their ancillary business interests. The French did produce some nifty math about probabilities, and I have a hunch that the probability of the French judiciary fuzzifying the boundary between Pavel Durov and these other individuals is creeping up… quickly.

Pavel Durov is on a bureaucratic leash. The French judiciary have jerked Mr. Durov’s neck twice and quite publicly.

The question becomes, “What’s Mr. Durov going to do?” The fellow has a French collar with a leash connecting him to the savvy French judiciary.

Allow this dinobaby to offer several observations:

  1. He will talk with his lawyer Kaminski and learn that France’s legal and police system does indeed have an interest in high-quality chickens as well as a prime specimen like Pavel Durov. In short, that fowl will be watched, probed, and groomed. Mr. Durov is experiencing how those ducks, geese, and chickens on French farms live before the creatures find themselves in a pot after plucking, and plucking forcefully.
  2. Mr. Durov will continue to tidy Telegram to the standards of cleanliness enforced at the French Foreign Legion training headquarters. He is making progress on the money laundering front. He is cleaning up pointers to adult and other interesting Telegram content which has had 13 years to plant roots and support a veritable forest of allegedly illegal products and services. More effort is likely to be needed. Did I mention that dog crates are used to punish trainees who don’t get the bed making and ironing up to snuff? The crates are located in front of the drill field to make it easy for fellow trainees to see who has created the extra duties for the squad. It can be warm near Marseille for dog crates exposed to the elements.
  3. The competition is beginning to become visible. The charming Mark Zuckerberg, the delightful Elon Musk, and the life-of-the-AI-party Sam Altman are accelerating their efforts to release an everything application with some Telegram “features.” One thing is certain: Pavel Durov does not have the scope or “freedom” of operation he had before his fateful trip to Paris in August 2024. Innovation at Telegram seems to be confined to “gifts” and STARS. Exciting stuff as TONcoin disappoints.

Net net: Pavel Durov faces some headwinds, and these are not the gusts blasting up and down the narrow streets of Dubai, the US, or Norway. He has a big wind machine planted in front of his handsome visage and the blades are not rotating at full speed. Will France crank up the RPMs, Pavel? Do goose livers swell under certain conditions? Yep, a lot.

Stephen E Arnold, June 4, 2025

Coincidence or No Big Deal for the Google: User Data and Suicide

May 27, 2025

Just the dinobaby operating without Copilot or its ilk.

I have ignored most of the carnival noise about smart software. Google continues its bug spray approach to thwarting the equally publicity-crazed Microsoft and OpenAI. (Is Copilot useful? Is Sam Altman the heir to Steve Jobs?)

Two stories caught my attention. The first is almost routine. The Verge published “Google Has a Big AI Advantage: It Already Knows Everything about You,” a nod to the Chrome Hoover, long-lived cookies, and the permission-hungry Android play. Sigh. Another categorical affirmative: “Everything.” Is that accurate? Is it “everything,” or is it just a scare tactic to draw readers? Old news.

But the sub title is more interesting; to wit:

Google is slowly giving Gemini more and more access to user data to ‘personalize’ your responses.

Slowly. Really? More access? More than what? And “your responses?” Whose?

The write up says:

As an example, Google says if you’re chatting with a friend about road trip advice, Gemini can search through your emails and files, allowing it to find hotel reservations and an itinerary you put together. It can then suggest a response that incorporates relevant information. That, Google CEO Sundar Pichai said during the keynote, may even help you “be a better friend.” It seems Google plans on bringing personal context outside Gemini, too, as its blog post announcing the feature says, “You can imagine how helpful personal context will be across Search, Gemini and more.” Google said in March that it will eventually let users connect their YouTube history and Photos library to Gemini, too.

No kidding. How does one know that Google has not been processing personal data for decades? There’s a patent with a cute machine-generated profile of Michael Jackson. This report, generated by Google, appeared in the 2007 patent application US2007/0198481:


The machine-generated bubble gum card about Michael Jackson, including last known address, nicknames, and other details. See US2007/0198481 A1, “Automatic Object Reference Identification and Linking in a Browsable Fact Repository.”

The inventors Andrew W. Hogue (Ho Ho Kus, NJ) and Jonathan T. Betz (Summit, NJ) appear on the “final” version of their invention. The name of the patent was the same, but there was an important difference between the patent application and the actual patent. The machine-generated personal profile was replaced with a much less informative screen capture; to wit:


From Google Patent 7774328, granted in 2010 as “Browsable Fact Repository.”

Google wasn’t done “inventing” enhancements to its profile engine capable of outputting bubble gum cards for either authorized users or Google systems. Check out Extension US9760570 B2, “Finding and Disambiguating References to Entities on Web Pages.” The idea is that items like “aliases” and similarly opaque factoids can be made concrete for linking to cross-correlated content objects.

Thus, the “everything” assertion, while a categorical affirmative, reveals a certain innocence on the part of the Verge “real news” story.

Now what about the information in “Google, AI Firm Must Face Lawsuit Filed by a Mother over Suicide of Son, US Court Says.” The write up is from the trusted outfit Thomson Reuters (I know it is trusted because it says so on the Web page). The write up dated May 21, 2025, reports:

The lawsuit is one of the first in the U.S. against an AI company for allegedly failing to protect children from psychological harms. It alleges that the teenager killed himself after becoming obsessed with an AI-powered chatbot. A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm." Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage Character.AI’s app or any component part of it."

Absent from the Reuters report and the allegedly accurate Google and semi-Google statements is any indication that the company takes steps to protect users, especially children. With the profiling and bubble gum card technology Google invented, does it seem prudent for Google to identify a child, cross-correlate the child’s queries with the bubble gum card, and dynamically [a] flag an issue, [b] alert a parent or guardian, [c] use the “everything” information to present suggestions for mental health support? I want to point out that if one searches for words on a stop list, the Dark Web search engine Ahmia.fi presents a page providing links to Clear Web resources to assist the person with counseling. Imagine: a Dark Web search engine performing a function specifically intended to help users.
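The intervention pattern attributed to Ahmia.fi is not complicated. A minimal sketch, assuming a hand-picked stop list and a stand-in search backend (the terms, the help text, and the function names here are illustrative, not any vendor’s actual code):

```python
# Hypothetical sketch of a stop-list intercept: if a query contains a
# flagged term, show help resources instead of search results.

SELF_HARM_STOP_LIST = {"suicide", "self-harm", "kill myself"}  # illustrative terms

HELP_PAGE = "If you are struggling, help is available: see local counseling resources."


def handle_query(query: str) -> str:
    """Return a help page for flagged queries; otherwise run the search."""
    normalized = query.lower()
    if any(term in normalized for term in SELF_HARM_STOP_LIST):
        return HELP_PAGE  # intervene before any results are shown
    return f"results for: {query}"  # stand-in for the real search backend
```

A query like “cheap hotels in Oslo” passes through to the backend; a query matching the stop list gets the help page instead. The point of the sketch is how little machinery the intervention requires compared with the profiling apparatus described above.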

Google, is Ahmia.fi more sophisticated than you and your quasi-Googles? Are the statements made about Google’s AI capabilities in line with reality? My hunch is that assertions like Castaneda’s claim that Google and Character.AI are “entirely separate” and that Google “did not create, design, or manage Character.AI’s app or any component part of it,” made after the presentation of evidence, were not compelling. (Compelling is a popular word in some AI-generated content.) Yeah, compelling: a kid’s death. Inventions by Googlers specifically designed to profile a user, disambiguate disparate content objects, and make available a bubble gum card. Yeah, compelling.

I am optimistic that Google’s knowing “everything,” the death of a child, a Dark Web search engine that can intervene, and the semi-Google lawyers add up to comfort and support.

Yeah, compelling. Google’s been chugging along in the profiling vineyard since 2007. Let’s see: that works out to longer than the 14-year-old had been alive.

Compelling? Nah. Googley.

Stephen E Arnold, May 27, 2025
