Bending Reality or Creating a Question of Ownership and Responsibility for Errors
September 3, 2025
No AI. Just a dinobaby working the old-fashioned way.
The Google has many busy digital beavers working in its superbly managed organization. The BBC, however, seems to be agitated about what may be a truly insignificant matter: ownership of substantially altered content and responsibility for errors introduced into digital content.
“YouTube Secretly Used AI to Edit People’s Videos. The Results Could Bend Reality” reports:
In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission.
The BBC ignores a couple of issues that struck me as significant if — please, note the “if” — the assertion about YouTube altering content belonging to another entity is accurate. I will address these after some more BBC goodness.
I noted this statement:
the company [Google] has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.
Okay, the Google digital beavers are beavering away.
I also noted this passage attributed to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh:
“You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos…. “People are already distrustful of content that they encounter on social media. What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”
What about those issues I thought about after reading the BBC’s write up:
- If Google changes a video (improvements, enhancements, AI additions, whatever), will Google “own” the resulting content? My thought is that if Google can make more money by using AI to create a “fair use” argument, it will. How long will it take a court (assuming these are still functioning) to figure out whether Google or the individual content creator is the copyright holder?
- When, not if, Google’s AI introduces some type of error, is Google responsible or is it the creator’s problem? My hunch is that Google’s attorneys will argue that it provides a content creator with a free service. See the Terms of Service for YouTube and stop complaining.
- What if a content creator hits a home run and Google’s AI “learns” then outputs content via its assorted AI processes? Will Google be able to deplatform the original creator and just use it as a way to make money without paying the home-run hitting YouTube creator?
Perhaps the BBC would like to consider how these tiny “experiments” can expand until they shift the monetization methods further in favor of the Google. Maybe one reason it does not is that the BBC doesn’t think these types of thoughts. The Google, based on my experience, is indeed having these types of “what if” talks in a sterile room with whiteboards and brilliant Googlers playing with their mobile devices or snacking on goodies.
Stephen E Arnold, September 3, 2025
Google Anti-Competitive? No. No. No!
August 28, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read another anti-Google news release from a mere country. When I encounter statements that Google is anti-competitive, I am flabbergasted. Google is search. Google is the Web. Google is great. Google is America. What’s with countries that don’t get with the program? The agenda has been crystal clear for more than 20 years. Is there a dumb drug in your water or your wheat?
“Google Admits Anti-Competitive Conduct Involving Google Search in Australia” reports that Google has been browbeaten, subjected to psychological pressure, and outrageous claims. Consequently, the wonderful Google has just said, “Okay, you are right. Whatever. How much?”
The write up from a nation state says:
Google has co-operated with the ACCC, admitted liability and agreed to jointly submit to the Court that Google should pay a total penalty of $55 million. It is a matter for the Court to determine whether the penalty and other orders are appropriate.
Happy now?
The write up crows about forcing Google to falter emotionally and make further statements to buttress the alleged anti competitive behavior; to wit:
Google and its US parent company, Google LLC, have also signed a court-enforceable undertaking which the ACCC has accepted to address the ACCC’s broader competition concerns relating to contractual arrangements between Google, Android phone manufacturers and Australian telcos since 2017. Google does not agree with all of the ACCC’s concerns but has acknowledged them and offered the undertaking to address these concerns.
And there is ample evidence that Google abandons any alleged improper behavior. Sure, there have been minor dust ups about accidental WiFi interception in Germany, some trivial issues with regard to the UK outfit Foundem, and the current misunderstanding in America’s judicial system. But in each of these alleged “issues,” Google has instantly and in good faith corrected any problem caused by a contractor, a junior employee, or a smart “robot.” Managing Google is tough even for former McKinsey consultants.
Mistakes happen.
The nation state issues word salad that does little to assuage the mental and financial harm Google has suffered. Here are the painful words which hang like a scimitar over the fair Google’s neck:
The ACCC remains committed to addressing anti-competitive conduct like this, as well as cartel conduct. Competition issues in the digital economy are a current priority area.
Google is America. America is good. Therefore, that which Google does is a benefit to America and anyone who uses its services.
How can countries not figure out who’s on first, what’s on second, and I don’t know’s on third?
Stephen E Arnold, August 28, 2025
Google! Manipulating Search Results? No Kidding
August 15, 2025
The Federal Trade Commission has just determined something the EU has been saying (and litigating) for years. The International Business Times tells us, “Google Manipulated Search Results to Bolster Own Products, FTC Report Finds.” Writer Luke Villapaz reports:
“For Internet searches over the past few years, if you typed ‘Google’ into Google, you probably got the exact result you wanted, but if you were searching for products or services offered by Google’s competitors, chances are those offerings were found further down the page, beneath those offered by Google. That’s what the U.S. Federal Trade Commission disclosed on Thursday, in an extensive 160-page report, which was obtained by the Wall Street Journal as part of a Freedom of Information Act request. FTC staffers found evidence that Google’s algorithm was demoting the search results of competing services while placing its own higher on the search results page, according to excerpts from the report. Among the websites affected: shopping comparison, restaurant review and travel.”
Villapaz notes Yelp has made similar allegations, estimating Google’s manipulation of search results may have captured some 20% of its potential users. So, after catching the big tech firm red handed, what will the FTC do about it? Nothing, apparently. We learn:
“Despite the findings, the FTC staffers tasked with investigating Google did not recommend that the commission issue a formal complaint against the company. However, Google agreed to some changes to its search result practices when the commission ended its investigation in 2013.”
Well OK then. We suppose that will have to suffice.
Cynthia Murrell, August 15, 2025
Lawyers Do What Lawyers Do: Revenues, AI, and Talk
July 22, 2025
A legal news service owned by LexisNexis now requires every article to be auto-checked for appropriateness. So what’s appropriate? Beyond Search does not know. However, here’s a clue. Harvard’s Nieman Lab reports, “Law360 Mandates Reporters Use AI Bias Detection on All Stories.” LexisNexis mandated the policy in May 2025. One of the LexisNexis professionals allegedly asserted that bias surfaced in reporting about the US government. The headline cited by VP Teresa Harmon read: “DOGE officials arrive at SEC with unclear agenda.” Um, okay.
Journalist Andrew Deck shares examples of wording the “bias” detection tool flagged in an article. The piece was a breaking story on a federal judge’s June 12 ruling against the administration’s deployment of the National Guard in LA. We learn:
“Several sentences in the story were flagged as biased, including this one: ‘It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.’ According to the bias indicator, this sentence is ‘framing the action as unprecedented in a way that might subtly critique the administration.’ It was best to give more context to ‘balance the tone.’ Another line was flagged for suggesting Judge Charles Breyer had ‘pushed back’ against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of ‘a monarchist.’ Rather than ‘pushed back,’ the bias indicator suggested a milder word, like ‘disagreed.’”
Having it sound as though anyone challenges the administration is obviously a bridge too far. How dare they? Deck continues:
“Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality. For a June 5 story covering the recent Supreme Court ruling on a workplace discrimination lawsuit, the bias indicator flagged a sentence describing experts who said the ruling came ‘at a key time in U.S. employment law.’ The problem was that this copy, ‘may suggest a perspective.’”
Some Law360 journalists are not happy with their “owners.” Law360’s reporters and editors may not be on the same wavelength as certain LexisNexis / Reed Elsevier executives. In June 2025, unit chair Hailey Konnath sent a petition to management calling for use of the software to be made voluntary. At this time, Beyond Search thinks that “voluntary” has a different meaning in leadership’s lexicon.
Another assertion is that the software mandate appeared without clear guidelines. Was there a dash of surveillance and possible disciplinary action? To add zest to this publishing stew, the Law360 Union is negotiating with management to adopt clearer guidelines around the requirement.
What’s the software engine? Allegedly LexisNexis built the tool with OpenAI’s GPT-4.0 model. Deck notes it is just one of many publishers now outsourcing questions of bias to smart software. (Smart software has been known for its own peculiarities, including hallucination or making stuff up.) For example, in March 2025, the LA Times launched a feature dubbed “Insights” that auto-assesses opinion stories’ political slants and spits out AI-generated counterpoints. What could go wrong? Who knew that the KKK had an upside?
What happens when a large publisher gives Grok a whirl? What if a journalist uses these tools and does not catch a “glue cheese on pizza” moment? Senior managers trained in accounting, MBA get-it-done recipes, and (dare I say it) law may struggle to reconcile cost, profit, fear, and smart software.
But what about facts?
Cynthia Murrell, July 22, 2025
BBC Warns Perplexity That the Beeb Lawyers Are Not Happy
July 10, 2025
The BBC has had enough of Perplexity AI gobbling up and spitting out its content. Sometimes with errors. The news site declares, “BBC Threatened AI Firm with Legal Action over Unauthorised Content Use.” Well, less a threat and more a strongly worded letter. Tech reporter Liv McMahon writes:
“The BBC is threatening to take legal action against an artificial intelligence (AI) firm whose chatbot the corporation says is reproducing BBC content ‘verbatim’ without its permission. The BBC has written to Perplexity, which is based in the US, demanding it immediately stops using BBC content, deletes any it holds, and proposes financial compensation for the material it has already used. … The BBC also cited its research published earlier this year that found four popular AI chatbots – including Perplexity AI – were inaccurately summarising news stories, including some BBC content. Pointing to findings of significant issues with representation of BBC content in some Perplexity AI responses analysed, it said such output fell short of BBC Editorial Guidelines around the provision of impartial and accurate news.”
Perplexity answered the BBC’s charges with an odd reference to a third party:
“In a statement, Perplexity said: ‘The BBC’s claims are just one more part of the overwhelming evidence that the BBC will do anything to preserve Google’s illegal monopoly.’ It did not explain what it believed the relevance of Google was to the BBC’s position, or offer any further comment.”
Huh? Of course, Perplexity is not the only AI firm facing such complaints, nor is the BBC the only publisher complaining. The Professional Publishers Association, which represents over 300 media brands, seconds the BBC’s allegations. In fact, the organization charges, Web-scraping AI platforms constantly violate UK copyrights. Though sites can attempt to block models with the Robots Exclusion Protocol (robots.txt), compliance is voluntary. Perplexity, the BBC claims, has not respected the protocol on its site. Perplexity denies that accusation.
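For readers unfamiliar with the Robots Exclusion Protocol mentioned above, it is nothing more than a plain-text file served at a site’s root. A publisher hoping to wave off Perplexity’s crawler might publish something like this (the “PerplexityBot” token is the user agent Perplexity has publicly documented; the example domain is hypothetical, and, as the article notes, honoring the file is entirely voluntary):

```text
# robots.txt — served at https://example-publisher.com/robots.txt
# Ask Perplexity's crawler to stay out entirely; allow everyone else.
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
```

The BBC’s complaint is precisely that a directive like this carries no enforcement mechanism: a scraper that ignores it faces no technical barrier, only a strongly worded letter.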
Cynthia Murrell, July 10, 2025
Scattered Spider: Operating Freely Despite OSINT and Specialized Investigative Tools. Why?
July 7, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I don’t want to create a dust up in the specialized software sector. I noted the July 2, 2025, article “A Group of Young Cybercriminals Poses the Most Imminent Threat of Cyberattacks Right Now.” That story surprised me. First, the Scattered Spider group was documented (more or less) by Trellix, a specialized software and services firm. You can read the article “Scattered Spider: The Modus Operandi” and get a sense of what Trellix reported. The outfit even has a Wikipedia article about their activities.
Last week I was asked a direct question, “Which of the specialized services firms can provide me with specific information about Telegram Groups and Channels, both public and private?” My answer, “None yet.”
Scattered Spider uses Telegram for some messaging functions, and if you want to get a sense of what the outfit does, just fire up your OSINT tools or, better yet, use one of the very expensive specialized services available to government agencies. The young cybercriminals appear to use the alias @ScatteredSpiderERC.
So what? Let’s go back to the question addressed directly to me about firms that have content about Telegram. If we assume the Wikipedia write up is sort of correct, the Scattered Spider entity popped up in 2022 and its activities caught the attention of Trellix. The time between the Trellix post and the Wired story is about two years.
Why hasn’t a specialized services firm provided actionable data to the US government, the Europol investigators, and the dozens of other law enforcement operations around the world? Isn’t it a responsible act to use that access to Telegram data to take down outfits that endanger casinos and other organizations?
Apparently the answer is, “No.”
My hunch is that these specialized software firms talk about having tools to access Telegram. That talk is a heck of a lot easier than finding a reliable way to access private Groups and Channels and trace a handle back to a real live human being possibly operating in the EU or the US. I would suggest that France tried to use OSINT and the often nine-figure systems to crack Telegram. Will other law enforcement groups realize that the specialized software vendors’ tools fall short of the mark and think about a France-type response?
France seems to have made a dent in Telegram. I would hypothesize that the failure of OSINT and the specialized software tool vendors contributed to France’s decision to just arrest Pavel Durov. Mr. Durov is now ensnared in France’s judicial bureaucracy. To make the arrest more complex for Mr. Durov, he is a citizen of France and a handful of other countries, including Russia and the United Arab Emirates.
I mention this lack of Telegram cracking capability for three reasons:
- Telegram is in decline and the company is showing some signs of strain
- The changing attitude toward crypto in the US means that Telegram absolutely has to play in that market or face either erosion or decimation of its seven-year push to create alternative financial services based on TONcoin and Pavel Durov’s partners’ systems
- Telegram is facing a new generation of messaging competitors. Like Apple, Telegram is late to the AI party.
One would think that at a critical point like this, the Shadow Server account would be a slam dunk for any licensee of specialized software advertising, “Telegram content.”
Where are those vendors’ webinars, email blasts, and trade show demonstrations? Where are the testimonials that Company Nuco’s specialized software really did work? “Here’s what we used in court because the specialized vendor’s software generated this data for us” is what I want to hear. I would suggest that Telegram remains a bit of a challenge to specialized software vendors. Will I identify these “big hat, no cattle” outfits? Nope.
Just a reminder: marketing and saying what government professionals want to hear are easier than delivering.
Stephen E Arnold, July 7, 2025
Paper Tiger Management
June 24, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
I learned that Apple and Meta (formerly Facebook) found themselves on the wrong side of the law in the EU. On June 19, 2025, I learned that “the European Commission will opt not to impose immediate financial penalties” on the firms. In April 2025, the EU hit Apple with a 500 million euro fine and Meta with a 200 million euro fine for non-compliance with the EU’s Digital Markets Act. Here’s an interesting statement in the cited EuroNews report: the “grace period ends on June 26, 2025.” Well, not any longer.
What’s the rationale?
- Time for more negotiations
- A desire to appear fair
- Paper tiger enforcement.
I am not interested in items one and two. The winner is “paper tiger enforcement.” In my opinion, we have entered an era in management, regulation, and governmental resolve defined by the GenX approach to lunch: “Hey, let’s have lunch.” The lunch never happens. But the mental process follows these lanes in the bowling alley of life: [a] Be positive, [b] Say something that sounds good, [c] Check the box that says, “Okay, mission accomplished. Move on,” and [d] Forget about the lunch thing.
When this approach is applied on a large scale to high-visibility issues, what happens? In my opinion, the credibility of the legal decision and the penalty is diminished. Instead of inhibiting improper actions, those who are on the receiving end of the punishment learn one thing: It doesn’t matter what we do. The regulators don’t follow through. Therefore, let’s just keep on moving down the road.
Another example of this type of management can be found in the return-to-the-office battles. A certain percentage of employees are just going to work from home. The management of the company doesn’t do “anything.” Therefore, management is feckless.
I think we have entered the era of paper tiger enforcement. Make noise, show teeth, growl, and then go back into the den and catch some ZZZZs.
Stephen E Arnold, June 24, 2025
Hey, Creatives, You Are Marginalized. Embrace It
June 20, 2025
Considerations of right and wrong or legality are outdated, apparently. Now, it is about what is practical and expedient. The Times of London reports, “Nick Clegg: Artists’ Demands Over Copyright are Unworkable.” Clegg is both a former British deputy prime minister and former Meta executive. He spoke as the UK’s parliament voted down measures that would have allowed copyright holders to see when their work had been used and by whom (or what). But even that failed initiative falls short of artists’ demands. Writer Lucy Bannerman tells us:
“Leading figures across the creative industries, including Sir Elton John and Sir Paul McCartney, have urged the government not to ‘give our work away’ at the behest of big tech, warning that the plans risk destroying the livelihoods of 2.5 million people who work in the UK’s creative sector. However, Clegg said that their demands to make technology companies ask permission before using copyrighted work were unworkable and ‘implausible’ because AI systems are already training on vast amounts of data. He said: ‘It’s out there already.’”
How convenient. Clegg did say artists should be able to opt out of AI being trained on their works, but insists making that the default option is just too onerous. Naturally, that outweighs the interests of a mere 2.5 million UK creatives. Just how should artists go about tracking down each AI model that might be training on their work and ask them to please not? Clegg does not address that little detail. He does state:
“‘I just don’t know how you go around, asking everyone first. I just don’t see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight. … I think expecting the industry, technologically or otherwise, to preemptively ask before they even start training — I just don’t see. I’m afraid that just collides with the physics of the technology itself.’”
The large technology outfits with the DNA of Silicon Valley have carried the day. So output and be quiet. (And don’t think anyone can use Mickey Mouse art. Different rules are okay.)
Cynthia Murrell, June 20, 2025
Will the EU Use an AI Agent to Automate Fines?
June 10, 2025
Just a dinobaby and no AI: How horrible an approach?
Apple, at least to date, has not demonstrated adeptness in lashing smart software to its super secure and really user friendly system. How many times do I have to dismiss “log in to iCloud” and “log in to Facetime”? How frequently will Siri wander in dataspace? How often do I have to dismiss “two factor authentication” for the old iPad I use to read Kindle books? How often? The answer is, “As many times as the European Union will fine the company for failure to follow its rules, guidelines, laws, and special directives.”
I read “EU Ruling: Apple’s App Store Still in Violation of DMA, 30 Days to Comply,” and I really don’t know what Apple has blown off. I vaguely recall that the company ignored a court order in the US. However, the EU is not the US, and the EU can make life quite miserable for the company, its employees residing in the EU, and its contractors with primary offices in member countries. The tools can be trivial: a bit of friction at international airports. The machinery can also be quite Byzantine when financial or certification activities are involved, which can be quite entertaining to an observer.
The write up says:
Following its initial €500 million fine in April, the European Commission is now giving Apple 30 days to fully align its App Store rules with the Digital Markets Act (DMA). If it fails to comply, the EU says it will start imposing “periodic penalty payments” until Apple [follows the rules]…
For me, the operative word is “periodic.” I think it means a phenomenon that repeats at regular intervals of time. Okay, a fine like the most recent €500 million would just recur in heartbeat fashion. One example would be every month. After one year, the fines total €6,000,000,000. What happens if the EU gets frisky after a bottle of French burgundy from a very good year? The fine would be levied for each day in a calendar year and amount to €182,500,000,000 or one hundred eighty-two billion five hundred million euros. Even for a high flier like Apple and its pilot Tim Apple, stakeholders might suggest, “Just obey the law, please.”
I wonder if the EU might consider using Telegram bots to automate the periodic fines. The system developed by France’s favorite citizen Pavel Durov is robust, easily extensible, and essentially free. The “FineApple_bot” could fire on a schedule and message Tim Apple, his Board of Directors, the other “leadership” of Apple, and assorted news outlets. The free service operates quickly enough for most users, but by paying a nominal monthly fee, the FineApple_bot could issue 1,000 instructions a second. But that’s probably overkill unless the EU decides to fine Apple by the minute. In case you were wondering, the annual fine at that cadence would be in the neighborhood of €262,800,000,000,000 (or two hundred sixty-two trillion eight hundred billion euros).
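The arithmetic behind these “periodic penalty payments” is simple enough to sketch in a few lines. The €500 million figure comes from the EU’s April action; the monthly, daily, and per-minute cadences are this dinobaby’s illustrative assumptions, not anything the Commission has proposed:

```python
# Sketch of the "periodic penalty" arithmetic. The 500 million euro fine
# is the EU's April figure; the cadences below are illustrative only.
FINE_EUR = 500_000_000  # most recent EU fine, in euros


def annual_total(fine_eur: int, periods_per_year: int) -> int:
    """Total paid over one year if the fine recurs once per period."""
    return fine_eur * periods_per_year


monthly = annual_total(FINE_EUR, 12)                 # levied every month
daily = annual_total(FINE_EUR, 365)                  # levied every day
per_minute = annual_total(FINE_EUR, 365 * 24 * 60)   # levied every minute

print(f"Monthly cadence:    €{monthly:,}")     # €6,000,000,000
print(f"Daily cadence:      €{daily:,}")       # €182,500,000,000
print(f"Per-minute cadence: €{per_minute:,}")  # €262,800,000,000,000
```

Even the monthly cadence dwarfs most regulatory penalties; the per-minute case is, of course, a thought experiment.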
My hunch is that despite Apple’s cavalier approach to court orders, some less intransigent professional in the core of Apple would find a way to resolve the problem. But I personally quite like the Telegram bot approach.
Stephen E Arnold, June 10, 2025
Lawyers Versus Lawyers: We Need a Spy Versus Spy Cartoon Now
June 5, 2025
Just the dinobaby operating without Copilot or its ilk.
Rupert Murdoch, a media tycoon with some alleged telephone intercept activity, owns a number of “real” news outfits. One of these published “What Is Big Tech Trying to Hide? Amazon, Apple, Google Are All Being Accused of Abusing Legal Privilege in Battles to Strip Away Their Power.” As a dinobaby in rural Kentucky, I have absolutely no idea if the information in the write up is spot on, close enough for horseshoes, or dead solid slam dunk in the information game.
What’s interesting is that the US legal system is getting quite a bit of coverage. Recently a judge in a flyover state found herself in handcuffs. Grousing about biased and unfair judges pops up in social media posts. One of my contacts in Manhattan told me that some judges have been receiving communications implying kinetic action.
Yep, lawyers.
Now the story about US big technology companies using the US legal system in a way that directly benefits these firms reveals “news” that I found mildly amusing. In rural Kentucky, when one gets in trouble or receives a call from law enforcement about a wayward sibling, the first action is to call one of the outstanding legal professionals who advertise in direct mail blasts, run spots on the six pm news, and put memorable telephone numbers on the sides of the mostly empty buses that slowly prowl the pot-holed streets.
The purpose of the legal system is to get paid to represent the client. The client pays money or here in rural Kentucky a working pinball machine was accepted as payment by my former, deceased, and dearly beloved attorney. You get the idea: Pay money, get professional services. The understanding in my dealings with legal professionals is that the lawyers listen to their paying customers, discuss options among themselves or here in rural Kentucky with a horse in their barn, and formulate arguments to present their clients’ sides of cases or matters.
Obviously a person with money wants attorneys who [a] want compensation, [b] want to allow the client to prevail in a legal dust up, and [c] push back but come to accept their clients’ positions.
So now the Wall Street Journal reveals that the US legal system works in a transparent, predictable, and straightforward way.
My view of the legal problems the US technology firms face is that these innovative firms rode the wave their products and services created among millions of people. As a person who has been involved in successful start ups, I know how the surprise, thrill, and opportunities become the drivers of business decisions. Most of the high technology start ups fail. The survivors believe their intelligence, decision making, and charisma made success happen. That’s a cultural characteristic of what I call the Sillycon Valley way. (I know about this first hand because I lived in Berkeley and experienced the carnival ride of a technological winner.)
Without exposure to how technologies like “online” work, it was, and to some extent still is, difficult to comprehend the potential impacts of the shift from media anchored in non-digital ecosystems to the no-there-there hot house of a successful technology. Therefore, neither the “users” of the technology nor the regulators recognized the impact of consumerizing the most successful technologies, which were changing on a daily and sometimes hourly cadence. Even those involved at a fast-growing high technology company had no idea that the fizz of winning would override ethical and moral considerations.
Therefore:
- Not really news
- Standard operating procedure for big technology trials since the MSFT anti-trust matter
- The US ethical fabric plus the invincibility and super hero mindsets map the future of legal dust ups in my opinion.
Net net: Sigh. William James’s quantum energy is definitely not buzzing.
Stephen E Arnold, June 5, 2025