Google Trial: An Interesting Comment Amid the Yada Yada

May 8, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Google’s Antitrust Trial Spotlights Search Ads on the Final Day of Closing Arguments.” After decades of just collecting Google tchotchkes, US regulators appear to be making some progress. It is very difficult to determine if a company is a monopoly. It was much easier to count barrels of oil, billets of steel, and railroad cars than digital nothingness, wasn’t it?


A giant whose name is Googzilla has most of the toys. He is reminding those who want the toys about his true nature. I believe Googzilla. Do you? Thanks, Microsoft Copilot. Good enough.

One of the many reports of the Google monopoly legal activity finally provided me with a quite useful, clear statement. Here’s the passage which caught my eye:

a coalition of state attorneys said Google’s search advertising business has trapped advertisers into its ecosystem while higher ad prices haven’t led to higher returns.

I want to consider this assertion. Please, read the original write up on Digiday to get the “real” news report. I am not a journalist; I am a dinobaby, and I have some thoughts to capture.

First, the Google has been doing Googley things for about a quarter of a century. A bit longer if one counts the Backrub service in an estimable Stanford computer building. From my point of view, Google has been doing “clever.” That means apologizing later, not asking permission. That means seeking inspiration from others: the IBM Clever system, the Yahoo-Overture advertising system, using “free” to gain access to certain content like books, and pretty much doing what it wants. After figuring out that it had to make money, Google “innovated” with advertising, paid a fine, and acquired people and technology to match ads to queries. Yep, Oingo (Applied Semantics) helped out. The current antitrust matter will be winding down in 2024 and probably drag through 2025. Appeals for a company with lots of money can go slowly. Meanwhile, Google’s activity can go faster.

Second, the data about the Google monopoly are not difficult to identify. There is the state of the search market. Eric Schmidt said years ago that Qwant kept him awake at night. I am not sure that was a credible statement. If Mr. Schmidt were awake at night, it might be the result of thinking about serious matters like money. His money. When Google became widely available, there were other Web search engines; I posted a list on my Web site with a couple of hundred entries. Now the hot new search engines just recycle Bing and open source indexes, tossing in a handful of “special” sources like my mother jazzing up potato salad. There is Google search. And because of the reach of Google search, Google can sell ads.

Third, the ads are not just for search. Any click on a Google service is a click. Due to cute tricks like Chrome and ubiquitous services like maps, Google can slap ads many places. Other outfits cannot unless they are Google “partners.” Those partners are Google’s sales force. SEO customers become buyers of Google ads because that’s the most effective way to get traffic. Does a small business owner expect a Web site to be “found” without Google Local and maybe some advertising juice? Nope. No one but OSINT experts can get Google search to deliver useful results; Google Dorks exist for a reason. Google search quality drives ad sales. And YouTube ads? Lots of ads. Want an alternative? Good luck with Facebook, TikTok, ok.ru, or some other service.

Where’s the trial now? Google has asserted that it does not understand its own technology. The judge says he is circling down the drain of the marketing funnel. But the US government depends on the Google. That may be a factor or just the shadow of Googzilla.

Stephen E Arnold, May 8, 2024

Security Conflation: A Semantic Slippery Slope to Persistent Problems

May 2, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

My view is that secrets can be useful. When discussing who has what secret, I think it is important to understand who the players / actors are. When I explain how to perform a task to a contractor in the UK, my transfer of information is a secret; that is, I don’t want others to know the trick to solve a problem that can take others hours or days to resolve. The context is an individual knows something and transfers that specific information so that it does not become a TikTok video. Other secrets are used by bad actors. Some are used by government officials. Commercial enterprises — for example, pharmaceutical companies wrestling with an embarrassing finding from a clinical trial — have their secrets too. Blue-chip consulting firms are bursting with information which is unknown by all but a few individuals.


Good enough, MSFT Copilot. After “all,” you are the expert in security.

I read “Hacker Free-for-All Fights for Control of Home and Office Routers Everywhere.” I am less interested in the details of shoddy security and how it is exploited by individuals and organizations. What troubles me is the use of these words: “All” and “Everywhere.” Categorical affirmatives are problematic in today’s datasphere. The write up conflates any entity working for a government with any bad actor intent on committing a crime, treating both as cut from the same cloth.

The write up makes two quite different types of behavior identical. The impact of such conflation, in my opinion, is to suggest:

  1. Government entities are criminal enterprises, using techniques and methods which are in violation of the “law”. I assume that the law is a moral or ethical instruction emitted by some source and known to be a universal truth. For the purposes of my comments, let’s assume the essay’s analysis is responding to some higher authority and anchored on that “universal” truth. (Remember the danger of all and everywhere.)
  2. Bad actors break laws just like governments do; therefore, both are criminals. If true, these people and entities must be punished.
  3. Some higher authority — not identified in the write up — must step in and bring these evil doers to justice.

The problem is that there is a substantive difference among the conflated bad actors. Those engaged in enforcing laws or protecting a nation state are, one hopes, acting within that specific context; that is, the laws, rules, and conventions of that nation state. When one investigator or analyst seeks “secrets” from an adversary, the reason for the action is, in my opinion, easy to explain: The actor followed the rules spelled out by the context / nation state for which the actor works. If one doesn’t like how France runs its railroad, move to Saudi Arabia. In short, find a place to live where the behaviors of the nation state match up with one’s individual perceptions.

When a bad actor — for example a purveyor of child sexual abuse material on an encrypted messaging application operating in a distributed manner from a country in the Middle East — does his / her business, government entities want to shut down the operation. Substitute any criminal act you want, and the justification for obtaining information to neutralize the bad actor is at least understandable to the child’s mother.

The write up dances into the swamp of conflation in an effort to make clear that the system and methods of good and bad actors are the same. That’s the way life is in the datasphere.

The real issue, however, is not the actors who exploit the datasphere. In my view, the problem begins with:

  • Shoddy, careless, or flawed security created and sold by commercial enterprises
  • Lax, indifferent, and false economies of individuals and organizations when dealing with the security of their operating environment
  • Failure of regulatory authorities to certify that specific software and hardware meet requirements for security.

How does the write up address fixing the conflation problem, the true root of security issues, and the fact that exploited flaws persist for years? I noted this passage:

The best way to keep routers free of this sort of malware is to ensure that their administrative access is protected by a strong password, meaning one that’s randomly generated and at least 11 characters long and ideally includes a mix of letters, numbers, or special characters. Remote access should be turned off unless the capability is truly needed and is configured by someone experienced. Firmware updates should be installed promptly. It’s also a good idea to regularly restart routers since most malware for the devices can’t survive a reboot. Once a device is no longer supported by the manufacturer, people who can afford to should replace it with a new one.

Right. Blame the individual user. But that individual is just one part of the “problem.” The damage done by conflation and by failing to focus on the root causes remains. Therefore, we live in a compromised environment. Muddled thinking makes life easier for bad actors and harder for those who are charged with enforcing rules and regulations. Okay, mom, change your password.
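For what it is worth, the password advice in the quoted passage is easy to follow in code. Here is a minimal sketch (my own illustration, not from the cited article) using Python’s standard secrets module; the character set and the 16-character default are my assumptions, chosen to exceed the quoted 11-character minimum:

```python
import secrets
import string

# Character pool: letters, digits, and a handful of symbols, per the
# quoted advice to mix letters, numbers, and special characters.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_router_password(length: int = 16) -> str:
    """Generate a random admin password of at least 11 characters.

    secrets.choice draws from the OS CSPRNG, unlike random.choice,
    which is what "randomly generated" should mean here.
    """
    if length < 11:
        raise ValueError("quoted advice calls for at least 11 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_router_password())
```

Of course, as the post argues, a strong admin password only patches one hole; it does nothing about the shoddy firmware underneath.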

Stephen E Arnold, May 2, 2024

A Modern Spy Novel: A License to Snoop

April 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“UK’s Investigatory Powers Bill to Become Law Despite Tech World Opposition” reports that the Investigatory Powers Amendment Bill, or IPB, is now law. In a nutshell, the law expands the scope of data collection by law enforcement and intelligence services. The Register, a UK online publication, asserts:

Before the latest amendments came into force, the IPA already allowed authorized parties to gather swathes of information on UK citizens and tap into telecoms activity – phone calls and SMS texts. The IPB’s amendments add to the Act’s existing powers and help authorities trawl through more data, which the government claims is a way to tackle “modern” threats to national security and the abuse of children.


Thanks, Copilot. A couple of omissions from my prompt, but your illustration is good enough.

One UK elected official said:

“Additional safeguards have been introduced – notably, in the most recent round of amendments, a ‘triple-lock’ authorization process for surveillance of parliamentarians – but ultimately, the key elements of the Bill are as they were in early versions – the final version of the Bill still extends the scope to collect and process bulk datasets that are publicly available, for example.”

Privacy advocates are concerned about expanding data collections’ scope. The Register points out that “big tech” feels as though it is being put on the hot seat. The article includes this statement:

Abigail Burke, platform power program manager at the Open Rights Group, previously told The Register, before the IPB was debated in parliament, that the proposals amounted to an “attack on technology.”

Several observations:

  1. The UK is a member in good standing of an intelligence sharing entity which includes Australia, Canada, New Zealand, and the US. These nation states watch one another’s activities and sometimes emulate certain policies and legal frameworks.
  2. The IPA may be one additional step on a path leading to a ban on end-to-end-encrypted messaging. Such a ban, if passed, would prove disruptive to a number of business functions. Bad actors will ignore such a ban and continue their effort to stay ahead of law enforcement using homomorphic encryption and other sophisticated techniques to keep certain content private.
  3. Opportunistic messaging firms like Telegram may incorporate technologies which effectively exploit modern virtual servers and other technology to deploy networks which are hidden and effectively less easily “seen” by existing monitoring technologies. Bad actors can implement new methods forcing LE and intelligence professionals to operate in reaction mode. IPA is unlikely to change this cat-and-mouse game.
  4. Each day brings news of new security issues with widely used software and operating systems. Banning encryption may have some interesting downstream and unanticipated effects.

Net net: I am not sure that modern threats will decrease under the IPA. Even countries with the most sophisticated software, hardware, and humanware security systems can be blindsided. Gaffes in Israel have had devastating consequences that an IPA-type approach would not have remedied.

Stephen E Arnold, April 29, 2024

Will Google Fix Up On-the-Blink Israeli Intelligence Capability?

April 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Voyager Labs’ “value” may be slipping. The poster child for unwanted specialized software publicity (NSO Group) finds itself the focal point of some legal eagles. The specialized software systems that monitor, detect, and alert — quite frankly — seemed to be distracted before and during the October 2023 attack. What’s happening to Israel’s advanced intelligence capabilities with its secret units, mustered-out wizards creating intelligence solutions, and Madison Avenue showmanship at conferences? What’s happening is that the hyperbole seems to be a bit more advanced than some of the systems themselves.


Government leaders and military intelligence professionals listen raptly as the young wizard explains how the online advertising company can shore up a country’s intelligence capabilities. Thanks, MidJourney. You are good enough, and the modified free MSFT Copilot is not.

What’s the fix? Let me share one wild idea with you: Let Google do it. Time (once the stablemate of the AI road-kill Sports Illustrated) published this write up with this title:

Exclusive: Google Contract Shows Deal With Israel Defense Ministry

The write up says:

Google provides cloud computing services to the Israeli Ministry of Defense, and the tech giant has negotiated deepening its partnership during Israel’s war in Gaza, a company document viewed by TIME shows. The Israeli Ministry of Defense, according to the document, has its own “landing zone” into Google Cloud—a secure entry point to Google-provided computing infrastructure, which would allow the ministry to store and process data, and access AI services. [The wonky capitalization is part of the style manual I assume. Nice, shouting with capital letters.]

The article then includes this paragraph:

Google recently described its work for the Israeli government as largely for civilian purposes. “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education,” a Google spokesperson told TIME for a story published on April 8. “Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

Does this mean that Google shaped or weaponized information about the work with Israel? Probably not: The intent strikes me as similar to the “Senator, thank you for the question” lingo offered at some US government hearings. That’s just the truth poorly understood by those who are not Googley.

I am not sure if the Time story has its “real” news lens in focus, but let’s look at this interesting statement:

The news comes after recent reports in the Israeli media have alleged the country’s military, controlled by the Ministry of Defense, is using an AI-powered system to select targets for air-strikes on Gaza. Such an AI system would likely require cloud computing infrastructure to function. The Google contract seen by TIME does not specify for what military applications, if any, the Ministry of Defense uses Google Cloud, and there is no evidence Google Cloud technology is being used for targeting purposes. But Google employees who spoke with TIME said the company has little ability to monitor what customers, especially sovereign nations like Israel, are doing on its cloud infrastructure.

The online story included an allegedly “real” photograph of a bunch of people who were allegedly unhappy with the Google deal with Israel. Google does have a cohort of wizards who seem to enjoy protesting Google’s work with a nation state. Are Google’s managers okay with this type of activity? Seems like it.

Net net: I think the core issue is that some of the Israeli intelligence capability is sputtering. Will Google fix it up? Sure, if one believes the intelware brochures and PowerPoints on display at specialized intelligence conferences, why not perceive Google as just what the country needs after the attack and amidst increasing tensions with other nation states not too far from Tel Aviv? Belief is good. Madison Avenue thinking is good. Cloud services are good. Failure is not just bad; it could mean zero warning for another action against Israel. Do brochures about intelware stop bullets and missiles?

Stephen E Arnold, April 18, 2024

Google: The DMA Makes Us Harm Small Business

April 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I cannot estimate the number of hours Googlers invested in crafting the short essay “New Competition Rules Come with Trade-Offs.” I find it a work of art. Maybe not the equal of Dante’s La Divina Commedia, but it is darned close.


A deity, possibly associated with the quantumly supreme, reassures a human worried about life. Words are reality, at least to some fretful souls. Thanks MSFT Copilot. Good enough.

The essay pivots on unarticulated and assumed “truths.” Particularly charming are these:

  1. “We introduced these types of Google Search features to help consumers”
  2. “These businesses now have to connect with customers via a handful of intermediaries that typically charge large commissions…”
  3. “We’ve always been focused on improving Google Search….”

The first statement implies that Google’s efforts have been the “help.” Interesting: I find Google search often singularly unhelpful, returning results for malware, biased information, and Google itself.

The second statement indicates that “intermediaries” benefit. Isn’t Google an intermediary? Isn’t Google an alleged monopolist in online advertising?

The third statement is particularly quantumly supreme. Note the word “always.” John Milton uses such verbal efflorescence when describing God. Yes, “always” and improving. I am tremulous.

Consider this lyrical passage and the elegant logic of:

We’ll continue to be transparent about our DMA compliance obligations and the effects of overly rigid product mandates. In our view, the best approach would ensure consumers can continue to choose what services they want to use, rather than requiring us to redesign Search for the benefit of a handful of companies.

Transparent invokes an image of squeaky clean glass in a modern, aluminum-framed window, scientifically sealed to prevent unauthorized opening or repair by anyone other than a specially trained transparency provider. I like the adjective “rigid” because it implies a sturdiness which may cause the transparent window to break when inclement weather (blasts of hot and cold air from oratorical emissions) stresses the see-through structure. Then there is the adult-father-knows-best tone of “In our view, the best approach.” Very parental. Does this suggest the EU is childish?

Net net: Has anyone compiled the Modern Book of Google Myths?

Stephen E Arnold, April 11, 2024

Tennessee Sends a Hunk of Burnin’ Love to AI Deep Fakery

April 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Leave it to the state that houses Music City. NPR reports, “Tennessee Becomes the First State to Protect Musicians and Other Artists Against AI.” Courts have demonstrated existing copyright laws are inadequate in the face of generative AI. This update to the state’s existing law is named the Ensuring Likeness Voice and Image Security Act, or ELVIS Act for short. Clever. Reporter Rebecca Rosman writes:

“Tennessee made history on Thursday, becoming the first U.S. state to sign off on legislation to protect musicians from unauthorized artificial intelligence impersonation. ‘Tennessee is the music capital of the world, & we’re leading the nation with historic protections for TN artists & songwriters against emerging AI technology,’ Gov. Bill Lee announced on social media. While the old law protected an artist’s name, photograph or likeness, the new legislation includes AI-specific protections. Once the law takes effect on July 1, people will be prohibited from using AI to mimic an artist’s voice without permission.”

Prominent artists and music industry groups helped push the bill since it was introduced in January. Flanked by musicians and state representatives, Governor Bill Lee theatrically signed it into law on stage at the famous Robert’s Western World. But what now? In its write-up, “TN Gov. Lee Signs ELVIS Act Into Law in Honky-Tonk, Protects Musicians from AI Abuses,” The Tennessean briefly notes:

“The ELVIS Act adds artist’s voices to the state’s current Protection of Personal Rights law and can be criminally enforced by district attorneys as a Class A misdemeanor. Artists—and anyone else with exclusive licenses, like labels and distribution groups—can sue civilly for damages.”

While much of the music industry is located in and around Nashville, we imagine most AI mimicry does not take place within Tennessee. It is tricky to sue someone located elsewhere under state law. Perhaps this legislation’s primary value is as an example to lawmakers in other states and, ultimately, at the federal level. Will others be inspired to follow the Volunteer State’s example?

Cynthia Murrell, April 11, 2024

Another Bottleneck Issue: Threat Analysis

April 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My general view of software is that it is usually good enough. You just cannot get ahead of the problems. For example, I recall doing a project to figure out why Visio (an early version) simply did not do what the marketing collateral said it did. We poked around, and in short order, we identified features that were not implemented or did not work as advertised. Were we surprised? Nah. That type of finding holds for consumer software as well as enterprise software. I recall waiting for someone who worked at Fast Search & Transfer in North Carolina to figure out why hit boosting was not functioning. The reason, if memory serves, was that no one had completed the code. What about security of the platform? Not discussed. The enthusiastic worker in North Carolina turned his attention to the task, but it took time to address the issue. The intrepid engineer encountered “undocumented dependencies.” These are tough to resolve when coders disappear, change jobs, or don’t know how to make something work. These functional issues stack up, and many are never resolved. Many are not considered in terms of security. Even worse, the fix applied by a clueless intern fascinated with Foosball screws something up because… the “leadership team” consists of former consultants, accountants, and lawyers. Not too many professionals with MBAs, law degrees, and expertise in SEC accounting requirements are into programming, security practices, and technical details. These stellar professionals gain technical expertise watching engineers with PowerPoint presentations. The meetings feature this popular question: “Where’s the lunch menu?”


The person in the row boat is going to have a difficult time dealing with software flaws and cyber security issues which emulate the gusher represented in the Microsoft Copilot illustration. Good enough image, just like good enough software security.

I read “NIST Unveils New Consortium to Operate National Vulnerability Database.” The focus is on software which invites bad actors to the Breach Fun Park. The write up says:

In early March, many security researchers noticed a significant drop in vulnerability enrichment data uploads on the NVD website that had started in mid-February. According to its own data, NIST has analyzed only 199 Common Vulnerabilities and Exposures (CVEs) out of the 2957 it has received so far in March. In total, over 4000 CVEs have not been analyzed since mid-February. Since the NVD is the most comprehensive vulnerability database in the world, many companies rely on it to deploy updates and patches.

The backlog is more than 3,800 vulnerability issues. The original fix was to shut down the US National Vulnerability Database. Yep, this action was kicked around at the exact same time as cyber security fires were blazing in a certain significant vendor providing software to the US government and when embedded exploits in open source software were making headlines.
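The scale of the backlog is easy to check against the article’s own numbers. A quick back-of-the-envelope calculation (my arithmetic, using only the figures quoted above):

```python
# Figures quoted from the cited write up
received_in_march = 2957       # CVEs received by NIST so far in March
analyzed_in_march = 199        # CVEs actually analyzed in that span
unanalyzed_since_mid_feb = 4000  # "over 4000" CVEs not analyzed

# NIST's analysis rate for the period: under 7 percent
analysis_rate = analyzed_in_march / received_in_march
print(f"March analysis rate: {analysis_rate:.1%}")  # roughly 6.7%

# At ~200 analyses per half-month, clearing 4,000+ CVEs would take
# on the order of 20 such periods, i.e. most of a year, even if no
# new CVEs arrived in the meantime.
periods_to_clear = unanalyzed_since_mid_feb / analyzed_in_march
print(f"Half-month periods to clear backlog: {periods_to_clear:.0f}")
```

The numbers make the point better than any press release: the curation capacity is an order of magnitude short of the inflow.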

How does one solve the backlog problem? In the examples I mentioned in the first paragraph of this essay, there was a single player and a single engineer who was supposed to solve the problem. Forget dependencies; just make the feature work in a manner that was good enough. Where does a government agency get a one-engineer-to-one-issue set up?

Answer: Create a consortium, a voluntary one to boot.

I have a number of observations to offer, but I will skip these. The point is that software vulnerabilities have overwhelmed a government agency. The commercial vendors issue news releases about each new “issue” a specific team or a specific individual (in the case of Microsoft) has identified. However, vendors rarely stumble upon the same issue. We identified a vector for ransomware which we will explain in our April 24, 2024, National Cyber Crime Conference lecture.

Net net: Software vulnerabilities illustrate the backlog problem associated with any type of content curation or software issue. The volume is overwhelming available resources. What’s the fix? (You will love this answer.) Artificial intelligence. Yep, sure.

Stephen E Arnold, April 8, 2024

India: AI, We Go This Way, Then We Go That Way

April 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In early March 2024, India said it would require that all AI-related projects still in development receive governmental approval before release to the public. India’s Ministry of Electronics and Information Technology stated it wanted to notify the public of AI technology’s shortcomings and unreliability. The intent was to label all AI technology with a “consent popup” informing users of potential errors and defects. The ministry also wanted to mark potentially harmful AI content, such as deepfakes, with a label or unique identifier.

The Register explains that it didn’t take long for the south Asian country to rescind the plan: “India Quickly Unwinds Requirement For Government Approval Of AIs.” The ministry issued an update that removed the requirement for government approval, but it added more obligations to label potentially harmful content:

“Among the new requirements for Indian AI operations are labelling deepfakes, preventing bias in models, and informing users of models’ limitations. AI shops are also to avoid production and sharing of illegal content, and must inform users of consequences that could flow from using AI to create illegal material.”

Minister of State for Entrepreneurship, Skill Development, Electronics, and Technology Rajeev Chandrasekhar provided context for the government’s initial plan for approval. He explained it was intended only for big technology companies. Smaller companies and startups wouldn’t have needed the approval. Chandrasekhar is recognized for his support of boosting India’s burgeoning technology industry.

Whitney Grace, April 3, 2024

How to Fool a Dinobaby Online

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Marketers take note. Forget about gaming the soon-to-be-on-life-support Google Web search. Embrace fakery. And who, you may ask, will teach me? The answer is The Daily Beast. To begin your life-changing journey, navigate to “Facebook Is Filled With AI-Generated Garbage—and Older Adults Are Being Tricked.”


Two government regulators wonder where the deep fakes have gone. Thanks, MSFT Copilot. Keep on updating, please.

The write up explains:

So far, the few experiments to analyze seniors’ AI perception seem to align with the Facebook phenomenon…. The team found that the older participants were more likely to believe that AI-generated images were made by humans.

Okay, that’s step one: Identify your target market.

What’s next? The write up points out:

scammers have wielded increasingly sophisticated generative AI tools to go after older adults. They can use deepfake audio and images sourced from social media to pretend to be a grandchild calling from jail for bail money, or even falsify a relative’s appearance on a video call.

That’s step two: Weave in a family or social tug on the heart strings.

Then what? The article helpfully notes:

As of last week, there are more than 50 bills across 30 states aimed to clamp down on deepfake risks. And since the beginning of 2024, Congress has introduced a flurry of bills to address deepfakes.

Yep, the flag has been dropped. The race with few or no rules is underway. But what about government rules and regulations? Yeah, those will be chugging around after the race cars have disappeared from view.

Thanks for the guidelines.

Stephen E Arnold, March 29, 2024

Google: Practicing But Not Learning in France

March 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I had to comment on this Google synthetic gem. The online advertising company with the Cracker Jack management team is cranking out titbits every day or two. True, none of these rank with the Microsoft deal to hire some techno-management wizards with DeepMind experience, but I have to cope with what flows into rural Kentucky.


Those French snails are talkative — and tasty. Thanks, MSFT Copilot. Are you going to license, hire, or buy DeepMind?

“Google Fined $270 Million by French Regulatory Authority” delivers what strikes me as Lego-block information about the estimable company. The write up presents yet another story about Google’s footloose and fancy free approach to French laws, rules, and regulations. The write up reports:

This latest fine is the result of Google’s artificial intelligence training practices. The [French regulatory] watchdog said in a statement that Google’s Bard chatbot — which has since been rebranded as Gemini —”used content from press agencies and publishers to train its foundation model, without notifying either them” or the Authority.

So what did the outstanding online advertising company do? The news story asserts:

The watchdog added that Google failed to provide a technical opt-out solution for publishers, obstructing their ability to “negotiate remuneration.”

The result? Another fine.

Google has had an interesting relationship with France. The country was the scene of the outstanding Sundar and Prabhakar demonstration of the quantumly supreme Bard smart software. Google has written checks to France in the past. Now it is associated with flubbing what are, for France, relatively straightforward requirements to work with publishers.

Not surprisingly, the outfit based in far off California allegedly said, according to the cited news story:

Google criticized a “lack of clear regulatory guidance,” calling for greater clarity in the future from France’s regulatory bodies.  The fine is linked to a copyright case that began in 2020, when the French Authority found Google to be acting in violation of France’s copyright and related rights law of 2019.

My experience with France, French laws, and the ins and outs of working with French organizations is limited. Nevertheless, my son, who attended university in France, told me an anecdote which illustrates how French laws work. Here’s the tale, which I assume is accurate. He is a reliable sort.

A young man was in the immigration office in Paris. He and his wife were trying to clarify a question related to her French citizenship. The bureaucrat had not accepted her birth certificate from a French municipal government, assorted documents from her schooling from pre-school to university, and the oddments of electric bills, rental receipts, and medical records. The husband, an American, told my son, “This office does not think my wife is French. She is. And I think we have it nailed this time. My wife has a photograph of General De Gaulle awarding her father a medal.” My son told me, “Dad, it did not work. The husband and wife had to refile the paperwork to correct an error made on the original form.”

My takeaway from this anecdote is that Google may want to stay within the bright white lines in France. Getting entangled in the legacy of Napoleon’s red tape can be an expensive, frustrating experience. Perhaps the Google will learn? On the other hand, maybe not.

Stephen E Arnold, March 22, 2024
