European Parliament Embraces the Regulatory PEZ Dispenser Model for Fines on Big Tech

January 24, 2022

I read about the Digital Services Act. “European Parliament Passes Huge Clampdown on Tracking Ads” states:

The European Parliament, the legislative body for the European Union (EU), has voted in favor of its Digital Services Act (DSA), which seeks to limit the power of American internet giants such as Facebook, Amazon and Google.

That’s mostly on the money. What’s not spelled out is that the procedure of identifying a tracking instance, building a case, adjudicating, appealing, and levying a fine is now official. It’s a procedure. Perhaps a bright French artificial intelligence professional will use Facebook or Google AI components to make the entire process automatic, efficient, and – obviously – without bias. No discrimination! But the DSA is aimed at outfits like Amazon, Facebook, and Google. Nope. Not discriminatory and also not yet a really official thing…yet.

I found this paragraph memorable:

According to the EU, the DSA covers several key areas, including introducing mechanisms by which companies have to remove “illegal” content in a timely manner in a bid to reduce misinformation, increasing requirements on so-called very large online platforms (VLOPs), regulating online ad targeting, and clamping down on dark patterns. The scope and scale of the DSA (and associated DMA) are huge, perhaps the biggest effort yet by a substantial world power (outside of China) to regulate what happens in cyberspace.

How does one redistribute “wealth”? Easy. Create a legal PEZ dispenser, push back the plastic likeness of Mr. Bezos, Mr. Zuckerberg, or Mr. Pichai (the only one of the PEZ dispensers with AI in his name), and collect the fine that pops out.

Stephen E Arnold, January 24, 2022

How Not to Get Hired by Alphabet, Google, YouTube, Et Al

January 21, 2022

I have a sneaking suspicion that the author / entity / bot responsible for “Unredacted Antitrust Complaint Shows Google’s Ad Business Even Scummier than Many Imagined” will not be on Google’s list of preferred hires. For the record, I want to point out this definition of scum, courtesy of none other than Google:

a layer of dirt or froth on the surface of a liquid. “green scum found on stagnant pools”

Colorful, particularly the dirt combined with the adjectives green and stagnant.

It follows that the context and connotation of the article views Google as a less than pristine outfit. I ask, “How can that be true?”

The write up states:

… the complaint paints a damning picture of how Google has monopolized all of the critical informational choke points in the online ad business between publishers and advertisers; as one employee put it, it’s as if Google owned a bank and the New York Stock Exchange, only more so. Google shamelessly engages in fraud…

These are words which an Alphabet, Google, YouTube, et al attorney might find sufficiently magnetic to pull the legal eagles to their nest to plot a legal maneuver to prevent the author / entity / bot responsible for the write up from having a day without a summons and a wearying visit to a courthouse for months, maybe years.

If you want to know how one of Silicon Valley’s finest does business, you will want to check out the cited article. Some of the comments are fascinating. I quite liked the one that suggested the matter would be a slam dunk for prosecutors. Ho ho ho.

Personally I find Alphabet / Google / YouTube et al the cat’s pajamas. However, I do not think the author / entity / bot creating the write up will get a chance to apply for a job at the online ad company and its affiliated firms.

Stephen E Arnold, January 21, 2022

Google Identifies Smart Software Trends

January 18, 2022

Straight away the marketing document “Google Research: Themes from 2021 and Beyond” is more than 8,000 words. Anyone familiar with Google’s outputs may have observed that Google prefers short, mostly ambiguous phraseology. Here’s an example from Google support:

Your account is disabled

If you’re redirected to this page, your Google Account has been disabled.

When a Google document is long, it must be important. Furthermore, when that Google document is allegedly authored by Dr. Jeff Dean, a long-time Googler, you know it is important. Another clue is the list of 32 contributors, helpfully alphabetized by each individual’s first name. Hey, those traditional bibliographic conventions are not useful. Chicago Manual of Style? Balderdash it seems.

Okay, long. Lots of authors. What are the trends? Based on my humanoid processes, it appears that the major points are:

TREND 1: Machine learning is cranking out “more capable, general purpose machine learning models.” The idea, it seems, is that the days of hand-crafting a collection of numerical recipes, assembling and testing training data, training the model, fixing issues in the model, and then applying the model are either history or going to be history soon. Why’s this important? Cheaper, faster, and allegedly better machine learning deployment. What happens if the model is off a bit or drifts? No worries. Machine learning methods which make use of a handful of human overseers will fix up the issues quickly, maybe in real time.

TREND 2: There are more efficiency improvements in the works. The idea is that more efficiency is better, faster, and logical. One can look at the achievements of smart software in autonomous automobiles to see the evidence of these efficiencies. Sure, there are minor issues because smart software sometimes outputs a zero when a one is needed. What’s a highway fatality in the total number of safe miles driven? Efficiency also means it is smarter to obtain ready-to-roll machine learning models and data sets from large, efficient, high-technology outfits. One source could be Google. No kidding? Google?

TREND 3: “Machine learning is becoming more personally and communally beneficial.” Yep, machine learning helps the community. Now is the “community” the individual who does deep dives into Google’s approach to machine learning, or the researcher whose method sails in a different direction? Is the community the advertisers who rely on Google to match, in an intelligent and efficient manner, sales messages to human users and automated systems? Is the communally beneficial group the users of Google’s ad-supported services? The main point is that Google and machine learning are doing good and will do better going forward. This is a theme Google management expresses each time it has an opportunity to address a concern in a hearing about the company’s activities in Washington, DC.

TREND 4: Machine learning is going to have “growing impact” on science, health, and sustainability. This is a very big trend. It implicitly asserts that smart software will improve “science.” In the midst of the Covid issue, humans appear to have stumbled. The trend is that humans won’t make such mistakes going forward; for example, Theranos-type exaggeration, CDC contradictory information, or Google and the allegations of collusion with Facebook. Smart software will make these examples shrink in number. That sounds good, very good.

TREND 5: A notable trend is that there will be a “deeper and broader understanding of machine learning.” Okay, who is going to understand? Google-certified machine learning professionals, advertising intermediaries, search engine optimization experts, consumers of free Google Web search, Google itself, or some other cohort? Will the use of off-the-shelf, pre-packaged machine learning data sets and models make it more difficult to figure out what is behind the walls of a black box? Anyway, this trend sounds suitably do-good: technology will improve the world, promising a bright, sunny day even though a weathered fisherperson says, “A storm is a-coming.”

The write up includes art, charts, graphs, and pictures. These are indeed Googley. Some are animated. Links to YouTube videos enliven the essay.

The content is interesting, but I noted several omissions:

  1. No reference to making decisions which do not allegedly contravene one or more regulations or just look like really dicey decisions. Example: “Executives Personally Signed Off on Facebook-Google Ad Collusion Plot, States Claim.”
  2. No reference to the use of machine learning to avoid what appear to be ill-conceived and possibly dumb personnel decisions within the Google smart software group. Example: “Google Fired a Leading AI Scientist but Now She’s Founded Her Own Firm.”
  3. No reference to antitrust issues. Example: “India Hits Google with Antitrust Investigation over Alleged Abuse in News Aggregation.”

Marketing information is often disconnected from the reality in which a company operates. Nevertheless, it is clear that the number of words, the effort invested in whizzy diagrams, and the overwrought rhetoric are different from Google’s business-as-usual approach.

What’s up or what’s covered up? Perhaps I will learn in 2022 and beyond?

Stephen E Arnold, January 18, 2022

Alleged Collusion Between Meta and Google: Shocking Sort Of

January 17, 2022

“Google and Facebook’s Top Execs Allegedly Approved Dividing Ad Market among Themselves” reports:

The alleged 2017 deal between Google and Facebook to kill header bidding, a way for multiple ad exchanges to compete fairly in automated ad auctions, was negotiated by Facebook COO Sheryl Sandberg, and endorsed by both Facebook CEO Mark Zuckerberg (now with Meta) and Google CEO Sundar Pichai, according to an updated complaint filed in the Texas-led antitrust lawsuit against Google.

Fans of primary research can read the 242 page amended filing at this link.

One question arises: How could two separate companies engage in discussions to divide a market? Perhaps one clue is the presence of the estimable lean-in professional Sheryl Sandberg, who joined Google in 2001 after blazing a trail in economics, McKinsey-type thinking familiar to many today as the pharma brain machine, and then some highly productive US government work.

At the Google she was a general manager. Her Googley behavior earned her a promotion. She was one of the thinkers shaping the outstanding revenue generation system known as AdWords. She added her special touch of McKinsey-ness to AdSense, built on the Gil Elbaz smart system packaged as Applied Semantics, aka Oingo. The important point about Applied Semantics is that the technology included what I think of as steering or directionality; that is, one uses semantic information to herd the doggies (users) down the trail (consumption of ad inventory). For more on this notion of steering you will want to listen to my interview with Dr. Donna Ingram, who addresses this issue in the DarkCyber, 4th series, Number 1 video program to be released on January 18, 2022.

In 2007, chatting at a party helped her migrate from the Google to the company formerly known as Facebook. Ms. Sandberg, a Harvard graduate with a chubby contact list, joined the scintillating management team of the social network engineering machine. In 2012, she became a member of the company’s board of directors. She leaned in to her role until some “real” news outfits flipped over the mossy rock of Cambridge Analytica’s benchmark marketing methods.

Ms. Sandberg was recognized by Professor Shoshana Zuboff as the Typhoid Mary of surveillance capitalism. Is that on a Meta T-shirt yet? Her book is a must read. It is called Lean In: Women, Work, and the Will to Lead. It appeared in 2013 and may be due for an update to include the Cambridge Analytica misunderstanding, the Frances Haugen revelations, and, of course, the current Texas-sized legal matter.

The write up cited above points out a statement from the Google. The main idea is that the complaint is “full of inaccuracies and lacks legal merit.”

I believe everything I read on the Internet. I accept the Google search output when I query “Silicon Valley ethics” – Theranos. I trust in the Meta thing because how could two outfits collude? I think such interactions are highly improbable in Silicon Valley, the home of straight shooting.

Stephen E Arnold, January 17, 2022

Google and Its Management: More Excitement

January 7, 2022

In western countries, the technology industry is predominantly white and male. This has led to AI algorithms accidentally programmed with “racial bias.” These awkward and humorous incidents include a “racist” soap dispenser that could not sense dark-pigmented skin and photo recognition software identifying black people as gorillas. AI algorithms can easily be fixed when they are fed more diverse data; however, it is harder to fix human habits. Google is once again under fire for its treatment of minority employees, specifically, “Google Facing Probe For How It Treats Black Female Workers,” says Daiji World.

Google’s recent diversity report stated that only 1.8% of its work force consists of black women. The tech giant explains that it wants to be viewed as a welcoming environment for black people. The California Department of Fair Employment and Housing has questioned Google employees about harassment and discrimination in response to complaints. Google has a known history of harassment and discrimination:

“Artificial Intelligence (AI) researcher Timnit Gebru, who was fired from Google after sending an email of concern to her Ethical AI team, has now set up her own research institute that will be an independent, community-rooted institute set to counter Big Tech’s pervasive influence on the research, development and deployment of AI.

Gebru was the technical co-lead of Google’s Ethical Artificial Intelligence team. She was fired over an email where she expressed her doubts about Google’s commitment to inclusion and diversity. Two Google engineers, including one of Indian-origin, quit Google over the abrupt firing of Gebru.

While engineering director David Baker said that Gebru’s dismissal “extinguished” his will to work at the company, software engineer Vinesh Kannan announced that he was quitting because Gebru and April Christina Curley, a diversity recruiter, were “wronged”.”

All industries should be merit-based, but allowances must be given for sex and ethnicity as these factors heavily weigh on society. All ethnicities and sexes want acceptance, respect, and inclusion in the workplace. This means racist, sexist, discriminatory, and harassing behaviors are taboo. If they do occur, the perpetrator should be punished, not the victim.

Here is a big secret about women in the workplace: they want to work. Here is a big secret about ethnic minorities in the workplace: they want to work too. Why is it so hard to curb rude behavior and treat women and ethnic minorities like everyone else?

The tech industry is like a huge good old boys club. When the male club members are confronted with change, they do not want to relinquish their power. Toxic male behaviors are not the only problem. As a whole, society still pushes women towards more traditional female roles. These roles steer women away from science, math, and technology.

Things are better, but they can and will improve. The biggest holdups are old predilections that will fade as older generations pass. Once Generation Z reaches adulthood, society will have improved. The biggest downside is the present.

Whitney Grace, January 7, 2022

Perhaps Someone Wants to Work at Google?

January 7, 2022

I read another quantum supremacy rah-rah story. What’s quantum supremacy? IBM and others want it, whatever it may be. “Google’s Time Crystals Could Be the Greatest Scientific Achievement of Our Lifetimes” slithers away from the genome thing, whatever the Nobel committee found interesting, and dark horses like the NSO Group’s innovation for seizing an iPhone user’s mobile device just by sending the target a message.

None of these is in the running. What we have, according to The Next Web, is what may be:

the world’s first time crystal inside a quantum computer.

Now the quantum computer is definitely a Gartner go-to technology magnet. Google is happy with DeepMind’s modest financial burn rate to reign supreme. The Next Web outfit is doing its part. Two questions:

What’s a quantum computer? A demo, something that DARPA finds worthy of supporting, or a financial opportunity for clever physicists and assorted engineers eager to become the Seymour Crays of 2022?

What’s a time crystal? Frankly I have no clue. Like some hip phrases — synaptic plasticity, phubbing, and vibrating carbon nanohorns, for instance — time crystal is definitely evocative. The write up says:

Time crystals don’t give a damn what Newton or anyone else thinks. They’re lawbreakers and heart takers. They can, theoretically, maintain entropy even when they’re used in a process.

The write up includes a number of disclaimers, but the purpose of the time crystal strikes me as part of the Google big PR picture. Whether time crystals are a thing like yeeting alphabet boys or hyperedge replacement graph grammars, the intriguing linkage of Google, quantum computing, and zippy time crystals further cements the idea that Google is a hotbed of scientific research, development, and innovation.

My thought is that Google is better at getting article writers to make their desire to work at Google evident. Google has not quite mastered the Timnit Gebru problem, however.

And are the Google results reproducible? Yeah, sure.

Stephen E Arnold, January 7, 2022

France Punches Its Googzilla-Type Pez Dispenser Again

January 6, 2022

Some governments have figured out how to generate some cash. Target Facebook and Google. Fine them. Collect the money. This is a new spin on the Pez dispenser. Punch a lever and get a snack.


We could not locate an official Googzilla Pez dispenser. However, we spotted this creature from the Black Lagoon on the Antiques Navigator Web site here. The idea is to push a button and get a healthful, nutritious sugar pellet. The digital version requires a legal document finding the target guilty of an infraction. After some legal fancy dancing, the target pays the nation state. Efficient and fun. More EU states and Russia are fascinated with the digital Pez method.

The write up “Google, Facebook Face Big Private Fines in France” explains:

French data regulator the CNIL is set to fine Google €150 million and Facebook €60 million for violating EU privacy rules…. The CNIL will fine Google’s United States and Irish operations €90 million and €60 million, respectively…

France is not particularly worried about the opinion of nation states like Ireland.

But the point is that the approach yields cash, bad publicity for certain US technology outfits, and fees for lawyers. Yes, lawyers.

Punch that button? Sounds like a plan.

Stephen E Arnold, January 6, 2022

Is Waymo a Proxy for Alphabet Google in China?

January 4, 2022

Remember 2006. Google launched its China search engine. In 2010, Google caught a flight back to SFO. The issues revolved around control, and the Google was not about to be controlled by a mere nation state. The Google was the new big thing. For some color on this remarkable example of techno hubris, check out “How Google Took on China and Lost.” (Note: You may have to pay to read this okay write up from the outfit which found the humanist Jeffrey Epstein A-OK.)

Flash forward to “Future Autonomous Waymo EV Will Be Custom Built for Ride-Hailing with No Steering Wheel.” Tucked into this write up is an item of information I find quite suggestive about China, the Google, and the adage “time heals all wounds.” Well, that’s the adage’s point of view.

The write up’s interesting item is expressed this way:

Waymo today announced an OEM collaboration with Geely, a Chinese automotive company that has several subsidiary brands like Volvo, Lotus, and Smart.

Presumably both the Chinese government sensitive Geely and the money sensitive Google are going to go on these outfits’ version of a date.

And the misunderstanding of 2006 and Dragonfly, the aborted China-centric search engine project (allegedly just a distant memory), is just another Google project without wood behind it.

What online service will provide maps to the nifty new auto? Who will have access to the data the helpful vehicles will generate? What if one of these slick vehicles routes toward a facility in the US which is covertly owned by a China-affiliated entity or picks up one of those Harvard-type academics who is on China’s payroll?

So many questions with what may be obvious answers.

Stephen E Arnold, January 4, 2022

Google: Who Us? Oh, We Are Sorry

January 4, 2022

One sign of a decent human being is admitting mistakes and accepting responsibility. When people accept their mistakes, the situation blows over quicker. Google leaders, however, are reluctant to accept the consequences of their poor actions, and it is not generating good PR. One write up shares the story in: “Google CEO Blames Employee Leaks To The Press For Reduced ‘Trust and Candor’ At The Company.”

During a recent end-of-the-year meeting, Google employees could submit questions via the internal company system Dory. They can then vote on the questions they wish management to answer. The following question received 673 votes:

“The question was: ‘It seems like responses to Dory have gotten increasingly more lawyer-like with canned phrases or platitudes, which seem to ignore the questions being ask [sic]. Are we planning on bringing candor, honesty, humility and frankness back to Dory answers or continuing down a bureaucratic path?’”

Google CEO Sundar Pichai was exasperated and he blamed employees leaking information to the media for the inflated, artificial answers. He said the following:

“‘Sometimes, I do think that people are unforgiving for small mistakes. I do think people realize that answers can be quoted anywhere, including outside the company. I think that makes people very careful,’ he said. ‘Trust and candor has to go both ways,’ Pichai added.”

Pichai also explained that the poor relationship between employees and the top brass is a direct result of Google’s large size and the pandemic. Google employees were upset with their leaders prior to the COVID-19 pandemic. They stated they were frustrated with how Google handled sexual harassment complaints, lack of diversity issues, and sexism. Google employees formed their first union in January 2021.

Pichai would do better to admit Google has problems and actively work on fixing them. It would make him and the company appear more positive in the media, not to mention improve relationships with his employees.

Whitney Grace, January 4, 2022

How about That Smart Software?

January 3, 2022

In the short-cut world of training smart software, minor glitches are to be expected. When an OCR program delivers 95 percent accuracy, that works out to five mistakes in every 100 words. When Alexa tells a child to put a metal object into a home electrical outlet, what do you expect? This is close enough for horseshoes.
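The arithmetic behind that quip scales unkindly. Here is a minimal Python sketch; the 95 percent figure is the example above, and the function name and document sizes are illustrative, not from any OCR vendor’s specification:

```python
def expected_errors(accuracy: float, word_count: int) -> int:
    """Expected number of mistakes: the error rate times the word count."""
    return round((1.0 - accuracy) * word_count)

# At 95 percent accuracy, "close enough for horseshoes" adds up fast:
print(expected_errors(0.95, 100))      # 5 mistakes in a short memo
print(expected_errors(0.95, 100_000))  # 5000 mistakes in a book-length scan
```

Five mistakes per hundred words sounds tolerable until the same rate is applied to a book-length document.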

Now what about the Google Maps of today, a maps solution which I find almost unusable. “Google Maps May Have Led Tahoe Travelers Astray During Snowstorm” quoted a Tweet from a person who is obviously unaware of the role probabilities play in the magical world of Google. Here’s the Tweet:

This is an abject failure. You are sending people up a poorly maintained forest road to their death in a severe blizzard. Hire people who can address winter storms in your code (or maybe get some of your engineers who are stuck in Tahoe right now on it).

Big deal? Of course not. Amazon and Google are focused on the efficiencies of machine-centric methods for identifying relevant, on point information. The probability is that most of the Amazon and Google outputs will be on the money. Google Maps rarely misses on pizza or the location of March Madness basketball games.

Severely injured children? Well, that probably won’t happen. Individuals lost in a snow storm? Well, that probably won’t happen.

The flaws in these giant firms’ methods are acceptable from these companies’ point of view because the outputs are correct in the majority of cases. A terminated humanoid or a driver wondering if a friendly forest ranger will come along the logging road? Not a big deal.

What happens when these smart systems output decisions which have ever larger consequences? Autonomous weapons, anyone?

Stephen E Arnold, January 3, 2022
