A Xoogler May Question the Google about Responsible and Ethical Smart Software

Write a research paper. Get colleagues to provide input. Well, ask colleagues to do that work and what do you get? How about “Looks good.” Or “Add more zing to that chart.” Or “I’m snowed under, so it will be a while, but I will review it…” Then the paper wends its way toward publication, and a senior manager type reads it on a flight from one whiz-kid town to another and says, “This is bad. Really bad, because the paper points out that we fiddle with the outputs. And what we set up is biased to generate the most money possible from the clueless humans under our span of control.” Finally, the paper is blocked from publication, and the offending PhD is fired or receives signals that her future lies elsewhere.


Will this be a classic arm-wrestling match? The winner may control quite a bit of conceptual territory, along with the knobs and dials that shape information.

Could this happen? Oh, yeah.

“Ex Googler Timnit Gebru Starts Her Own AI Research Center” documents the next step, which may mean that some wizards’ undergarments will be sprayed with eau de poison oak for months, maybe years. Here’s one of the statements from the Wired article:

“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry. Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies.

The main idea, which Wired and Dr. Gebru delicately sidestep, is that allegations of an artificial intelligence or machine learning cabal are drifting around some conference hall chatter. On one side is the push for what I call the SAIL approach. The example I use to illustrate how this cost-effective, speedy, and clever shortcut approach works appears in some of the work of Dr. Christopher Ré, the captain of the objective craft SAIL. Oh, is the acronym unfamiliar to you? SAIL is the short version of Stanford Artificial Intelligence Laboratory. SAIL fits on the Snorkel content diving gear, I think.

On the other side of the ocean are Dr. Timnit Gebru’s fellow travelers. The difference is that Dr. Gebru believes that smart software should not reflect the wit, wisdom, biases, and general bro-ness of the high school science club culture. This culture, in my opinion, has contributed to the fraying of the social fabric in the US, caused harm, and eroded behaviors that are supposed to be subordinated to “just what people do to make a social system function smoothly.”

Does the Wired write up identify the alleged cabal? Nope.

Does the write up explain that the Ré / Snorkel methods sacrifice some precision in the rush to generate good enough outputs? (Good enough can be framed in terms of ad revenue, reduced costs, and faster time to market testing in my opinion.) Nope.

Does Dr. Gebru explain how insidious the shortcut training of models is and how it will create systems which actively harm those outside the 60 percent threshold of certain statistical yardsticks? Heck, no.

Hopefully, some bright researchers will explain what’s happening with a “deep dive.” Oh, right. Deep Dive is the name of a content access company which uses Dr. Ré’s methods. Ho, ho, ho. You didn’t know?

Beyond Search believes that Dr. Gebru has important contributions to make to applied smart software. Just hurry up already.

Stephen E Arnold, December 2, 2021


DarkCyber, March 29, 2022: An Interview with Chris Westphal, DataWalk

Chris Westphal is the Chief Analytics Officer of DataWalk, a firm providing an investigative and analysis tool to commercial and government organizations. The 12-minute interview covers DataWalk’s unique capabilities, its data and information resources, and the firm’s workflow functionality. The video can be viewed on YouTube at this location.

Stephen E Arnold, March 29, 2022

