Featured

A Xoogler May Question the Google about Responsible and Ethical Smart Software

Write a research paper. Get colleagues to provide input. Well, ask colleagues to do that work, and what do you get? How about "Looks good." Or "Add more zing to that chart." Or "I'm snowed under, so it will be a while, but I will review it…" Then the paper wends its way to publication, and a senior manager type reads the paper on a flight from one whiz kid town to another whiz kid town and says, "This is bad. Really bad, because the paper points out that we fiddle with the outputs. And what we set up is biased to generate the most money possible from clueless humans under our span of control." Finally, the paper is blocked from publication, and the offending PhD is fired or sent signals that his or her future lies elsewhere.


Will this be a classic arm wrestling match? The winner may control quite a bit of conceptual territory along with knobs and dials to shape information.

Could this happen? Oh, yeah.

"Ex Googler Timnit Gebru Starts Her Own AI Research Center" documents the next step, which may mean that some wizards' undergarments will be sprayed with eau de poison oak for months, maybe years. Here's one of the statements from the Wired article:

“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry. Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies.

The main idea, which Wired and Dr. Gebru delicately sidestep, is that allegations of an artificial intelligence or machine learning cabal are drifting around some conference hall chatter. On one side is the push for what I call the SAIL approach. The example I use to illustrate how this cost-effective, speedy, and clever shortcut approach works is some of the work of Dr. Christopher Ré, the captain of the objective craft SAIL. Oh, is the acronym unfamiliar to you? SAIL is the short version of Stanford Artificial Intelligence Laboratory. SAIL fits on the Snorkel content diving gear, I think.

On the other side of the ocean are Dr. Timnit Gebru's fellow travelers. The difference is that Dr. Gebru believes that smart software should not reflect the wit, wisdom, biases, and general bro-ness of the high school science club culture. This culture, in my opinion, has contributed to the fraying of the social fabric in the US, caused harm, and eroded behaviors that are supposed to be subordinated to "just what people do to make a social system function smoothly."

Does the Wired write up identify the alleged cabal? Nope.

Does the write up explain that the Ré / Snorkel methods sacrifice some precision in the rush to generate good enough outputs? (Good enough can be framed in terms of ad revenue, reduced costs, and faster time to market testing in my opinion.) Nope.

Does Dr. Gebru explain how insidious the shortcut training of models is and how it will create systems which actively harm those outside the 60 percent threshold of certain statistical yardsticks? Heck, no.

Hopefully some bright researchers will explain what’s happening with a “deep dive”? Oh, right, Deep Dive is the name of a content access company which uses Dr. Ré’s methods. Ho, ho, ho. You didn’t know?

Beyond Search believes that Dr. Gebru has important contributions to make to applied smart software. Just hurry up already.

Stephen E Arnold, December 2, 2021

Interviews

DarkCyber for June 9, 2020, Is Now Available: AI and Music Composition

The DarkCyber for June 9, 2020, presents a critical look at music generated by artificial intelligence. The focus is the award-winning song in the Eurovision AI 2020 competition. The interview discusses the characteristics of AI-generated music, its impact on music directors, how professional musicians deal with machine-created music, and the implications of non-human music. The program is a criticism of the state of the art for smart software. Instead of focusing on often over-hyped start-ups and large companies making increasingly exaggerated claims, the Australian song and the two musicians make clear that AI is a work in progress. You can view the video at https://vimeo.com/427227666.

Kenny Toth, June 9, 2020

Latest News

Amazon: Engendering Excitement and Questions about Failover and Reliability

Amazon’s big-bang conference is mostly a memory. I don’t think the conference announcements or the praise sung by the choir of Amazon faithful can top this story:... Read more »

December 8, 2021 | Comment

New Management Method: High School Science Club Wants to Run the School District

I love those confident, ever youthful, and oh-so enthusiastic high school science club members. Many of these individuals maintain their youthful insights into adulthood.... Read more »

December 8, 2021 | Comment

Who Says Teachers Do Not Understand Social Media? Not TikTok

It can be difficult to keep up with what the kids are doing on video-sharing app TikTok, and much of it is just playful fun. However, here is an unnerving shift... Read more »

December 8, 2021 | Comment

US Government Procurement: Diagram the Workflow: How Many Arrows Point Fingers?

I want to keep this short. For a number of years, I have pointed out that current Federal procurement procedures and the policies the steps are supposed to implement... Read more »

December 8, 2021 | Comment

Smart Software and Cartels: Another View of the Question To Google or Not to Google?

I read “A Cartel of Influential Datasets Is Dominating Machine Learning Research New Study Suggests.” The “team” beavering away is an impressive one to the... Read more »

December 7, 2021 | Comment

If One Thinks One Is Caesar, Is That Person Caesar? Thumbs Up or Thumbs Down

I read a story which may or may not be spot on. Nevertheless, I found it amusing, and if true, not so funny. The story is “Facebook Refuses to Recognize Biden’s... Read more »

December 7, 2021 | Comment

Smart Software Is Innovative: Two Marketing Examples, You Doltish Humanoids

I zipped through the news releases, headlines, and emails which accumulate in my system. I spotted two stories. Each made the case that smart software — created... Read more »

December 7, 2021 | Comment

Content Control: More and More Popular

A couple recent articles emphasize there is at least some effort being made to control harmful content on social media platforms. Are these examples of responsible... Read more »

December 7, 2021 | Comment

More AI Foibles: Inheriting Biases

Artificial intelligence algorithms are already implemented in organizations, but the final decisions are still made by humans. It is fact that algorithms are unfortunately... Read more »

December 7, 2021 | Comment

A Digital Don Quixote Saddles Up and Sallies Forth

I read "Apple Takes Russia to Court over App Store Ruling." Wow, not since my high school days have I encountered such an enchanting slice of fiction. The guide... Read more »

December 6, 2021 | Comment
