Smart Software: Chess Is One Thing, Bias Another

December 13, 2017

I enjoyed learning how Google’s smart software taught itself chess in four hours and was able to perform at a high level against mere humans. I also got a kick out of the news that Google’s smart software cannot filter YouTube videos for objectionable material. Google is in the process of hiring 10,000 humans to wade through the hundreds of hours of video uploaded to YouTube every minute. Ironic? No, just different PR teams.

I read “Researchers Combat Gender and Racial Bias in Artificial Intelligence.” The write up assumes that everyone knows algorithms contain biases. Sure, that’s a safe assumption for most people.

The reality is that comparatively few algorithmic approaches dominate smart software today. The building blocks are simply arranged in different sequences. The Facebookers and Googlers chug away at setting thresholds, working with subsets to chop Big Data down to something affordable, and other algorithmic donkey work.

But it appears that some folks have now realized that smart software contains biases. I would toss in ethics, but that’s another epistemological challenge to keep “real” journalists on the hunt for stories.

The write up asserts:

While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru [a Stanford AI wizard] says her work was susceptible to bias — racial, gender, socioeconomic.

Well, Microsoft and IBM are tackling this interesting challenge:

Researchers at Microsoft, IBM and the University of Toronto identified the need for fairness in AI systems back in 2011. Now in the wake of several high-profile incidents — including an AI beauty contest that chose predominantly white faces as winners — some of the best minds in the business are working on the bias problem.

I was tickled to learn that the smart software outfit Google has a different approach:

Google researchers are studying how adding some manual restrictions to machine learning systems can make their outputs more understandable without sacrificing output quality, an initiative nicknamed GlassBox.

Yep, humans. Chess is an easier problem to solve than bias. But in comparison to ethics, bias strikes me as a lower hurdle.

Ah, the irony. Humans instead of software at the GOOG.

Stephen E Arnold, December 13, 2017

