Algorithms Are Neutral. Well, Sort of Objective Maybe?

October 12, 2018

I read “Amazon Trained a Sexism-Fighting, Resume-Screening AI with Sexist Hiring Data, So the Bot Became Sexist.” The main point is that if the training data are biased, the smart software will be biased.

No kidding.

The write up points out:

There is a “machine learning is hard” angle to this: while the flawed outcomes from the flawed training data were totally predictable, the system’s self-generated discriminatory criteria were surprising and unpredictable. No one told it to downrank resumes containing “women’s” — it arrived at that conclusion on its own, by noticing that this was a word that rarely appeared on the resumes of previous Amazon hires.
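The mechanism is easy to reproduce in miniature. Here is a minimal sketch, assuming a plain bag-of-words screener trained with scikit-learn’s logistic regression; the resumes and labels are invented for illustration, and this is emphatically not Amazon’s system:

```python
# Minimal sketch (not Amazon's system): a bag-of-words screener trained
# on biased hiring outcomes learns to penalize "women's" on its own.
# All resumes and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" data: past hires (label 1) rarely contain "women's".
resumes = [
    "captain chess club, java developer",           # hired
    "java developer, intern search team",           # hired
    "captain women's chess club, java developer",   # rejected
    "women's coding society lead, java developer",  # rejected
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()          # tokenizes "women's" to the token "women"
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Nothing in this code mentions gender, yet the learned weight for
# "women" is negative: the model inferred the rule from the labels.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

The model ends up downranking any resume containing the token “women” even though no one coded that rule; the bias rides in entirely on the historical labels.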

Now the company that discovered its smart software had become biased on its own was Amazon.

That’s right.

The same Amazon that has invested significant resources in its SageMaker machine learning platform. This is part of the infrastructure which, Amazon hopes, will propel the US Department of Defense forward for the next five years.

Hold on.

What happens if the system and method produce wonky outputs when a minor dust-up is automatically escalated?

Discriminating in hiring is one thing. Fluffing a global matter is another.

Do the smart software systems from Google, IBM, and Microsoft have similar tendencies? My recollection is that this type of “getting lost” has surfaced before. Maybe those innovators pushing narrowly scoped, rule-based systems were on to something?

Stephen E Arnold, October 12, 2018
