Thoughts about AI Bias: Are Data Non-Objective?

December 10, 2021

I read “Breaking Bias — Ensuring Fairness in Artificial Intelligence.” The substance of the write up is an interview with Alix Melchy, VP of AI at Jumio. Okay.

I did note a few interesting statements in the interview.

First, Mr. Melchy takes aim at Snorkel-type systems and methods. These are efficient and do away with most of the expensive, human-intensive training data set work (a sketch of the approach appears below). Here’s his statement:

… fairness bias … enters into AI systems through training data that contains skewed human decisions or represents historical or social prejudices.

Data sets which are not woke are, it seems, going to be biased.
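For readers unfamiliar with “Snorkel-type” systems: the idea is programmatic labeling, in which a handful of heuristic labeling functions stand in for armies of human annotators, and a label model combines their noisy votes into training labels. The sketch below uses the open source Snorkel library’s documented API; the labeling functions, label names, and toy data are hypothetical illustrations, not anything from Jumio or the cited interview.

```python
# Minimal sketch of Snorkel-style programmatic labeling.
# Heuristic labeling functions replace hand-labeled training data.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, BIASED, FAIR = -1, 1, 0  # hypothetical label scheme for illustration

@labeling_function()
def lf_keyword(x):
    # Crude heuristic: flag records containing a sensitive keyword.
    return BIASED if "denied" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_length(x):
    # Another weak signal: very short records are assumed fair here.
    return FAIR if len(x.text) < 40 else ABSTAIN

# Toy data; a real project would apply the functions to unlabeled records.
df_train = pd.DataFrame({"text": [
    "loan denied for applicant",
    "application approved",
    "request denied pending documentation review by the committee",
    "approved same day",
]})

# Apply the labeling functions, then let the label model reconcile their votes.
applier = PandasLFApplier(lfs=[lf_keyword, lf_length])
L_train = applier.apply(df=df_train)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=100, seed=123)
probs = label_model.predict_proba(L=L_train)  # probabilistic training labels
print(probs)
```

The efficiency comes from skipping manual annotation entirely, which is also why Mr. Melchy’s point lands: whatever skew sits in the heuristics or the underlying records flows straight into the generated labels.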

Second, Mr. Melchy says:

bias can be damaging to the credibility of AI as a whole,

Do the AI methods manifested by big tech care? Nope, not as long as the money flows into the appropriate bank accounts, in my opinion.

Third, Mr. Melchy notes:

… companies that don’t build an AI system with bias considerations from the start are never going to catch up to an industry-standard level of accuracy.

Okay, Google. Alexa, are you listening?

Stephen E Arnold, December 10, 2021
