More on Biased Algorithms: Humans in the Mix

September 6, 2016

I read “When Computers Learn Human Languages, They Also Learn Human Prejudices.” The write up makes a point that seems obvious to me and the goslings: numbers may be neutral in the ivory tower of a mathematician in Minsk or Midland, but in the world of smart software, the human influence may be as inescapable as death. Oh, Google will solve death, and I suppose at some point Google will eliminate the human element from its fancy math.

For all others, I learned:

Implicit biases are a well-documented and pernicious feature of human languages.

Okay.

In the write up, which is full of revelations, I highlighted this passage:

New research from computer scientists at Princeton suggests that computers learning human languages will also inevitably learn those human biases.
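The mechanism behind that claim is simple enough that even an addled goose can sketch it. The snippet below is my own illustration, not the Princeton researchers’ method: it builds crude co-occurrence vectors from an invented four-sentence corpus (the sentences are hypothetical, chosen to make the point) and checks which pronoun each job title sits closest to.

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical toy corpus, invented for illustration only.
corpus = [
    "the doctor said he would operate",
    "the nurse said she would help",
    "he is a doctor",
    "she is a nurse",
]

# Represent each word by counts of the words it shares a sentence with.
vocab = {w for line in corpus for w in line.split()}
cooc = {w: Counter() for w in vocab}
for line in corpus:
    for a, b in combinations(line.split(), 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v)

# The vectors reproduce the corpus's associations: "doctor" lands
# closer to "he" and "nurse" closer to "she," because the text said so.
for word in ("doctor", "nurse"):
    print(word,
          "he:", round(cosine(cooc[word], cooc["he"]), 2),
          "she:", round(cosine(cooc[word], cooc["she"]), 2))
```

Scale the invented corpus up to the open Web, and the same arithmetic bakes the same associations into the smart software. No prejudice is required on the part of the math; the text supplies it.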

What’s the fix? The write up and the wizards have an answer:

The solution to these problems is probably not to train algorithms to be speakers of a more ideal English language (or believers in a more ideal world), but rather in ensuring “algorithmic accountability” (pdf), which calls for layers of accountability for any decisions in which an algorithm is involved…. It may be necessary to override the results to compensate—a sort of “fake it until you make it” strategy for erasing the biases that creep into our algorithms.

I love the “fake it until you make it” idea.

Who will analyze the often not-so-accessible numerical recipes in use at the centralized online services? Will it be “real” journalists? Will it be legal eagles? Or will it be self-regulation of the sort the banking sector enforces with such assiduousness?

My hunch is that this algorithm bias thing will be a problem a bit like death; that is, no solution for now.

Stephen E Arnold, September 6, 2016
