AI: Black Boxes ‘R Us

November 23, 2022

Humans design and build AI, so we should know how it works. Yet, for some reason, we do not. Motherboard on Vice explains that “Scientists Increasingly Can’t Explain How AI Works.” AI researchers worry that AI developers focus more on an algorithm’s end results than on how and why it arrives at those results.

In other words, developers cannot explain how an AI algorithm works. AI algorithms are built from layers and layers of deep neural networks (DNNs), which are designed to mimic human neural pathways. The resemblance holds in one unfortunate respect: neurologists do not fully understand how the entire brain works, and AI developers do not fully understand how their algorithms work. Developers concentrate on the inputs and outputs, while the in-between remains the mythical black box. Because they do not examine how the outputs are produced, they cannot explain why the results come back biased or polluted.
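The black box can be shown in miniature. Here is a minimal sketch of a two-layer network in plain Python (the weights are arbitrary illustrative numbers, not a trained model): the input and output are easy to read, but the hidden-layer activations are just unlabeled numbers with no obvious human meaning.

```python
import math

def sigmoid(x):
    """Standard squashing function used at each unit."""
    return 1.0 / (1.0 + math.exp(-x))

# Arbitrary illustrative weights (hypothetical, not trained).
W_hidden = [[0.9, -0.4], [-0.7, 0.8], [0.2, 0.5]]  # 3 hidden units, 2 inputs
W_output = [0.6, -1.1, 0.3]                        # 1 output unit, 3 hidden units

def forward(inputs):
    """Run the inputs through the hidden layer, then the output layer."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in W_hidden]
    output = sigmoid(sum(w * h for w, h in zip(W_output, hidden)))
    return hidden, output

hidden, output = forward([1.0, 0.5])
print(hidden)  # three activations between 0 and 1 -- what does each one *mean*?
print(output)  # the answer the system hands back
```

Even in this toy, the middle layer is already opaque; a production DNN stacks millions of such units, which is where the explainability problem lives.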

“‘If all we have is a “black box”, it is impossible to understand causes of failure and improve system safety,’ Roman V. Yampolskiy, a professor of computer science at the University of Louisville, wrote in his paper titled ‘Unexplainability and Incomprehensibility of Artificial Intelligence.’ ‘Additionally, if we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.’”

It sounds like the Schrödinger’s cat of black boxes.

Developers’ results are driven by tight deadlines and small budgets, so they concentrate on accuracy over explainability. Algorithms are also (supposedly) more accurate than humans, so it is easy to rely on them. Making algorithms less biased is another black box, especially when the Internet is skewed one way:

“Debiasing the datasets that AI systems are trained on is near impossible in a society whose Internet reflects inherent, continuous human bias. Besides using smaller datasets, in which developers can have more control in deciding what appears in them, experts say a solution is to design with bias in mind, rather than feign impartiality.”

Couldn’t training an algorithm be like teaching a pet to do tricks with positive reinforcement? What would an algorithm consider a treat? And didn’t a fellow named Gödel bring up incompleteness? Clicks, clicks, and more clicks.

Whitney Grace, November 23, 2022
