The Crux of the Smart Software Challenge

February 24, 2021

I read “There Is No Intelligence without Human Brains.” The essay is not about machine learning, artificial intelligence, or fancy algorithms. One of the points I found interesting was:

But, humans can opt for long-term considerations, sacrificing to help others, moral arguments, doing unhelpful things as a deep scream for emotional help, experimenting to learn, training themselves to get good at something, beauty over success, etc., rather than just doing what is comfortable or feels nice in the short run or simply pro-survival.

However, one sentence focused my thinking on the central problem of smart software and may explain the odd, knee-jerk, and high-profile personnel problems in Google’s AI ethics unit. Here’s the sentence:

Poisoning may greatly hinder our flexible intelligence.

Smart software has to be trained. The system can be hand-fed training sets crafted by fallible humans, or it can ingest whatever flows into it. Some smart software systems do both. One of the first commercial products to rely on training sets and “analysis on the fly” was the Autonomy system. A couple of people attached the phrase “neurolinguistic programming” to the Autonomy black box.
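To make the poisoning point concrete, here is a minimal sketch in Python. The data and the classifier are invented for illustration; this reflects no actual vendor’s method, Autonomy’s included. A handful of mislabeled examples is enough to flip a toy word-frequency classifier’s answer:

# Toy illustration of training-set poisoning (hypothetical data).
# The same classifier is trained twice: once on a clean set, once
# on a set where a few mislabeled examples tie "budget" to "bad".
from collections import Counter

def train(examples):
    # Count word occurrences per label; examples are (text, label) pairs.
    counts = {"good": Counter(), "bad": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Pick the label whose training-set words overlap the text most.
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

clean = [
    ("great product works well", "good"),
    ("excellent value great support", "good"),
    ("terrible broken waste", "bad"),
    ("awful failure broken", "bad"),
]
# Poisoned set: four mislabeled examples linking a neutral word to "bad".
poisoned = clean + [("budget option", "bad")] * 4

query = "budget product works well"
print(classify(train(clean), query))     # prints: good
print(classify(train(poisoned), query))  # prints: bad

The mechanism scales down to four repeated lines here, but the lesson scales up: the model has no way to distinguish a poisoned signal from a legitimate one; it simply counts.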

What’s stirring up dust at Google may be nothing more than fear; for example:

  • Statements by those terminated suggest that the bias in smart software is a fundamental characteristic of Google’s approach to artificial intelligence; that is, the datasets themselves send the smart software off the rails
  • The quest for the root of the bias shines a light on the limitations of current commercial approaches to smart software; that is, vendors make outrageous claims in order to maintain a charade about capabilities which may be quite narrow and biased
  • The data gathered by the Xooglers may reveal that Google’s approach is not as well formed as the company wants competitors and others to believe; that is, marketers and MBAs outpace what the engineers can deliver.

The information from which an artificial intelligence system “learns” may be poisoning the system. Check out the Times of Israel essay. It is thought-provoking and may reveal the source of Google’s interesting personnel management decisions.

Fear can trigger surprising actions.

Stephen E Arnold, February 23, 2021

