Alphabet Google Smart Software Cannot Define Hate

March 7, 2017

Again. More artificial intelligence news. Or is it “fake news”? Many people want smart software to generate big revenues. Salesforce has Einstein. Vendors of keyword-centric information retrieval systems suddenly have smart software. At a meeting this week, a vendor of plastic piping said his new inventory system was intelligent. Yep, plastic pipes.

I read “Alphabet’s Hate-Fighting AI Doesn’t Understand Hate Yet.” That struck me as odd, given what I learned in “Google’s AI Learned to Be Highly Aggressive When Stressed.” I assumed that an aggressive AI would take on an online dictionary, wrest the definition of hate from the Web site, and stuff the bits into the voracious multi-petabyte storage system available to DeepMind.

The issue of defining hate is central to detecting hate speech. I think this is a gentle way of saying that unless the text contains a cue, like a known entity that outputs nastygrams or a word from a list of terms likely to convey hate, the smart software performs like most smart software; that is, somewhere in the 40 to 65 percent accuracy range. Toss in human help, assorted dictionaries, a curated set of hateful content objects, and patient tuning, and there are smiles all around.
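To make the point concrete, here is a minimal sketch of the dictionary-and-entity approach in Python. The word list, entity list, and matching rule are hypothetical, invented for illustration; real systems layer curated dictionaries and patient human tuning on top of something like this.

```python
# Minimal sketch of dictionary-driven "hate" detection. The word list,
# entity list, and matching rule are hypothetical, invented for
# illustration; they are not any vendor's actual resources.

HATE_WORDS = {"scum", "vermin", "subhuman"}   # hypothetical word list
KNOWN_BAD_ACTORS = {"troll_account_42"}       # hypothetical entity list

def looks_hateful(author: str, text: str) -> bool:
    """Flag text only when a known entity or a listed word appears."""
    if author in KNOWN_BAD_ACTORS:
        return True
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return bool(tokens & HATE_WORDS)

# Brittle by design: obfuscation, misspellings, and novel slurs sail
# through, while a benign sentence quoting a listed word gets flagged.
print(looks_hateful("reader", "Those people are vermin."))  # True
print(looks_hateful("reader", "Y0u verm1n lol"))            # False
```

The gap between what the cue lists catch and what humans read as hateful is one way to end up in that 40 to 65 percent accuracy range.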

The write up sidesteps my views and offers:

Google and its sister Alphabet company Jigsaw announced Perspective, a tool that uses machine learning to police the internet against hate speech.

And then noted:

Computer scientists and others on the internet have found the system unable to identify a wide swath of hateful comments, while categorizing innocuous word combinations like “hate is bad” and “garbage truck” as overwhelmingly toxic.

We know about Google’s track record of releasing early versions of software that maybe will sort of work a little bit someday.
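Curious readers can poke at the scores themselves. At launch, Perspective exposed a REST endpoint for requesting a TOXICITY score; here is a minimal sketch in Python, based on the v1alpha1 endpoint Jigsaw documented at the time and assuming a valid API key (the key below is a placeholder):

```python
# Minimal sketch of querying the Perspective API for a toxicity score,
# based on the v1alpha1 endpoint documented at launch. The API key is
# a placeholder; a real one must be requested from the project.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return Perspective's summary toxicity score (0.0 to 1.0)."""
    payload = json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# At launch, innocuous phrases like these reportedly scored as toxic.
for phrase in ("hate is bad", "garbage truck"):
    print(phrase, toxicity(phrase))
```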

The Googlers are busy doing what Autonomy did in the 1990s and what other vendors of smart software have done in the subsequent quarter century: teach the software by spoon-feeding information into the system.

The write up points out:

Like all machine-learning algorithms, the more data the Perspective API has, the better it will work. The Alphabet subsidiary worked with partners like Wikipedia and The New York Times to gather hundreds of thousands of comments, and then crowdsourced 10 answers for each comment on whether they were toxic or not. The effort was intended to kick-start the deep neural network that makes up the backbone of the Perspective API.
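That crowd-labeling step boils down to collapsing ten yes/no judgments per comment into one training target. Here is a minimal sketch with invented judgments; the simple-majority threshold is my assumption for illustration, not Jigsaw’s published rule:

```python
# Minimal sketch of turning crowdsourced judgments into training labels.
# The judgments and the majority-vote threshold are invented for
# illustration; Jigsaw's actual aggregation scheme may differ.

def aggregate_label(judgments: list[bool]) -> float:
    """Fraction of raters who marked the comment toxic (0.0 to 1.0)."""
    return sum(judgments) / len(judgments)

# Ten raters per comment, as the write up describes (values invented).
comment_judgments = [True, True, False, True, False,
                     True, True, False, True, True]
score = aggregate_label(comment_judgments)
print(score)                               # 0.7
print("toxic" if score >= 0.5 else "ok")   # simple majority threshold
```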

I love the “all” in that passage. The truth of the matter is that self-learning software just doesn’t work as well as more narrowly defined artificial intelligence systems. Buy a book, and Amazon looks in its database to see what other book buyers with a statistically similar profile seem to have bought. Bang. A smart recommendation. Skip the fact that books already purchased and stored in an Amazon database appear again and again on my list of recommended books. Smart but stupid, and that’s a reasonably good implementation of smart software.
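To illustrate the “smart but stupid” point, here is a minimal sketch of item-based, customers-who-bought-X-also-bought-Y recommendations. The purchase data is invented, this is not Amazon’s actual pipeline, and the already-purchased bug is left in deliberately:

```python
# Minimal sketch of item-based "customers who bought X also bought Y"
# recommendations. The purchase data is invented; this is not Amazon's
# actual pipeline. The bug called out above is left in: nothing filters
# out books the customer already owns.
from collections import Counter

PURCHASES = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_a", "book_c", "book_d"},
    "carol": {"book_b", "book_c", "book_e"},
}

def recommend(customer: str, top_n: int = 3) -> list[str]:
    """Rank books bought by customers with overlapping histories."""
    owned = PURCHASES[customer]
    scores: Counter[str] = Counter()
    for other, basket in PURCHASES.items():
        if other == customer or not (owned & basket):
            continue
        scores.update(basket)  # each co-purchase counts as a vote
    # The "smart but stupid" part: omitting `if book not in owned`
    # means already-purchased titles keep reappearing in the list.
    return [book for book, _ in scores.most_common(top_n)]

print(recommend("alice"))  # includes books alice already owns
```

One missing filter is the difference between a useful recommendation and the same books showing up again and again.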

The write up works through examples of hate speech; consult the source document for the lists. It works overtime to gild the lily and put some stage makeup on what seems to be a somewhat dowdy implementation of DeepMind / Google’s artificial intelligence.

Hey, I don’t want to drag another cat into the kitchen, but why not ask Watson what hate means? My hunch is that either the Google / DeepMind engineers or the IBM Watson engineers will have a laugh over that idea. The smart software, on the other hand, might try to knock some sense into the competitor’s system.

Stephen E Arnold, March 7, 2017
