Google: Trolls and Love

January 24, 2019

Internet trolls are as old as the Internet. They are annoying, idiotic, and sad individuals, and people are getting tired of them. While it is usually best to ignore trolls, some take things to the next level and need to be dealt with seriously. Google, Twitter, Facebook, and other technology companies are implementing AI to detect toxic comments and hate speech. Unfortunately, these AIs are easy to undermine. The Next Web shares that, “Google’s AI To Detect Toxic Comments Can Be Easily Fooled With ‘Love.’”

According to the article, Google’s Perspective AI is easily fooled by typos, extra spaces between words, and innocuous words added to sentences. Google is trying to make the Internet a nicer place:

“The AI project, which was started in 2016 by a Google offshoot called Jigsaw, assigns a toxicity score to a piece of text. Google defines a toxic comment as a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion. The researchers suggest that even a slight change in the sentence can change the toxicity score dramatically. They saw that changing “You are great” to “You are [obscenity] great”, made the score jump from a totally safe 0.03 to a fairly toxic 0.82.”

The AI uses words with negative meanings to compute a toxicity score. Its design is probably quite simple: negative words are assigned a 1 and positive words a 0. Human speech and emotion are more complicated than what such an AI can detect, so sentiment analytics are needed. The only problem is that sentiment analytics are just as easily fooled as Google’s Jigsaw. How can Google improve this? Time, money, and more trial and error.
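The attacks the article describes can be sketched with a toy bag-of-words scorer. The word list and scoring scheme below are assumptions for illustration only, not Jigsaw’s actual model, but they show why typos and padding with innocuous words like “love” shift the score:

```python
# Toy bag-of-words toxicity scorer: a hypothetical illustration of why
# lexicon-based scoring is easy to fool. Word list and formula are
# assumptions, not the Perspective API's real model.

TOXIC_WORDS = {"idiot", "stupid", "hate", "trash"}

def toxicity_score(text: str) -> float:
    """Return the fraction of tokens found in the toxic-word list."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    toxic = sum(1 for t in tokens if t.strip(".,!?") in TOXIC_WORDS)
    return toxic / len(tokens)

# A plainly toxic comment scores relatively high...
print(toxicity_score("you are an idiot"))                  # 0.25
# ...an extra space breaks the lexicon lookup entirely...
print(toxicity_score("you are an id iot"))                 # 0.0
# ...and padding with innocuous words dilutes the score.
print(toxicity_score("you are an idiot love love love love"))  # 0.125
```

Real models are far more sophisticated than this, but as the researchers found, they remain vulnerable to the same character-level and dilution tricks.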

Whitney Grace, January 24, 2019
