Smart Software Writes: Fake News May Improve Its Fakiness

March 17, 2018

Humans are harsh critics, but could AI become even harsher? We think AI is still too limited to read stories and understand their context. AI is still not smart enough to attach context and tone to words (a task known as sentiment analysis), but it can distinguish news stories from personal opinions and experiences. Vice's Motherboard published an article discussing how many news stories are genuinely new and noteworthy, titled “AI System Sorts News Articles By Whether Or Not They Contain Actual Information.”

In order to separate the white noise from the signal, a machine learning system would need an objective metric of content density and an objective way to evaluate news stories against that metric. An AI that could distinguish real stories from fake ones would be built like any other machine learning program: gather a large amount of data and classify it by splitting it into appropriate groups. One team built an AI based on this model and came back with decent returns:
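The gather-and-split approach described above can be sketched as a toy Naive Bayes text classifier. This is our own illustration, not the authors' actual system: the labels ("dense" vs. "chatty") and the training sentences are invented for the example, and a real content-density classifier would use far richer features and training data.

```python
from collections import Counter
import math


def tokenize(text):
    """Crude whitespace tokenizer; a real system would do far more."""
    return text.lower().split()


def train(examples):
    """examples: list of (text, label). Returns per-label word counts
    and per-label document counts."""
    counts = {}               # label -> Counter of word frequencies
    label_totals = Counter()  # label -> number of documents
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        label_totals[label] += 1
    return counts, label_totals


def classify(text, counts, label_totals):
    """Pick the label with the highest log-probability under a
    bag-of-words Naive Bayes model with Laplace smoothing."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(label_totals.values())
    best, best_score = None, float("-inf")
    for label, word_counts in counts.items():
        score = math.log(label_totals[label] / n_docs)  # prior
        total = sum(word_counts.values())
        for w in tokenize(text):
            # Add-one smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best


# Invented toy training data: fact-heavy vs. opinion-heavy sentences.
examples = [
    ("the company reported quarterly revenue of 4 billion dollars", "dense"),
    ("researchers published the results in a peer reviewed journal", "dense"),
    ("i think this is amazing and i love it so much", "chatty"),
    ("wow what a day i feel great about everything", "chatty"),
]
counts, totals = train(examples)
print(classify("the company reported revenue", counts, totals))
```

The sketch only shows the shape of the idea: label a training set, count features per label, and assign new documents to whichever group fits best. The paper's 80 percent figure comes from a far more sophisticated model and feature set.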

“In a recent paper published in the Journal of Artificial Intelligence Research, computer scientists Ani Nenkova and Yinfei Yang, of Google and the University of Pennsylvania, respectively, describe a new machine learning approach to classifying written journalism according to a formalized idea of “content density.” With an average accuracy of around 80 percent, their system was able to accurately classify news stories across a wide range of domains, spanning from international relations and business to sports and science journalism, when evaluated against a ground truth dataset of already correctly classified news articles.”

When the system was evaluated against a subset of data that had been labeled for validation purposes, however, accuracy dropped to only about fifty percent.

Our view of this “progress” is that the software can apparently be trained by feeding it fake news. If the information in an article is accurate, the smart software will then be able to improve the fake news. Does this mean the improved fake news will be “better”?

The problem is that AI still has trouble deciphering human intent and the true meaning behind words, much the way some autistic people have trouble reading social cues. AI will need to become much more human to grasp the intricacies of human language.

Whitney Grace, March 17, 2018

