The Discovery of the “Adversarial” Image Blind Spot in Neural Networks

July 18, 2014

The article titled Does Deep Learning Have Deep Flaws on KDnuggets explains the implications of the results of a recent study of neural networks and image classification. The study, conducted by researchers at Google, NYU, and the University of Montreal, found that a previously unrecognized flaw exists in neural networks when it comes to classifying images that appear identical to the human eye. For any given network, researchers can generate misclassified “adversarial” images that look exactly the same as correctly classified ones. The article goes on to explain,

“The network may misclassify an image after the researchers applied a certain imperceptible perturbation. The perturbations are found by adjusting the pixel values to maximize the prediction error. For all the networks we studied (MNIST, QuocNet, AlexNet), for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network… The continuity and stability of deep neural networks are questioned. The smoothness assumption does not hold for deep neural networks any more.”
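The quoted passage describes the core recipe: treat the network’s prediction error as a function of the input pixels, then adjust those pixels to push the error up. As a rough illustration of that idea only (the study itself used a different optimization procedure; the single gradient step shown here is a simplification, and model, image, and true_label are hypothetical placeholders), a minimal sketch in Python with PyTorch might look like this:

```python
# Illustrative sketch only -- not the study's exact method. It shows the
# idea from the quote above: nudge pixel values in the direction that
# increases the prediction error. Assumes a hypothetical pretrained
# classifier `model` and an `image` tensor with values in [0, 1].

import torch
import torch.nn.functional as F

def make_adversarial(model, image, true_label, epsilon=0.007):
    """Return a perturbed copy of `image` that raises the model's loss."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the correct label.
    logits = model(image.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([true_label]))

    # Gradient of the loss with respect to the input pixels.
    loss.backward()

    # One small step in the direction that maximizes the error,
    # clamped so the result is still a valid image.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

In this sketch, epsilon controls how large the pixel adjustment is; the smaller it is, the harder the change is to spot, which is what makes the resulting images “visually indistinguishable” from the originals.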

The article makes this statement and later raises the possibility that such “adversarial” blind spots exist even in the human brain. Since the study found that a single perturbation can cause misclassification across separate networks trained on different datasets, it suggests that these “adversarial” images are to some degree universal. Most importantly, the study suggests that AI has blind spots that have not been addressed. They may be rare, but as our reliance on the technology grows, they must be recognized and somehow accounted for.

Chelsea Kerwin, July 18, 2014

Sponsored by ArnoldIT.com, developer of Augmentext
