Great Moments in Image Recognition: Rifle or Turtle?

November 7, 2017

I read “AI Image Recognition Fooled by Single Pixel Change.” The write up explains:

In their research, Su Jiawei and colleagues at Kyushu University made tiny changes to lots of pictures that were then analyzed by widely used AI-based image recognition systems…The researchers found that changing one pixel in about 74% of the test images made the neural nets wrongly label what they saw. Some errors were near misses, such as a cat being mistaken for a dog, but others, including labeling a stealth bomber a dog, were far wider of the mark.

Let’s assume that these experts are correct. My thought is that neural networks may need a bit of tweaking.
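The reported attack boils down to finding one pixel whose change shifts a classifier's output. The researchers used differential evolution to search for that pixel; as a minimal sketch of the same idea, here is a brute-force single-pixel search against a toy linear "classifier" (the model, image size, and class names are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a linear model over a
# flattened 8x8 grayscale image, with two classes ("cat", "dog").
# The real paper attacked deep networks; a linear model keeps the
# sketch self-contained.
W = rng.normal(size=(2, 64))

def predict(img):
    """Return the two class scores for a flattened image."""
    return W @ img.ravel()

def one_pixel_attack(img, target_class, values=(0.0, 1.0)):
    """Brute-force search (not the paper's differential evolution):
    try setting each single pixel to an extreme value and keep the
    change that most boosts the target class's score."""
    best_idx, best_val, best_score = None, None, predict(img)[target_class]
    for i in range(img.size):
        for v in values:
            trial = img.copy()
            trial.ravel()[i] = v
            score = predict(trial)[target_class]
            if score > best_score:
                best_idx, best_val, best_score = i, v, score
    return best_idx, best_val, best_score

img = rng.uniform(size=(8, 8))
orig_label = int(np.argmax(predict(img)))
target = 1 - orig_label  # push toward the other class
idx, val, score = one_pixel_attack(img, target)
# A single-pixel change may already flip the predicted label,
# which is the effect the paper reports for ~74% of test images.
```

The point of the sketch is how little the attacker touches: one coordinate of the input, searched exhaustively here, searched by an evolutionary heuristic in the paper.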

What about facial recognition? I don’t want to elicit the ire of Xooglers, Apple iPhone X users, or the savvy folks at universities honking the neural network horns. Absolutely not. My goodness. What if I, at age 74, wanted to apply via LinkedIn and its smart software for a nine-to-five job sweeping floors?

Years ago I prepared a series of lectures pointing out how widely used algorithms were vulnerable to directed flows of shaped data. Exciting stuff.

The write up explains that the mavens are baffled:

There is certainly something strange and interesting going on here, we just don’t know exactly what it is yet.

May I suggest that the assumption that these methods work as sci-fi and tech cheerleaders say they do is incorrect?

Stephen E Arnold, November 7, 2017
