Recognition (People and Things) Not 100 Percent Yet

November 24, 2021

It may sound like a good idea—use technology to find illegal images, like those of child sexual abuse, and report the criminals who circulate them. Apple, for example, proposed placing such a tool on all its personal devices but postponed the plan due to privacy concerns. And some law enforcement agencies are reportedly considering using the technology. However, researchers at Imperial College London have found “Proposed Illegal Image Detectors on Devices Are ‘Easily Fooled’.” Reporter Caroline Brogan writes:

“Researchers who tested the robustness of five similar algorithms found that altering an ‘illegal’ image’s unique ‘signature’ on a device meant it would fly under the algorithm’s radar 99.9 per cent of the time. The scientists behind the peer-reviewed study say their testing demonstrates that in its current form, so-called perceptual hashing based client-side scanning (PH-CSS) algorithms will not be a ‘magic bullet’ for detecting illegal content like CSAM [Child Sexual Abuse Material] on personal devices. It also raises serious questions about how effective, and therefore proportional, current plans to tackle illegal material through on-device scanning really are. The findings are published as part of the USENIX Security Conference in Boston, USA. Senior author Dr Yves-Alexandre de Montjoye, of Imperial’s Department of Computing and Data Science Institute, said: ‘By simply applying a specifically designed filter mostly imperceptible to the human eye, we misled the algorithm into thinking that two near-identical images were different. Importantly, our algorithm is able to generate a large number of diverse filters, making the development of countermeasures difficult. Our findings raise serious questions about the robustness of such invasive approaches.’”
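To make the mechanism concrete, here is a minimal Python sketch of a toy perceptual hash (a simple “average hash,” not any of the five PH-CSS algorithms the researchers actually tested, which are proprietary or neural-network based). It is an illustration under stated assumptions only: a perceptual hash reduces an image to a coarse signature, random noise typically barely changes that signature, and that is precisely why the study’s evasion filters had to be specifically optimized against the hash while remaining nearly imperceptible.

```python
# Illustrative sketch only: a toy "average hash" (aHash), not the actual
# PH-CSS algorithms from the Imperial College study. It shows how a
# perceptual hash summarizes an image and how a match would be judged by
# Hamming distance against a database of known signatures.

from PIL import Image
import numpy as np


def average_hash(img: Image.Image, hash_size: int = 8) -> np.ndarray:
    """Shrink to hash_size x hash_size grayscale, then threshold at the mean."""
    small = img.convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = np.asarray(small, dtype=np.float64)
    return (pixels > pixels.mean()).flatten()


def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of hash bits that differ between two signatures."""
    return int(np.count_nonzero(a != b))


# Synthetic stand-in for a photo (a smooth gradient), so the script runs
# without any external image file.
rng = np.random.default_rng(0)
base = np.linspace(0, 255, 256 * 256).reshape(256, 256).astype(np.uint8)
original = Image.fromarray(base)

# A crude perturbation: low-amplitude random noise. Because the hash
# averages over many pixels, this usually flips few or no bits -- which is
# why a successful evasion filter must be designed specifically against
# the target hash, as the researchers describe.
noise = rng.integers(-6, 7, size=base.shape)
perturbed_pixels = np.clip(base.astype(int) + noise, 0, 255).astype(np.uint8)
perturbed = Image.fromarray(perturbed_pixels)

h1, h2 = average_hash(original), average_hash(perturbed)
print(f"Hash bits differing: {hamming(h1, h2)} of {h1.size}")
# A detector flags a match when this distance falls below some threshold;
# an optimized filter pushes the distance above it while leaving the image
# visually almost unchanged.
```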

The write-up includes several examples of (innocuous) images before and after such cloaking filters were applied. They are less crisp, to be sure, but still perfectly legible to the human eye. The research team has wisely decided not to make their filtering technique public lest bad actors use it to fool PH-CSS algorithms. Their results do make one wonder whether the use of these detection tools is worth the privacy trade-off. Perhaps not, at least until the algorithms learn to see through filtered photos.

Cynthia Murrell, November 23, 2021
