Deepfakes and Other AI Threats

August 19, 2020

As AI technology matures, its potential to facilitate bad actors grows with it. Now, researchers at University College London have concluded that falsified audio and video content poses the greatest danger. The university announced its results on its news page in “‘Deepfakes’ Ranked as Most Serious AI Crime Threat.” The post relates:

“The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern – based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop. Authors said fake content would be difficult to detect and stop, and that it could have a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm.”

Is the public ready to take audio and video evidence with a grain of salt? And what happens when we do? It is not as though first-hand witnesses are more reliable. The rest of the list presents five more frightening possibilities: using driverless vehicles as weapons; crafting more specifically tailored phishing messages; disrupting AI-controlled systems (like power grids, we imagine); large-scale blackmail facilitated by harvesting data from the Web; and one of our favorites, realistic AI-generated fake news. The post also lists some crimes of medium and low concern. For example, small “burglar bots” could be thwarted by measures as simple as a letterbox cage. The write-up describes the study’s methodology:

“Researchers compiled the 20 AI-enabled crimes from academic papers, news and current affairs reports, and fiction and popular culture. They then gathered 31 people with an expertise in AI for two days of discussions to rank the severity of the potential crimes. The participants were drawn from academia, the private sector, the police, the government and state security agencies.”

Dawes Centre Director Shane Johnson notes that, as technology evolves, we must anticipate potential threats so policy makers and others can keep up. Yes, that would be nice. Johnson promises more reports are in the organization’s future. Stay tuned.

Cynthia Murrell, August 19, 2020

