Handwaving at Light Speed: Control Smart Software Now!
June 13, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Here is an easy one: Vox ponders, “What Will Stop AI from Flooding the Internet with Fake Images?” “Nothing” is the obvious answer. Nevertheless, tech companies are making a show of making an effort. Writer Shirin Ghaffary begins by recalling the recent kerfuffle caused by a realistic but fake photo of a Pentagon explosion. The spoof even briefly affected the stock market. We are poised to see many more AI-created images swamp the Internet, and they will not all be so easily fact-checked. The article explains:
“This isn’t an entirely new problem. Online misinformation has existed since the dawn of the internet, and crudely photoshopped images fooled people long before generative AI became mainstream. But recently, tools like ChatGPT, DALL-E, Midjourney, and even new AI feature updates to Photoshop have supercharged the issue by making it easier and cheaper to create hyper realistic fake images, video, and text, at scale. Experts say we can expect to see more fake images like the Pentagon one, especially when they can cause political disruption. One report by Europol, the European Union’s law enforcement agency, predicted that as much as 90 percent of content on the internet could be created or edited by AI by 2026. Already, spammy news sites seemingly generated entirely by AI are popping up. The anti-misinformation platform NewsGuard started tracking such sites and found nearly three times as many as they did a few weeks prior.”
Several ideas are being explored. One is to tag AI-generated images with watermarks, metadata, and disclosure labels, but of course those can be altered or removed. Then there is the tool from Adobe that tracks whether images are edited by AI, tagging each with “content credentials” that supposedly stick with a file forever. Another is to approach from the other direction and stamp content that has been verified as real. The Coalition for Content Provenance and Authenticity (C2PA) has created a specification for this purpose.
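The actual C2PA specification is considerably more elaborate (it uses cryptographically signed, tamper-evident manifests embedded in the file), but the core idea of binding a provenance claim to exact image bytes can be sketched in a few lines of Python. This is a simplified illustration only, not the real format; the key and claim strings below are invented for the example:

```python
import hashlib
import hmac

# Stand-in for an issuer's signing key; a real system would use
# public-key signatures, not a shared secret.
SECRET = b"issuer-signing-key"

def attach_credentials(image_bytes: bytes, claims: str) -> dict:
    """Bind a provenance claim to the exact image bytes with a keyed digest."""
    digest = hmac.new(SECRET, image_bytes + claims.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": digest}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the digest; any change to the pixels or claims breaks it."""
    expected = hmac.new(
        SECRET, image_bytes + manifest["claims"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"...raw image bytes..."
manifest = attach_credentials(photo, "captured by camera, unedited")
print(verify(photo, manifest))            # True: bytes match the credential
print(verify(photo + b"edit", manifest))  # False: any alteration is detected
```

Note the weakness the article alludes to: a scheme like this can prove an unaltered file is genuine, but nothing stops a bad actor from simply stripping the manifest and republishing the bare image, which then carries no provenance at all.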
But even if bad actors could not find ways around such measures (and they can), will audiences care? So far, the answer looks like a big no. We already knew confirmation bias trumps facts for many people. Watermarks and authenticity seals will hold little sway with those already inclined to take whatever their filter bubbles feed them at face value.
Cynthia Murrell, June 13, 2023