Bias and Deep Fake Concerns Limit AI Image Applications

July 14, 2022

This is why we can’t have nice things. AI-generated image technology has reached dramatic heights, but even its makers agree giving the public unfettered access is still a very bad idea. CNN Business explains, “AI Made These Stunning Images. Here’s Why Experts Are Worried.” In a nutshell: bias and deep fakes.

OpenAI’s DALL-E 2 and Google’s Imagen have been showing off their impressive abilities, even venturing into some real-world applications, but only within careful limits. Reporter Rachel Metz reveals:

“Neither DALL-E 2 nor Imagen is currently available to the public. Yet they share an issue with many others that already are: they can also produce disturbing results that reflect the gender and cultural biases of the data on which they were trained — data that includes millions of images pulled from the internet. The bias in these AI systems presents a serious issue, experts told CNN Business. The technology can perpetuate hurtful biases and stereotypes. They’re concerned that the open-ended nature of these systems — which makes them adept at generating all kinds of images from words — and their ability to automate image-making means they could automate bias on a massive scale. They also have the potential to be used for nefarious purposes, such as spreading disinformation. ‘Until those harms can be prevented, we’re not really talking about systems that can be used out in the open, in the real world,’ said Arthur Holland Michel, a senior fellow at Carnegie Council for Ethics in International Affairs who researches AI and surveillance technologies. … Holland Michel is also concerned that no amount of safeguards can prevent such systems from being used maliciously, noting that deepfakes — a cutting-edge application of AI to create videos that purport to show someone doing or saying something they didn’t actually do or say — were initially harnessed to create faux pornography.”

That is indeed perfectly awful. So, for now, both OpenAI and Google Research are mostly sticking to animals and other cute subjects while prohibiting the depiction of humans or anything that might be disturbing. Even so, bias can creep in. For example, Imagen was tasked with depicting oil paintings of a “royal raccoon” king and queen. What could go wrong? Alas, the AI demonstrated its Western bias by interpreting “royal” in a distinctly European style. Oh well. At least the regal raccoons are cuties, as long as they stay out of one’s attic.

Cynthia Murrell, July 14, 2022

