Flawed AI Makes Robots As Bad As Humans
July 21, 2022
Humans behave badly, especially on the Internet. Whenever a company releases a new chatbot, the Internet considers it a challenge to turn the chatbot racist, homophobic, and sexist. It usually takes less than twenty-four hours. While it is done for sh*ts and giggles, it also exposes a persistent problem with AI. The Eurasia Review explores why in “Robots Turn Racist And Sexist With Flawed AI.”
Researchers at Johns Hopkins University, Georgia Institute of Technology, and the University of Washington studied robots loaded with accepted, widely used datasets. They essentially reproduced the above scenario, except their work was not done to fill time. The research will be presented at the 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT).
The robots learned toxic behavior, and that could hinder technological advancement. Robots should not be sexist, racist, and so on, nor should humans come to believe it is okay for robots to “behave” in those ways.
“Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network that compares images to captions called CLIP.
Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance…”
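CLIP-style models pick a caption for an image by comparing embedding vectors and keeping the caption with the highest similarity score, which is exactly where skewed training data surfaces: biased associations produce biased top scores. A minimal toy sketch of that scoring step (the three-dimensional vectors and captions below are invented for illustration; real CLIP embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented toy embeddings, not real CLIP outputs.
image_embedding = [0.9, 0.1, 0.3]
caption_embeddings = {
    "a photo of a doctor": [0.8, 0.2, 0.4],
    "a photo of a homemaker": [0.1, 0.9, 0.2],
}

# The model "labels" the image with whichever caption scores highest;
# if the training data skews those scores, the bias shows up right here.
best_caption = max(
    caption_embeddings,
    key=lambda c: cosine_similarity(image_embedding, caption_embeddings[c]),
)
```

The point of the sketch is that nothing in the scoring math is biased on its own; the problem lives entirely in the learned embeddings the scores are computed from.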
The team conducted a study that confirmed what the Internet already knew: AI is biased and needs to be fixed.
These studies are needed to ensure better datasets are available to program socially acceptable and (dare we say) nice robots. But why did a research team need to investigate this? Many facial recognition companies and other AI startups have already discovered it, and it is in their favor to build better datasets for nicer robots. Maybe the point was to gather evidence from a non-commercial entity?
Smart AI is biased AI. Have I interpreted the write-up correctly?
Whitney Grace, July 21, 2022