AI Failures: Fast and Furious Arrivals
June 18, 2020
What can go wrong with AI? Quite a lot, actually, including errors and biases that can cause harm if left unchecked. ImmuniWeb’s Security Blog discusses what it considers the “Top 10 Failures of AI.” Entries range from “AI fails to do image recognition” to “AI that hated humans.” Examples of racial bias and misogyny are included, as well as dangerously flawed medical advice from IBM’s famous Watson. See the article for details on these cases and more.
The post goes on to discuss reasons AI fails: bad or insufficient data, bad engineering, or the wrong area of application. To avoid these perils, we’re advised:
“Never overestimate the capabilities of AI. It doesn’t make miracles and it is nowhere close to those ‘strong AI’ smarties from Hollywood blockbusters. You need a lot of relevant, cleaned and verified data to train an adequate model. The data is crucial for machine learning, but it is not all you need. Choosing a correct algorithm and tuning its parameters need a lot of tests and trials by a team of highly qualified experts. Most importantly, an AI system has a very limited capability of replacing humans. It can replace humans in simple, but tedious tasks, that consist of a lot of repeating routines. Any complex task that requires non-trivial approach to solution may lead to a high level of errors by AI. The best role an AI can play now is an assistant to humans who use AI as a tool to do a lot of routines and repeating operations.”
The article concludes, sensibly, by tooting ImmuniWeb’s own horn. It mentions a couple of awards and emphasizes that it views AI as a way to augment, not replace, human capabilities. We’re told it tests and updates its AI models “relentlessly.” Focused on AI for application security, the small company was founded just last year in Geneva, Switzerland.
Cynthia Murrell, June 18, 2020