More AI Foibles: Inheriting Biases
December 7, 2021
Artificial intelligence algorithms are already deployed in organizations, but the final decisions are still made by humans. It is a fact that these algorithms are, unfortunately, biased against minorities and marginalized communities. While it might appear that these biases are purposely built into the AI, they are not. The problem is that AI designers lack sufficiently diverse data to feed their algorithms. Biases are discussed in The Next Web’s article, “Worried About AI Ethics? Worry About Developers’ Ethics First.”
The article cites Asimov’s famous three laws of robotics and notes that ethics change depending on the situation and the individual. AI cannot distinguish these variables the way humans can, so it must be taught. The question is what ethics AI developers are “teaching” their creations.
Autonomous cars are a great example because they rely on both human and AI input to make decisions that avoid accidents. Is there a moral obligation to program autonomous cars to override a driver’s decision in order to prevent a collision? Medicine is another worrisome field. Doctors still make the critical choices, but will AI remove the human factor in the not-too-distant future? There are also weaponized drones and other military robots that could prolong warfare or be hacked.
The philosophical trolley problem is cited, followed by this:
People often struggle to make decisions that could have a life-changing outcome. When evaluating how we react to such situations, one study reported choices can vary depending on a range of factors including the respondent’s age, gender and culture.
When it comes to AI systems, the algorithms’ training processes are critical to how they will work in the real world. A system developed in one country can be influenced by the views, politics, ethics and morals of that country, making it unsuitable for use in another place and time.
If the system was controlling aircraft, or guiding a missile, you’d want a high level of confidence it was trained with data that’s representative of the environment it’s being used in.
The United Nations has called for “a comprehensive global standard-setting instrument” to underpin a global ethical AI network. It is a step in the right direction, especially when it comes to problems of ethnic diversity. An AI that fails to account for eye shape, skin color, or other biological features is an understandable oversight by developers who do not share those features themselves. Gaps like these can be fixed with broader data collection.
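How would a developer spot such a gap in the first place? One simple starting point is auditing the training data itself. The Python sketch below is illustrative only, with a hypothetical attribute name and an arbitrary threshold, of how a team might flag underrepresented groups before training begins.

```python
# A minimal sketch of a training-data representation audit. The
# "skin_tone" attribute and the 5% threshold are hypothetical,
# not drawn from any real system.
from collections import Counter

def audit_representation(records, attribute, min_share=0.05):
    """Return attribute values whose share of the data falls below min_share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        value: count / total
        for value, count in counts.items()
        if count / total < min_share
    }

# Toy example: a dataset heavily skewed toward one group.
records = (
    [{"skin_tone": "light"}] * 90
    + [{"skin_tone": "medium"}] * 8
    + [{"skin_tone": "dark"}] * 2
)
print(audit_representation(records, "skin_tone"))
# {'dark': 0.02} -- a signal to broaden collection before training
```

A check like this catches only the crudest imbalances, of course; it assumes the sensitive attribute is already labeled in the data, which is often the harder problem.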
A bigger problem is the differential between sexes and socioeconomic backgrounds. Women are treated as less than second-class citizens in many societies, and socioeconomic status determines nearly everything in every country. How are developers going to address these ethical issues? How about a deep dive with a snorkel to investigate?
Whitney Grace, December 7, 2021