Can You Leave Your AI Home Alone?
May 5, 2022
An article at ZDNet takes a brief but wide-ranging look at the current state of AI. The theme throughout the piece, titled “AI Can Be Creative, Ethical When Applied Humanly,” is that algorithms still cannot be left unsupervised. Writer Eileen Yu begins by exploring ways AI is being “creative,” long thought a talent limited to biological life forms. So far, the examples mostly involve marketing campaigns and, of course, must be checked by humans before being implemented. Then there is the metaverse, the virtual world(s) seemingly perfect for algorithmic stewardship. Even there, AI requires transparency and human guidance when rules are applied and enforced. Yu’s highest-stakes example, though, is the realm of law enforcement. She writes:
“Humans, too, cannot be removed from the equation where ethics are central to the AI discourse, such as in law enforcement. In making a decision, humans would consider the morals behind it, said David Hardoon, managing director at Aboitiz Data Innovation (ADI), the Singapore-based data science and AI arm of Philippine conglomerate, the Aboitiz Group. He also is chief data and AI officer for UnionBank Philippines. ‘Can AI help us make a decision? Yes. Can it decide the morality of a decision? Absolutely not. This distinction is important,’ said Hardoon, who was previously chief data officer and data analytics head of the Monetary Authority of Singapore. Commenting on why AI should be applied with care in certain areas such as law enforcement, he stressed the need to ensure the technology could be deployed in a robust manner. This currently was not the case, he said, pointing to the use of AI in facial recognition.”
Excellent example. Yu points to a 2017 MIT study that found darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males. She also notes some of the most prominent tech companies acknowledge the problem:
“Vendors such as IBM, Microsoft, and Amazon have banned the sale of facial recognition technology to police and law enforcement, citing human rights concerns and racial discrimination. Most have urged governments to establish stronger regulations to govern and ensure the ethical use of facial recognition tools.”
Unfortunately, large as they are, those three companies are but a drop in the facial recognition bucket. Between that technology and the other AI tools already in use by law enforcement, transparency has a lot of catching up to do. If the issues of bias could be resolved, and that is a big if, such tools could be a force for good with the right human oversight and accountability.
Cynthia Murrell, May 5, 2022