True or False: AI Algorithms Are Neutral Little Puppies
August 11, 2020
The answer, according to CanIndia News, is false. (I think some people believe the neutral-puppy notion.) “Google, IBM, Microsoft AI Models Fail to Curb Gender Bias” reports:
new research has claimed that Google AI datasets identified most women wearing masks as if their mouths were covered by duct tapes. Not just Google. When put to work, artificial intelligence-powered IBM Watson virtual assistant was not far behind on gender bias. In 23 per cent of cases, Watson saw a woman wearing a gag while in another 23 per cent, it was sure the woman was “wearing a restraint or chains”.
Before warming up the tar and chasing geese for feathers, you may want to note that the sample was 265 images of men and 265 of women, all wearing Covid masks or other personal protective equipment (PPE).
Out of the 265 images of men in masks, Google correctly identified 36 per cent as containing PPE. It also mistook 27 per cent of images as depicting facial hair.
The researchers learned that 15 per cent of these images were misclassified as showing duct tape.
The write-up highlights this finding:
Overall, for 40 per cent of images of women, Microsoft Azure Cognitive Services identified the mask as a fashion accessory compared to only 13 per cent of images of men.
Surprised? DarkCyber is curious about:
- Sample size. DarkCyber’s recollection is that a credible sample would be in the neighborhood of 2,000 images, with 1,000 of women and 1,000 of men (see the sketch after this list).
- Training. How were the models trained? Were “masks” represented in the training set? What percentage of training images included masks?
- Image quality. What steps were taken to ensure that the “images” were consistent in quality; that is, in focus, resolution, color, etc.?
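To make the sample-size point concrete: with 265 images per group, a reported rate like Watson’s 23 per cent carries a noticeable margin of error. The sketch below is DarkCyber’s own back-of-the-envelope arithmetic, not anything from the study; it assumes simple random sampling and the usual normal approximation for a proportion, and compares n = 265 against the roughly 1,000-per-group sample mentioned above.

```python
import math

def margin_of_error_95(p: float, n: int) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# The 23 per cent "gag" rate reported for Watson on 265 images of women,
# versus the same rate measured on a hypothetical 1,000-image sample.
p = 0.23
for n in (265, 1000):
    moe = margin_of_error_95(p, n) * 100
    print(f"n = {n:4d}: 23% plus or minus {moe:.1f} percentage points")
```

At n = 265 the margin works out to about plus or minus 5 points, so the “23 per cent” could plausibly be anywhere from roughly 18 to 28 per cent; at n = 1,000 it tightens to about plus or minus 2.6 points. Which is to say, the sample-size quibble is not merely pedantic.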
DarkCyber is interested in the “bias” allegation. But DarkCyber may be biased against studies that invite questions about sample size, training, and data quality/consistency. The models may have flaws, but the bias thing? Maybe, maybe not.
Stephen E Arnold, August 11, 2020