Forget AI Bias: Outputs May Just Be Wrong
March 29, 2022
I read a “real” news story which caused me to whip out my stack of 4×6 note cards and jot down some statements and phrases. May I share six examples? (Permission denied? Well, too bad, gentle reader, too too bad.)
- “Too good to be true”
- “Overly optimistic results”
- “The images may look excellent, but they are inaccurate.”
- “Important details… could be completely missing”
- “Results cannot be reproduced”
- “Data crimes”
If you want to see these statements in allegedly objective context, navigate to “‘Off Label’ Use of Imaging Databases Could Lead to Bias in AI Algorithms, Study Finds.” For the intellectually hardy, the original “research” paper is at this link, at least as of March 25, 2022, at 5:00 am US Eastern.
The main idea is that shortcuts, reliance on widely used public data, and eagerness to be a winner appear to be characteristics of some Fancy Dan machine learning methods. (Hello, Stanford AI Laboratory, wearing a Snorkel today?)
Implications? One’s point of view colors the information in the article. Is the article accurate, chock-full of reproducible results?
There’s something called a viewshed or its equivalent. Depending on one’s location, certain important objects or concepts may not be in the picture. Does it matter? Only if the autonomous agent sends a smart system through one’s garden party. No manners, no apologies, and no consequences, or at least no significant consequences, right? Sail on, tally ho, and so forth.
Stephen E Arnold, March 29, 2022