AI That Judges Us on Our Appearance

January 29, 2021

An echo of the long-debunked field of phrenology has emerged in today’s AI. American Scientist delves into “The Dark Past of Algorithms that Associate Appearance and Criminality.” It seems makers of these new technologies missed an old lesson: do not judge a book by its cover. Despite these long-accepted words of wisdom, recent algorithms have popped up that purport to identify criminals, or potential criminals, by appearance alone. We’re told several schools have installed cameras that supposedly identify cheaters and inattentive students. There is even an AI out of Stanford University that claims to be an accurate gaydar. Writer Catherine Stinson outlines not only why such algorithms are prone to error, but also the dangers they pose to certain segments of society. I recommend interested readers check out the article. Here is one excerpt that summarizes the basics:

“Complex personal traits such as a tendency to commit crimes are exceedingly unlikely to be genetically linked to appearance in such a way as to be readable from photographs. First, criminality would have to be determined to a significant extent by genes rather than environment. There may be some very weak genetic influences, but any that exist would be washed out by the much larger influence of environment. Second, the genetic markers relevant to criminality would need to be linked in a regular way to genes that determine appearance. This link could happen if genes relevant to criminality were clustered in one section of the genome that happens to be near genes relevant to face shape. For a complex social trait such as criminality, this clustering is extremely unlikely. A much more likely hypothesis is that any association that exists between appearance and criminality works in the opposite direction: A person’s appearance influences how other people treat them, and these social influences are what drives some people to commit crimes (or to be found guilty of them).”

The points about flawed data sets are also very important—consider the differences between a mug shot and selfies posted to social media. Between the historic ridicule of phrenology and more recent discussions around AI bias, it is surprising developers would even consider teaching their algorithms to make assumptions based on appearance.

Cynthia Murrell, January 29, 2021
