Can Bias Be Eliminated from Medical AI?

May 25, 2021

It seems like a nearly insurmountable problem. Science Magazine reports, “Researchers Call for Bias-Free Artificial Intelligence.” Humans are biased. Humans build AI. The two appear inextricably joined. Nevertheless, the stakes in healthcare are high enough that we must try, insist two Stanford University faculty members in a paper recently published in the journal EBioMedicine. We learn:

“Clinicians and surgeons are increasingly using medical devices based on artificial intelligence. These AI devices, which rely on data-driven algorithms to inform health care decisions, presently aid in diagnosing cancers, heart conditions and diseases of the eye, with many more applications on the way. Given this surge in AI, two Stanford University faculty members are calling for efforts to ensure that this technology does not exacerbate existing health care disparities. In a new perspective paper, Stanford faculty discuss sex, gender and race bias in medicine and how these biases could be perpetuated by AI devices. The authors suggest several short- and long-term approaches to prevent AI-related bias, such as changing policies at medical funding agencies and scientific publications to ensure the data collected for studies are diverse, and incorporating more social, cultural and ethical awareness into university curricula. ‘The white body and the male body have long been the norm in medicine guiding drug discovery, treatment and standards of care, so it’s important that we do not let AI devices fall into that historical pattern.’”

The Science Magazine write-up discusses ways AI is being used in medicine today and how failure to account for race, sex, and socioeconomic status can have disastrous results. Its example is pulse oximeters. Melanin can interfere with the devices’ ability to read light passing through skin, and they also misstate women’s oxygen levels more often than men’s. As a result, Black patients and women, and especially Black women, often do not get oxygen when they need it in the hospital.

The article summarizes the paper’s recommendations. One example is for agencies like the National Institutes of Health to require funding recipients to include sex and race as biological variables in their research. Another is for biomedical journals to set policies requiring sex and gender analyses where appropriate. A third would reach medical professionals before they even enter the field: having medical schools cover in their curricula the ways AI can reinforce social inequities. These are all viable options, but will they be enough?

Cynthia Murrell, May 25, 2021
