Research Alert

Newswise — In a new review, Yale researchers provide an in-depth analysis of how biases at different stages of AI development can lead to poor clinical outcomes and exacerbate health disparities. The authors say their findings echo an old adage in the computing world: “Garbage in, garbage out.”

“Bias in; bias out,” said John Onofrey, PhD, assistant professor of radiology & biomedical imaging and of urology at Yale School of Medicine (YSM) and senior author of the study. “The same idea absolutely applies.”

Published November 7 in PLOS Digital Health, the article provides examples, both hypothetical and real, to illustrate how bias affects health care outcomes, and offers mitigation strategies. “Having worked in the machine learning/AI field for many years now, the idea that bias exists in algorithms is not surprising,” Onofrey said. “However, listing all the potential ways bias can enter the AI learning process is incredible. This makes bias mitigation seem like a daunting task.”

Study authors identified sources of bias at each stage of medical AI development (training data, model development, publication, and implementation) and provided illustrative examples and bias mitigation strategies for each.

In one example, prior research has found that using race as a factor in estimating kidney function can lead to longer wait times for Black patients to be placed on transplant lists. The Yale team noted numerous recommendations that future algorithms use more precise measures, such as socioeconomic factors and zip code. “Greater capture and use of social determinants of health in medical AI models for clinical risk prediction will be paramount,” said James L. Cross, a first-year medical student at YSM and the study's first author.
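To make the kidney example concrete, here is a minimal sketch (ours, not drawn from the paper) of the 2009 CKD-EPI creatinine equation, whose race coefficient was removed in a 2021 refit. The 1.159 multiplier for Black patients inflates estimated kidney function (eGFR), which can keep a patient above the eGFR ≤ 20 mL/min/1.73 m² threshold at which kidney transplant waitlist time typically begins to accrue. The lab values below are hypothetical.

    # Illustrative sketch (not from the study): the 2009 CKD-EPI creatinine
    # equation. Its race coefficient was removed in the 2021 refit.

    def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
        """Estimated GFR (mL/min/1.73 m^2), 2009 CKD-EPI creatinine equation."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        egfr = (141
                * min(scr_mg_dl / kappa, 1.0) ** alpha
                * max(scr_mg_dl / kappa, 1.0) ** -1.209
                * 0.993 ** age)
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159  # race coefficient: inflates eGFR for Black patients
        return egfr

    # Identical labs, different race input (hypothetical patient):
    labs = dict(scr_mg_dl=3.5, age=50, female=False)
    print(round(egfr_ckd_epi_2009(**labs, black=False), 1))  # ~19.2 -> below 20, listable
    print(round(egfr_ckd_epi_2009(**labs, black=True), 1))   # ~22.3 -> above 20, not yet

With identical lab results, the race multiplier alone moves this hypothetical patient from one side of the listing threshold to the other, illustrating how a single modeling choice in the training-data and model-development stages can propagate into measurable disparities in care.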

“Bias is a human problem,” added Michael Choma, MD, PhD, associate professor adjunct of radiology & biomedical imaging and study co-author. “When we talk about ‘bias in AI,’ we must remember that computers learn from us.”

Journal Link: PLOS Digital Health, Nov-2024