Newswise — Scientists are issuing a cautionary message about the use of artificial intelligence (AI) models like ChatGPT in healthcare applications for ethnic minority populations. In an article published in the Journal of the Royal Society of Medicine, epidemiologists from the University of Leicester and the University of Cambridge warn that systemic biases in the data used by healthcare AI tools risk perpetuating existing inequalities for ethnic minorities.

Training AI models requires data gathered from a range of sources, such as healthcare websites and scientific research. However, the researchers have found that ethnicity data is frequently missing from healthcare research, and that ethnic minorities are underrepresented in research trials.

Mohammad Ali, a PhD Fellow in Epidemiology at the College of Life Sciences, University of Leicester, points out that this significant underrepresentation of ethnic minorities in research has had demonstrably harmful effects. For instance, it has led to ineffective drug treatments and to treatment guidelines that could be interpreted as discriminatory.

The concern is that if the published literature already contains biases and lacks precision, future AI models are highly likely to perpetuate, and potentially amplify, those biases. It is therefore crucial to address these issues before AI is used extensively in healthcare for ethnic minority populations, to avoid exacerbating existing disparities.

The researchers also express concern that AI models could worsen health inequalities in low- and middle-income countries (LMICs). They note that these models are primarily developed in wealthier regions such as the USA and Europe, creating a significant research and development gap between high- and low-income countries.

According to the researchers, most published research tends to neglect the specific health challenges faced by LMICs, particularly regarding healthcare provision. As a result, AI models may offer recommendations based on data from populations vastly different from those in LMICs.

While acknowledging these potential challenges, the researchers emphasize the importance of seeking solutions. Mr. Ali remarks that we must exercise caution but not halt the progress AI brings.

To reduce the risk of exacerbating health inequalities, the researchers propose several measures. First, developers should transparently describe the data used to build AI models. Further efforts are also needed to address ethnic health disparities in research, including better recruitment of ethnic minority participants and better recording of ethnicity information. Data used to train AI models must be representative, accounting for key factors such as ethnicity, age, sex, and socioeconomic status. Finally, more research is required to understand how AI models perform in ethnically diverse populations.

By addressing these considerations, the researchers believe that AI models can be leveraged to drive positive change in healthcare while promoting fairness and inclusivity. The focus should be on harnessing the power of AI to benefit all, especially those in LMICs with unique healthcare needs.

Journal Link: Journal of the Royal Society of Medicine