How would you feel if you learned that decisions about your clinical care were supported by artificial intelligence? In high-stakes situations, introducing AI into the process may sound like a gamble. But these new tools may outperform human doctors in predicting some medical outcomes, accounting for the complexity of each patient’s individual circumstances while reducing costs for those who don’t need specialized care.
So what are the real risks of using artificial intelligence to help doctors and patients make medical decisions—and are they any worse than the risks we already face? Take the threat of stillbirth, for example—a shockingly common outcome that affects about one in 160 pregnancies nationwide.
For patients who are most at risk, intensive monitoring and medical testing can offer the best odds of delivering a healthy baby. But much remains unknown about the causes of stillbirth, and figuring out what makes a pregnancy risky is challenging.
Predicting the unpredictable
One of the biggest warning signs for stillbirth is a fetus that is smaller than expected. But even among people carrying a smaller fetus, most pregnancies go smoothly, and the baby is born healthy. Does a patient with a lower-weight fetus need intensive and stressful medical surveillance, or can they go about their pregnancy largely as normal?
“It’s an area of tremendous clinical uncertainty,” says Nathan Blue, MD. As an assistant professor in obstetrics and gynecology in the Spencer Fox Eccles School of Medicine at the University of Utah, Blue specializes in caring for people with high-risk pregnancies, including many patients with babies that are smaller than expected. Even for specialists like Blue, the complexity of risk factors for stillbirth makes it hard to know which patients would benefit from higher levels of medical surveillance.
But a new strategy is emerging that seems ideally suited to this kind of complex problem: artificial intelligence. At its best, AI can learn from massive databases of previous patients to predict medical outcomes faster and more accurately than even the most experienced doctors can alone. In recent years, a host of AI-based tools have demonstrated their power in the clinic, from programs that rapidly sift through NICU patients’ genetic information to reach a diagnosis, to tools that spot subtle warning signs in MRI scans.
Blue thinks that artificial intelligence could make a crucial difference in delivering better answers—and better care—to people facing an increased risk of stillbirth. He’s designing an AI-based tool that will scour massive databases of past pregnancy outcomes to find the hidden patterns of warning signs—from underlying genetic risk factors to environmental factors and clinical measurements—that mark the difference between a risky pregnancy and a relatively safe one.
The tool could then use those patterns to estimate the risk of stillbirth in future pregnancies. When a new patient arrives in the clinic with a fetus that is smaller than expected, their doctor could enter the patient’s unique risk factors into the tool, and it would calculate a personalized estimate of their risk of stillbirth.
Armed with that knowledge, a pregnant person and their doctor could make an informed decision about next steps: people with high-risk pregnancies could know to keep an extra close eye on their symptoms, and people at lower risk could have their worries relieved and not undergo unnecessary medical procedures.
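To make the idea concrete, here is a minimal, purely illustrative sketch of how a “risk factors in, personalized estimate out” model can work. Everything in it is hypothetical: the feature list, the synthetic training data, and the simple logistic-regression model are stand-ins for illustration, not a description of Blue’s actual tool.

```python
# A toy version of the workflow described above: learn patterns from past
# records, then turn a new patient's risk factors into a probability.
# All features, data, and coefficients here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "database" of past pregnancies. Hypothetical columns:
# fetal weight percentile, maternal age, prior stillbirth, hypertension.
n = 5000
X = np.column_stack([
    rng.uniform(1, 99, n),   # fetal weight percentile
    rng.uniform(18, 45, n),  # maternal age
    rng.integers(0, 2, n),   # prior stillbirth (0/1)
    rng.integers(0, 2, n),   # chronic hypertension (0/1)
])

# Synthetic outcomes: lower fetal weight and other factors raise the risk.
logits = -5.0 - 0.04 * X[:, 0] + 0.05 * X[:, 1] + 1.2 * X[:, 2] + 0.8 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

# "Scour the database for patterns" -- here, fit a simple model.
model = LogisticRegression().fit(X, y)

# A new patient's risk factors in, a personalized estimate out.
new_patient = [[8, 34, 0, 1]]  # 8th-percentile fetal weight, age 34, hypertension
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated risk: {risk:.1%}")
```

A real clinical model would be trained on real outcome databases, draw on far richer inputs, and be validated extensively before informing care; the point here is only the shape of the workflow, in which historical records go in and a per-patient probability comes out.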
“It could help people manage the stress and the burden of this experience, help reduce costs for people who don’t need tons of extra care, and prioritize who needs genetic testing,” Blue says. He is cautiously optimistic about the increasing use of AI in medicine, saying that it has the potential to personalize care while avoiding human assumptions and biases that may color doctors’ judgments.
What are the risks?
Of course, AI tools can be biased as well. The real-world data they learn from reflects past patterns of care, which have sometimes been shaped by gender- and race-based biases, and those biases can skew an AI’s predictions in turn. But shifting toward tools that show which factors influenced their predictions can help shed light on these biases, Blue believes, ultimately promoting more equitable care.
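For simple models, that kind of transparency can be as direct as breaking a prediction down into per-factor contributions. In a logistic-regression-style model like the earlier sketch, each risk factor contributes its coefficient times its value to the log-odds, so a clinician could see exactly which inputs pushed an estimate up or down. A minimal sketch, again with made-up numbers:

```python
# Decomposing a single prediction into per-factor contributions.
# All names and numbers are hypothetical, for illustration only.
import numpy as np

features = ["fetal weight percentile", "maternal age",
            "prior stillbirth", "chronic hypertension"]
coefficients = np.array([-0.04, 0.05, 1.2, 0.8])  # made-up fitted weights
intercept = -5.0
patient = np.array([8, 34, 0, 1])  # hypothetical patient from the earlier sketch

# Each factor's contribution to the log-odds is coefficient * value.
contributions = coefficients * patient
for name, value in zip(features, contributions):
    print(f"{name:>24}: {value:+.2f} to the log-odds")

# Summing the contributions (plus the intercept) recovers the overall risk.
log_odds = intercept + contributions.sum()
print(f"Overall estimated risk: {1 / (1 + np.exp(-log_odds)):.1%}")
```

An audit like this makes it visible when, say, a demographic variable is doing more work than it should, which is one way such tools can surface biases baked into historical data.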
As for other concerns about AI, Blue says they’re much the same as with any other tool: good clinical care has to go beyond a single number. “The role of a clinician is not just to give information but to try to help the person in front of them,” he says. Whether risks are calculated by AI, by simpler equations, or by a doctor’s individual judgment, each patient’s unique priorities and values must be taken into account.
A rapidly growing suite of AI tools, some newly invented and many more in rapid development, aims to work alongside doctors to improve diagnoses and support patients’ decisions with comprehensive, personalized information. In situations like high-risk pregnancies, where the complexity of risk factors makes outcomes uncertain even for the most experienced human doctors, artificial intelligence could offer a path to an answer.