Newswise — DURHAM, N.H.—It can be challenging to gauge the quality of online news and to tell whether a story is real or fake. When it comes to health news and press releases about medical treatments and procedures, the question is even more complex: a story can be incomplete or misleading without ever falling into the category of fake news. To help flag stories with inflated claims, inaccuracies and possible associated risks, researchers at the University of New Hampshire have developed a new machine learning model, an application of artificial intelligence, that news services, including social media outlets, could easily use to better screen medical news stories for accuracy.
“The way most people think about fake news is something that's completely fabricated, but, especially in healthcare, it doesn't need to be fake. It could be that maybe they're not mentioning something,” said Ermira Zifla, assistant professor of decision sciences at UNH’s Peter T. Paul College of Business and Economics. “In the study, we’re not making claims about the intent of the news organizations that put these out. But if things are left out, there should be a way to look at that.”
Zifla and study co-author Burcu Eke Rubini, also an assistant professor of decision sciences, report in research published in Decision Support Systems that, because most people lack the medical expertise to judge the complexities of such coverage, the machine learning models they developed outperformed laypeople in assessing the quality of health news stories. They used data from Health News Review that included news stories and press releases on new healthcare treatments published in various outlets from 2013 to 2018. The articles had already been evaluated by a panel of healthcare experts—medical doctors, healthcare journalists and clinical professors—against ten evaluation criteria the experts had developed. The criteria included the costs and benefits of the treatment or test, any possible harms, the quality of the arguments, the novelty and availability of the procedure and the independence of the sources. The researchers then built an algorithm around the same expert criteria and trained the machine learning models to classify each aspect of a news story, on each criterion, as "satisfactory" or "not satisfactory".
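The release does not describe the exact features or learning algorithm the researchers used. As a rough illustration only, a per-criterion classifier along these lines could be assembled with a standard text-classification pipeline; the function name, features and parameters below are hypothetical and are not the authors' implementation.

```python
# Illustrative sketch: one binary classifier per expert criterion,
# assuming a TF-IDF + logistic regression pipeline in scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_criterion_classifier(article_texts, expert_labels):
    """Train a classifier for a single criterion.

    article_texts: list of article/press-release texts
    expert_labels: 1 = "satisfactory", 0 = "not satisfactory", per expert review
    """
    X_train, X_test, y_train, y_test = train_test_split(
        article_texts, expert_labels, test_size=0.2, random_state=42)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), max_features=20000),
        LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model

# A full system would repeat this for each of the ten expert criteria,
# yielding one "satisfactory"/"not satisfactory" classifier per criterion.
```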
The model's performance was then compared with layperson evaluations gathered in a separate survey, in which participants rated the same articles as "satisfactory" or "not satisfactory" on the same criteria. The survey revealed an "optimism bias": most of the 254 participants rated articles as satisfactory, in marked contrast to the model's more critical assessments.
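Conceptually, this comparison amounts to scoring both the model's predictions and the laypeople's ratings against the expert panel's labels for each criterion. A minimal sketch, with hypothetical variable names and scikit-learn's accuracy metric, might look like this:

```python
# Illustrative comparison of model and layperson ratings against expert labels
# for a single criterion; 1 = "satisfactory", 0 = "not satisfactory".
from sklearn.metrics import accuracy_score

def compare_to_experts(expert_labels, model_predictions, layperson_ratings):
    """Return each group's agreement with the expert panel on one criterion."""
    return {
        "model_vs_experts": accuracy_score(expert_labels, model_predictions),
        "laypeople_vs_experts": accuracy_score(expert_labels, layperson_ratings),
    }
```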
The researchers stress that they are by no means looking to replace expert opinion. Rather, they hope to start a conversation about evaluating health news on multiple criteria and to offer an easily accessible, low-cost alternative through open-source software.
The University of New Hampshire inspires innovation and transforms lives in our state, nation and world. More than 16,000 students from 49 states and 82 countries engage with an award-winning faculty in top-ranked programs in business, engineering, law, health and human services, liberal arts and the sciences across more than 200 programs of study. A Carnegie Classification R1 institution, UNH partners with NASA, NOAA, NSF, and NIH, and received over $210 million in competitive external funding in FY23 to further explore and define the frontiers of land, sea and space.
###