Newswise — ANN ARBOR, Mich. – When a child browses YouTube, the content recommended to them is not always age appropriate, a new study suggests.

Researchers mimicked search behaviors of children using popular search terms, such as memes, Minecraft and Fortnite, and captured video thumbnails recommended at the end of each video.

Among the 2,880 thumbnails analyzed, many contained problematic clickbait, such as violence or frightening images, according to the Michigan Medicine-led research in JAMA Network Open.

“Children spend a significant amount of time on free video sharing platforms that include user-generated content,” said lead author Jenny Radesky, M.D., developmental behavioral pediatrician at University of Michigan Health C.S. Mott Children’s Hospital.

“It’s important to understand that platforms with billions of hours of content can’t perform human review of everything suggested to children and use algorithms that are imperfect. Parents and children need to be aware of the risks of exposure to inappropriate content and develop strategies to avoid it.”

Some research suggests children eight years and younger spend about 65% of their online time on video-sharing sites, many averaging an hour a day, Radesky says.

With hundreds of videos uploaded to such platforms every minute, most sites rely on automated content moderation systems to flag videos that violate policies or depict violent or dangerous content. In response, some platforms like YouTube have created made-for-kids labels to identify content appropriate for younger viewers.

But recent research suggests that many young children seek out videos that don’t fall in the “child-friendly” categories, searching for influencers, video games or funny videos.

Among thumbnails yielded in searches, more than half were identified as including “shocking, dramatic or outrageous” messaging, the study suggests. A little less than a third included violence, peril and pranks, while 29% included “creepy, bizarre and disturbing” imagery.

Researchers flagged other content suggestions for “visual loudness,” or the use of attention-capturing designs, as well as manufactured drama and intrigue and depictions of far-fetched luxury items, such as cars, jewelry and houses. A smaller percentage of automated suggestions included gender stereotypes.

“These findings contribute to growing research on how digital designs aim to capture and keep users’ attention,” Radesky said. “We need more research on children’s interactions with these platforms to guide better policies that protect them from negative media experiences.”

Study cited: “Algorithmic Content Recommendations on a Video-Sharing Platform Used by Children,” JAMA Network Open, doi:10.1001/jamanetworkopen.2024.13855