Tim Weninger, associate professor in the Department of Computer Science and Engineering at the University of Notre Dame, says Facebook’s newly announced ban on deepfakes is good news for democracy but presents a number of challenges in the fight against the spread of misinformation.
Weninger is an expert in disinformation and fake news, web and social media, data mining and machine learning.
“This is good news for democracy and a good business policy for Facebook, whose users don’t want to be lied to by the content they see,” Weninger said. “If Facebook becomes flooded by fake or misleading content, then users will abandon the site.”
But, Weninger adds, the policy presents a host of problems and challenges.
“Most obvious is the technological question of how Facebook will determine which content is AI-faked and which is not. It’s clear that deepfake technology will soon be usable by the masses. And when that happens, Facebook won’t have the capacity to filter fake videos manually. Notre Dame and others are working on deepfake detectors, but these automatic detectors won’t catch everything.
“Second is the effect this deepfake ban will have on the actual problem. It’s often said that ‘a lie can travel around the world before the truth can get its pants on.’ So, if a deepfake is created, shared and quickly taken down, the damage is already done; it will live forever. And there is little that a maligned political candidate or brand can do to fix it.
“In my opinion, deepfakes are some mix of identity theft and slander. And there ought to be a legal remedy or judicial recourse available to the victims of deepfakes.”