WASHINGTON — Facebook said that it would ban videos that were heavily manipulated by artificial intelligence, the latest in a string of changes by the company to combat the flow of false information on its site.
A company executive said in a blog post published late Monday that the social network would remove videos altered by artificial intelligence, often called deepfakes, in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” The videos will also be banned in ads.
The policy will have a limited effect on slowing the spread of false videos, since the vast majority are edited in more traditional ways: cutting out context or changing the order of words. The policy will not extend to those videos, or to parody or satire, said the executive, Monika Bickert.
Bickert said all videos posted would still be subject to Facebook’s system for fact-checking potentially deceptive content. Content that is found to be factually incorrect appears less prominently on the site’s news feed and is labeled false.
But the announcement by Facebook underscores how the social network, by far the world’s largest, is trying to thwart one of the latest tricks used by purveyors of disinformation before this year’s presidential election. False information spread furiously on the platform during the 2016 campaign, leading to widespread criticism of the company.
By banning deepfakes before the technology becomes widespread, Facebook is attempting to calm lawmakers, academics and political campaigns who remain frustrated by how the company handles posts and videos about politics and politicians.
But some Democratic politicians said the new policy does not go nearly far enough. Last year, Facebook refused to take down a video of Speaker Nancy Pelosi that was edited to make her appear to be slurring her words. At the time, the company defended its decision despite furious criticism, saying that it had subjected the video to its fact-checking process and had reduced its reach on the social network.
The new policy, though, does not apply to the video of Pelosi. Disinformation researchers have referred to similar videos as “cheapfakes” or “shallowfakes”: deceptive content edited with simple video-editing software, in contrast to the more sophisticated deepfake videos generated by artificial intelligence.
Pelosi’s deputy chief of staff, Drew Hammill, said in a statement that Facebook “wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation.”
Facebook would also keep up a video that circulated widely last week, in which a long response that former Vice President Joe Biden gave to a voter in New Hampshire was heavily edited to wrongly suggest that he made racist remarks.
Bill Russo, deputy communications director of Biden’s presidential campaign, said that Facebook’s new policy was not meant “to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress.”
“Banning deepfakes should be an incredibly low floor in combating disinformation,” Russo said.
The company’s new policy was first reported by The Washington Post.
Computer scientists have long warned that new techniques used by machines to generate images and sounds that are indistinguishable from the real thing can vastly increase the volume of false and misleading information online.
Deepfakes — a term that generally describes videos doctored with cutting-edge artificial intelligence — have become much more prevalent in recent months, especially on social media. And they have already begun challenging the public’s assumptions about what is real and what is not.
Last year, for instance, a Facebook video released by the government of Gabon, a country in Central Africa, was meant to show proof of life for its president, who was out of the country for medical care. But the president’s critics claimed it was fake.
In December 2017, the technology site Motherboard reported that people were using AI technologies to graft the heads of celebrities onto nude bodies in pornographic videos. Websites like Pornhub, Twitter and Reddit suppressed the videos, but according to the research firm Deeptrace Labs, these videos still made up 96% of deepfakes found in the last year.
Tech companies are researching new techniques to detect deepfake videos and stop their spread on social media, even as the technology to create them quickly evolves. Last year, Facebook participated in a “Deepfake Detection Challenge” and, along with other tech firms like Google and Microsoft, offered a bounty to outside researchers who develop the best tools and techniques to identify AI-generated deepfake videos.
Because Facebook is the No. 1 platform for sharing false political stories, according to disinformation researchers, it has an added urgency to spot and halt novel forms of digital manipulation. Renee DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation, pointed out that a challenge of the policy is that the deepfake content “is likely to have already gone viral prior to any takedown or fact check.”
On Wednesday, Bickert, Facebook’s vice president of global policy management, is expected to join other experts to testify on “manipulation and deception in the digital age” before the House Energy and Commerce Committee.
DiResta urged lawmakers to “delve into the specifics around how quickly the company envisions it could detect or respond to a viral deepfake, or to the ‘shallowfakes’ material which it won’t take down but has committed to fact-checking.”
Subbarao Kambhampati, a professor of computer science at Arizona State University, described Facebook’s effort to detect deepfakes as “a moving target.” He said Facebook’s automated systems for detecting such videos would have limited reach, and there would be “significant incentive” for people to develop fakes that would fool Facebook’s systems.
There are many ways to manipulate videos with the help of artificial intelligence, added Matthias Niessner, a professor of computer science at the Technical University of Munich, who works with Google on its deepfake research. There are deepfake videos in which faces are swapped, for instance, or in which a person’s expression and lip movement are altered, he said.
“The question is where you draw the line,” Niessner said. “Eventually, it raises the question of intent and semantics.”