A couple short video clips, a voice actor and an artificial intelligence algorithm.
That’s all two artists and a technology start-up needed to produce a video of Facebook Chief Executive Mark Zuckerberg bragging about abusing “stolen data” from users – and now the clip is testing the social media platform’s policies on managing the spread of fake content and misinformation online.
The “deepfake,” a term used to describe the sophisticated computer-altered videos, was uploaded to Instagram and appears to show Zuckerberg delivering an ominous message in a segment from CBS News’ streaming channel about the power he and Facebook wield. The doctored clip, which was posted last week, also features news graphics bearing the CBS logo.
“Imagine this for a second: one man with total control of billions of people’s stolen data,” the image of Zuckerberg says. “All their secrets, their lives, their futures.”
He later adds, “Whoever controls the data, controls the future.”
The post, shared by Bill Posters, one of the artists, comes as Facebook is still recovering from the widespread backlash over its decision last month to not delete a viral video of House Speaker Nancy Pelosi, D-Calif., that was edited to make her sound as if she were drunkenly slurring her words.
At the time, Facebook acknowledged that third-party fact-checkers had found the Pelosi video to be “false,” but said, “We don’t have a policy that stipulates that the information you post on Facebook must be true,” The Washington Post’s Drew Harwell reported. Instead of taking the video down, Facebook said it would “heavily reduce” its appearances in news feeds, add an informational box linking to two fact-check sites and open a pop-up box with links to “additional reporting” whenever the content is shared.
In the aftermath of the Pelosi video, Neil Potts, Facebook’s director of public policy, was asked if an altered video of Zuckerberg would receive the same treatment and he said yes, the Toronto Star’s Alex Boutilier reported. On Tuesday, it appeared that the company was staying true to its word.
“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson told The Post in a statement late Tuesday. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”
The Zuckerberg video, which as of early Wednesday had been viewed more than 23,000 times, will not be removed from Posters’ Instagram account. In the video’s caption, Posters notes that it was made using video dialogue replacement technology created by Canny AI, a Tel Aviv-based company.
However, a CBS spokesperson told The Post that the network has asked Facebook to take the video down, citing a “fake, unauthorized use of the CBSN trademark.”
The short video, which begins with Zuckerberg staring intently into the camera, is, in reality, just 21 seconds taken from a much longer 2017 video of the Facebook chief addressing Russian interference in the 2016 election.
The words that appeared to be coming out of his mouth were actually spoken by a voice actor reading from a script provided by Posters and another artist, Daniel Howe. The Zuckerberg “deepfake” is part of an exhibition called “Spectre,” which uses the edited videos of well-known figures such as President Trump, Kim Kardashian and Morgan Freeman to “demonstrate the power of computational propaganda.” The installation was featured this month at the Sheffield Doc/Fest, a documentary film festival in the U.K.
Posters told The Post that he and Howe started working on “Spectre” after the Cambridge Analytica scandal, which revealed that the British data firm had improperly collected and used Facebook users’ information.
“We’ve used deepfake technology as a way of narrative storytelling to engage audiences in some of the tensions that exist, like a cautionary tale of technology and democracy,” Posters said. He described the people featured in the installation as “deepfake avatars,” who “touch on alternative truths.”
He added that the Zuckerberg clip is intended for viewers to “engage in the key issues around data, privacy, technology and democracy.”
Posters said the CBSN chyrons were part of the original source video of Zuckerberg and not added to the edited video.
“This is a digital recreation and we consider these videos pieces of art that exist online now,” he said. “We hope that CBS can see that and grant the usage of some of their graphics within this art piece.”
Omer Ben-Ami, co-founder of Canny AI, told The Post his company chose to partner with the artists because “it’s our responsibility to expose people to what they can achieve with the tech.” Canny AI recently released another “deepfake” video – a roughly two-minute clip of world leaders singing John Lennon’s “Imagine.”
“We think people should know when a video was manipulated,” Ben-Ami said, adding that the public needs to be aware this type of AI technology has the potential to be used maliciously.
To create the Zuckerberg “deepfake,” Ben-Ami said his company trained its AI algorithm on the original video of the Facebook chief and a video of the voice actor for 12 to 24 hours.
“It just recreates the facial movements of Zuckerberg from the facial movements of the voice actor,” Ben-Ami said. “We don’t touch the audio itself, we just change the facial movements to match the new voice.”
The final product is visually realistic, but as many, including Ben-Ami, have pointed out, the voice speaking is clearly not Zuckerberg’s.
Still, even though the video isn’t a perfect fake, Ben-Ami said AI and even simpler forms of editing, such as speeding up or slowing down footage, have already made it “quite difficult” for people to identify manipulated clips. Last year, White House press secretary Sarah Sanders shared a video of CNN reporter Jim Acosta appearing to make an aggressive movement toward a White House intern. Upon closer examination, it was revealed that the video had been subtly altered to make Acosta’s actions seem more exaggerated.
“In the near future, it is going to be somewhat undetectable,” Ben-Ami said. “If a small company like us can do something that means governments can probably do better and people should know about this.”
_ _ _
The Washington Post’s Drew Harwell and Isaac Stanley-Becker contributed to this report.