Facebook is banning (most) deepfakes



The policy doesn’t cover videos doctored using more conventional techniques.

Mark Zuckerberg. Photo: Drew Angerer/Getty Images

Facebook is opening a new front in its endless war on problematic content with an announcement that it is banning most deepfake videos from its platforms. Under the new policy, a video will be taken down if it is “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video” and if it is likely to “mislead someone into thinking that a subject of the video said words that they did not actually say.”

Parody and satire are still permitted, Facebook says.

Facebook has been the target of pointed protest and criticism in recent months. Last June, Internet pranksters uploaded a deepfake of Mark Zuckerberg supposedly gloating about being “one man, with total control of billions of people’s stolen data.” The fake Zuckerberg added that “I owe it all to SPECTRE,” a fictional evil organization from the James Bond franchise.

At the time, Facebook-owned Instagram declined to take the video down. Around the same time, Facebook refused to take down a video of House Speaker Nancy Pelosi that was slowed down, making Pelosi appear drunk or senile. Pelosi was reportedly furious with Facebook for refusing to remove the deceptive video.

The new policy would clearly apply to videos like the one of Zuckerberg crediting SPECTRE for his career success. But the new policy doesn’t appear to apply to videos like the Pelosi clip. It was manipulated using more conventional tools—not “artificial intelligence or machine learning.” And it didn’t alter Pelosi’s words; it just altered the way she said them. We’ve asked Facebook for more information and will update if we hear back.

Another important caveat that wasn’t explicitly mentioned in Facebook’s Tuesday announcement: a Facebook spokesperson told BuzzFeed’s Ryan Mac that “Facebook will not ban deepfakes in political ads.” That’s consistent with Facebook’s broader stance that it will not fact-check political ads, but it is likely to be unpopular with those who feel Facebook isn’t doing enough to police misinformation on its platform.
