When Facebook and Twitter cracked down on President Donald Trump in the wake of the riot that breached the U.S. Capitol last week, the world took notice.

Trump’s exile from social media, a resounding act of enforcement after a long history of decisions that permitted world leaders to use hateful and charged language, put tech companies in new territory.

Silicon Valley companies’ newfound willingness to suspend a president may place them under renewed pressure to take down inflammatory posts or remove the accounts of world leaders who push boundaries the companies themselves have set.

Supporters of the move pointed to Trump’s long history of posts that ordinary users might not have gotten away with, and questioned why Facebook and Twitter, among other companies, did not respond sooner. But the president’s supporters in the U.S. and globally, along with some advocates for freedom of expression online, blasted the decision as an attack on free speech and an overreach by the corporations that oversee much of the 21st century’s public sphere.

German Chancellor Angela Merkel called the banishment “problematic,” because the private sector made the calls. She urged governments to regulate social media companies instead – a more practical possibility, some experts argue, for countries without the free speech protections guaranteed under the U.S. Constitution’s First Amendment. On Thursday, Mexico’s president said he would lead an international effort to curb what he called widespread censorship by social media companies. Twitter’s own CEO said he was troubled by the action.

Facebook in particular has made efforts to ally itself with governments and those in power. The company, along with its peer Twitter, crafted policies and applied rules in ways that have proved beneficial to world leaders known for ultranationalist and incendiary remarks, as well as their associates and supporters. That list includes Brazilian President Jair Bolsonaro, Hungarian Prime Minister Viktor Orbán, Philippine President Rodrigo Duterte and Indian Prime Minister Narendra Modi.

“For years Facebook and Twitter have been incredibly inconsistent in how they treat global leaders on their platform,” said Gennie Gebhart, acting activism director at the Electronic Frontier Foundation, a digital rights group. “They give them one set of rules and exemptions, whereas all other users on these platforms are out of luck.”

Facebook and Twitter have long held that public officials should have greater latitude than everyday people due to the public’s right to hear their views. In practice, this “newsworthiness exemption” means that the companies often give a pass to inflammatory posts by world leaders and other powerful people who may have broken rules, including those on hate speech.

The Washington Post reported last year that Facebook devised its exception for public figures in response to then-candidate Trump’s comments attacking Muslims during his 2016 presidential campaign.

In deciding last week to suspend Trump indefinitely, Facebook chief executive Mark Zuckerberg said that the president’s unprecedented behavior overrode newsworthiness concerns.

The “current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government,” Zuckerberg wrote on Facebook.

But such determinations involved a wide gray area. For example, unlike Twitter, Facebook did not categorize as incitement a comment by Trump in May widely seen as an invitation to violence against racial justice protesters: “when the looting starts, the shooting starts.”

The newsworthiness exception never applied to direct encouragement of violence, which has always been banned by social media companies for all users.

The lines remain subjective, said Allie Funk, senior research analyst for technology and democracy at Freedom House, a nonpartisan advocacy organization. “There’s not a one-size-fits-all response to hate speech and disinformation.”

World leaders have often put the rules to the test. In 2018, Facebook reversed a decision to take down a video posted by a top aide to Hungarian Prime Minister Viktor Orban that blamed crime on immigrants, saying it was making a newsworthiness exception to its usual ban on hate speech.

Former Facebook engineer David Thiel encountered the subjectivity behind such decisions in January 2020 after he reported a post by Brazil’s Bolsonaro. “Indians are undoubtedly changing. They are increasingly becoming human beings just like us,” Bolsonaro wrote on Facebook, referring to indigenous people. Thiel thought that the post violated the company’s guidelines against “dehumanizing speech,” or generalizations or comparisons that would indicate the “subhumanity” of a group of people.

But Thiel said his colleagues refused to take down the post, and he was told that the statement alone was not enough to qualify as racism under the hate policy. Facebook’s policy team argued that the statement could potentially have been intended as a positive statement about indigenous people, according to internal correspondence viewed by The Post.

Thiel resigned in protest. Bolsonaro has since had one post removed by Facebook.

In another incident last year, a senior Facebook policy executive refused to apply the company’s hate speech rules to T. Raja Singh, an Indian politician and a member of Indian Prime Minister Narendra Modi’s Hindu nationalist party, who has said Rohingya Muslim immigrants should be shot, called Muslims traitors, and threatened to raze mosques, the Wall Street Journal reported last year.

India is Facebook’s largest market, and human rights groups have repeatedly accused Modi and his Bharatiya Janata Party of using the platform and WhatsApp, which is owned by Facebook, to spread misinformation and stoke violence against Muslims and activists. Modi and BJP have denied the claims.

Facebook has denied there was any political interference in the case of Singh, who was deemed a “dangerous individual” and removed from Facebook the following month.

In the Philippines in 2019, veteran journalist Maria Ressa was arrested in a move experts said was retaliation for an expose revealing violence-inciting fake accounts on Facebook linked to President Rodrigo Duterte’s administration. Duterte was not a major Facebook user, but his team made extensive use of the service to attack political opponents.

Ressa had initially provided information on the hate-filled accounts to Facebook, intending to write a story after they were taken down. But the accounts remained up, so the news site she co-founded, Rappler, published the story anyway. Facebook ultimately banned some of the accounts, though critics questioned why there was such a delay.

“If Facebook had taken action in 2016, I wouldn’t be in this position,” Ressa said after her arrest.

Facebook has since partnered with Rappler as part of its news verification program to combat disinformation.

World leaders pose only a sliver of the overall challenge companies face in policing speech online. But banning a world leader is orders of magnitude more visible than banning ordinary users. And Trump commanded a social media presence perhaps more formidable than that of any other user in the history of the medium, using that rapt audience to spread hundreds of lies and conspiracies and to bully private citizens.

His banishment, by the companies that helped him reach his audience unfiltered, could prove a turning point.

“Tech companies, who have in recent years moved to protect the speech of the powerful more than the speech of the general public, must actively reverse these policies and reassure the world that no matter how politically powerful a leader, their speech is no more protected than anyone else’s,” said Elizabeth Linder, a former Facebook executive and founder of the policy firm Brooch Associates.

– – –

Berger reported from Washington, Dwoskin from San Francisco.