Facebook’s Oversight Board on Wednesday upheld the social network’s ban on former president Donald Trump but punted the ultimate decision back to the company, bringing into focus the regulatory vacuum around social media and galvanizing Facebook’s critics.
The ruling opens a new chapter in the global debate over the largely unchecked power of social media giants, whose platforms have become the default political megaphone for many world leaders even as those platforms have fomented misinformation and hate. Regulatory action is also on the horizon, with lawmakers promising that by the end of the year new legislation will hold companies to account for how they policed, or failed to police, disinformation during the pandemic and the 2020 presidential election.
Facebook first suspended Trump for encouraging violence during the Jan. 6 Capitol riot, before saying the next day that the ban was “indefinite.” Two weeks later, it referred the case to its 20-member oversight board, which is largely independent but funded by the social network. The board on Wednesday handed the decision back to Facebook, giving it six months to either permanently ban or reinstate the former president, and to write clear rules explaining its rationale.
“They cannot invent new unwritten rules when it suits them,” board co-chair and former Danish Prime Minister Helle Thorning-Schmidt said in an interview.
In the ruling, the board agreed that Trump’s comments on the day of the insurrection “created an environment where a serious risk of violence was possible.” The board noted the president’s references to the mob members as “patriots” and “special,” and his instructions to them to “Remember this day forever.”
It took issue with Facebook’s “indefinite” suspension of Trump, saying it was “vague and uncertain.”
The board also recommended that Facebook publish a report explaining its own role in fomenting the Jan. 6 attack.
After the announcement, Facebook emphasized that Trump would remain off the social network for the time being, in accordance with the board’s order. The company also seemed noncommittal in its response.
“We will now consider the board’s decision and determine an action that is clear and proportionate,” Nick Clegg, Facebook’s vice president of global affairs and communication, said in a blog post Wednesday, after canceling all planned interviews. “In the meantime, Mr. Trump’s accounts remain suspended.”
Trump said Facebook, Google and Twitter embarrassed the United States.
“Free Speech has been taken away from the President of the United States because the Radical Left Lunatics are afraid of the truth,” he said in a statement. “… These corrupt social media companies must pay a political price, and must never again be allowed to destroy and decimate our Electoral Process.”
Twitter and Google’s YouTube followed Facebook in suspending Trump after his comments. Twitter’s ban is permanent, and YouTube’s is indefinite. Facebook and Twitter declined to comment on Trump’s statement. YouTube did not immediately respond to a request for comment.
Critics are calling into question the legitimacy and value of the Facebook oversight board, an experimental entity set up by the company to help hold it accountable in making such calls. While the board’s rulings on individual pieces of content are binding, its policy recommendations are advisory, which Facebook can take or leave, and the company has a hand in selecting members. Because of the board’s limited powers, some critics see the body as a distraction from developing new laws or government oversight of social media companies.
“The practical effect of this decision will be that Facebook — and possibly other platforms that might have been watching the Oversight Board for unofficial guidance — will have to continue to grapple themselves with the problem of what to do about political leaders who abuse social media to spread lies and incite violence,” Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, said in a statement.
In Washington, Democrats have said they will update existing antitrust laws, crack down on misinformation and pass federal privacy legislation. Facebook is also the target of a landmark Federal Trade Commission lawsuit, which focuses on the company’s practice of buying rivals.
The board’s push for Facebook to create more transparent rules and consistently follow them echoes the criticisms of lawmakers on both sides of the aisle. Wednesday’s ruling renewed calls for the government to take on a greater regulatory role and to continue with efforts underway in the United States to limit the social media giant’s power. Some also called into question why the decision focused almost solely on one person, not on the powerful algorithms that spread hateful content virally.
“Policymakers ultimately must address the root of these issues, which includes pushing for oversight and effective moderation mechanisms to hold platforms accountable for a business model that spreads real-world harm,” Sen. Mark Warner, D-Va., said in a statement.
Trump allies swiftly condemned the decision.
“Facebook’s status as a monopoly has led its leaders to believe it can silence and censor Americans’ speech with no repercussions,” said Rep. Ken Buck of Colorado, the top Republican on the House Judiciary antitrust subcommittee. “Now more than ever we need aggressive antitrust reform to break up Facebook’s monopoly.”
Critics have long argued that Facebook should have barred Trump at different points throughout his presidency, saying that his inflammatory language and frequent promotion of misinformation — about the coronavirus in particular — constituted an abuse of his office and of Facebook’s community standards. But Facebook chief executive Mark Zuckerberg said politicians should be given wide latitude because their speech was in the public interest.
Facebook referred its decision about Trump to the oversight board shortly after it barred Trump in January. The board, which is less than a year old and had yet to decide a case at the time, was first conceived by Zuckerberg in 2018 as a way to outsource the thorniest content-moderation decisions without having the government intervene.
Over the past few months, members spanning time zones from Taiwan to San Francisco connected on videoconference calls to pore over more than 9,000 public comments on the matter, including from Trump himself, according to the board.
In a letter submitted to the board on Trump’s behalf asking it to reconsider the suspension, Trump’s allies said it was “inconceivable that either of those two posts can be viewed as a threat to public safety, or an incitement to violence,” referring to the two Jan. 6 posts that prompted the ban.
In its decision, the board faulted Facebook for making “arbitrary” decisions on the fly, and said the company had no published criteria for suspending a user indefinitely. Facebook’s normal penalties are to remove a comment, enact a time-limited suspension or disable a user’s account permanently, the board said.
“It is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored,” the board said.
Facebook exempts political figures from some hate-speech rules on the grounds that those comments are newsworthy.
The board took issue with that exemption, noting that “it is not always useful to draw a firm distinction between political leaders and other influential users,” and that such users have greater power than others to cause harm.
The ruling appeared to nudge Facebook in the direction of being more aggressive when making determinations of what counts as imminent harm.
“Facebook must assess posts by influential users in context according to the way they are likely to be understood, even if their incendiary message is couched in language designed to avoid responsibility,” the board wrote.
It noted that Trump’s sharing of misinformation about the election created a context for violence. A higher standard for harm — if Facebook adopted it — could lead to more severe penalties for world leaders whose harmful statements have been met with mild penalties from Facebook.
“This doesn’t begin and end with Donald Trump,” said Nathaniel Persily, a professor at Stanford University’s law school. “They’ve got all kinds of elections coming up around the world.”
While the board told Facebook that the risk of harm should outweigh free-speech considerations, it did not give Facebook new guidance on how to write new policies or how to treat political figures who may come up to the line.
Critics said the ruling probably would not have any real effect.
“The oversight board is not an oversight board, it’s a PR device,” said Shoshana Zuboff, a member of a group of Facebook critics self-dubbed “The Real Facebook Oversight Board” and author of “The Age of Surveillance Capitalism.” “They too have failed to contribute anything of even modest value. We’re back to Square 1, facing the void.”
Under U.S. law, notably Section 230 of the Communications Decency Act, social media platforms are generally not held legally responsible for policing unwanted or even illegal content on their services, with some exceptions for copyright violations and child pornography. But in recent years, Silicon Valley has dealt with crises over enabling disinformation and the spread of extremism from domestic and international forces, and the blowback has forced the companies to invest significantly in content moderation. That investment picked up in 2020, when companies including Facebook and Twitter launched stronger policies aimed at combating misinformation surrounding the election and the coronavirus.
Those forces helped prompt the creation of the oversight board.
“You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world,” Zuckerberg told Vox in a 2018 interview.
Facebook then embarked on a months-long process collecting feedback on how to design the board and consulting more than 2,000 people in 88 countries. It released the rules and selected its first members in 2020. The board attracted controversy during its formation, when Facebook’s critics warned that the board’s authority was too limited and that the company’s role in picking board members compromised its independence.
The board issued its first decisions in late January, a week after Facebook announced that it would refer the high-profile Trump case to the body. The initial round of decisions — which touched on alleged hate speech, coronavirus misinformation and references to dangerous organizations — signaled that the board would demand greater transparency from Facebook about its policies. Before Wednesday’s decision, the board had overturned Facebook’s decisions six times, upheld them twice, and was unable to complete a ruling once.
In the board’s nearly 12,000-word document, it said that it had asked Facebook 46 questions and that the company declined to answer seven of them. That included one about Facebook’s design and algorithms, and the role those potentially played in the spread and visibility of Trump’s posts. Facebook also declined to answer a question about whether a suspension or deletion would have an effect on its ability to target ads.
— — —
The Washington Post’s Heather Kelly and Rachel Lerman contributed to this report.