Policymakers around the globe are demanding social media companies be held accountable for the spread of hateful content on their platforms as the tech giants struggle to remove violent video footage of the New Zealand terrorist attack.

Sen. Richard Blumenthal, D-Conn., wants Congress to hold an immediate hearing with Facebook and other technology platforms to address the “abject failure” to stop the spread of graphic videos and messaging:

“Facebook, YouTube, & others turned a blind eye to hate & racism on their platforms for a decade. We will be suffering the violent & divisive repercussions of Big Tech putting profits over people for years. This must stop & Congress must demand answers,” he tweeted.

After Facebook removed 1.5 million videos of the shooting rampage at two mosques in Christchurch within the first 24 hours of the attack — and there were still many more available online — New Zealand Prime Minister Jacinda Ardern said she wants answers.

“This is an issue that goes well beyond New Zealand but that doesn’t mean we can’t play an active role in seeing it resolved,” Ardern said. “This is an issue I will look to be discussing directly with Facebook.”

U.K. Home Secretary Sajid Javid said on Twitter that “enough is enough”:

“You really need to do more @YouTube @Google @facebook @Twitter to stop violent extremism being promoted on your platforms. Take some ownership. Enough is enough,” he posted.

The U.K. lawmaker who leads the Digital, Culture, Media and Sport Committee in the House of Commons said there needs to be “a serious review” of why the companies’ attempts to police the content weren’t more effective: “It’s very distressing that the terrorist attack in New Zealand was live streamed on social media & footage was available hours later. There must be a serious review of how these films were shared and why more effective action wasn’t taken to remove them.”

The growing international outcry could be a game-changer for Silicon Valley companies wary of more regulation.

Other countries, particularly in Europe, have been adopting tougher rules when it comes to hate speech — and it’s likely that the toughest restrictions on the technology companies’ content moderation practices will continue to be outside the United States.

Countries such as Germany and the United Kingdom are setting penalties for the companies when they fail to remove harmful content. In Germany, regulators can fine companies that fail to remove illegal content within 24 hours. In the United Kingdom, ministers are planning to establish a new technology regulator that could dole out fines in the billions if companies such as Facebook or Google (which owns YouTube) fail to remove harmful content from their platforms. Actions regulators take in those countries could set the tone globally for how governments address the proliferation of violent content on social media.

There could also be action in the U.S. The sheer volume of videos spread across various social networks could reignite debate over whether Congress needs to update a decades-old law that shields companies from legal liability for content posted on their platforms.

Less than six months ago, in the wake of the massacre at a Pittsburgh synagogue, hate speech linked to the attack rekindled debate in Congress over whether Section 230 of the Communications Decency Act needed to be updated.

The provision generally protects tech companies from legal action related to content that people have posted on their websites. Sen. Mark Warner, D-Va., said last year this law might need an overhaul.

“I have serious concerns that the proliferation of extremist content — which has radicalized violent extremists ranging from Islamists to neo-Nazis — occurs in no small part because the largest social media platforms enjoy complete immunity for the content that their sites feature and that their algorithms promote,” Warner, the top Democrat on the Senate Intelligence Committee, told my colleague Tony Romm in the fallout of the Pittsburgh shooting. He did not comment this weekend on whether he would renew this charge after the New Zealand attack.

The industry has largely resisted any changes to the law. As The Post’s Tony Romm said on Twitter in the hours following the New Zealand shooting: “At what point will US lawmakers just say ‘enough’ and strip these platforms of CDA 230 protections in response to the mass proliferation of videos from a shooting? I mean that — like what is it actually going to take for that convo to happen despite the intense industry lobbying.”

Under previous political pressure, the companies have already made investments to better police harmful content, ranging from improved algorithms to expanded ranks of human content moderators. But expect renewed questions from policymakers around the world over whether those investments were enough.

Tech companies “have a content-moderation problem that is fundamentally beyond the scale that they know how to deal with,” Becca Lewis, a researcher at Stanford University and the think tank Data & Society, told my colleagues Friday. “The financial incentives are in play to keep content first and monetization first.”