WASHINGTON — Nobody seems happy with Twitter these days — or Facebook or any other social media platform, for that matter.
“The Radical Left is in total command & control of Facebook, Instagram, Twitter and Google,” President Donald Trump charged.
“Twitter is completely stifling FREE SPEECH, and I, as President, will not allow it to happen!”
He said those things on Twitter, where he has more than 80 million followers.
House Speaker Nancy Pelosi, who’s rarely in league with Trump, agreed with him that the internet giants are a problem.
“Facebook, all of them, they’re all about making money,” she complained on Thursday. “Their business model is to make money at the expense of the truth.”
If there’s a bipartisan consensus on anything in Washington, it’s that someone ought to take big tech down a notch.
The result may be the biggest political crisis the internet companies have faced since their creation — and it’s mostly their own fault.
The most likely outcome, no matter how the November election turns out, is that Congress will repeal the 1996 law that protects the platforms from liability for almost anything their users post.
Removing that law, Section 230 of the Communications Decency Act, could throw their business model into chaos and make it difficult for Twitter and Facebook to survive.
Trump and the presumptive Democratic nominee, Joe Biden, have both said the provision should be repealed — although for different reasons.
Republicans claim Twitter and the other platforms are deliberately targeting conservatives and unfairly applying rules against hate speech, incitement and harassment.
Democrats have the opposite complaint: They charge that the platforms don’t enforce the rules often enough. They complain that Trump and others have been allowed to get away with flagrant falsehoods and calumnies — which is true.
Last week’s crisis focused on Twitter, which enforced its internal rules on Trump for the first time.
First the company attached warnings, labeled “Get the facts,” to two presidential tweets that had called mail-in ballots “fraudulent” and predicted a “rigged election” in November.
Then Twitter added an anti-violence warning to a Trump tweet about riot-torn Minneapolis. “When the looting starts, the shooting starts,” he wrote.
Trump and his allies said those disclaimers amounted to censorship. They didn’t. Twitter still published all the president’s words; now his readers will see the warnings as well.
Until recently, Facebook was also a target of conservative complaints because it sometimes enforced standards against right-wing extremists. But last week, Facebook Chief Executive Mark Zuckerberg said he no longer thinks the company should try to be “arbiters of truth.”
Politicians’ resentment against the increasingly powerful platforms has been building for a long time.
When Section 230 passed in 1996, it was intended to shield fragile startup companies when the internet was young. In the ensuing 24 years, the social media firms have become big and wealthy; it’s no longer clear that they need protection.
The law not only shielded the companies from liability; it empowered them to restrict access to material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
That clause granted the platforms almost unlimited control over content. They developed rules and procedures for applying them in-house, with little or no visibility to anyone outside.
Twitter’s standards, for example, list multiple categories of prohibited content, including child exploitation, targeted harassment, hateful conduct, encouraging suicide, nonconsensual nudity, posting other people’s private information, manipulated media, promoting terrorism, threats or glorification of violence, and “manipulating or interfering in elections.”
The company ruled that Trump’s tweets last week violated the last two rules.
The problem is it’s not clear how the rules are applied, who applies them and how to get redress if you think you’ve been treated unfairly.
“It’s easier to get information out of the (National Security Agency) than one of these big tech companies,” said Jeff Kosseff, a cybersecurity law professor and author of “The Twenty-Six Words That Created the Internet,” a book about Section 230.
In Trump’s case, the company bungled the process. Its actions against the president seemed to come out of nowhere, with little warning or explanation.
Did Trump’s claim, five months in advance, of a “rigged election” really constitute manipulation or interference?
And it seemed odd that while the president was tagged for falsely claiming that mail-in ballots produce fraud, he was not for repeatedly suggesting that MSNBC host Joe Scarborough had murdered a staff aide in 2001. (Authorities say there is no basis for Trump’s allegation; Scarborough was 800 miles away when the woman died.)
Trump’s response to Twitter’s actions was a colorfully worded executive order that threatened to sic two regulatory agencies on the tech company. It asked the Federal Communications Commission to draft new rules to make social media companies liable for more content, and asked the Federal Trade Commission to examine whether the companies are enforcing their standards fairly.
The president’s order appeared mostly symbolic, tailored for an election year. The FCC routinely takes months or years to draft new regulations, even on simple issues, and the president’s order was almost certain to be challenged in court.
In the longer run, though, Section 230 remains vulnerable in Congress — if only because both Trump and Biden say they want to see it repealed.
What happens then?
Theoretically, eliminating the law could prompt social media platforms to crack down on more content to eliminate anything that could lead to a lawsuit. That could make Trump vulnerable to even more pesky warning labels.
But repeal could also perversely lead to the elimination of almost all standards — because if there are no rules, the platforms can argue that they aren’t responsible for enforcing any.
And that could produce an internet where pornography and extremist political content, both protected by the 1st Amendment, flood into platforms that were intended to be family friendly.
There was a more sensible solution available. The companies could have reformed their standards and practices, made them far more transparent and allowed the public to see how they were implemented.
But it may be too late for that kind of self-improvement. The companies moved too slowly. They resisted letting anyone see inside.
As a result, a law that was a foundation of their business is now in danger of repeal. That’s not all Trump’s fault. It’s their fault too.