When the most consequential law governing speech on the internet was created in 1996, Google.com didn’t exist and Mark Zuckerberg was 11 years old.
The federal law, Section 230 of the Communications Decency Act, has helped Facebook, YouTube, Twitter and countless other internet companies flourish.
But Section 230’s legal protection has also extended to fringe sites that host hate speech, anti-Semitic content and racist tropes, including 8chan, the internet message board where the suspect in the El Paso, Texas, shooting massacre posted his manifesto.
The law shields websites from liability for content created by their users, while permitting internet companies to moderate their sites without being on the hook legally for everything they host. The protection is not absolute, however: it does not cover certain criminal acts, such as posting child pornography, or violations of intellectual property law.
Now, as scrutiny of big technology companies has intensified in Washington over a wide variety of issues, including how they handle the spread of disinformation or police hate speech, lawmakers are questioning whether Section 230 should be changed.
Last month, Sen. Ted Cruz, R-Texas, said in a hearing about Google and censorship that the law was “a subsidy, a perk” for big tech that may need to be reconsidered. In an April interview, House Speaker Nancy Pelosi of California called Section 230 a “gift” to tech companies “that could be removed.”
“There is definitely more attention being paid to Section 230 than at any time in its history,” said Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy and the author of a book about the law, “The Twenty-Six Words That Created the Internet.”
“There is an inclination to look at Section 230 as one lever to influence the tech companies,” he said.
Here is an explanation of the law’s history, why it has been so consequential and whether it is really in jeopardy.
Q: So why was the law created?
A: We can thank “The Wolf of Wall Street.”
Stratton Oakmont, a brokerage firm, sued Prodigy Services, an internet service provider, for defamation in the 1990s. Stratton was founded by Jordan Belfort, who was convicted of securities fraud and was portrayed by Leonardo DiCaprio in the Martin Scorsese film about financial excess. An anonymous user wrote on Prodigy’s online message board that the brokerage had engaged in criminal and fraudulent acts.
The New York Supreme Court ruled that Prodigy was “a publisher” and therefore liable because it had exercised editorial control by moderating some posts and establishing guidelines for impermissible content. If Prodigy had not done any moderation, it might have been granted free speech protections afforded to some distributors of content, like bookstores and newsstands.
The ruling caught the attention of a pair of congressmen, Ron Wyden, D-Ore., and Christopher Cox, R-Calif. They were worried the decision would act as a disincentive for websites to take steps to block pornography and other obscene content.
The Section 230 amendment was folded into the Communications Decency Act, an attempt to regulate indecent material on the internet, without much opposition or debate. A year after it was passed, the Supreme Court declared that the indecency provisions were a violation of First Amendment rights. But it left Section 230 in place.
Since it became law, the courts have repeatedly sided with internet companies, invoking a broad interpretation of immunity.
On Wednesday, the 2nd U.S. Circuit Court of Appeals affirmed a lower court’s ruling that Facebook was not liable for violent attacks coordinated and encouraged by Facebook accounts linked to Hamas, the militant Islamist group. In the majority opinion, the court said Section 230 “should be construed broadly in favor of immunity.”
Q: Why is the law so consequential?
A: Section 230 has allowed the modern internet to flourish. Sites can moderate content — set their own rules for what is and what is not allowed — without being liable for everything posted by visitors.
Whenever there is discussion of repealing or modifying the statute, its defenders, including many technology companies, argue that any alteration could cripple online discussion.
The internet industry has a financial incentive to keep Section 230 intact. The law has helped build companies worth hundreds of billions of dollars with a lucrative business model of placing ads next to largely free content from visitors.
That applies to more than social networks like Facebook, Twitter and Snapchat. Wikipedia and Reddit depend on their visitors to sustain the sites, while Yelp and Amazon count on user reviews of businesses and products.
More recently, Section 230 has also provided legal cover for the complicated decisions regarding content moderation. Facebook and Twitter have cited it to defend themselves in court when users have sued after being barred from the platforms.
Many such cases are quickly dismissed because, under the law, companies have the right to make content moderation decisions as they see fit.
Q: What’s wrong with the law?
A: The criticisms of Section 230 vary. While both Republicans and Democrats are threatening to make changes, they disagree on why.
Some Republicans have argued that tech companies should no longer enjoy the protections because they have censored conservatives and thereby violated the spirit of the law, which states that the internet should be “a forum for a true diversity of political discourse.”
Facebook, Twitter and Google, which runs YouTube, are the main targets of bias claims; all three have said the claims are baseless.
On the flip side, some Democrats have argued that small and large internet sites aren’t serious about taking down problematic content or tackling harassment because they are shielded by Section 230.
Wyden, now a senator, said the law had been written to provide “a sword and a shield” for internet companies. The shield is the liability protection for user content, but the sword was meant to allow companies to keep out “offensive materials.”
However, he said firms had not done enough to keep “slime” off their sites. In an interview with The New York Times, Wyden said he had recently told tech workers at a conference on content moderation that if “you don’t use the sword, there are going to be people coming for your shield.”
There is also a concern that the law’s immunity is too sweeping. Websites trading in revenge pornography, hate speech or personal information to harass people online receive the same immunity as sites like Wikipedia.
“It gives immunity to people who do not earn it and are not worthy of it,” said Danielle Keats Citron, a law professor at Boston University who has written extensively about the statute.
Q: Is Section 230 in jeopardy?
A: The first blow came last year with the signing of a law that creates an exception in Section 230 for websites that knowingly assist, facilitate or support sex trafficking. Critics of the new law said it opened the door to create other exceptions and would ultimately render Section 230 meaningless.
Citron, who is also vice president of the Cyber Civil Rights Initiative, a nonprofit devoted to combating online abuse, said this was “a moment of re-examination.” After years of pressing for changes, she said, there is now more political will to modify Section 230.
Sen. Josh Hawley, R-Mo., a frequent critic of technology companies, introduced a bill in June that would eliminate the immunity under the law unless tech companies submitted to an external audit certifying that their content moderation practices were politically neutral.
While there is growing political will to do something about Section 230, finding a middle ground on potential changes is a challenge.
“When I got here just a few months ago, everybody said 230 was totally off the table, but now there are folks coming forward saying this isn’t working the way it was supposed to work,” said Hawley, who took office in January. “The world in 2019 is very different from the world of the 1990s, especially in this space, and we need to keep pace.”