On Thursday, President Trump signed an executive order targeting social media companies such as Twitter. The order centers on Section 230, a fragment of law from the 1990s that underpins much of today’s internet and is often misunderstood.
Here’s a rundown of what the law is, what’s at stake, and what Trump’s executive order might accomplish.
What is Section 230? | How did Section 230 become law? | How has it worked in practice? | What is this executive order trying to do? | Why modify Section 230? | Can the president do that? | What’s next?
What is Section 230?
Section 230 is a small piece of the 1996 Telecommunications Act that has, in many ways, created the internet we all use today.
Its first part states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In effect, that means websites are not legally responsible for what other people post there. That applies to every site on the internet, whether they’re social media platforms such as Facebook and Twitter, sites that depend on customer reviews such as Yelp and Amazon, or any website with a comment section, from the Los Angeles Times to your personal blog.
It also has a second part, which states website owners or users can’t be held liable for deleting or restricting access to anything they deem objectionable if those actions are “taken in good faith.”
Without Section 230, any company operating a website could be sued over a statement posted by a user, and sued by any user whose post was deleted. Internet companies with many millions of users could ill afford to defend large numbers of such lawsuits, even if they won most of them.
There are some existing exceptions to those protections. According to the original law, they don’t apply to violations of federal criminal law, intellectual property law, or the Electronic Communications Privacy Act — this is why YouTube tries to take down copyrighted material, for instance, and companies try to respond quickly to reports of child pornography, which is a federal crime. Starting in 2018, a new law also exempted the facilitation of sex trafficking from Section 230 protections.
How did Section 230 become law?
Section 230 was considered a minor part of the Telecommunications Act of 1996 at the time that it passed — most of that legislation centered on questions of competition among telecom companies — and was intended to give legal cover to internet companies that wanted to stop users from posting pornography or racist screeds.
The entire internet at the time had fewer than 40 million users (for context, Snapchat has 229 million daily active users today, and Facebook has more than 2.6 billion), but websites had already faced legal trouble.
Two cases in particular had drawn lawmakers’ attention.
An internet service provider called CompuServe placed no limits on what its users could post. When someone sued the company for defamation over a statement that another user had posted, the judge dismissed the case, arguing that CompuServe fell into the same legal category as a bookstore or a newsstand — its forums were host to other people’s speech, but it didn’t claim to control that speech in any way.
Offering a contrasting example, an online service called Prodigy actively tried to maintain a family-friendly website with active moderation. Again, a user sued the company over another user’s post, alleging defamation. This time, the judge found Prodigy legally liable. Because the website exercised editorial control, went the ruling, it fell into the same category as a newspaper, making its owner responsible for everything on the site.
The result was a legal regime in which companies were punished for trying to actively remove pornography, violent language, hate speech and the like from their sites, and could reduce their liability by letting anything go.
Christopher Cox, a former Reagan appointee who was then serving as a Republican congressman in Orange County, teamed up with Oregon Democrat Ron Wyden, who has since become that state’s senior U.S. senator, to fix that problem and incentivize websites to police themselves. Section 230 was their solution.
How has it worked in practice?
Even after its passage into law, Section 230 could have resulted in a very different internet from the one we know today, said Jeff Kosseff, professor of cybersecurity law at the U.S. Naval Academy and author of “The Twenty-Six Words That Created the Internet,” a 2019 book about the law.
Before 1996, the law concerning legal liability for distributing other people’s speech had been based on a court ruling from 1959.
The case centered on Eleazer Smith, the 72-year-old proprietor of a bookstore on South Main Street in downtown Los Angeles. He was arrested by an LAPD vice officer for selling a copy of a pulp novel, about a ruthless lesbian realtor, that was considered obscene under city and state law.
Smith argued that he couldn’t possibly review every book in his store, and his case wound its way to the Supreme Court. There, the court decided that Smith was right, and could be found in violation of the law only if he failed to remove the book after being informed that selling it was illegal. In the years since, courts upheld that legal distinction between distributors such as bookstores and publishers such as newspapers.
Section 230 made explicit in its language that a website could not be treated as a publisher or a speaker, which theoretically left the door open to websites being treated like bookstores — in fact, that’s how the judge described CompuServe in one of the early internet cases.
But the first case to test Section 230 after its passage led to an even broader set of protections for websites. A judge in the 4th Circuit Court of Appeals ruled that even being a distributor was just a special subset of being a publisher or speaker, and Section 230 made clear that websites were neither.
The judge in that case “was a well-respected conservative Reagan appointee, and also a former newspaper editor,” Kosseff said. “He had a strong free speech streak, and a lot of the law’s history rests on the fact that he was the first” judge to rule on the law.
That interpretation of the law has held to this day, allowing companies such as Yelp, Facebook and Twitter to exist without fear that they’ll be sued for their users’ statements or for trying to police what kind of statements can stay up on their sites.
What is this executive order trying to do?
The heart of President Trump’s executive order is an attempt to modify the scope of Section 230.
If a company edits content — apart from restricting posts that are violent, obscene or harassing — “it is engaged in editorial conduct” and may forfeit any safe-harbor protection, according to the language of the order signed Thursday.
The order directs Commerce Secretary Wilbur Ross to work with Atty. Gen. William Barr to request new regulations from the Federal Communications Commission that determine whether a social media company is acting “in good faith” to moderate content.
If the order is carried out (the FCC has the right to refuse), it could effectively give the agency the power to strip Section 230 protections from any website that moderates what users post, inviting individuals — or the government — to sue without such suits being dismissed out of hand, as they are now.
Why try to modify Section 230?
The executive order came two days after Twitter added a disclaimer to two tweets from President Trump, saying that his claims about mail-in ballots leading to widespread voter fraud were false.
Trump’s executive order builds on a strain of criticism of social media platforms, embraced in recent years by the political right, that views companies’ decisions to take down certain posts or delete certain users’ accounts as censorship.
Twitter’s disclaimers on the president’s tweets, it should be noted, are not protected by Section 230. Because they represent the speech of the company itself, Trump could sue Twitter for defamation if he so chose.
An almost opposite strain of Section 230 criticism has also arisen in recent years. This criticism argues that social media companies are not doing enough to control the conversation on their sites, allowing extremism and misinformation to spread unchecked.
On this side of the debate, some legal scholars have proposed adding a “reasonableness” clause to Section 230, which would extend its protections only to sites that can show that their content moderation practices as a whole meet a reasonable standard of actively trying to prevent harm to users.
Can the president do that?
The interpretation of Section 230 that the executive order rests on “has no legal merit,” in the opinion of Aaron Mackey, staff attorney at the Electronic Frontier Foundation, a digital civil liberties nonprofit.
In Mackey’s analysis, the executive order hinges on a fundamental misreading of the law. The executive order argues that if a company violates the second part of Section 230, by moderating content that isn’t strictly “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable,” then it can be stripped of the protections in the first part of Section 230, which says that websites are not legally liable for what users post.
The first and second parts of the law are clearly not linked in the text, and have not been linked by legal rulings. The first part — the protection against liability — is a blanket statement, unconditionally applied. Most lawsuits against websites for removing or leaving up user-generated content are simply thrown out because the first part of Section 230 has been seen as cut and dried.
The executive order gives the Commerce secretary and attorney general 30 days to request that the FCC make a rule to reflect the policy laid out in the executive order. If the FCC takes up that request, a period of public notice and comment follows before the rule becomes final. Then, if the FCC chooses to exercise its new power to modify Section 230 for some companies, that decision will probably be challenged in court.