Column: Trump’s attack on Twitter is a complete fake
You may not have noticed, what with America’s COVID-19 deaths passing the 100,000 mark and cities in an uproar coast-to-coast over police tactics against black residents, but President Trump last week staged a completely fictional attack on Twitter and other online services.
The fiction was embodied in an executive order Trump signed on May 28, purportedly aimed at “preventing online censorship.” The order targets Section 230 of the 1996 Communications Decency Act, which dramatically changed the environment for online services hosting user-provided content.
Section 230, which has been consistently misunderstood by its critics across the political spectrum, allows online services to host potentially objectionable, even defamatory user-posted content without becoming liable to legal action themselves, while also giving them the discretion to moderate that content as they wish.
We have to let go of some Platonic ideal of content moderation.... You’re always going to cheese off somebody.
— Eric Goldman, Santa Clara University
“The section’s most fundamental concept is that we want internet companies to manage user content, and not be liable for whatever they miss,” says Eric Goldman, an expert in the law at Santa Clara University Law School. “The fear was that if they were liable for whatever they missed, they wouldn’t even try.”
The tech community has long treated Section 230 as “the most important law on the Internet.” As my colleague Sam Dean reports, the title of a book on the section by Jeff Kosseff, a cyberlaw expert at the U.S. Naval Academy, labels its text “the twenty-six words that created the internet.”
But the law also has come under concerted attack by plaintiffs who keep looking for loopholes and judges who open them, all aimed at scrubbing distasteful material from the Web.
Trump’s executive order is a typical attack on Section 230, launched by someone acting out a personal grievance.
It’s so sloppily drafted that it would accomplish nothing resembling the prevention of “online censorship,” would almost certainly be unconstitutional if it did, and was basically a reflexive reaction to one offense: Twitter’s unprecedented designation of Trump tweets as the embodiment of lies requiring corrections.
Twitter tagged the May 27 tweets, which asserted that mail-in ballots would lead to a “rigged election,” with a note directing users to fact-checked information refuting the assertion.
Trump issued his executive order the very next day.
Twitter went even further a day later, when it placed a blocking message over a Trump tweet implying that participants in protests over the killing of George Floyd, a black man who apparently died in the custody of Minneapolis police, should be shot if they were looting. The message required users to click separately to view the tweet.
The executive order bears all the hallmarks of a Trump tantrum, including the lack of a mechanism to turn it into action. It begins with a Frank Costanza-like litany of personal grievances.
“Online platforms are engaging in selective censorship that is harming our national discourse,” the order reads, specifically calling out Twitter: “Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias... Twitter seems never to have placed such a label on another politician’s tweet.” (Trump means any politician other than himself.)
The order calls on the Federal Communications Commission to “clarify” the scope of 230’s immunity from liability, even though that scope is quite clear in the language of the law.
The text makes it clear that the immunity is very broad indeed. It allows online services to restrict access to content that they consider to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”
The catchall language of “otherwise objectionable,” Goldman says, “makes you wonder exactly what wouldn’t have been included in Congress’s expectations, since they gave such a broad-based mandate to services.”
The 230 exemption has been relied on by countless services that allow users to post statements or other content on their sites — newspapers hosting reader comments, merchants posting consumer reviews, expert and amateur forums of every description.
Nevertheless, efforts to place limits on the 230 exemption are legion. In one closely followed California case, a San Francisco personal injury lawyer persuaded a state judge to order the consumer review site Yelp to remove an ex-client’s criticism of her performance after the lawyer won a defamation lawsuit against the client.
Yelp refused, noting that it hadn’t been named as a defendant in the defamation lawsuit and arguing that it was immune from liability for the client’s posts under Section 230. The California Supreme Court found in Yelp’s favor, and the U.S. Supreme Court refused to take up the issue, ending the case against Yelp. (The defamation judgment against the client remained in effect.)
Congress tried to carve out an exception to Section 230 protection aimed at online sites that purportedly facilitated sex trafficking. The so-called Fight Online Sex Trafficking Act, or FOSTA, which passed overwhelmingly and was signed into law by Trump in 2018, made online services liable for ads ostensibly promoting prostitution, consensual or otherwise.
But FOSTA has failed to achieve its goals. Law enforcement officials have said it has made it harder for them to root out sex trafficking, because it drove perpetrators further underground, and interfered with posts aimed at warning consensual sex workers away from dangerous situations or clients.
In Congress, attacks on Section 230 or services that rely on its terms are bipartisan. For years, Sen. Ted Cruz (R-Texas) has been asserting that under Section 230, online services that remove conservative-leaning content lose their status as “neutral public forums” and therefore their immunity.
Those services “should be considered to be a ‘publisher or speaker’ of user content if they pick and choose what gets published or spoken,” Cruz wrote in 2018. (His target then was Facebook, which he complained had been “censoring or suppressing conservative speech for years.”)
Cruz’s take was wrong and in any event unenforceable, since any content moderation whatsoever entails picking and choosing what to allow online. Cruz is a graduate of Harvard Law School, so it’s reasonable to assume that he knows he’s wrong, and just as reasonable to conclude that he’s merely preaching to an ideologically conservative choir.
But an attack on 230 has also come from Sen. Mark Warner (D-Va.), who in 2018 proposed a sheaf of regulations on social media aimed at stemming the tide of disinformation, including faked photos and videos, posted online.
Warner advocated making online services liable for defamation and other civil torts if they posted “deep fake” or other manipulated audio or visual content. But he acknowledged in his position paper the difficulty of distinguishing between “true disinformation and legitimate satire.”
He also recognized that “reforms to Section 230 are bound to elicit vigorous opposition, including from digital liberties groups and online technology providers.”
The best approach to Section 230 is to leave it alone, but manage our expectations of what it can achieve. For the most part, legitimate online services find it in their best interest to combat material widely judged to be socially unacceptable —hate speech, racism and sexism, and trolling. But the debate on the margins is always going to be contentious.
“We’re never going to be happy with internet companies’ content moderation efforts,” says Goldman. “You can’t ask whether one company’s doing it right and another’s doing it wrong. They’re all ‘doing it wrong,’ because none of them are doing it the way I personally want them to do it. Your standards may differ from mine, at which point there’s no pleasing everybody.”
Online services will always be vulnerable to attacks like Cruz’s or, indeed, Trump’s.
The goal of his executive order was to inflate the image of online services into behemoths that have taken over the space of public debate for their own purposes, assuming “unchecked power to censor, restrict, edit, shape, hide, alter virtually any form of communication between private citizens and large public audiences,” as he put it in remarks during the executive order signing ceremony. In his mind, that made them legitimate targets for regulation.
Trump’s audience, of course, wasn’t ordinary citizens who feel their access to information or right to post their content online was being trampled, but his political base, which imagines that its megaphone is being taken away. The biggest joke during the signing ceremony was Trump’s assertion that “if [Twitter] were able to be legally shut down, I would do it. I think I’d be hurting it very badly if we didn’t use it anymore.”
As the prominent internet rights lawyer Mike Godwin observed in response, “Seriously? Who on earth believes that Donald J. Trump could make himself live another week in the White House — much less serve another term — without his daily dose of Twitter psychodrama?”
In truth, Trump was just trying to work the referees — hoping that his rhetoric alone will discourage Twitter from further interfering with his tweets.
That seems to be working with Facebook, which thus far has announced a hands-off policy on political posting, no matter how noxious or mendacious. Even Facebook executives, as it happens, have expressed discontent with the hands-off policy adopted by CEO Mark Zuckerberg.
Arguments that private companies such as Twitter or Facebook are infringing on constitutional free speech rights are misguided, since constitutional protections for free speech apply to official government infringements, not those of private actors.
In the private sphere, the diversity of approaches to content moderation may be society’s safety valve. “We have to let go of some Platonic ideal of content moderation, that if internet companies just invested enough time and money, they’d come up with something that would make everyone happy,” Goldman told me. “That outcome does not exist. You’re always going to cheese off somebody.”