
Op-Ed: Banning Trump from Twitter and Facebook isn’t nearly enough

Facebook CEO Mark Zuckerberg, shown at a Facebook developers conference in May 2018, said he was shocked by the attack on the U.S. Capitol. Experts have warned for years that Facebook's features help nurture extremist groups. (Justin Sullivan / Getty Images)

Social media finally pulled the plug on Donald Trump. Days after Trump incited a riot at the U.S. Capitol, Twitter permanently banned the president from its platform, and other social media companies, including Facebook, YouTube and Snapchat, suspended his accounts as well.

Mark Zuckerberg and the other creators of the most powerful speech engines in the world have shown astonishingly little contrition for contributing to one of the darkest days for democracy in America. They all expressed shock. But Facebook, Twitter, YouTube and every other social media company have known for over a decade that their tools would be used in ways that lead to violence; they've seen it happen. And they did too little, for too long.

There's growing evidence that banning influential individuals from social media significantly curbs the spread of harmful misinformation. We're about to have a great test case. Even as many applaud Twitter and Facebook for finally "deplatforming" this toxic president, others worry about the enormous power internet companies hold over public discourse, concerns bound up with deep American intuitions about free speech.


The 1st Amendment restricts only the ability of governments to interfere with free expression. So Facebook and Twitter are not trammeling anyone's constitutional rights when they delete posts and accounts for violating their terms of use. In fact, it would be a 1st Amendment problem if governments started forcing private companies to continue publishing speech the companies disagree with. But the free speech objections in this debate obscure an important point: These companies have built their systems to profit from the largely unchecked, viral spread of information. They are clearly aware of how their tools are being used.

In the coming months we will hear a lot about how social media fanned insurrection and whether we need better rules to hold these companies accountable. There are no easy answers here. The roots of violent extremism in America run deeper than the communication technologies available to extremists, with white supremacy at the top of the list. But social media is a significant piece of this puzzle.

Social media companies like Facebook and Twitter have built their systems to encourage and profit from misinformation and viral hatred. Their user interfaces encourage toxic sharing by removing barriers to broadcasting one's thoughts and making it all too easy to reflexively pass along posts that confirm one's worldview. They provide instant gratification in the form of likes and hearts for the pithiest and most indulgent takes. Their algorithms wind up recommending toxic communities and rewarding the most incendiary posts. Outsize amplifiers like Trump play a key role.


Under the law, if you create something dangerous, knowing the specific harm that will result, you can be held liable. Social media companies knew that their platforms were designed in ways that foster misinformation and extremism. It's time our laws held them accountable.

American law tends not to punish people or institutions for harms they could not anticipate. Crimes generally must be intended, and most harmful actions for which one can sue for civil damages must at least be foreseeable. These requirements come from a sense of fairness. Even the makers of asbestos — which went on to kill almost 100,000 people a year and become the subject of a cottage industry of lawsuits — were not initially held liable for lung disease because courts found the manufacturers did not know and could not predict the harm.

But that’s not where we are with social media and political violence.

It’s not just that Zuckerberg should have known political violence was likely. He did know. He knew because his own employees told him. He knew because it happened in Myanmar. He knew because every credible expert — especially women and people of color — publicly said it would happen over and over.


In recent years, Zuckerberg has invited several sets of prominent critics to his home to talk about Facebook. These people must have told him that political violence was likely. And yet, in barring Trump from using Facebook through the end of his term, he released a statement last week pretending that "the current context is now fundamentally different." Imagine if the chief executive of an asbestos company invited scientists over to dinner and they told him that his product causes lung disease. He would not be able to claim in court later that he couldn't foresee the harm.

Of course, information is different from asbestos. But not so different as to justify a free pass. There are several ways lawmakers and the public might move to hold platforms more accountable for building and maintaining an environment they know to be dangerous.

Lawmakers and courts can and should distinguish between attributing user speech to platforms — which the law properly forbids — and failing to take reasonable measures to keep the community safe. A company with inadequate cybersecurity can face consequences when it fails to ward off an easily foreseeable hack, even though the company isn’t the hacker. The same should be true of harmful misinformation, especially when the platform’s own terms of service lay out the sort of community the user should expect.

Lawmakers could create new rules to regulate harmful algorithms and user-interface design choices that amplify dangerous rhetoric and predictably make online spaces such a powder keg. Or judges could adapt the law of negligence and product liability to respond to the foreseeable dangers in the way these services are built. Scholars and policymakers have been proposing these kinds of interventions for a while. Unfortunately, up to now they have been ignored.

Banning Trump from social media platforms grabs public attention. Now we have to challenge the actions of these companies that made removing him necessary in the first place.

Ryan Calo is the Lane Powell and D. Wayne Gittinger professor at the University of Washington School of Law.
Woodrow Hartzog is a professor of law and computer science at Northeastern University.
