How the debate over holding internet platforms accountable is changing under Biden
Two people were dead; one was injured; and Jason Flores-Williams wanted to hold Facebook responsible.
But after filing a lawsuit in September alleging that the website’s lax moderation standards led to 17-year-old Kyle Rittenhouse killing two protesters in Kenosha, Wis., over the summer, Flores-Williams withdrew the suit in January. His fight for accountability had collided with a law the activist attorney came to see as a “brick wall.”
“You have no levers of control, no leverage,” he told The Times. “You’re up against Section 230.”
A snippet of text buried in the Communications Decency Act, part of the 1996 Telecommunications Act, Section 230 is the law under which websites enjoy broad freedom to choose if and how they moderate user-generated content. Flores-Williams had alleged that a Facebook post by the Kenosha Guard militia summoning armed civilians to the city had laid the groundwork for Rittenhouse’s violence there; but as Section 230 is written, Facebook and its peers are rarely liable for what their users post — even when it results in death.
Flores-Williams isn’t alone in seeing the law as outdated. President Biden, former president Donald Trump and a long list of Democrats and Republicans have all pushed for the law to be restructured or scrapped entirely amid increasingly bipartisan criticism of Big Tech.
But if liberals and conservatives are united in their calls for reform, they’re split on what that reform should look like — leaving internet companies stuck in a limbo where a massive forced change to their business model is constantly discussed yet never quite materializes.
Meanwhile, those who seek to hold the platforms accountable for the harms caused by content spread there are left searching for new approaches that might offer a greater chance of success — which is to say, any at all.
Section 230 takes a two-pronged approach to content moderation: not only does it absolve websites of liability for user content they don’t moderate, but it also says they can moderate user content when they choose to. That lets social networks, chat forums and review websites host millions of users without having to go to court every time they leave up a post that’s objectionable, or take one down that’s not.
Online platforms usually, though not uniformly, support leaving Section 230 the way it is. In a congressional hearing last fall, Alphabet Chief Executive Sundar Pichai and Twitter CEO Jack Dorsey warned that the internet only works thanks to the protections afforded by the law; Facebook CEO Mark Zuckerberg broke ranks to say the law should be updated, citing a need to promote transparency around moderation practices.
Of the law’s critics, conservatives typically lean toward unrestricted speech. A Trump executive order sought to modify the law so users could sue platforms if they restricted content that wasn’t violent, obscene or harassing, although legal experts said the order was unlikely to hold up in court and it appears to have had little impact on how the platforms conduct themselves.
On the left, critics have called for a version of Section 230 that would encourage more rigorous moderation. Reforms targeting sex trafficking and child abuse have also garnered bipartisan support in the past.
Both sides have only gotten louder in recent weeks: the Jan. 6 siege of the U.S. Capitol prompted concern from the left about the role unregulated social media can play in organizing real-world violence, while the subsequent banning of Trump’s Facebook and Twitter accounts gave the right a striking example of how easily tech platforms can silence their users.
With Democrats now controlling the presidency and both houses of Congress, the party has an opportunity to rewrite Section 230, but it has yet to achieve consensus, with members floating multiple differently calibrated proposals over the last year.
The latest of those is the SAFE TECH Act, proposed last month by Sens. Mazie Hirono (D-Hawaii), Amy Klobuchar (D-Minn.) and Mark R. Warner (D-Va.). The bill would increase platforms’ liability for paid content and in cases involving discrimination, cyberstalking, targeted harassment and wrongful death.
Flores-Williams said that last item in particular, which the sponsors say would allow “the family of a decedent to bring suit against platforms where they may have directly contributed to a loss of life,” opens the door for future cases along the lines of his withdrawn suit.
It could also bolster suits over deaths such as that of Brian Sicknick, the Capitol police officer who died after defending the Capitol on Jan. 6. The official cause of Sicknick’s death has yet to be determined, but the case is cited by the bill sponsors in their argument for the carve-out.
The implications could extend well beyond high-profile deaths, too.
“Talk about floodgates, right?” said Daniel Powell, an attorney at the internet-focused firm Minc Law. “Floodgates to millions in liability for lawsuits where people have died for any reason that has any tangential relationship to social media.”
It’s not clear how broadly lawmakers and prosecutors would try to interpret SAFE TECH’s provisions, but if passed, the bill could force tech companies to rethink how they engage with user-generated content.
Nadav Shoval, CEO and co-founder of OpenWeb — a platform that manages comment sections for online media outlets such as TechCrunch and Salon — said changes to Section 230 could hinder innovation through overly broad liability.
“I have more questions than answers on this specific proposal, but I remain confident that changing the law at all is a mistake,” Shoval said of the SAFE TECH Act via email. “We have other laws in place that are not [Section] 230 to ensure the communities we host are safe, free from violence, hate speech, discrimination, etc.”
But clearer guidelines around moderating and distributing user content would be helpful, Shoval said; those are areas “which should be slightly regulated, or at least more clear, because right now there’s a lot of gray areas … where some guidance would definitely help.”
Other social media platforms that would be affected by the passage of the SAFE TECH Act — including Facebook, Twitter, Google, Reddit and Snapchat — either declined to comment on the bill or did not respond to requests for comment.
The legislation faces a rocky path forward. Opposition to content moderation became a major Republican rallying cry under Trump, and the party has significant power to block legislation in the Senate through filibusters. With Democrats preoccupied by the COVID-19 pandemic and accompanying economic crisis, liberal leaders might be hesitant to spend their time and energy on abstruse social media policies.
In the absence of imminent reform, some lawyers have adopted another strategy: trying to find novel legal theories with which to hold platforms liable for user content while Section 230 still remains in force.
“For as long as [Section 230] has been around, there have been plaintiff’s attorneys attempting to plead around the immunity it affords,” said Jeffrey Neuburger, a partner at Proskauer who co-leads the law firm’s technology, media and telecommunications group.
But the courts have “usually, with few exceptions” shot those efforts down, Neuburger added. For instance, he wrote via email, courts have “routinely and uniformly” rejected arguments that websites become liable for user content if they perform editorial functions such as removing content or deleting accounts; and have similarly rejected arguments that websites’ “acceptable use” policies constitute legally binding promises. And in the few cases where plaintiffs have managed to circumvent Section 230 defenses, the verdicts have generally been reversed on appeal.
“There are no easy answers,” Neuburger said. “It’s hard to regulate content online.”
An approach that might make it easier to regulate the places where content lives would be to alter the legal status of large internet platforms in a way that puts them under greater government control.
“Rather than trying to change Section 230, because I’m not sure that’s workable … maybe [try] treating these providers like public utilities,” said Daniel Warner, a founding partner at the online defamation-focused law firm RM Warner. “You can’t not give someone electricity because they support Joe Biden or Donald Trump. It just doesn’t work like that, and it shouldn’t. So I think the same goes for social media.”
While a push to use antitrust law to break up the biggest tech companies has gained momentum in recent years, the public utilities approach pivots in the opposite direction, embracing networks such as Facebook, Amazon and Google as “natural monopolies” and allowing them to dominate their respective markets — but only under tight government regulation.
Proponents of this approach argue that social networks have become central to their users’ lives and are prohibitively hard to leave. Critics say that compared to conventional utilities such as railroads and sewage systems, social networks are less essential to consumers’ lives and easier for upstart firms to compete with.
For Warner, the public utilities approach is mostly theoretical at this point: “We have yet to have an opportunity to make that argument and really explore it in detail.”
And going down that path could introduce new legal headaches, Neuburger said, such as forcing the government to delineate which platforms count as public utilities and which don’t, or to clarify how Section 230 should interact with contradictory state laws.
For now, everyone involved — from the kneecapped lawyers to the directionless politicians to the imperiled tech executives — remains trapped between an unpopular present and an unclear future.