Facebook, Twitter, Google CEOs split over social media shield law
The leaders of the three most popular social media platforms are at odds over the thorniest public policy question they face: Who’s responsible for policing the content that appears on their pages?
At issue is a decades-old law that protects social media companies from liability over content posted by users. The heads of Facebook Inc., Alphabet Inc.’s Google and Twitter Inc. are all slated to appear before a House panel Thursday to testify about the spread of false information that contributed to the deadly Jan. 6 Capitol attack.
The executives outlined their positions in prepared remarks ahead of the hearing. Facebook Chief Executive Mark Zuckerberg supports reforming the measure, known as Section 230 of the Communications Decency Act, while Alphabet CEO Sundar Pichai remains averse to any changes to the legal shield. Twitter CEO Jack Dorsey defended the company’s handling of misinformation.
Zuckerberg called for making liability protection for internet platforms conditional on having systems in place for identifying and removing unlawful material.
The liability shield “would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing — sometimes for contradictory reasons — that the law is doing more harm than good,” Zuckerberg said in his written testimony.
He added that platforms “should not be held liable if a particular piece of content evades its detection — that would be impractical for platforms with billions of posts per day.” Under Zuckerberg’s proposal, a third party would determine whether a company’s systems are adequate to handle that volume.
Pichai signaled that he is opposed to any changes to the law. Reforming it or repealing it altogether “would have unintended consequences — harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges,” he said in his written testimony.
Instead, Pichai wants companies to focus on “developing content policies that are clear and accessible,” such as notifying users if their work is removed and giving them ways to appeal such decisions.
Dorsey touted Twitter’s decisions to apply labels to misleading posts about coronavirus vaccines and the election. Twitter has permanently banned former President Trump and is soliciting feedback about how to handle world leaders who violate its rules, while Facebook is awaiting a verdict from its oversight board after kicking Trump off its platform. Google suspended Trump’s YouTube channel in the aftermath of the deadly attack on the Capitol.
Dorsey warned that “content moderation in isolation is not scalable, and simply removing content fails to meet the challenges of the modern Internet.” Twitter is experimenting with new approaches to crowd-source policing of speech online, including a project called Birdwatch, which would allow users to add notes to tweets that are misleading or inaccurate.
“Every day, millions of people around the world Tweet hundreds of millions of Tweets, with one set of rules that applies to everyone and every Tweet,” Dorsey said. “We built our policies primarily around the promotion and protection of three fundamental human rights — freedom of expression, safety, and privacy.”
Zuckerberg, Pichai and Dorsey are set to testify before the U.S. House Committee on Energy and Commerce on Thursday at noon Eastern time.
Since the Jan. 6 riot, there has been growing bipartisan interest in holding tech companies accountable for certain hate speech and extremist content on their platforms.
Politicians on both sides of the aisle have proposed bills that would weaken Section 230 to encourage the platforms to change their content moderation practices. Democratic senators, led by Mark Warner of Virginia, introduced the Safe Tech Act, which would hold companies liable for content violating laws pertaining to civil rights, international human rights, antitrust and stalking, harassment or intimidation.
And a bipartisan bill — the Pact Act — from Democratic Sen. Brian Schatz of Hawaii and Republican Sen. John Thune of South Dakota would require large tech companies to remove content within four days if notified by a court order that the content is illegal.
President Biden has said he is interested in revoking Section 230, arguing that internet platforms have failed to curb misinformation responsibly. Trump had also called for revoking Section 230, citing unfounded accusations that social media platforms censor conservative viewpoints.