
Op-Ed: Trump’s bizarre fixation on demolishing an internet speech rule


As a law professor and tech industry veteran, I never would have predicted that an obscure section of federal law would become a focus of President Trump’s effort to derail the operation of the federal government in his final weeks in office.

Yet Trump demanded that repeal of Section 230 of the Communications Decency Act be attached to the annual defense authorization bill, and when Congress declined, he vetoed the bill. Lawmakers overrode his veto by a convincing margin, leaving the law intact. Section 230 protects internet service providers and social media platforms from various kinds of legal liability for speech or information posted by others on their services.

Eliminating Section 230 would result in less expression on social media. And yet it would do nothing to increase accountability or transparency on platforms like Facebook and Twitter.


Section 230, enacted in 1996, allowed ISPs and other services to host millions of user messages without screening them for defamation or other speech concerns. It shifted the burden of liability for publication away from the hosting platform and toward the poster of the offending message. This protection encouraged investment in digital services and eventually gave rise to the vast array of social media forms that exist today.

As the internet transitioned from message boards and chat rooms to platforms designed for user-created content, the nature of social sites and the perception of their need to monitor user content changed. The evolution of Facebook, Twitter, Google and other major platforms has clouded the issue of whether the hosting service is simply providing infrastructure for user messages or playing an active role in monitoring and selectively censoring these messages in a way that should make the service responsible for the content.

The 1st Amendment applies only to speech censorship by the government. Yet the more that private tech companies act as prominent forums for speech and enforce “rules of the road” for permissible speech, the more people have come to expect that social media platforms should be treated as if they were legally responsible for the content they allow to be published.


In a global internet with billions of users spread across the major social media platforms, the question of how to enforce local standards will stymie even fair-minded content monitors. Facebook, for example, is grappling with this challenge as it tries to develop speech codes that satisfy political groups in both India and Pakistan, with each side convinced that the platform favors its rival.

Conservative groups in America have complained that social media sites unfairly single out their content for removal. Yet someone eventually has to draw a line between speech that advocates an opinion and speech that encourages association with groups promoting more noxious messaging, including some that could be interpreted as promoting violence. Tens of thousands of content monitors cannot resolve this problem, because the line is inherently a moving target subject to interpretation.

Prominent legal scholars have argued for regulation of these private companies, either as common carriers or as public utilities. Yet even if such regimes were adopted, the thorny questions of “who decides what content shall be published?” and “what community standards will apply to speech?” would remain.


If Congress eliminated the liability shield of Section 230, the logical response of big tech companies could be to set a much lower bar for banning offensive speech, reducing their exposure to lawsuits. For example, any time a particularly offensive keyword was used to describe an individual, the post would be automatically blocked or set aside for further review. This would dramatically constrict the volume of content and user expression.
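To see how blunt that kind of automated screening would be, consider a minimal sketch of keyword triage, written in Python. It is purely illustrative: the blocklist, the thresholds and the review queue are hypothetical stand-ins, not any platform’s actual system.

# Illustrative only: a crude keyword filter of the kind platforms
# might adopt if the Section 230 shield disappeared. The blocklist
# and thresholds are hypothetical placeholders.

OFFENSIVE_TERMS = {"slur_a", "slur_b", "threat_word"}  # placeholder list

def triage_post(text: str) -> str:
    """Return 'publish', 'review' or 'block' for a user post."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    hits = words & OFFENSIVE_TERMS
    if not hits:
        return "publish"
    # Err on the side of suppression: block outright when several
    # flagged terms appear; otherwise hold the post for human review.
    return "block" if len(hits) > 1 else "review"

Even this toy version shows the cost: a filter that errs toward suppression sweeps up quotation, reporting and satire along with genuine abuse.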

In the coming year, Amazon, Google, Facebook and others will face increasing scrutiny for alleged anti-competitive business practices arising from their quasi-monopoly status in their spheres. Such scrutiny is healthy, because it will add transparency to their operations and provide consumers with more insight into their business models and use of the personal data that fuel many of their profitable operations.

But attacking the 25-year-old liability shield for all internet platforms is not a viable way to increase accountability. We need to recognize that, despite their size and influence, private companies should have the autonomy to create rules that they believe best serve their users.

We need to devise creative solutions to the problem posed by regulating speech on private platforms. At a minimum, such solutions will combine technology to identify problematic content with human review under transparent speech codes. Social media platforms already have the ability to deploy algorithms that reduce the spread of polarizing content and conversations. The tech companies might sacrifice some advertising revenue in this scenario, but trimming profits could be an acceptable price for avoiding intrusive federal regulation.
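One way to picture that kind of algorithmic restraint is a scoring rule that downranks rather than removes. The Python sketch below is a hypothetical illustration; it assumes a separate classifier supplies a polarization score between 0 and 1, and it is not any platform’s actual ranking system.

def feed_score(engagement: float, polarization: float) -> float:
    """Downrank, rather than remove, content flagged as polarizing.

    engagement: raw engagement signal (likes, shares, comments).
    polarization: assumed output of a separate classifier, from
    0.0 (benign) to 1.0 (highly polarizing).
    """
    penalty = 1.0 - 0.8 * polarization  # a top score cuts reach by 80%
    return engagement * penalty

print(feed_score(1000.0, 1.0))  # a maximally polarizing post keeps 200.0

The design choice matters: demotion preserves the speech itself while limiting its amplification, which is exactly the trade described above.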

Finding an acceptable balance is no easy task. But repealing Section 230 will not serve anyone.

Alex Alben teaches internet law, media and society at the UCLA School of Law. He worked as a technology executive for early internet companies and also served as chief privacy officer for the state of Washington from 2015 to 2019.
