
Shielding the Messengers: Section 230 and Free Speech Online

This post is the first in our ‘Shielding the Messengers’ series, which will examine issues related to intermediary liability protections, both in the U.S. and globally. Without these protections, the Internet as we know it today, a platform where diverse content and free expression thrive, would simply not exist.

To most Internet users, 47 U.S.C. § 230 is an unfamiliar U.S. legal provision. The explanation that ‘Section 230’ “protects Internet intermediaries from liability for user content” likely doesn’t make the concept less obscure. But when it comes to free speech on the Internet here in the United States, Section 230 is as vital as the First Amendment. It’s fair to say that without it, the proliferation of revolutionary platforms for user-generated content and communications—think YouTube, Facebook, Twitter, and Yelp—might never have come to pass.

Section 230 protects all ‘Internet intermediaries,’ including access providers, hosts and forums for user-generated content, search engines, and applications that provide the backbone for online communication—in other words, the messengers that enable communications to travel from one user to another. It protects intermediaries in three ways:

  • Shielding platforms from liability for what their users post.
  • Shielding platforms from liability for any actions they take to block content they find objectionable, regardless of whether the content is constitutionally protected.
  • Shielding entities from liability for providing the technical tools platforms use to block objectionable content.

The significance of Section 230 is best understood in the context of what the Internet communication environment could have looked like without it. In 1995, the opinion in the case Stratton Oakmont v. Prodigy imagined a very different role for Internet and online service providers. In that case, the New York Supreme Court held that the Internet company Prodigy should be considered a “publisher” of the content posted by users on its online bulletin boards, instead of just the messenger transmitting that user-generated content to readers. This meant Prodigy could be held responsible for defamatory statements made by an anonymous user on its bulletin boards. It should not be hard to see that the threat of such liability would make any intermediary much less likely to offer forums for user content in the first place.

The next year, as some prescient legislators realized the potential implications of this decision for online speech platforms, Congress passed Section 230 as part of the Communications Decency Act, effectively reversing the Prodigy decision.

Since its passage, Section 230 has played the role of “shielding the messengers,” allowing online platforms and services to develop innovative ways for people to communicate and connect. Without Section 230, the costs and risks of providing new communications tools would be too high for many online startups. Imagine the overhead that monitoring each comment or post on a website could entail: On YouTube alone, users upload forty-eight hours of video every minute. Facebook hosts 700 new status updates from users every second. Instead of risking the legal liability of running an open platform and being held responsible for users’ posts, some companies might calculate they would be better off not running open platforms at all.

It’s no coincidence, then, that many of the most popular platforms for speaking, sharing, and organizing were developed in the United States, where intermediary protections are the strongest in the world. These same protections are needed in legal frameworks around the world if all countries are to unleash the innovation, engagement, and development the Internet has the potential to enable.