Clearing Up Misinformation about Section 230

For over 20 years, Section 230 has been an essential part of the foundation for free expression online. It has enabled websites and online services in the US to host, link to, and otherwise facilitate access to an enormous diversity of speech posted by users around the world, without risking liability for what those users choose to say. And it was drafted with a clear understanding that the Internet is a wholly different kind of communications medium than print or broadcast, one that requires a different way of thinking about how liability ought to be allocated.

Today, there’s more discussion of the role of Section 230 in our online speech environment than ever before. Unfortunately, much of this debate is based on misunderstandings, or flat-out inaccurate descriptions, of what the law says and how it operates. That’s why CDT has joined a group of 27 advocacy organizations and 50 legal scholars in releasing Liability for User-Generated Content Online: Principles for Lawmakers. These principles explain key elements and concepts reflected in Section 230 and in the two decades of case law interpreting the statute. We hope they will help legislators, and anyone else thinking and writing about Section 230, better understand how the statute works and why changing it would pose significant risks to free expression online.

Section 230: How It Actually Works

The statute itself is fairly simple: it shields online intermediaries (“interactive computer service providers,” a very broad set of actors) from being treated, under the law, as the publisher or speaker of content authored by a third party. It also shields intermediaries from liability for actions they take to restrict the availability of obscene, excessively violent, or “otherwise objectionable” content. So if Anna posts something defamatory about Bob, Bob can sue Anna, but he can’t sue the website where she posted it. And if the website takes down Anna’s post, Anna can’t sue the website for having removed her speech.

That’s it. A number of cases further interpret and develop the meaning of the statutory text, but the case law stays within those general parameters. There are no requirements in the statute for “neutrality” or any other conditions that intermediaries must meet to benefit from Section 230’s protections. Unlike regimes in other countries, the law makes no distinction between “passive” intermediaries and those that take a more active hand in moderating content. Indeed, Section 230 recognizes that website operators will make editorial decisions about the content they choose to host, and even encourages such moderation: the statute was designed to remove the disincentives to content moderation that traditional publisher liability would create. Without this protection, interactive online services likely wouldn’t exist, since operators would have to make thousands, millions, or even billions of decisions every day about individual posts before those posts could go up.

Content Moderation: Who Makes the Rules?

When site operators exercise their discretion (and First Amendment right) to moderate content, much of the time what they take down is someone else’s lawful, constitutionally protected speech. It can be extremely frustrating, to say the least, for people to find their posts removed or demoted, or their accounts shut down entirely, particularly if it’s not clear what rule they’ve allegedly violated. CDT has long advocated for internet companies to treat their users fairly by giving them clear notice, both up front and whenever content is restricted, and opportunities to appeal the company’s decision to restrict their speech. Last year, we joined other free expression advocates in creating the Santa Clara Principles on Transparency and Accountability in Content Moderation, because there is still significant room for improvement in how companies approach these decisions.

But one of the biggest safeguards against unfair content moderation (however you, specifically, would define that) is the ability for speakers to find other online homes for their speech. This was an obvious feature of the early Internet, when people more often ran their own sites and blogs and had to decide for themselves what rules and standards they would apply to their little slice of the web. Crucially, Section 230 protected these webmasters, forum moderators, and mailing list administrators from liability for third-party content just the same as it did any larger-scale intermediary.

Today, with a few mega sites claiming enormous shares of the online audience, it can seem like an insurmountable and unrewarding task to switch platforms or to start your own site. But in practice, many people are continuing to experiment with different approaches to running sites for social media, political debate, news, niche interests, and more. These sites won’t become the next Facebook or YouTube overnight (not even Facebook or YouTube did that), but without Section 230 these alternative sites wouldn’t even get to attempt that feat.

And we shouldn’t lose sight of the fact that today’s mega sites are not the only (or even optimal) model for how people can organize information, and themselves, online. Section 230 creates the breathing room not only for direct competitors to today’s dominant sites for user-generated content, but also for the development of completely alternative models for interactive online services. Policymakers who talk about amending Section 230 need to understand these fundamental principles first.