

Answers to Five Key Questions from House Energy & Commerce Section 230 Hearing

Last week, a subcommittee of the House Energy & Commerce Committee held a hearing on “Holding Big Tech Accountable” to discuss potential changes to Section 230 of the Communications Decency Act. While Democrats and Republicans appeared to reach little agreement on legislative next steps, the hearing raised several questions about Section 230 and algorithmic ranking and recommendations. In advance of another subcommittee’s hearing on Big Tech this week, CDT answers those questions below, so Congress can refocus the conversation where it belongs: on whether and how internet users will be helped or harmed by amendments to Section 230.

Question: What is the purpose of Section 230?

Answer: According to numerous statements at last week’s hearing, the purpose of Section 230 is to protect big technology companies. In truth, Section 230 is about protecting internet users. Users benefit tremendously from online services of all sizes that allow them to post their own speech, and through which people can, for example, find critical resources during a global pandemic and expose wrongdoing. If providers feared publisher liability for content posted by users, they might not take the risk of offering these services at all, or they might host only limited amounts of third-party speech, likely from large, well-known publishers. In either case, the range of opportunities provided by the internet as we know it today would not exist.

Section 230 also enables providers to moderate content without fear of lawsuits and potentially crippling litigation costs. Section 230 reversed an influential decision from a New York court that would have held a provider potentially liable for all third-party content if it moderated any content. As a result, the law assures providers that they can moderate third-party content without risking liability for the content they don’t moderate, and without fear that moderation activities would subject them to notice-based liability for allegedly harmful content. Without Section 230, some providers would decide not to moderate content at all, while others would continue to moderate, but so broadly that they would over-remove lawful and beneficial user speech.

Congress must prioritize this user-centric purpose of Section 230 when considering proposals to amend the law. SESTA/FOSTA has shown how changing Section 230 can chill speech and harm vulnerable members of society. As whistleblower Frances Haugen rightly emphasized in her testimony, Congress must consult with human rights advocates who can explain how specific proposals to change Section 230 will impact users, especially those from marginalized communities.

Question: What services would be affected by amending Section 230?

Answer: Last week’s hearing had an almost singular focus on Facebook, which is understandable, given the company’s numerous and well-documented failings. But Congress must remember that changes to Section 230 will reverberate across the internet. 

Section 230’s liability shield applies to providers and users of an interactive computer service, which has a broad definition. Section 230 shields Wikipedia from liability for the entries written by its users, and Yelp from liability for critical reviews posted by users; safeguards The Washington Post from liability for comments in its comments section; and protects Cloudflare from liability for content on the websites it serves. Changing Section 230 will impact intermediaries broadly, not just Facebook.

Section 230 also plays a crucial role in allowing newer and smaller services to develop and compete with Facebook and other dominant intermediaries. As CDT has explained before, if Congress amends Section 230 to make intermediaries liable for content posted by third parties, large platforms such as Facebook would gain a big advantage. They have greater resources to put toward filtering or removing content that could create liability, and toward hiring attorneys to fight expensive lawsuits based on user-generated content. By contrast, a startup will not be able to afford large numbers of human moderators, sophisticated content moderation technology, or legal help to defend against ruinous lawsuits.

Question: Will removing Section 230’s liability shield lead to legal remedies for issues like disinformation, hate speech, and discriminatory ads?

Answer: Congress – and the rest of society – is grappling with serious issues raised by online user-generated content and the intermediaries that host it. Online disinformation can suppress voter participation, undermining our democracy. Facebook’s ad targeting algorithms appear to discriminate against protected classes in advertisements for housing, credit, and employment. Online hate speech is powerfully effective at targeting and harassing people of color, and it distorts public opinion about non-white communities.

However, amending Section 230 will not be a magic bullet, in part because removing the law’s liability shield will not automatically create a legal claim that addresses each of these scenarios. Without an existing underlying cause of action prohibiting certain speech or information, intermediaries will not be liable for it, even in the absence of Section 230. In addition, even without Section 230, intermediaries cannot be held legally liable for content protected by the First Amendment, including hate speech and false speech that does not meet certain constitutional standards. Congress should carefully study whether existing causes of action apply to the online speech it is concerned about, and whether Section 230 or the First Amendment blocks potential claims.

Question: What do we mean by “algorithms”?

Answer: Committee members expressed clear interest during last week’s hearing in amending Section 230 to remove its liability shield for providers who use certain kinds of algorithms or use them in certain ways. While this interest is based on a legitimate concern about how intermediaries’ ranking and recommendation algorithms fuel the spread of lawful-but-awful content, Congress must consider the potential unintended consequences and pitfalls of amending Section 230 to address intermediaries’ use of algorithms.  

First, intermediaries use a variety of content-sorting algorithms, some of which benefit users. For example, a social media service’s ranking and recommendation algorithm may display content from a user’s friends higher in the user’s feed than content from strangers, or it may deprioritize content that is on the “borderline” of violating the platform’s Terms of Service or content policies. Intermediaries also use algorithmic processes to adapt the display of webpages or apps to different browsers, devices, or user accessibility settings, to the benefit of users, including those with disabilities. Any amendment to Section 230 targeting “algorithms” must define them precisely, so intermediaries are not disincentivized from using algorithmic ranking and recommendation processes that benefit users.
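To make the concept concrete, the sketch below shows, in highly simplified form, the kind of ranking logic described above. The field names, weights, and “borderline” score are illustrative assumptions only, not any platform’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    engagement: float        # e.g., normalized likes and comments
    borderline_score: float  # 0.0-1.0 from a hypothetical policy classifier

def rank_feed(posts: list[Post], friend_ids: set[str]) -> list[Post]:
    """Toy ranking: boost posts from the user's friends, demote 'borderline' content."""
    def score(post: Post) -> float:
        s = post.engagement
        if post.author_id in friend_ids:
            s += 1.0                 # prioritize content from the user's friends
        s -= post.borderline_score   # deprioritize content near a policy line
        return s
    return sorted(posts, key=score, reverse=True)
```

A broad statutory definition of “algorithm” could sweep in logic as simple as this, alongside the accessibility and display adaptations mentioned above.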

In addition, intermediaries’ ranking and recommendation algorithms can be blunt instruments that are technically incapable of detecting whether posts contain certain types of content. Some ranking and recommendation algorithms are not based on (and do not understand) the substance of the content they recommend. Others are based on traits of the content itself, but their recognition of those traits may be imperfect. Especially when it comes to detecting hate speech, disinformation, and other categories of offensive or undesirable speech, these algorithms may fail, because those categories often depend on context. For example, automated systems removed YouTube videos debunking COVID-19 misinformation after erroneously detecting that the videos were spreading the misinformation.
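A deliberately naive sketch shows why context matters. The phrase list and function below are hypothetical, but they illustrate how a purely content-based check can treat a debunking post the same as the false claim it debunks:

```python
# Hypothetical list of known false claims a content-based filter might look for.
MISINFO_PHRASES = ["5g spreads covid", "vaccines cause autism"]

def flags_misinformation(text: str) -> bool:
    """Naive content-based check: does the post contain a known false claim?"""
    lowered = text.lower()
    return any(phrase in lowered for phrase in MISINFO_PHRASES)

# Both a post spreading the claim and a post debunking it trip the same check.
print(flags_misinformation("BREAKING: 5G spreads COVID!"))              # True
print(flags_misinformation("Fact check: '5G spreads COVID' is false.")) # True (false positive)
```

Real moderation systems are far more sophisticated than this, but the underlying problem is the same: distinguishing harmful speech from speech about harmful speech requires context that automated systems often lack.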

These technical limitations would shape how intermediaries respond to the removal of Section 230 immunity for certain content, like medical or voting misinformation, that an intermediary algorithmically recommends. Given the technical limits, intermediaries may respond by ending algorithmic recommendation of all content, including algorithmic processes that benefit users. Or, they may refrain from algorithmically recommending any content on the covered subject matter, such as posts that promote vaccination (if algorithmic promotion of medical misinformation were carved out from Section 230 immunity) or voter registration (if algorithmic promotion of voting misinformation were carved out).

Question: Is amending Section 230 the best way to address the problems with online content distribution that Congress has identified?  

Answer: Changing Section 230 is not a panacea that will easily solve the spread of lawful-but-awful content online or the discriminatory targeting of online advertisements. Thankfully, Congress can explore other avenues to address the problems it has identified. This week’s hearing brings a welcome focus on transparency. Congress should consider whether and how various aspects of transparency – from transparency reporting of aggregate data and improved communication with users about intermediaries’ content moderation practices, to researcher access to data from technology companies, to risk assessments and audits – can shed light on technology company practices and incentivize them to improve their services.

In addition, passage of federal privacy legislation would help address issues such as targeted disinformation and discriminatory advertisements. If platforms are able to collect less data about users, or can use data only for its intended purposes, their ability to target lawful-but-awful content at particular users or groups of users will be limited. Finally, Congress should act to promote competition among technology services, giving users more choices about where to participate in the online ecosystem and creating incentives for services to out-compete rivals by acting in users’ interests. Each of these avenues will help address the problems caused by social media services while preserving Section 230 as a crucial protection for users’ free expression.