

Section 230 and Competition Policy are Inextricably Intertwined

If you follow tech policy, you know about two current debates involving Big Tech. One is about Section 230, the 1996 law that, as a general matter, promotes free expression online by shielding online companies from liability for their users’ speech. Search engines, domain name providers, web-hosting services, restaurant review sites, and the like are protected by Section 230, as are social media services such as Twitter, YouTube, and Facebook. Section 230 also shields these companies from liability for their decisions to label or remove user-generated content. A number of political leaders and advocates have expressed concern that online platforms allow, or even amplify, user-generated content that is obnoxious, untrue, unpleasant, or just plain harmful. Others are concerned that content moderation is done unfairly. (My colleagues have written frequently about Section 230 and its role in promoting free expression, e.g., here.)

The second big tech policy debate is about the power of those very same companies: Twitter, YouTube (owned by Google’s parent company), Facebook, and a few others. Politicians from both sides of the aisle, academics, and others have expressed concern that a few companies wield too much power over what gets said in (and what gets moderated out of) what is often thought of as the “modern public square.” They have also raised questions about the economic power these tech giants wield, with concerns about user experiences, competition in advertising markets, potentially anticompetitive mergers, and allegedly collusive behavior.

Each of these debates is complicated and important in its own right, and the two are intrinsically related. If Section 230’s liability shield were repealed or substantially limited, social media providers would soon face lawsuits from people who wanted specific content removed – whether as part of a legitimate legal claim or as a strategy for silencing opposing views. To protect themselves from the risks posed by such lawsuits, these providers would likely take down more content than the law actually required. Providers could also face lawsuits over their decisions to label or remove users’ content, creating a complex liability calculus for providers deciding how best to enforce their content policies and respond to abuse on their services.

And here’s where the competition issues intersect with the content ones: large social media platforms have the resources to hire lawyers to defend such cases and negotiate with plaintiffs, and to develop moderation systems that sift through user-generated content and remove objectionable material before a lawsuit is even threatened. But upstart social media companies are far less likely to have the resources to hire those armies of content moderators and lawyers, or to develop and deploy automated moderation with appropriate safeguards against bias and disparate impacts on vulnerable users. As a result, smaller providers are far more likely to over-remove content – or to be put out of business by litigation. In antitrust terms, the increased threat of liability from limiting or repealing Section 230 would serve as an entry barrier for all types of interactive services.

Section 230 protects not only online content but also the competitive opportunities of new online platforms. So if we want to preserve opportunities for entrepreneurs to build new services that challenge the incumbents, we should keep that interest in view when we talk about the future of Section 230. Congress may well change Section 230 in the years to come, but policymakers should not let new entrants and innovation become collateral damage in those discussions.