Congress Should Take “Filtering Practices of Social Media” Seriously

Tomorrow, in its hearing on the Filtering Practices of Social Media Platforms, the House Judiciary Committee will host what is likely to be a wide-ranging discussion of how social media companies moderate content. While the hearing is sure to include some spectacle and grandstanding, make no mistake: this is a deeply serious issue that deserves thoughtful consideration by policymakers, companies, and users alike. Here are a few key themes we hope members of the committee will consider:

1. Congress needs to understand how the law works in this space.

Section 230 and the First Amendment are the twin pillars of U.S. law that support free speech. Section 230 sets out the legal framework that shields intermediaries—web hosts, domain name registrars, search engines, social media networks, and many others—from liability for user-generated content. Without this strong protection, social media networks and other intermediaries would face extraordinary legal risk every time one of their users uploaded a file. If any post, image, or video could potentially lead to a drawn-out court battle involving civil claims or criminal charges, intermediaries would be highly unlikely to offer robust spaces for reporting, commentary, and debate.

As the SESTA/FOSTA debates showed, there’s still a lot of misunderstanding in Congress about how Section 230’s protections actually work. Section 230’s broad protection from liability for third-party content is coupled with an assurance that intermediaries will retain this protection even when they review user posts and make decisions about whether to take them down or leave them up. Far from requiring “neutral” content-hosting, as was suggested during Mark Zuckerberg’s testimony before Congress, Section 230 explicitly shields social media sites from liability for removing obscene, excessively violent, harassing, or “otherwise objectionable” material.

The First Amendment, of course, also plays a major role. Several courts have recognized intermediaries’ own free-speech interests in their handling of user-generated content. Congress, conversely, is highly constrained by the First Amendment in its ability to regulate speech. In practice, most user-generated content hosts operate under Terms of Service that are much more restrictive than the First Amendment would allow government actors to be. For example, these terms often include rules against pornography, hate speech, profanity, and other constitutionally protected speech. Content moderation on social media platforms is a challenging, complex task (as recent publications from Facebook and YouTube demonstrate), and reasonable people can disagree about the proper scope and calibration of platforms’ policies. But Congress must remember that there are high constitutional barriers to its ability to dictate online platforms’ content rules.

2. Social media companies’ decisions about how to moderate content have substantial consequences for speakers and audiences.

Of course, however much freedom platforms have to shape their own content policies, the choices they make undoubtedly have significant consequences for their users’ speech. Vague policies leave users uncertain about what kinds of posts will lead to takedown or account deactivation, and broadly restrictive policies result in the silencing of journalists, activists, and people across the political spectrum. Companies’ content moderation systems can also be gamed by bad actors to target certain individuals, groups, and opinions for censorship. And even when there’s no abuse of the system, rote application of a platform’s content policy can lead to confusing and counterintuitive results.

But failure to moderate has its own consequences for speech. Unaddressed harassment and hate speech on social media platforms can create a significant chilling effect on women, people of color, journalists, members of religious groups, and many other people whose voices have historically been marginalized. If a platform truly wants to support a diverse user base and a robust environment for speech of all kinds, it must grapple with the fact that different people face different sorts of threats and barriers to participation in online discourse. A freewheeling, lightly moderated platform will privilege certain kinds of speakers. Others will find it unappealing, or downright unsafe, to participate in online discussions unless there are strong measures in place to prevent and address harassment and threats.

Social media platforms must therefore think carefully about what balance they strike with their content moderation practices, and must clearly communicate both their policies and the values that inform them to their users. CDT has long advocated for better transparency and accountability from social media platforms toward their users, and we’re starting to see some positive developments in that direction. But there’s no perfect answer to how a platform ought to resolve the tension between too much moderation and not enough, particularly as globe-spanning social media networks enable dozens of languages, hundreds of cultures, and millions of communities to intersect and interact in unexpected ways. That’s why it’s so important to maintain an open internet that is home to a diversity of platforms for speech, so that speakers can find the fora that work for them.

3. Automation plays a big role in content moderation at scale, but it’s not a silver bullet and carries its own risks.

A cross-cutting issue for intermediaries of all sizes is the challenge of scale. From a small site where a few people are responsible for moderating thousands of posts a day to a gigantic site with thousands of people moderating millions of posts, any intermediary is likely handling more content than a team of humans could reasonably review. Automated processes, including spam-fighting tools, keyword filters, and hash-matching, have long been used to help wrangle large volumes of content. As YouTube disclosed in its recent transparency report, over 6.6 million of the 8.2 million videos the platform took down between October and December 2017 were first flagged for review by an automated process.
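To illustrate one of these techniques: a hash-matching pipeline compares a fingerprint of each new upload against fingerprints of content that moderators have already removed. The sketch below is a minimal, hypothetical Python example, not any platform’s actual system; real deployments generally rely on perceptual hashes that tolerate re-encoding and small edits, whereas this sketch uses an exact cryptographic hash purely to keep the idea concrete.

```python
import hashlib

# Hypothetical blocklist: hashes of files that moderators previously removed.
# Production systems typically use perceptual hashes that survive re-encoding
# and cropping; an exact SHA-256 match is used here only to keep the sketch simple.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_for_review(upload: bytes) -> bool:
    """Return True if an upload matches previously removed content."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BAD_HASHES

# A matching upload is queued for human review rather than removed outright.
if flag_for_review(b"test"):
    print("queued for human review")
```

The important point is that automation of this sort only recognizes content that humans have already judged; it does not make the underlying judgment itself.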

There is a disturbing trend, however, of both policymakers and leaders in the tech sector pointing to automation (particularly artificial intelligence and machine learning techniques) as the magic answer to some very thorny problems, ranging from hate speech and harassment to disinformation and terrorist propaganda. But there are real limits on the ability of machine learning tools to solve these problems, some of which will persist even as computational techniques become more advanced.

For example, in a recent paper, we discuss the challenges of developing a machine learning tool to distinguish “hate speech” from benign, non-hateful speech. Training such a tool requires an annotated data set, with examples of hate speech and non-hate speech clearly identified. This, in turn, requires the people annotating the data set to have a clear definition of hate speech that they apply consistently to the training data. And even if the designers of the tool can agree on one consistent definition of hate speech, that definition must align with what the platform, and the community of speakers it hosts, also consider to be hate speech. Given how rooted in local and historical contexts our conceptions of hate speech are, this is an extremely tall order for an online platform with a global user base.
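To make that dependence on annotation concrete, here is a minimal, hypothetical sketch (using Python and scikit-learn, not the tooling from our paper or any platform’s actual system) of how a text classifier learns from a labeled data set. The example posts and labels are invented for illustration; the model can only reproduce whatever definition of hate speech the annotators applied.

```python
# Hypothetical sketch: a classifier trained on annotated examples can only
# learn the definition of "hate speech" that the annotators themselves used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented, tiny annotated corpus; real training sets require many thousands
# of examples labeled under one consistent, agreed-upon definition.
posts = [
    "people like you don't belong here",      # annotated as hateful
    "that referee made a terrible call",      # annotated as benign
    "I hope your whole group disappears",     # annotated as hateful
    "this restaurant was a disappointment",   # annotated as benign
]
labels = [1, 0, 1, 0]  # 1 = hate speech, per the annotators' working definition

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Predictions reflect the annotators' judgments, not a universal standard.
print(model.predict(["you people should just go away"]))
```

Even this toy example shows where the hard problems live: in choosing, documenting, and consistently applying the definition behind the labels, not in the model-fitting code.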

Moreover, many of the conversations about the promise of AI presume that the process of identifying and removing problematic online content will resolve the underlying policy issue. But this is a dangerous assumption. Perfect censorship of hate speech on a social media platform does not equate to eliminating hatred from society, or even eliminating hate-fueled interactions on the platform. It’s easy to lose sight of how complex these challenges are when “take down content” becomes a proxy for “solve the problem.”

***

We hope tomorrow’s hearing in Congress yields some fruitful discussion; these are deeply important issues, and it’s crucial that policymakers have good information about the technical, legal, and social complexities around free speech online. But the internet is a global network, and the U.S. audience for this hearing should also be paying attention to the regulatory activity happening abroad.

The EU, for instance, has been extremely active in the area of “illegal content,” and a lot more content is illegal in Europe than in the United States. Between the Code of Conduct on Illegal Hate Speech, the mandatory filtering proposal in the Copyright Directive, and the ongoing consultations on disinformation and illegal content, the EU is charging ahead with a variety of concerning regulatory efforts, some of which put social media platforms in the position of judges, deciding what violates national law. These processes are already shaping how U.S.-based social media platforms handle problematic content, and First Amendment values are increasingly treated as an outlier. The rest of the world is taking this issue seriously; Congress should, too.