CDT Urges Oversight Board to Hold Political Leaders to Higher Standards on Incitement to Violence

The Facebook Oversight Board is taking on its most high-profile case yet: the suspension of former President Donald Trump’s account following the January 6th riot at the U.S. Capitol. This case is coming early in the history of the Oversight Board, which released its first set of opinions in late January. Those decisions reveal a serious effort to grapple with difficult questions of content moderation, by a Board that is still defining its scope and approach.

Account Suspension and Political Leaders

In our comments in the case of the Trump account suspension, CDT offers an analysis of account suspension as a key type of enforcement action on a social media service. The Oversight Board has relied heavily on international human rights law in its analysis of the first set of cases, which is appropriate given Facebook's global user base of more than two billion people. However, international human rights law, much like an analysis under the First Amendment, would likely treat account suspension as a prior restraint on speech, because suspending a person's account preemptively bars them from speaking on that site in the future. There is generally a heavy presumption against the validity of prior restraints.

While the presumption against prior restraints is a crucial limitation on governments' ability to censor speakers, we argue that the analysis of an individual service's decision ought to be more permissive of account suspension as a remedy. Account suspension and related policies prohibiting ban evasion are essential tools in a service's content moderation toolbox: they enable a service to remove users who persistently violate the rules and norms of the site and who undermine the integrity of its policies. While permanent account suspension can seriously limit an individual's ability to reach a particular audience, especially on a site as large as Facebook, it remains materially different from a government-imposed prior restraint, which reaches across multiple services.

We also recommend that the Oversight Board consider the special role played by prominent political figures, as it interprets and applies Facebook’s policies and international human rights law concerning incitement to violence. It can be appropriate to allow a political figure’s posts to remain available if those posts are genuinely newsworthy, but when it comes to incitement to violence, political leaders should be held to a more stringent standard than the average user.

People often imbue statements from such leaders with more legitimacy and authenticity than statements from ordinary individuals. That, combined with their reach, gives political leaders an outsized potential to incite violence. Moreover, as groups like the Dangerous Speech Project have long observed, public officials may control "the power to deploy force against uncooperative audience members," which can lend these officials' explicit or implied threats of violence more weight. The six-part threshold test in the Rabat Plan of Action, from the Office of the UN High Commissioner for Human Rights, makes "the speaker's position or status in the society" a key consideration in assessing whether a statement is likely to incite violence.

As our comments explain, to assess whether to suspend a political leader's account, Facebook should conduct a contextual analysis that evaluates the likelihood that the leader's speech will lead to violence, taking into account factors such as the real-world conditions in which the speech occurs. Given the challenges of doing such an analysis at scale, CDT urges the Oversight Board to make specific policy recommendations to Facebook about improving its procedures for identifying and escalating statements by political leaders that carry a high risk of inciting offline violence.

The Role of the Oversight Board

The Oversight Board is an important experiment in content governance. It will not solve all of the content governance challenges in the world, or even at Facebook; it won't even come close. But it is still a worthwhile effort to apply an independent set of standards and analysis to a major social media service's content moderation decisions. Many of the practical challenges before the Oversight Board, such as how to address abuse while protecting and promoting free expression online and how to hold a social media service accountable to its own content policies, are the same ones facing policymakers in the U.S., the EU, and around the world. The Oversight Board's cases are an opportunity to move beyond hypotheticals in content moderation and to think seriously about concrete situations and the consequences of moderation decisions.

In its first set of decisions, the Oversight Board made multiple recommendations to Facebook about policy changes and clarifications the company should consider. To be most useful, the Oversight Board's policy recommendations will need to be specific and technically feasible, not vague or impossible to implement at scale. At the same time, Facebook should not reject recommendations merely because they would be expensive or logistically challenging to implement, and it should provide detailed explanations for any recommendations it rejects. Ideally, the Oversight Board and Facebook will engage in an ongoing dialogue that enables the Board to craft useful, implementable recommendations while holding Facebook to the highest standards of protecting human rights on its service. And that dialogue itself will prove useful to all of us, from policymakers to users, in better understanding some of the difficult challenges and tradeoffs that content moderation poses.