Key Questions Remain Following Facebook Oversight Board’s Trump Decision

The Facebook Oversight Board has affirmed Facebook’s January 7, 2021 decision to restrict the ability of then-President Donald Trump to post content on his Facebook page and Instagram account. The Board’s decision also raised a number of important questions but left the biggest one unanswered: whether Trump should be on the Facebook platform at all, a call it placed squarely back on Facebook’s shoulders.

The Board gave Facebook six months to decide whether to permanently disable Trump’s account, or instead issue a time-bound account suspension. It also recommended a number of improvements and clarifications to Facebook’s existing policies and enforcement practices.

Among the many issues to unpack in the Board’s decision are important questions about Facebook’s treatment of high-profile users, the Board’s application of international human rights standards, and what the decision means for Trump’s account, as well as those of other world leaders.

How does Facebook treat high-profile individuals, and what does this mean for other users?

The Board’s decision reveals new information about how Facebook treats influential users generally, and Trump, specifically. Facebook told the Board that the same general rules apply to all users. However, the company also explained that it applies a “cross check” system when evaluating potential concerns related to some “high profile” users: content flagged as violating Facebook’s Community Standards undergoes additional internal review in order to “minimize the risk of errors in enforcement.” Under Facebook’s “newsworthiness allowance,” content that violates its policies can remain on the site if it is “newsworthy and in the public interest.”

Facebook says it “has never applied the newsworthiness allowance to content posted by the Trump Facebook page or Instagram account.” At the same time, in addition to the two posts it found to be violations on January 6, Facebook had previously found five violations of its Community Standards in posts by Trump’s Facebook page. A further 20 pieces of content from Trump’s Facebook page and Instagram account were flagged as violating the Community Standards but were ultimately determined not to be violations.

Based on this information, either Facebook had a significant error rate in applying its Community Standards, or it was in fact applying a separate standard to Trump’s accounts through the cross-check system. If human or automated review initially flagged 25 of Trump’s posts as violating the Community Standards, but only five of those posts were ultimately found to violate them, then 20 of the 25 initial flags were wrong: an 80 percent error rate in the application of the standards.
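
As a minimal sketch of that arithmetic, using the figures cited above and assuming all five confirmed violations were among the posts initially flagged, the error rate is simply the share of initial flags that were later overturned:

```python
# Illustrative arithmetic only, based on the figures cited in the Board's decision:
# 5 flagged posts confirmed as violations, 20 flagged posts overturned on review.
confirmed_violations = 5   # flags upheld as genuine violations
overturned_flags = 20      # flags reversed after additional (cross-check) review
total_flags = confirmed_violations + overturned_flags

error_rate = overturned_flags / total_flags
print(f"Initial flags: {total_flags}")                       # 25
print(f"Overturned on review: {overturned_flags}")            # 20
print(f"Error rate among flagged posts: {error_rate:.0%}")    # 80%
```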

Such a high frequency of incorrect takedown determinations by Facebook’s moderation systems has enormous implications for regular Facebook users, who don’t get the white-glove cross-check treatment that high-profile users receive. As a result, they may have large amounts of their content erroneously removed. Facebook’s transparency reports provide further insight into its content moderation error rates, including data about how much actioned content in various categories was restored either as a result of user appeals or without an appeal (that is, through unilateral action by Facebook).

This data, however, clearly doesn’t tell the whole story, and is not presented in a way that allows readers to easily understand the overall percentage of content that is restored. Facebook must provide more information about its error rate in content moderation decisions to improve moderation for all users. It should also examine whether the kinds of errors it identifies in the cross-check process are qualitatively different from the errors it looks for in its regular quality assurance processes. When a different set of staff with different perspectives and incentives reviews high-profile content-removal decisions, what other kinds of context do they consider, and how does this affect removal decision outcomes? The Board’s recommendation that Facebook “report on the relative error rates of determinations made through cross check compared with ordinary enforcement procedures” will help shed light on Facebook’s content moderation error rate and the impact it has on the vast majority of users subject to ordinary enforcement procedures. 

On the other hand, if Facebook really does apply different standards to political leaders and other influential users, it must clearly and publicly explain them. In our comments to the Board on this case, for example, CDT recommended that Facebook hold world leaders to a more stringent standard on incitement to violence because high-ranking political figures have an outsized potential to influence public sentiment and to incite violence. The Board endorsed this view when it found that context — including the speaker’s status as a political figure — matters when assessing the risk of harm caused by a post. The Board also correctly called on Facebook to provide more information about the application of the newsworthiness allowance, including how it applies to influential accounts, and the cross-check review.

Will the Board ever tell Facebook that its policies violate substantive human rights standards?

The Board’s decision also raises questions about how it applies substantive human rights standards to Facebook’s Community Standards. Facebook justified Trump’s suspension under its “Dangerous Individuals and Organizations” policy, which prohibits “content that praises, supports, or represents” terrorism, hate crimes, and other “violating events” and individuals who have engaged in acts of organized violence. 

The Board examined Facebook’s application of the Dangerous Individuals and Organizations policy under international law standards that permit restrictions on expression only when those restrictions are (1) clear and accessible, (2) designed for a legitimate aim, and (3) necessary and proportionate to the risks of harm. Under the first prong of that test, the Board criticized the Dangerous Individuals and Organizations policy as vague, noting that it has previously held that the policy “leaves much to be desired” and falls short of international human rights standards on legality because it lacks clarity. However, because it concluded that Trump’s posts at issue “fell squarely within the type of harmful events set out in Facebook’s policy,” the Board upheld the application of the policy to those posts.

In analyzing the final prong, the Board applied the six factors of the Rabat Plan of Action — which CDT also referred to in its comments to the Board — and concluded that Trump’s violation was “severe” in terms of human rights harms. 

While its ultimate holding that Trump’s speech squarely violated the Dangerous Individuals and Organizations policy is correct, the Board must examine the policy more closely under international human rights standards. Broad prohibitions on speech “glorifying” terrorism, violence, or other crimes have been repeatedly criticized by human rights organizations and United Nations Special Rapporteurs as inherently vague and vulnerable to abuse. In practice, a great deal of legitimate, protected, and non-terroristic speech has been swept up in anti-glorification laws and policies.  

The Board must ensure that Facebook revises its Dangerous Individuals and Organizations policy to meet human rights standards. Facebook has publicly committed to implementing past Board recommendations to clarify the policy, but it does not appear to have actually done so yet. For now, the Board seems to give Facebook a pass on reforming the policy to meet human rights standards for clarity, generally, by holding that the application of the policy in this case, specifically, meets those standards.

The Board did not examine Facebook’s Community Standard on Violence and Incitement, because Facebook did not rely on that policy to justify its action against Trump’s accounts. (In contrast, Trump’s counsel at the American Center for Law and Justice focused almost entirely on incitement, barely even addressing the Dangerous Individuals and Organizations policy.) A minority of the Board members would have considered that standard and held that Trump’s posts violated it, too. Whether the Board will apply the minority’s context-based and relatively low standard for incitement in future cases remains to be seen. The Board’s recognition of the applicability of the Rabat Plan of Action when “assess[ing] the contextual risks of potentially harmful speech” suggests that this standard may apply in future cases where the Board directly considers the Violence and Incitement policy.

What does this mean for Trump’s account – and for other world leaders?

The question on many people’s minds is what the impact of the Board’s decision will be on Trump’s Facebook and Instagram accounts, and those of other world leaders. The Board held that Facebook cannot indefinitely suspend Trump, but it didn’t make the final call on whether and when he should be reinstated. Rather, it put the ball back in Facebook’s court to make the final decision, within six months, of whether to permanently ban Trump or suspend his accounts for a “time-bound” period. Whatever decision Facebook makes must be based on “the gravity of the violation and the prospect of future harm,” and be consistent with Facebook’s rules.

The Board’s policy recommendations lay out a series of proposed changes to Facebook’s rules that could impact content removals and account suspensions or bans of political leaders in the future. In particular, the Board calls on Facebook to act quickly when posts by influential users pose a high probability of imminent harm, and to consider the heightened risk posed by posts by political leaders encouraging, legitimizing, or inciting violence. Facebook must respond to the Board’s recommendations within 30 days, though it is free to adopt or reject them, in whole or in part.

While some world leaders criticized social media companies for barring or suspending Trump after the January 6 attack on the Capitol, it is not clear what democratically supervised process would or could have responded to Trump’s incitement during that attack. Certainly, in the United States, the ability of any government actor to require the removal of Trump’s social media posts would have faced an exceptionally, and perhaps impossibly, high bar. Social media companies and other online services have the flexibility to make editorial decisions and respond quickly to lawful-but-abusive content that state actors simply do not.

Still, there are legitimate concerns over the unchecked power of Facebook and other social media companies to suspend users’ accounts, which CDT has in the past analogized to a prior restraint on speech. Shutting down people’s accounts can have an enormous impact on their access to an audience, especially for a service the size of Facebook. The Board intends to act as a check on Facebook’s unilateral power in this domain. Not only does the Board make decisions in individual cases, but its policy recommendations can have a far broader impact on the millions of users whose content Facebook moderates but who will never have an appeal heard by the Board. 

At the same time, because the Board’s policy recommendations are merely advisory, their power is greatly limited. As others have noted, Facebook’s commitments to the Board’s past policy recommendations have been vague or even misleading. The Board also has no process for reviewing or determining whether Facebook actually implements its recommendations.

Without the formal power to make its policy recommendations binding, the Board must rely on transparency and public pressure to influence Facebook. The Board should publicly review and comment on Facebook’s responses to its recommendations. It should provide periodic public updates on Facebook’s implementation of the recommendations that the company commits to following, and the Board should call on Facebook to explain itself if the company does not actually follow them.

Ultimately, the most powerful oversight the Board can provide may be in the additional transparency it can bring to Facebook’s content moderation processes. The Board receives much more detailed explanations of Facebook’s decisions to remove particular posts and accounts than regular users ever do. Its decisions offer insight into Facebook’s internal processes and an opportunity to track whether Facebook is implementing recommendations to bring its policies and processes in line with international human rights standards. This kind of information is essential to having more informed public policy debates about Facebook’s role in our global information ecosystem and its consequences for individuals’ human rights.