

Tech Policy & COVID: What We’ve Learned About Access to Data on Automated Content Moderation

In March 2020, several social media and content-sharing platforms announced that they would send their human workforces home and increase their use of automated tools for content moderation as a result of the coronavirus pandemic. Journalists and researchers immediately saw an opportunity for a natural experiment to test questions about the efficacy, limits, and desirability of automated content moderation.

Because access to data is critical to studying these questions, in April 2020, CDT and a coalition of 75 other organizations and individuals sent an open letter to social media and content-sharing platforms, calling on them to preserve certain information and data and share it with journalists and researchers. Among other things, the letter urged companies to preserve and publicly report data about automated content removal during the COVID-19 pandemic, such as information about which takedowns did not receive human review, whether users tried to appeal those takedowns (when that information is available), and reports that were not acted upon. 

Over a year later, no social media or content-sharing platform has committed to all of the requests in the letter. CDT continues to urge companies to provide detailed information about the impact of COVID-19 on their content moderation efforts, and about their use of automation in general. 

Transparency reports by several companies, including Facebook, YouTube, and Twitter, do contain some information about automated content blocking and removal during the pandemic, as urged in the April 2020 letter. These transparency reports are a welcome if limited window into the influence of the pandemic on content moderation. 

Researchers and journalists have used these reports and other available information to begin studying platforms’ content moderation practices during the pandemic and their impact on users. For example, they have examined how companies combated disinformation and misinformation about COVID-19 or the 2020 general election, and whether platforms should prioritize certain appeals of content moderation decisions.

In this post, we summarize some of what key platforms’ transparency reports have shown.

Facebook

Facebook’s Community Standards Enforcement Reports show the percentage of content that Facebook and Facebook-owned Instagram take action on before users report it, because automated systems or the company’s human reviewers flag it first. For many categories, that percentage is consistently high. However, around the time Facebook began to rely more heavily on automation, certain categories of content, such as bullying and harassment and hate speech, saw increases in the percentage of content actioned on both Facebook and Instagram before users reported it. This data suggests that automation may have played a greater role in detecting bullying, harassing, and hateful content during the pandemic than in the past.
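
As a rough illustration (not drawn from Facebook’s own methodology documents), the sketch below shows one way a proactive-detection percentage of this kind could be computed; the function name and the counts are hypothetical.

```python
# Hypothetical sketch: one way a proactive-detection percentage could be
# computed. The counts below are illustrative, not real report data.

def proactive_rate(actioned_proactively: int, actioned_after_user_report: int) -> float:
    """Share of actioned content the platform found before any user reported it."""
    total_actioned = actioned_proactively + actioned_after_user_report
    if total_actioned == 0:
        return 0.0
    return actioned_proactively / total_actioned

# Illustrative counts for one policy category in one quarter.
found_by_automation_or_reviewers = 9_500   # actioned before any user report
reported_by_users_first = 500              # actioned only after a user report

rate = proactive_rate(found_by_automation_or_reviewers, reported_by_users_first)
print(f"Proactive rate: {rate:.0%}")  # -> Proactive rate: 95%
```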

Facebook’s analysis of its August 11, 2020 Community Standards Enforcement Report—which covered the three months immediately after Facebook increased its use of automation—explicitly addressed the effect of COVID-19 on content moderation. News reports highlighted drops in removals of child sexual abuse material from Instagram and suicide and self-injury content on both Facebook and Instagram. Facebook attributed these changes to human moderators working from home. In contrast, Facebook’s proactive detection of hate speech increased—the result of improvements in its automated detection technologies, Facebook claims—and Facebook removed more than double the amount of hate speech during the same period.  

Facebook’s subsequent transparency reports, in November 2020 and February 2021, showed how a return of human moderators initially caused removal of child sexual abuse material to rebound before declining again, reportedly due to a “technical issue.” Facebook attributed continued increases in removals of hate speech shown in its February report to improvements in its AI systems.

Of course, the number of pieces of content removed does not tell the full story.  For example, the Electronic Frontier Foundation examined Facebook’s August 2020 report and noted that, because the company limited appeals of content moderation decisions during the pandemic, it may have permanently deleted more content that was incorrectly taken down than in the past. 

The Facebook Oversight Board criticized Facebook for failing to inform users when it uses automation to make enforcement decisions and for limiting human review of those decisions, and it urged Facebook to “restore both human review of content moderation decisions and access to a human appeals process to pre-pandemic levels as soon as possible.” These statements shine a spotlight on the impact of automation on Facebook’s content moderation during the pandemic. However, the Oversight Board reviews a minuscule fraction of the vast number of content moderation decisions that Facebook makes, limiting the role it can play in providing transparency into Facebook’s content moderation decisions and processes.

YouTube

YouTube’s Community Guidelines enforcement reports also reveal trends about the impact of automation. YouTube’s reports show the number of videos that were flagged by automated systems and removed from the site. From April to June 2020, when YouTube first implemented greater automated review and removal of content, the number of videos removed more than doubled compared to the previous quarter. The vast majority of that increase was in videos that were flagged automatically. At the same time, the number of successful appeals grew fourfold, suggesting that YouTube incorrectly removed greater numbers of videos when it increased its use of automation.

Indeed, in a blog post accompanying its August 2020 transparency report, YouTube said that it made a purposeful decision to over-enforce its policies during the pandemic and use its automated systems “to cast a wider net” so that more potentially harmful content would be removed quickly. It did so even knowing that “many videos would not receive a human review, and some of the videos that do not violate [YouTube’s] policies would be removed.” As might be expected, YouTube has been repeatedly criticized throughout the pandemic for erroneously removing or demonetizing content, which the company has blamed on its increased reliance on automation.

Twitter

Twitter’s Rules Enforcement report gives extremely limited insight into how automation affected its content moderation during the pandemic. The data in its report covering content moderation actions from January through June 2020 does not specifically break down how many accounts or posts were flagged by automated systems. However, the report’s analysis does note the percentages of content that was “proactively identified” in two categories, terrorism/violent extremism and child sexual exploitation.  

In its blog post accompanying this report, Twitter said generally that the pandemic caused “significant and unpredictable disruption to [its] content moderation work and the way in which teams assess content and enforce our policies.” Twitter specifically linked a 35 percent decrease in the number of accounts actioned under its hateful conduct policy with “disruptions to workflow” during the pandemic. However, it gave no explanation for decreases in the number of accounts actioned under its policies prohibiting abuse or harassment, the promotion of suicide and self-harm, and non-consensual nudity.

Other Companies 

Other companies’ transparency reports provide a range of information about how they relied on automation for content moderation during the pandemic. For example, TikTok reported that, because of the pandemic, it relied on “technology to detect and automatically remove violating content in some markets, such as Brazil and Pakistan.” TikTok flagged and removed over eight million videos automatically for violating its community guidelines—about nine percent of all videos removed. In contrast, transparency reports by Reddit and Pinterest provide no information about the pandemic’s effect, if any, on their content moderation practices.

Journalists have used the information in some companies’ transparency reports to begin analyzing automated content moderation. But true transparency requires companies to go beyond reports about the numbers and categories of content that is removed, and to provide greater context about the technology and processes they use when they moderate content and develop content moderation policies. At minimum, CDT recommends that social media and content-sharing platforms make public: 

  1. the total number of accounts and content flagged by automated systems;
  2. the number of those accounts and content that companies determine violate their standards;
  3. whether and how many of those determinations are made by human reviewers or automated systems;
  4. the number of appeals of accounts or content flagged or actioned by automated systems; and
  5. the number and percentage of those appeals that are unsuccessful, successful, or not acted upon.

Companies should also analyze this data to calculate and publish an error rate for their automated content moderation processes.
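
As a rough sketch of how the five data points listed above could support such a calculation, the example below uses hypothetical field names and made-up figures; it treats successful appeals of automated decisions as a lower-bound proxy for errors, which is only one of several reasonable definitions.

```python
from dataclasses import dataclass

# Hypothetical sketch of the five recommended data points, plus one possible
# error-rate calculation. Field names and figures are illustrative, not drawn
# from any company's actual reporting.

@dataclass
class AutomatedModerationReport:
    flagged_by_automation: int    # 1. accounts/content flagged by automated systems
    found_violating: int          # 2. of those, how many were judged violating
    decided_by_humans: int        # 3. violation decisions made by human reviewers
    decided_by_automation: int    # 3. violation decisions made by automated systems
    appeals_filed: int            # 4. appeals of automated flags or actions
    appeals_successful: int       # 5. appeals that were granted
    appeals_unsuccessful: int     # 5. appeals that were denied
    appeals_unresolved: int       # 5. appeals not (yet) acted upon

    def error_rate(self) -> float:
        """Successful appeals as a share of automated enforcement decisions.

        This is a lower bound: users who never appeal an incorrect
        removal are not counted.
        """
        if self.decided_by_automation == 0:
            return 0.0
        return self.appeals_successful / self.decided_by_automation


# Illustrative quarter (made-up numbers).
report = AutomatedModerationReport(
    flagged_by_automation=1_000_000,
    found_violating=800_000,
    decided_by_humans=200_000,
    decided_by_automation=600_000,
    appeals_filed=50_000,
    appeals_successful=12_000,
    appeals_unsuccessful=30_000,
    appeals_unresolved=8_000,
)
print(f"Estimated automation error rate: {report.error_rate():.1%}")  # -> 2.0%
```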

Lawmakers, advocates, and citizens are pushing social media and content-sharing companies to be more transparent about how their decisions shape our information environment and users’ opportunities to speak online. Regulators in the U.S., EU, and around the world are considering mandating transparency reporting and requiring these companies to provide data to third-party regulators and independent researchers. 

Lawmakers must carefully calibrate such laws and regulations to avoid or lessen the potential negative consequences that transparency mandates and data retention can have for free expression. (For example, the financial cost of complying with transparency mandates may discourage existing websites from adding user-generated content, like comment sections, and new platforms for user-generated content from entering the market.) Companies must make more data and information about their use of automation in content moderation available to independent researchers and the public to inform public policy conversations.