Companies Finally Shine a Light into Content Moderation Practices
One of the easiest ways for a social media platform to take over a news cycle is with a vocal user and a controversial content or account removal. From Diamond and Silk on Facebook to Innocence of Muslims on YouTube to Milo Yiannopoulos on Twitter, the big content-hosting platforms have come under fire time and time again for enforcing their policies in ways that, to many, seem inconsistent, opaque, or even illogical. Debates over these and other controversial decisions have been frustrated by a severe lack of publicly available data about how our information environment is shaped by content moderation activity. As Congress, academics, civil society advocates, and others have debated the role and responsibilities of these platforms, they (and we) have been forced to do so with a blindfold on, lacking even basic information about how online content is handled and evaluated by internet companies.
Now, however, YouTube and Facebook have taken a big step in the right direction, with both companies publishing—for the first time—detailed information about how they actually enforce their content policies. These disclosures, quantitative data from YouTube and qualitative context from Facebook, will bring important new information to those debates, making for more informed, evidence-based policymaking.
YouTube: Community Guidelines Enforcement
In a report published Monday evening, YouTube became the first major social media platform to disclose how much content it removes under its own content policies. The “Community Guidelines Enforcement Report” offers a data-heavy peek at flagged and removed content, as well as insight into the sources and subject matter of the flagging.
The YouTube data covers a range of topics: who is doing the flagging (e.g., humans vs. automated systems; regular users vs. other categories of flaggers, including governments), the subject matter of the flagged content (e.g., sexual content vs. hateful/abusive content vs. promotion of terrorism), the volume of content and flags, and the geographical distribution of flags by country. In addition, YouTube introduced a “Reporting History dashboard” that allows individual users to view the status of videos they’ve flagged for review.
YouTube parent company Google has previously been a leader in transparency reporting by internet companies, so it’s no surprise that they beat others to the punch with the first-ever report on policy enforcement and content moderation actions. The YouTube report is a much-needed and welcome first step for Google, helping to provide a fuller picture about when and how content is removed from its platform. The data about sources of flagging, for example, demonstrate that the platform relies on a combination of automated detection and human flaggers. One of the more notable insights here is the proportion of videos removed after automated flagging (6.7 million) vs. videos removed after human flagging (1.6 million). In a policy environment rife with calls for platforms to use automation to tackle everything from copyright infringement to terrorist propaganda, it’s crucial to understand just how much major platforms are already relying on automated flagging—“more automation” would be no panacea for identifying all content that is either illegal or falls outside a company’s guidelines.
CDT and many other human rights organizations have been calling for this type of reporting for years, and we’re glad to see YouTube take this initial step. Of course, the report also raises a number of questions, and there is much more we’d like to see YouTube disclose. The data is largely quantitative, with the exception of a handful of case studies that briefly discuss actual examples of flagged videos. The report would benefit from additional “qualitative” transparency of this sort. Further, while some terms are linked to or explained (e.g., “trusted flagger”), others are used with little context. For example, how does an NGO flagger differ from a trusted flagger—and how are these numbers calculated, considering that “[t]rusted flagger program members include…NGOs”?
Likewise, the report notes that 73 of the 8 million removed videos had been reported to YouTube by a “government agency.” This raises the immediate question of how government agencies/actors are identified. The data on that category of flagger is likely limited to self-identified government agencies/actors, which, if true, would provide helpful context for the sections of the report that focus on who is doing the flagging. YouTube would also do well to connect the dots between terms used in the report—such as the categories of reasons content may be flagged—and other relevant resources, like the actual Community Guidelines, where those terms are defined.
Facebook: Internal Enforcement Guidelines and Expanded Appeals Process
Tuesday morning, Facebook released a version of its Internal Enforcement Guidelines, which describe in greater detail the factors Facebook moderators consider when applying its Community Standards to a flagged post. The release is the yin to YouTube’s yang: where one is heavily focused on quantitative data and metrics, the other is largely focused on explanatory and qualitative information. Both are positive developments, and each platform (and countless others) would do well to add the counterpart to its transparency efforts.
This effort should win Facebook some deserved praise. After years of criticism for a lack of transparency around its content moderation practices, the company is addressing those criticisms head-on with this release. The guidelines offer an unflinching look at some of the content Facebook has to handle, including bigotry and hate, gruesome imagery, and unthinkable violence. They demonstrate an effort to articulate some bright lines separating acceptable speech from the unacceptable. But if there is a single takeaway from this release, it’s that content moderation is much more of an art than a science: There is often no single ‘right’ answer to the tough questions moderators face, and the line between what does and does not violate a platform’s rules can be blurry—often by necessity, given the many different languages, cultures, and contexts its users come from.
Those languages, cultures, and contexts also speak to the importance of another announcement from Facebook yesterday: the company will, for the first time, allow users to appeal decisions on individual posts. Previously, the appeals process was open only to pages and profiles. While the company ultimately hopes to expand appeals to all individual posts, “as a first step” it is limiting the feature to “posts that were removed for nudity / sexual activity, hate speech or graphic violence.” This is an absolutely essential update for Facebook: no content moderation system, for a platform of any size, is complete without some ability for users to appeal removal decisions. To be meaningful, an appeal should let users submit additional information rather than simply have the same rules re-applied to the same context by the same rank-and-file moderator, and, as an additional safeguard, it should offer the opportunity to re-appeal to a “higher court” (so to speak) for further consideration.
Facebook’s release of its content standards is a welcome one, and the company should also take a page from YouTube’s playbook and provide data about how those standards are enforced. Quantitative data and metrics would go a long way toward contextualizing the information released yesterday. It’s clear that Facebook’s standards are a work in progress, ever-changing and adapting. Additional insight into how those changes and adaptations occur—what prompts them and how they are decided—would further shed light on the practices of a company that, like it or not, functions as a virtual town square for many people.
A Means to an End
The recent publications and greater operational transparency from both YouTube and Facebook are welcome news. Transparency, however, is not itself an end. Rather, it is a means to an end: creating and fostering responsible digital platforms that truly empower their users to express themselves, shape their own communities, and access information of all kinds. Only through efforts like those of YouTube and Facebook can we begin to hold these platforms accountable for how they treat and interact with their users—whether that’s user content, user accounts, or user data.