Three Lessons in Content Moderation from New Zealand and Other High-Profile Tragedies

Following the terrorist attacks on two mosques in Christchurch, New Zealand, social media companies and internet platforms have faced renewed scrutiny and criticism for how they police the sharing of content. Much of that criticism has been directed at Facebook and YouTube, both platforms where video of the shooter’s rampage found a home in the hours after the attacks. The footage was filmed with a body camera and depicts the perpetrator’s attacks over 17 minutes. The video first appeared on Facebook Live, the social network’s real-time video streaming service. From there, Facebook says, the video was uploaded to a file-sharing site, a link was posted to 8chan, and the footage began to spread.

While the world struggles to make sense of these horrific terrorist attacks, details about how tech companies handled the shooter’s video footage and written manifesto have been shared, often by the companies themselves. Collectively, these details, along with the public discourse on and reaction to what the New York Times called “a mass murder of, and for, the internet,” make clear three fundamental facts about content moderation, especially when it comes to live and viral content:

1. Automated Content Analysis is Not a Magic Wand

If you remember nothing else about content moderation, remember this: There is no magic wand. Nothing can be waved over a platform to instantly remove all terrorist propaganda, hate speech, and graphically violent or otherwise objectionable content. There are some things that automation and machine learning are really good at: functioning within a specific, well-defined environment (rather than at massive scale) and identifying repeat occurrences of the exact same (completely unaltered) content, for example. And there are some things they are really bad at: interpreting nuance, understanding slang, and minimizing discrimination and social bias, among many others. But perfect enforcement of a complex rule against a dynamic body of content is not something that automated tools can achieve. For example, the simple change of adding a watermark was enough to defeat automated tools aimed at removing video of the New Zealand shooter.
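
To make that watermark example concrete, here is a minimal sketch in Python, with made-up byte strings standing in for video files, of why exact-match fingerprinting breaks down: a cryptographic hash changes completely when even a few bytes of a file are altered, so a blocklist keyed on the original upload never matches a lightly edited copy. This is an illustration of the general problem, not a description of how any particular platform’s matching system works.

    import hashlib

    def exact_fingerprint(data: bytes) -> str:
        """A cryptographic hash: any change to the input yields a completely different digest."""
        return hashlib.sha256(data).hexdigest()

    # Made-up byte strings standing in for video files.
    original_video = b"...bytes of the original video..."
    watermarked_video = original_video + b"WATERMARK"  # stand-in for a trivial edit such as a watermark

    print(exact_fingerprint(original_video))
    print(exact_fingerprint(watermarked_video))
    # The two digests have nothing in common, so a blocklist keyed on the original
    # file's hash never matches the lightly edited re-upload.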

Some have therefore suggested banning live video altogether. However, that overlooks activists’ use of live streams to hold governments accountable and report on corruption as it happens, among other uses. Further, the challenges of automated content analysis are by no means limited to video. As a leaked email from Google to its content moderators reportedly warned: “The manifesto will be particularly challenging to enforce against given the length of the document and that you may see various segments of various lengths within the content you are reviewing.”

All of this is to reiterate: There is no magic wand, and there never will be. There is absolutely a role for automated content analysis when it comes to keeping certain content off the web. The use of PhotoDNA and similar systems, for example, has reportedly been effective at ensuring that child pornography stays off platforms. However, the nuance, news value, and intricacies of most speech should give pause to those calling for mass implementation of automated content removal and filtering.
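
For contrast with the exact-hash sketch above, the toy example below illustrates the general idea behind matching known imagery: compute a coarse perceptual fingerprint that survives small edits and compare fingerprints by how many bits differ. This is a deliberately simplified illustration of the concept, using a hypothetical four-pixel image; PhotoDNA itself is proprietary and works differently in its details.

    # A deliberately simplified "average hash" over a hypothetical 2x2 grayscale image.
    # The point is only that a coarse perceptual fingerprint can survive small edits,
    # unlike the exact cryptographic hash sketched earlier.

    def average_hash(pixels: list[list[int]]) -> list[int]:
        """Threshold each pixel of a small grayscale grid against the grid's mean."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        return [1 if p >= mean else 0 for p in flat]

    def hamming_distance(a: list[int], b: list[int]) -> int:
        """Count the positions at which two fingerprints differ."""
        return sum(x != y for x, y in zip(a, b))

    known_image    = [[10, 200], [30, 220]]   # a previously catalogued image (tiny stand-in)
    lightly_edited = [[12, 198], [33, 215]]   # the same image with minor pixel-level changes

    distance = hamming_distance(average_hash(known_image), average_hash(lightly_edited))
    print(distance)  # 0: the fingerprints still match despite the edits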

2. The Scale, Speed, and Iterative Nature of Online Content – Particularly in This Case – Are Enormous

It is a long-standing fact of the internet that it enables communication on a vast scale. Reports from YouTube and Facebook about the New Zealand attack seem to indicate that this particular incident was unprecedented in its volume, speed, and variety. Both companies have dedicated content moderation staff, and it would be easy to fall into the trap of thinking that those teams could handily keep up with copies of a single live video. But that overlooks a couple of realities:

  • The videos are not carbon copies of each other. Any number of changes can make identifying variations of a piece of content difficult. The iterations could include different audio, animation overlays, cropping, color filters, use of overlaid text and/or watermarks, and the addition of commentary (as in news reporting). Facebook alone reported 800 “visually distinct” videos.
  • There is other content – the normal, run-of-the-mill material – that continues to be posted and must be handled by the same staff now scrambling to keep up with the roughly 17 copies of the video being uploaded every second to a single platform (Facebook, in this case; YouTube’s numbers were somewhat lower, but still reached one upload per second and ultimately hundreds of thousands of copies). A rough calculation of what those rates add up to follows this list.
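
To put those per-second figures in perspective, here is the back-of-envelope calculation referenced above. It assumes, unrealistically, that the reported peak rates were sustained for a full 24 hours, so treat the results as rough upper bounds rather than reported totals.

    # Rough arithmetic only: the per-second figures above are reported peaks,
    # not sustained averages, so these are upper bounds for a single day at that pace.
    SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

    facebook_peak_uploads_per_second = 17
    youtube_peak_uploads_per_second = 1

    print(facebook_peak_uploads_per_second * SECONDS_PER_DAY)  # 1,468,800 copies per day at that rate
    print(youtube_peak_uploads_per_second * SECONDS_PER_DAY)   # 86,400 copies per day at that rate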

It’s worth noting here that not a single person reported the live video stream to Facebook for review. The video was reportedly viewed “fewer than 200 times” while it was live, but the first user report came 12 minutes after the stream ended – a full 29 minutes after the broadcast began. That’s a lot of time for a video to be shared and reposted by people motivated to ensure it spread widely, not only on Facebook but on other sites as well.

In addition to illustrating the challenges of automated content review, the New Zealand attacks exposed weaknesses in the companies’ own systems, particularly when dealing with emergencies at scale. YouTube, for example, was so overwhelmed by the flood of videos that it opted to circumvent its standard human review process to hasten their removal. Facebook, too, struggled. The company has a process for handling particularly sensitive content, such as an individual threatening to commit suicide; however, that process wasn’t designed to address a live-streamed mass shooting and likely could not easily be adapted to this emergency.

3. We Need Much Greater Transparency

As non-governmental and civil society organizations have hammered home for years, there needs to be more transparency from tech companies about the policies, processes, and practices that affect user rights. One of the more promising developments of 2018 in this space was the release of reports by YouTube, Twitter, and Facebook providing a first peek under the hood of their content enforcement efforts. While those first-year reports leave a long way to go, the reaction to their publication shows a hunger for, and deep interest in, further information from tech companies about their handling of content.

Among companies’ next steps should be transparency around specific major incidents, including the New Zealand attacks. Social media platforms are still reeling from over a week of whack-a-mole with a heavy side of criticism. But once they are able to identify trends or data points across the incident, they should share them publicly and contextualize them appropriately. For example, how did Facebook identify and handle the 800 distinct versions of the video? Did those include uses of the video in news reporting? How was the Global Internet Forum to Counter Terrorism – an entity formed so that companies can share information on images and videos – engaged?

One challenge for companies when providing transparency into their policies and practices is doing so without providing a roadmap to those looking to circumvent the platforms’ systems. However, extant transparency reporting practices – around, for example, government requests for user data – suggest companies have found a balance between transparency and security, tweaking their reports over time and contextualizing the data within their larger efforts.

What’s Next?

There are no quick fixes. There are no magic wands. We will continue to debate and discuss and argue about whether the tech companies “did the right thing” as they responded to the New Zealand shooter’s video, his manifesto, and public reaction to both. But as we do so, we need transparency and insight into how those companies have responded, and we need a shared understanding of the tools and realities of the problem.

As details have emerged about the attacks in New Zealand and how they played out on social media, much of the analysis around internet companies’ handling of user content has fallen into one of two buckets:

  • Tech companies aren’t doing enough and refuse to use their enormous money/power/tools/resources to address the problem; or
  • The problem is unsolvable because the volume of content is too great for platforms to handle effectively.

The problem with presenting the issue as this dichotomy is that it overlooks – really, completely ignores – the fact that an unknown number of viewers watched the live video but did not report it to Facebook. Perhaps some viewers were confused, and maybe others believed it was a joke. But the reality is that some people will always choose to use technology for harm. Given that fact, the question that will ultimately guide this debate and shape how we move forward is: What do we want our social media networks to be? Until we can answer that question, it will be hard, if not impossible, to address all of these challenges.