Real Time Threats: Analysis of Trust and Safety Practices for Child Sexual Exploitation and Abuse (CSEA) Prevention on Livestreaming Platforms
This report is co-authored by Robert Gorwa.
Executive Summary
In recent years, a range of new online services have emerged that facilitate the ‘livestreaming’ of real-time video and audio. Through these tools, users and content creators around the world can easily broadcast their activities to potentially large global audiences, facilitating participatory and generative forms of collaborative ‘live’ gaming, music making, discussion, and other interaction. The rise of these platforms, however, has not been seamless: these same tools are used to disseminate socially problematic and/or illegal content, from promotion of self-harm and violent extremism to child sexual exploitation and abuse (CSEA) materials.
This report examines the range of trust and safety tools and practices that platforms and third-party vendors are developing and deploying to safeguard livestreaming services, with a special focus on CSEA prevention. Moderating real-time media is inherently technically difficult for firms seeking to intervene responsibly: much livestreaming content is “new”, produced on the spot, and thus by definition not “known” and cannot be matched against previously identified harmful material through hash-based techniques. Firms seeking to analyze livestreams must instead rely on comparatively inefficient and potentially flawed predictive computer vision models, on creative use of the stream audio (e.g., through transcription and text classification), and/or on other emerging techniques, such as “signals”-oriented interventions based on the behavioral characteristics of suspicious user accounts.
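To make this trade-off concrete, the following is a minimal sketch (in Python, assuming the open-source Pillow and imagehash libraries) of how a sampled livestream frame might be screened: a perceptual hash lookup only catches previously identified material, so genuinely new content falls through to a predictive model, represented here by a hypothetical `predict_csea_risk` stub. It illustrates the general technique, not any particular platform's implementation.

```python
# Illustrative sketch only: screening one sampled frame from a livestream.
# Assumes Pillow and imagehash are installed; the hash list, thresholds,
# and classifier are hypothetical placeholders.
from PIL import Image
import imagehash

# Perceptual hashes of previously identified ("known") material; in a real
# deployment these would come from a hash-sharing database.
KNOWN_HASHES = {imagehash.hex_to_hash("c3d4a1f0b2e59687")}
MATCH_THRESHOLD = 8   # max Hamming distance to count as a hash match
RISK_THRESHOLD = 0.9  # classifier score above which a frame is escalated


def predict_csea_risk(frame: Image.Image) -> float:
    """Hypothetical predictive computer vision model; returns a risk score in [0, 1]."""
    return 0.0  # placeholder: a real system would run a trained classifier here


def screen_frame(frame: Image.Image) -> str:
    """Screen one sampled frame: hash lookup first, then predictive fallback."""
    frame_hash = imagehash.phash(frame)

    # Hash matching only helps when the content is already "known".
    if any(frame_hash - known <= MATCH_THRESHOLD for known in KNOWN_HASHES):
        return "match_known_material"

    # New, never-before-seen content must fall back to a predictive model,
    # which is slower and can produce false positives and false negatives.
    if predict_csea_risk(frame) >= RISK_THRESHOLD:
        return "escalate_for_human_review"
    return "no_action"
```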
Based on a review of publicly available documentation from livestreaming platforms and from vendors that offer content analysis services, as well as interviews with people working on this problem in industry, civil society, and academia, we find that industry is taking three main approaches to address CSEA in livestreaming:
- Design-based approaches – Steps taken before a user is able to stream, such as friction and verification measures intended to make it more difficult for users, particularly suspicious ones, to go live. For example, some platforms require a user to have a threshold number of followers or subscribers before they can livestream, preventing a bad actor from spontaneously creating an account and broadcasting harmful content.
- Content-analysis approaches – Various forms of manual or automated content detection and analysis that can operate on video, audio, and text as content is livestreamed. Examples include taking sample frames from livestreams and checking whether they match hashes of known CSEA material (as sketched above); using machine learning classifiers to detect CSAM in live video; and applying predictive analysis to text transcriptions of live audio or to user chats in livestreams.
- Signal-based approaches – Interventions based on the behavioral characteristics and metadata of user accounts. For example, platforms may share certain account metadata to help identify bad actors as they move from platform to platform, or use behavioral signals to identify accounts engaged in potentially suspicious behavior that prompts further investigation. A minimal sketch of this kind of signal scoring follows this list.
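To illustrate the signal-based idea, the sketch below combines a few hypothetical behavioral signals into a suspicion score that gates further investigation before an account is allowed to go live. The field names, weights, and thresholds are invented for illustration and are not drawn from the report's interviews or any vendor's product.

```python
# Illustrative sketch only: scoring an account on behavioral signals.
# All field names, weights, and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    account_age_days: int
    follower_count: int
    prior_reports: int            # user reports received by this account
    known_bad_device_match: bool  # e.g., metadata shared across platforms


def suspicion_score(s: AccountSignals) -> float:
    """Combine simple heuristics into a single suspicion score in [0, 1]."""
    score = 0.0
    if s.account_age_days < 7:
        score += 0.3              # brand-new accounts carry more risk
    if s.follower_count < 10:
        score += 0.2              # below a typical follower threshold
    score += min(s.prior_reports, 5) * 0.1
    if s.known_bad_device_match:
        score += 0.4              # matches shared cross-platform metadata
    return min(score, 1.0)


# Example: a day-old account with no followers and a cross-platform metadata
# match exceeds a review threshold of 0.7 and is flagged, not auto-banned.
account = AccountSignals(account_age_days=1, follower_count=0,
                         prior_reports=0, known_bad_device_match=True)
if suspicion_score(account) >= 0.7:
    print("queue for further investigation before allowing livestream")
```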
In part because of the challenges of livestream content detection, the way in which industry tackles the problem of CSEA and other harmful content is evolving. As one interviewee put it, the goal is for firms to engage more actively in reducing the ability to use their platforms for CSEA dissemination: not only operating in a “detect and report” mode but also, aspirationally, moving towards a “predict and disrupt” model of trust and safety, more akin to that used in areas such as cybersecurity and fraud prevention.
Industry approaches to CSEA raise several concerns. First, there is a general trend to eschew transparency and clarity in how these systems operate and are deployed, ostensibly to prevent bad actors from circumventing them, but potentially to the detriment of victims, users, policymakers, and other stakeholders. Second, and related to the first point, it is almost impossible to determine how effective these approaches are, what gaps they leave, whether they result in overmoderation of legitimate content, and how well they serve the needs of all stakeholders. Third, these approaches introduce significant security, privacy, free speech, and other human rights risks that can undermine the safety of the minors that they are meant to protect as well as that of users in general.
To help address these concerns, we highlight four areas for improvement:
- Greater transparency is needed to help evaluate and improve efforts to address CSEA on livestreaming platforms. For example, there are currently no performance metrics that firms can use to test and compare the accuracy of the measures they take, or that experts, policymakers, and researchers can use to better understand their efficacy and the extent of what is really possible.
- Vendors and livestreaming platforms should be explicit about the limitations of automated approaches to detecting and addressing CSEA. In doing so, platforms can improve their trust and safety systems by ensuring that human reviewers are appropriately involved and able to make nuanced decisions based on context and other information.
- Focus on design interventions that empower users, including minors. The need for streamers to protect themselves from being targeted with CSEA, or from being used to distribute it, deserves greater attention when it comes to design-based solutions. For example, one design-based approach that was not raised in our discussions with industry is to provide users, particularly minors, with the right set of tools and reporting mechanisms to help them protect themselves and others.
- Multistakeholder governance models can improve the accountability of approaches to addressing CSEA on livestreaming platforms. Best-practice frameworks for implementing these systems could be developed not only through the continuing work of organizations like the Tech Coalition, but also through critical multistakeholder engagement in fora that include child safety organizations alongside organizations working on a broader set of digital rights and civil liberties issues.
Addressing the problem of CSEA, both in general and on livestreaming platforms specifically, is critically important given the impacts on children, parents, and their communities, making this a hugely consequential and high-stakes area of platform governance. Vendors and industry alike are understandably eager to show that they are developing innovative new tools to address CSEA and other harmful content, but poor implementation (or poor design, with systems that are fundamentally flawed) will decrease, rather than increase, policymaker and public confidence in platforms’ trust and safety over the longer term. A better understanding of the measures being taken on livestreaming platforms, along with increased multistakeholder engagement, will improve trust and safety systems in ways that minimize the risk of CSEA in livestreamed content while also minimizing unintended impacts on ordinary users.
Read our report in Spanish.
This project was funded by Safe Online.