The STOP CSAM Act Threatens Free Expression and Privacy Rights of Children and Adults

Last month, Senator Durbin introduced the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment Act of 2023 (STOP CSAM Act). While the goal of the bill — to protect children from sexual exploitation and abuse online — is laudable, the bill’s approach is seriously flawed.

The STOP CSAM Act will make children less safe, at the expense of the privacy and free expression rights of children and adults alike. By significantly increasing the liability risk that online services would face for hosting, transmitting, or otherwise enabling user-generated content, the bill increases the likelihood that these services will report and remove substantially more speech than the illegal child sexual abuse material (CSAM, also referred to in law as “child pornography”) currently in scope of federal law. It will also make prosecuting perpetrators of child exploitation crimes harder, because law enforcement will have to search a haystack of reports for a needle, wading through innumerable useless submissions to find actual CSAM.

Background on Existing Law and Proposed Changes in STOP CSAM

The STOP CSAM Act makes a number of changes to existing law concerning online service providers’ liability for CSAM that third parties post and distribute through their services. Online service providers are already required to report CSAM and are subject to potential criminal liability for the publication or distribution of CSAM. The STOP CSAM Act expands those obligations in ways that may violate the First Amendment. 

Under current law, online service providers must file reports with the National Center for Missing and Exploited Children (NCMEC) when they have actual knowledge of apparent CSAM on their services (18 U.S.C. §2258A). They are also permitted to report evidence of planned or imminent violations of federal child exploitation law that involve child pornography. Knowing and willful failure to file reports can result in fines of up to $300,000 per missing report (though NCMEC notes that the Department of Justice has never enforced this provision). Crucially, the existing mandatory reporting statute explicitly states that it does not require online services to affirmatively scan or monitor people’s communications.

Existing federal criminal statutes prohibiting publication or distribution of CSAM also apply to online service providers. Section 2252 of the criminal code, for example, prohibits the knowing distribution of CSAM with penalties of fines and a minimum of 5 years in prison. (As a reminder, Section 230 of the Communications Act, which shields providers from many forms of liability for user-generated content, has always included an exception preserving the ability to prosecute online services under federal criminal law.) 

The STOP CSAM Act makes a number of significant changes to existing law that affect adults’ and children’s constitutional rights. The bill would:

  • Expand the trigger for mandatory reports to include not only actual knowledge of CSAM, but also of any apparent, planned, or imminent violations of a set of federal child exploitation crimes. 
  • Create a new federal crime when online services “knowingly promote or facilitate” federal child exploitation crimes. 
  • Amend Section 230 to allow for civil lawsuits by victims of child exploitation crimes against online service providers for “the intentional, knowing, reckless, or negligent promotion or facilitation” of child exploitation or child trafficking crimes. 
  • Create a new quasi-judicial body, administered by the Federal Trade Commission (FTC) and modeled after the much-criticized Copyright Claims Board, that allows the FTC to order online services to remove non-CSAM content without the opportunity for review by a court.

Key Issues in the STOP CSAM Act

These provisions raise a host of First Amendment issues that threaten individuals’ rights to speak and to access lawful, constitutionally protected information. Moreover, by significantly increasing providers’ incentives to over-report and over-remove people’s online content, the STOP CSAM Act risks inundating NCMEC and law enforcement with useless reports, squandering resources that should be directed towards combating child exploitation. It also puts vulnerable young people at risk of having their communications censored and surveilled, cutting them off from crucial vectors for information and support.

Mandatory content filtering

Several provisions of the STOP CSAM Act effectively require online services to employ content filters. This occurs most directly in the new “Report and Remove” regime created by Section 7 of the bill. Under that regime, a victim of a child exploitation crime, their representative, or a qualified organization may file a notification with an online service provider directing it to remove both CSAM and non-CSAM content relating to that person. If the provider does not remove that material, or engages in “recidivist hosting” of the material, the notifier may bring a claim to the new Child Online Protection Board, administered by the FTC. 

The prohibition on “recidivist hosting” in the bill creates a notice-and-staydown regime that a provider can comply with only by using hash-matching filters such as PhotoDNA. The bill defines this novel term to cover a provider that removes notified material “and then hosts a visual depiction that has the same hash value or other technical identifier of the visual depiction that had been so removed.” If the FTC finds that a provider has engaged in recidivist hosting (including of the non-CSAM depictions of the individual), it can issue fines of up to $200,000 per image. Thus, a provider has no choice but to filter every newly posted image against a database of technical identifiers of previously removed images.
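To illustrate why a staydown obligation necessarily means provider-side filtering, here is a minimal, hypothetical sketch of the matching step such a regime compels. It is not the bill’s text or any real provider’s system, and it substitutes an exact SHA-256 digest for the proprietary perceptual hashes (such as PhotoDNA) a real deployment would use:

```python
import hashlib

# Hypothetical sketch only: SHA-256 stands in for the "hash value or other
# technical identifier" the bill references; real systems use perceptual hashes.
removed_hashes = set()

def record_removal(image_bytes):
    """Remember the identifier of an image taken down after a notification."""
    removed_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def allow_upload(image_bytes):
    """Every new upload must be checked against every prior removal."""
    return hashlib.sha256(image_bytes).hexdigest() not in removed_hashes

record_removal(b"notified-image")
assert allow_upload(b"notified-image") is False  # re-upload blocked
assert allow_upload(b"different-image") is True  # unrelated image passes
```

Note that an exact digest matches only bit-identical copies; a trivially re-encoded or cropped image would slip through, which is precisely why staydown compliance pushes providers toward perceptual hashing and toward scanning every upload.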

Beyond the “Report and Remove” regime, the STOP CSAM Act creates strong incentives for providers to employ content filters by expanding their liability risk for a wide range of user-generated content. Section 6 of the bill would amend Section 230 to allow for lawsuits by victims of child exploitation crimes against online service providers for “the intentional, knowing, reckless, or negligent promotion or facilitation” of child exploitation or child trafficking crimes. This is a vague standard whose mens rea requirements fall far below the level necessary to satisfy the First Amendment. Further, the child exploitation and trafficking crimes listed in this provision cover a wide variety of potential online conduct beyond the hosting and distribution of CSAM, including posting trafficking advertisements (§1591) and enticement or “grooming” (§2251 and §2422). Thus, this provision opens the door to legal claims against online services that they, for example, were negligent in failing to scan and monitor user communications for indications of trafficking or grooming activity, including claims against providers of end-to-end encrypted services that cannot perform such filtering.

Direct and indirect mandates to employ content filters typically violate the First Amendment because filtering technology is imprecise and will inevitably block lawful, constitutionally protected speech along with whatever material the filter is intended to block. These errors can be devastating: consider the parents locked out of their online accounts for sharing diagnostic photos with their child’s doctors, or the LGBTQ individuals whose speech and very identities often get swept up in filters intended to block sexual or non-“family friendly” content. Filtering also acts as a form of prior restraint on speech, requiring every post any user might make, including those that have zero connection to child exploitation, to be evaluated and deemed acceptable before being published. First Amendment doctrine has a “heavy presumption against [the] constitutional validity” of prior restraints.

Mandatory takedown of content without a court order

This issue with prior restraints continues throughout the STOP CSAM Act, in several provisions that create legal mandates for online services to remove content without a court order holding that the content is actually illegal CSAM. In practice, today, online service providers that file mandatory reports of apparent CSAM to NCMEC typically also remove that content from their services, in part because they have already attested to having actual knowledge of that content and could face criminal prosecution under 18 U.S.C. §2252 for knowing distribution of it. But the choice to remove the allegedly illegal content remains in the hands of the online service providers — they could conceivably wait to be charged with the crime of knowing distribution and then seek to defend themselves — and the voluntariness of this removal makes every difference under the First Amendment.

The STOP CSAM Act would create a new legal duty to remove publicly available “apparent child pornography” that a provider has reported to NCMEC. But Congress cannot require an online service provider to take down third-party content that is not illegal. Presuming that content, even apparent CSAM, is illegal circumvents the role of judges in our legal system and supplants it with the decisionmaking of private technology companies. The current reporting obligation in §2258A strikes an important balance: online services are required to report material they deem likely to be CSAM, but those reports are not treated as conclusive as a matter of law, and prosecutors must still prove that the material is in fact CSAM in any subsequent court proceedings.

The STOP CSAM Act also short-circuits independent judicial proceedings through the “Report and Remove” regime administered by the FTC referenced previously. Under this regime, a provider must remove a “proscribed visual depiction of a child” within 48 hours of receiving a notification (or 2 business days for a provider with under 10 million monthly active users in the United States). If a provider fails to remove the content promptly or at all, notifiers may file a petition with a new Child Online Protection Board made up of three FTC attorneys. Providers may then opt out of a proceeding before the Board and face a potential lawsuit in court. If they do not opt out, they “waive their right to a jury trial regarding the dispute” and “lose the opportunity to have the dispute decided by a[n Article III] court.” Instead, if the Board rules for the notifier, it will order the provider to remove all known copies of the images from its service and levy a fine of up to $200,000.

The Child Online Protection Board process is an extraordinary grant of power to a handful of FTC attorneys and would empower them to determine whether the content at issue meets the legal standard for CSAM. Because the STOP CSAM Act allows the Board to order the removal of more than just CSAM (the Board process can address any “proscribed visual depiction of a child”), these attorneys would also have to address the novel legal question of whether and in what circumstances it is permissible to order the blocking of non-CSAM (i.e., lawful) content, regardless of context. Such determinations should obviously be the province of independent judges, not attorneys employed by a regulatory agency. The “Report and Remove” regime cannot provide the procedural safeguards around orders to block or suppress content that the First Amendment requires.

Vague standards will lead to overbroad suppression of lawful speech

Throughout the STOP CSAM Act, the bill expands the scope of existing prohibitions on and reporting requirements for CSAM to include other kinds of content — content that is, unlike CSAM, not necessarily obviously illegal on its face or in every context. This could include photographs of adults, text communications discussing transportation of a child, and innocent conversations between adults and children. By expanding the scope of providers’ potential liability to include non-CSAM content, the STOP CSAM Act creates powerful incentives for online services to be overbroad in their removal of user-generated content and to thus block lawful, constitutionally protected speech.

For example, the changes to Section 2258A expand the scope of the reporting requirements beyond “apparent child pornography” to include “any facts and circumstances indicating an apparent, planned, or imminent violation” of a range of federal child exploitation crimes. While each of the listed crimes relates to the production of CSAM, they encompass a broad range of potential criminal activities: persuading or enticing a child to engage in explicit conduct (§2251), a legal guardian transferring custody of a child (§2251A), shipping (§2252) or mailing (§2252A) CSAM in interstate or foreign commerce, using a misleading domain name (§2252B), or transporting a minor across state lines (§2251 and §2260). All of these activities are appropriately criminal in the context of the production and distribution of CSAM, but the footprint they leave in the form of user-generated content will be decidedly less clear than “apparent child pornography.” For example, depending on the context, a message from one parent to another planning to transport a child from Virginia to Maryland could be evidence of a planned violation of several federal statutes, or simply two fathers coordinating the carpool for soccer practice. By imposing criminal penalties on the knowing failure to submit a report of such a broad range of apparent, planned, or imminent violations, the STOP CSAM Act significantly incentivizes online services to err on the side of reporting and removing a large amount of lawful speech.

The STOP CSAM Act also includes a new criminal provision that will similarly lead to overbroad suppression of lawful speech having nothing to do with CSAM. A new Section 2260B would make it a federal crime to “knowingly promote or facilitate a violation” of a range of federal child exploitation statutes beyond just knowing distribution of CSAM.

For example, §2422(b) of the criminal code covers anyone who “knowingly persuades, induces, entices, or coerces” a minor to engage in sexual activity — in other words, grooming a child. What it means for an online service provider to “knowingly promote or facilitate” the grooming of a child is unclear; online service providers may fear facing criminal prosecution for, e.g., providing end-to-end encrypted services that prevent them from proactively scanning and filtering user content, or allowing users to communicate anonymously or under pseudonyms.

The vagueness of the standard “promote or facilitate” as applied to online services is under active litigation in multiple civil cases and is being challenged under the First Amendment in the Woodhull Foundation’s constitutional challenge to FOSTA-SESTA. As CDT has argued in those cases, the vagueness of the “promote or facilitate” standard will cause online service providers to over-remove substantial amounts of lawful, constitutionally protected speech in order to avoid liability — indeed, in the case of FOSTA-SESTA, it already has.

This effect will be amplified by the vagueness and low mens rea requirements of the new civil cause of action the STOP CSAM Act creates in 18 U.S.C. §2255. The bill also narrows the scope of Section 230’s liability shield to allow suits against online service providers for the “intentional, knowing, reckless, or negligent promotion or facilitation” of a broader range of crimes, including not only the crimes related to the production of CSAM covered in the new §2260B, but also the sex trafficking crimes defined by 18 U.S.C. §1591. Strikingly, the STOP CSAM Act would allow lawsuits against online service providers for “facilitation” of crimes at a lower knowledge standard than applies to the perpetrators of the underlying crimes themselves, all of which require knowledge on the part of the actor. Such low knowledge standards, and the associated risk of facing and losing lawsuits, would exert strong pressure on online services to remove significant amounts of lawful speech and to prevent users from employing pro-privacy tools such as end-to-end encryption and pseudonyms.


Conclusion

The STOP CSAM Act jeopardizes children’s and adults’ constitutional rights to privacy and freedom of expression, and risks overwhelming law enforcement with bogus reports transmitted by risk-averse tech companies. Legal mandates to filter and block speech without a court order will not pass constitutional scrutiny and are thus not actually tools in Congress’s toolbox.

Instead, Congress should address the barriers that currently stand in the way of the fight against online child exploitation. It could seek to understand, for example, what limits smaller online service providers face in incorporating tools like PhotoDNA into their content moderation systems, and charge NCMEC with providing technical and other resources to help services voluntarily implement such tools. Congress could also support the further development of tools, including machine-learning technologies, for the voluntary detection of novel CSAM. While machine-learning tools are far from perfect, they can provide useful inputs into an overall system for moderating content. But machine-learning tools require training data sets to develop, and given the highly illegal nature of CSAM, there are few, if any, entities that could lawfully amass such training data. Congress could examine the technical, legal, and security safeguards needed to foster the development of more robust machine-learning tools for CSAM detection. Crucially, Congress cannot mandate the use of a government-developed filtering tool or any other, but it likely must act in order to permit such tools to be developed for voluntary use.

We urge the Senate Judiciary Committee to reject the risky, rights-invading approaches of the STOP CSAM Act and to focus on proposals that protect children and uphold the constitutional rights of Americans of all ages.