NDII Victims Deserve Help. Let’s Build an Effective Takedown System.

On June 18, 2024, Senators Ted Cruz and Amy Klobuchar introduced the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act. The bill intends to help victims of a profound invasion of privacy and violation of autonomy — namely, the nonconsensual distribution of intimate images (NDII) — whether those images are real or generated with artificial intelligence.

NDII is not a new problem. Leaders in addressing online gender-based violence, including the Cyber Civil Rights Initiative, have worked for years to help victims and advocate for policy aimed at deterring image-based abuse, including a bill known as the SHIELD Act, which would criminalize the nonconsensual distribution of intimate visual depictions. Since then-Senator Kamala Harris first introduced the SHIELD Act in the Senate in 2019, new and ubiquitous access to generative AI technologies has exacerbated existing concerns about NDII.

Earlier this year, the Senate passed the DEFIANCE Act, which CDT endorsed. The bill would expand an existing federal civil cause of action for NDII to more clearly cover synthetic NDII. More recently, the Senate also passed the SHIELD Act. Its current iteration would criminalize the nonconsensual distribution of intimate visual depictions of either adults or children obtained or created when the perpetrator knew or reasonably should have known that the individual depicted had a reasonable expectation of privacy, and where the distribution causes or is intended to cause harm. The bill would assign a maximum two-year penalty for distribution of intimate visual depictions of adults and a maximum three-year penalty for depictions of children.

Senators Cruz and Klobuchar introduced the TAKE IT DOWN Act against a troubling backdrop of victimization of both adults and children through AI-generated NDII. Upwards of 90% of AI-generated or manipulated videos online are sexually explicit, and many of these videos may be produced without the consent or knowledge of those depicted. Celebrities, politicians, and other public figures are frequent targets of AI-generated NDII. Minors can also be victims and, sadly, perpetrators, harming other minors by producing and disseminating AI-generated NDII of their classmates. According to CDT research, 79% of high school students who have heard of AI-generated NDII say that a student was depicted, and 74% report hearing that other students shared such depictions. The TAKE IT DOWN Act was, in part, inspired by this victimization of minors by other minors, specifically the abuse of a 14-year-old girl from Texas whose classmates nonconsensually created and shared AI-generated intimate depictions of her via Snapchat.

The TAKE IT DOWN Act incorporates and modifies the SHIELD Act’s criminal prohibition and penalties for NDII to include “deepfakes” that “realistically depict the individual such that a reasonable person would believe the individual is actually depicted in the intimate visual depiction.” Importantly, both the SHIELD and TAKE IT DOWN Acts would impose higher penalties on depictions of minors, many of which may be produced by minors themselves. TAKE IT DOWN complements the SHIELD Act by proposing a takedown system, akin to the takedown system created through the Digital Millennium Copyright Act (DMCA) but instead enforced by the FTC, to allow NDII victims to request that their depictions be removed from platforms in certain circumstances. 

Some victims of NDII already have recourse through the DMCA to take down actual images for which they own the copyright (primarily selfies). Where the victim does not have copyright in a real depiction or the depiction is generated through AI, however, the DMCA’s takedown mechanism does not apply. The TAKE IT DOWN Act would address this gap by allowing victims of NDII to request that platforms take down their images, regardless of who created the image or who owns its copyright.

Given the serious and unique harm that can accompany NDII, lawmakers are understandably exploring ways to empower people to have NDII removed from online public forums and create liability for those who distribute these images without consent. But the bill still has flaws, creating risks to free expression and privacy that must be addressed. More must be done — and can be done — to improve the TAKE IT DOWN Act to ensure that it is operational, constitutional, and privacy-protective.

TAKE IT DOWN Misses Important Threats Posed by Synthetic NDII

The DEFIANCE Act amends the existing federal civil cause of action for NDII to include “digital forgeries” and specifies that a visual depiction can constitute a digital forgery even if it is accompanied by a label, or appears in a context, that discloses or implies that the depiction is inauthentic. By contrast, TAKE IT DOWN’s criminal prohibition and takedown system cover only AI-generated images that would cause a “reasonable person [to] believe the individual is actually depicted in the intimate visual depiction.” In doing so, the TAKE IT DOWN Act is unduly narrow, missing several instances where perpetrators could harm victims. Even if viewers know that deepfake NDII is not real, such depictions are a significant invasion of privacy, often motivated by misogyny and intended to humiliate those depicted and to “challenge, control, and attack [women’s] presence in spaces of public authority.”

Synthetic nonconsensual intimate images are often created to depict prominent women in public life. One study examining the five most popular “deepfake pornography” websites found that 99% of explicit digital forgeries depicted actresses or musicians, and much of the remaining 1% depicted women in other public-facing professions, including politics and corporate leadership. In a notable example this year, Taylor Swift was victimized by users on X who distributed synthetic intimate images of her that were viewed more than 45 million times. Many of the depictions rendered Swift realistically but placed her in implausible circumstances, meaning that viewers were likely aware that many of the depictions were not real.

As the IBSA Principles that CDT endorsed recognize, people generally should be able to control whether and how their likeness or body is depicted or appears to be depicted in intimate imagery, even if those depictions are labeled as false. NDII that realistically depicts an individual engaging in sexual conduct, even when it is labeled as false or set against an implausible backdrop, still causes harms akin to those that have long been redressable in law, including invasions of privacy, reputational harm based on false information, emotional distress, or financial loss.

Consider, for example, a nonconsensually created and disseminated AI-generated intimate visual depiction that is “realistic,” except that the depiction takes place on Mars. The TAKE IT DOWN Act’s criminal prohibitions and takedown authority would apply only to the extent that a reasonable person would believe that the individual is actually depicted in the NDII. Due to the implausible backdrop, a court would likely conclude that no reasonable person could believe the individual was actually depicted, placing the NDII outside the TAKE IT DOWN Act’s protections despite the clear harms accompanying the distribution of such images. While the bill’s authors may have added this language to protect its constitutionality, they need not exempt such depictions. The TAKE IT DOWN Act’s definition of “appears” should be modified to include the DEFIANCE Act’s constitutionally sound language, which addresses all appropriate instances where an individual’s likeness is realistically depicted in NDII, with necessary exclusions to protect free expression.

TAKE IT DOWN Excludes Self-Curated Sites from Important Provisions

For purposes of its takedown provisions, the TAKE IT DOWN Act excludes online services that consist primarily of content that is not user-generated. Websites that create and curate their own synthetic and real NDII, therefore, are not required to comply with the bill’s takedown mechanism. While operators of such websites may be unlikely to comply with takedown requests, the failure to extend the FTC’s enforcement authority to such operators leaves criminal enforcement as the bill’s only recourse. Law enforcement, however, has historically neglected crimes disproportionately perpetrated against women and may not have the capacity to prosecute all such operators. To ensure that such websites are subject to civil enforcement, the takedown provisions should be modified to include NDII-focused websites and applications.

TAKE IT DOWN’s Ambiguous Intersection with Section 230

Unlike other prominent proposed legislation intending to address sexual exploitation online, the TAKE IT DOWN Act does not amend Section 230 — the shield that generally prevents interactive computer services from being held liable for third-party content. Instead, it requires covered platforms to operate a notice and takedown system and to remove flagged content, including third-party content, within 48 hours. This approach would require covered platforms to operate a system similar to the notice and takedown system required by the DMCA for copyrighted material. Unlike intellectual property violations, which are expressly carved out of Section 230’s liability protection, however, liability for distribution of third-party NDII is covered by Section 230’s shield.

While the TAKE IT DOWN Act’s criminal provisions would apply to interactive computer services because Section 230 does not apply to federal criminal law, the bill, in effect, would create civil liability for continued hosting of third-party NDII, despite Section 230 stating explicitly that no provider of an interactive computer service shall be treated as the publisher or speaker of user-generated content and therefore is not liable for such content. Moreover, the bill would apply civil liability to a broader swath of content than it would criminalize, including depictions that are matters of public concern. Thus, the takedown regime in the bill directly conflicts with Section 230, and nothing in the bill language clearly resolves that conflict. Given the novelty of the liability structure proposed by the TAKE IT DOWN Act, it is difficult to know how a court would rule if a covered platform challenged an FTC enforcement action. Congress should resolve that uncertainty by making clear that Section 230 does not preclude enforcement of the bill’s takedown system.

TAKE IT DOWN’s Risks to Free Expression

The TAKE IT DOWN Act requires covered platforms, as soon as possible but not later than 48 hours after receiving a valid request, to remove reported NDII and to make reasonable efforts to identify and remove any known identical copies of such depictions. Doing so at scale, and in that timeframe, would require the widespread use of automated content detection techniques such as hash matching. Hashes are “digital fingerprints” that platforms can use to detect known images across their services once an image has been distributed, and to remove the identified content if it violates the platform’s use policy or the law. Many platforms already use hash matching for known NDII, child sexual abuse material (CSAM), and terrorist and violent extremist content, though none of these processes is currently required by U.S. law. While TAKE IT DOWN does not expressly mandate the use of hash matching, since services already commonly use the technology to identify known-violating content, it would likely be understood to be a “reasonable effort to identify and remove” known NDII under the bill.
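
For illustration only, the sketch below shows the basic shape of hash-based detection of known images, with a hypothetical store of fingerprints for previously reported depictions. It uses a plain cryptographic hash (SHA-256), which catches only byte-identical copies; deployed systems such as PhotoDNA and PDQ instead use perceptual hashes that also match re-encoded, resized, or lightly edited versions of a reported image.

```python
import hashlib
from pathlib import Path

# Hypothetical store of fingerprints derived from images reported through the
# takedown process; in practice this would be a shared database rather than an
# in-memory set.
REPORTED_HASHES: set[str] = set()


def image_fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of the image file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def register_reported_image(path: Path) -> None:
    """Record the fingerprint of an image named in a valid takedown request."""
    REPORTED_HASHES.add(image_fingerprint(path))


def is_known_reported(path: Path) -> bool:
    """Check a newly uploaded file against previously reported fingerprints."""
    return image_fingerprint(path) in REPORTED_HASHES


def scan_uploads(upload_dir: Path) -> list[Path]:
    """Flag uploads whose fingerprints match previously reported content."""
    return [p for p in upload_dir.iterdir() if p.is_file() and is_known_reported(p)]
```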

As currently drafted, however, the TAKE IT DOWN Act raises complex questions implicating the First Amendment that must be addressed before final passage. As a general matter, a government mandate for a platform to take down constitutionally protected speech after receiving notice would be subject to close First Amendment scrutiny. The question is whether a narrowly drawn mandate focused on NDII with appropriate protections could pass muster. Although some NDII falls within a category of speech outside of First Amendment protection, such as obscenity or defamation, at least some NDII that would be subject to the Act’s takedown provisions, even though unquestionably harmful, is likely protected by the First Amendment. For example, unlike the proposed Act’s criminal provisions, the takedown provision would apply to NDII even when it is a matter of public concern. Moreover, the takedown obligation would apply to all reported content upon receipt of notice, before any court has adjudicated whether the reported image constitutes NDII or violates federal law, let alone whether and how the First Amendment may apply. Legally requiring such takedown without a court order implicates the First Amendment.

In this way, the TAKE IT DOWN Act would go even further than current federal law for CSAM. Platforms are required to report apparent CSAM to the National Center for Missing & Exploited Children’s CyberTipline upon obtaining actual knowledge of such content. Federal law, however, does not affirmatively require platforms to remove apparent CSAM. Rather, as a practical matter, many platforms opt to remove such apparent CSAM — even if it has not been adjudicated to be CSAM — in part to minimize the risk of liability under federal criminal law, which is not preempted by Section 230. TAKE IT DOWN replicates these incentives by exposing platforms to criminal liability for NDII, while going even further than current law for CSAM by requiring the takedown of potentially lawful content.

To increase the chance of surviving constitutional scrutiny, the takedown provisions in the TAKE IT DOWN Act should be more narrowly tailored and include more guardrails. The Act currently does not include many of the DMCA’s guardrails intended to prevent abusive or malicious takedown requests. Even with those guardrails, complainants routinely abuse the DMCA takedown process, leading to the censorship of constitutionally protected information and criticism. Under current processes, for example, complainants have successfully used the DMCA to take down negative video game reviews, silence parody, and shut down civil society YouTube accounts. The TAKE IT DOWN Act risks repeating this abuse: it does not expressly exempt commercial pornographic content from the takedown mechanism, it excludes matters of public concern only from its criminal prohibitions (not the takedown mechanism), and it omits other protections, such as requiring complainants to attest under penalty of perjury that they are authorized to file a notice on a person’s behalf. While an NDII takedown mechanism should minimize the burden on victims, such safeguards would mitigate the risks of abuse and the removal of content that cannot or should not be restricted from publication under the takedown mechanism.

TAKE IT DOWN’s Risks to Privacy

The TAKE IT DOWN Act should also be modified to make clear that it applies only to forums where content is published to the public, and does not apply to encrypted services or other services, such as messaging applications and cloud storage, where users expect their communications and information to be private. As currently drafted, the takedown mechanism applies to websites, online services, online applications, or mobile applications that serve the public and primarily provide a forum for user-generated content, excluding ISPs, email services, and (as discussed above) services that do not consist primarily of user-generated content. Without clarification, direct messaging services, cloud storage systems, and other similar services could be required to comply with the legislation’s requirements, even though these services are arguably far more akin to the private “email services” exempted from the bill. Covered platforms, consequently, might be incentivized to use content filtering and hash matching technologies on direct messaging services or cloud platforms where users rightfully expect their communications and information to be private, despite the technology’s known vulnerabilities and tendency to lead to the inappropriate takedown of content.

Moreover, many direct messaging and cloud storage services are encrypted — meaning that the service providers do not have access to the content that users share, store, or generate. The TAKE IT DOWN Act, therefore, would either create an obligation that providers cannot comply with, because they have no access to the content they are required to take down, or incentivize content filtering that would break encryption. Consider, for example, a report of NDII shared via an encrypted group chat. If this dissemination were considered “published” for purposes of the TAKE IT DOWN Act, the bill would obligate the platform to remove the image and to make reasonable efforts to remove any known duplicates, which an encrypted platform could not do without breaking encryption.
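
A simplified sketch, using symmetric Fernet encryption from the third-party cryptography package as a stand-in for end-to-end encryption, illustrates the problem: a provider that holds only ciphertext cannot compute a fingerprint that matches the hash of the reported plaintext image, so server-side hash matching cannot run without access to the unencrypted content.

```python
import hashlib

from cryptography.fernet import Fernet  # pip install cryptography

# Stand-in for the raw bytes of an image shared in an encrypted group chat.
image_bytes = b"placeholder bytes for an intimate image"

# The fingerprint a takedown system would record for the reported image.
reported_hash = hashlib.sha256(image_bytes).hexdigest()

# The sender encrypts on-device; the key stays with the chat participants,
# so the provider only ever stores and relays ciphertext.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(image_bytes)

# The provider can hash what it holds, but that digest will not match the
# reported image's fingerprint: detection fails by design, and matching the
# plaintext would require breaking the encryption.
ciphertext_hash = hashlib.sha256(ciphertext).hexdigest()
print(reported_hash == ciphertext_hash)  # False
```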

Unlike current federal law, which includes privacy protections specifying that platforms are not required to monitor users or the content of their communications, or to affirmatively search, screen, or scan for CSAM, the TAKE IT DOWN Act could create obligations to monitor communications and break encryption in ways that threaten users’ privacy. While platforms that elect to use hash matching technology on unencrypted public-facing services to detect reported NDII should be free to do so, mandating such scanning and creating obligations to monitor encrypted platforms threatens the privacy of all users and undermines the security guarantees of end-to-end encryption, including for survivors of image-based sexual abuse who may rely on encrypted services to communicate safely and privately. To protect the privacy of users, the TAKE IT DOWN Act should be modified to make clear that it applies only to forums where content is published to the public and does not apply to encrypted services or other services where users expect their communications and information to be private.

TAKE IT DOWN’s Solvable Problems

The TAKE IT DOWN Act proposes a novel solution to NDII that would empower victims to reclaim control over their agency and likeness by taking down real and synthetic intimate depictions they never consented to distribute. At the same time, revisions to its text are necessary to ensure that the bill is comprehensive and precise enough to help victims, that it protects privacy, and that it is more likely to pass constitutional muster. While the issues NDII implicates are complex, Congress should commit to solving the problems the TAKE IT DOWN Act presents and create an effective, constitutionally sound, and privacy-protective takedown mechanism for real and synthetic nonconsensual intimate images. NDII victims deserve, and courts will demand, nothing less.