AI Policy & Governance, CDT Research

CDT Response to the Oversight Board’s Call for Public Comments on: “Explicit AI Images of Female Public Figures”

On April 30, CDT submitted a response to issues raised by the Oversight Board's request for public comments regarding cases from the U.S. and India that it described as "Explicit AI Images of Female Public Figures." The cases touch upon key areas of rights protection in which CDT has engaged extensively, including free expression and online gender-based violence, particularly its intersection with gendered disinformation.

Our comments began by explaining that deepfakes (which are often pornographic in nature) and their antecedents (e.g., cheapfakes or shallowfakes) are a form of online gender-based violence and gendered disinformation. The use of deepfakes targeting women in politics, in particular, is meant to challenge, control, and attack their presence in spaces of public authority. We also noted that deepfakes are used to exploit existing forms of discrimination based not only on gender but also on a range of other identities, such as disability status, LGBTQIA+ identity, age, religious background, and immigration status. Although there is limited research applying an intersectional lens to deepfakes, our previous research found that women of color running for political office in the U.S. are more likely than other candidates to be targeted with sexist and racist abuse and mis- and disinformation.

To address the problem of deepfakes, particularly those targeting women public figures, we recommend that the Oversight Board take a harmonized approach that recognizes company obligations under recent legislation, such as the EU's Directive to combat violence against women. In addition, we made the following specific recommendations on how Meta should address this problem on its platforms:

  • Meta should clearly articulate policies that prohibit content, such as deepfakes, that harasses or abuses someone on the basis of gender or race.
  • With regard to women politicians, Meta should provide transparency reports about election mis- and disinformation before, during, and after an election.
  • Meta should grant independent researchers access to data that enables them to study the nature and impact of deepfakes, gendered mis- and disinformation, and online gender-based violence (GBV) on political candidates. This includes annual risk assessments performed in the context of Article 34 of the DSA, which expressly requires mitigation of risks related to the spread of disinformation and GBV.
  • Meta should ensure that content moderation systems, including human moderators and algorithmic systems, are attuned to the needs of and the threats faced by women public figures, particularly those whose identities may be especially targeted in a given society (e.g., women of color in the U.S., and women from caste-oppressed and religious-minority communities in India).

See the full comments here.