
Just Released Research: Student Demands for Better Guidance Outpace School Supports to Spot Deepfakes

What do Taylor Swift and Joe Biden have in common? They are among the most recent high-profile victims of artificial intelligence (AI)-driven deepfakes. In late January, non-consensual, AI-generated sexually explicit deepfakes of Taylor Swift went viral on X, receiving 27 million views before the account that posted them was suspended. Around the same time, New Hampshire voters received an AI-generated robocall of “President Biden” imploring them not to vote in the state’s upcoming primary election.

Both of these instances make distinctly clear the present threats that deepfakes, and specifically AI-generated deepfakes, pose not only to the people they impersonate, but also to media consumers. This is especially concerning for our youngest learners. Newly released research from CDT (described below) shows that students want more support from their schools to identify and manage deepfakes.

Deepfakes and K-12 Students

A deepfake is an image, audio, or video that has been synthetically created or manipulated using machine or deep learning (artificial intelligence) technology to fake or alter someone’s appearance, voice, or actions. Deepfakes can be particularly deceptive because viewing a depiction or hearing the voice of a person you recognize is fundamentally different from reading or hearing a false story about them. Previous research has revealed that “humans process visual data naturally and thus fluently, and people believe what they see.” And as AI tools continue to improve, concerns about believability only grow.

One arena where deepfakes proliferate is K-12 schools, where impressionable minds are still learning how to distinguish reliable information from false information. Deepfakes have already proven to be damaging forces in schools, as students create deepfakes of their peers in intimate poses to ridicule and bully each other. Students may also create deepfakes of their teachers, undermining trust in educators and the education system more broadly. As part of their research, students might also encounter deepfakes online and not know how to tell truthful information from what is not.

Given the current media environment, it is inevitable that students will face these scenarios. At the same time, using AI technology to generate images and videos has some positive applications for youth exploration and expression. So the question at hand is: how do schools equip students with the tools to discern what is a deepfake and to keep themselves safe from its negative consequences?

Students Don’t Feel Confident in Their Ability to Detect AI-Generated Content, and They Aren’t Receiving the Guidance They Want to Be Effective Media Consumers

CDT survey research of high school students from August 2023 reveals that students do not feel confident in their ability to discern between AI-generated and human-generated content, raising major questions about whether youth are being adequately prepared to consume information effectively in this new media environment.

When it comes to images, only 22 percent of students said that they are very confident in their ability to detect whether an image they are viewing was generated with AI versus produced by a human. And on the strictly text-related mis- and disinformation front, only 18 percent of students said that they are very confident in their ability to detect whether texts they are reading were generated with AI versus produced by a human.

Students also reported that their schools are not providing the guidance they want (and need) to effectively address the reality of encountering AI-generated deepfakes. Only 38 percent of students said their school has provided guidance on discerning whether something was generated by AI or a person, whereas 71 percent said it would be helpful if their school did. Parents echoed this same gap: only 33 percent said that they and/or their child have received guidance from their school on whether something was generated by AI or a person, but 79 percent said it would be helpful.

Alarmingly, historically marginalized student populations are less likely to receive this critical education. LGBTQ+ students reported that their schools were less likely to provide guidance on how to spot false or inaccurate information online (e.g., misinformation, disinformation) (58 percent vs. 46 percent of non-LGBTQ+ students), and white students were more likely to report receiving this guidance than their Black and Hispanic peers (66 percent vs. 57 percent and 58 percent, respectively).

What Can Be Done to Address This Media Literacy Gap?

Schools, generative AI platform providers, and policymakers can take a number of steps to address the harms of deepfakes happening today. To help youth become more effective media consumers: 

  • Schools should provide students with the support they want. CDT research shows that a majority of students think training and resources on discerning between AI-generated and human-generated content, and on spotting general misinformation and disinformation, would be helpful. In addition to these topics, schools should incorporate well-grounded education into their curricula on being aware of the presence of deepfakes, evaluating the apparent source of posted content, approaching surprising images or videos with skepticism, and not creating or circulating non-consensual deepfakes of classmates, teachers, and others. Proactively providing this education gives students a safe space to improve their media literacy skills. These resources and training should also be accessible to students with disabilities.
  • Generative AI platform providers should devote resources to helping students learn about the technology they release. Companies that provide tools to generate images and videos, such as OpenAI, Midjourney, and Google, should provide educational material geared toward a younger audience that explains how the technology works, its shortcomings, its potential beneficial uses, and how to detect content generated with their tools. They should also consider offering courses that help train educators to teach students how to interact with their services safely. These are not just fun tools for people to create with – they have significant ramifications for individuals’ lives and society as a whole, making companies responsible for properly supporting safe, responsible use.
  • Policymakers should devote resources to media literacy efforts. This includes providing resources (including but not limited to funding) to K-12 schools for media literacy programs. A number of bills in Congress and in state legislatures seek to address this issue. One example is the recently introduced House bill, H.R. 6582, which would create a task force to conduct a study on digital literacy and identify strategies to maintain increased levels of digital literacy in the U.S.

Conclusion

Growing up in an age where deepfakes and general mis- and disinformation are regular parts of a media diet poses new risks to learning, safety, self-expression, and civic engagement. This places new, critical responsibilities on schools and technology companies to equip students with the knowledge they need to be effective consumers of media – especially vital in the age of AI.