
Report – Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust 

CDT report entitled “Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust.” Illustration of an “AI-generated apple” with a parachute flying through an open sky, and “AI-generated” schoolwork, book, pencil & eraser falling behind. Note: this illustration was created solely by a human.

Educators' experience with generative artificial intelligence (AI) has changed markedly since the 2022–23 school year came to a close. After the education sector was caught off guard when ChatGPT burst abruptly onto the scene last school year, K-12 schools have now had the opportunity to take a breath, regroup, and determine how to manage the explosion of generative AI in the classroom.

To understand how teachers are currently interacting with and receiving support on this technology, the Center for Democracy & Technology (CDT) conducted a nationally representative survey of middle and high school teachers in November and December 2023. This research builds on previous CDT findings showing that schools were failing to enact or share policies and procedures on generative AI and that, as a result, teachers lacked clarity and guidance, grew more distrustful of students, and reported that students were getting in trouble because of this technology.

This school year, teachers report some welcome movement toward more guidance and training around generative AI – but also areas that remain cause for concern:

  • Familiarity, training, and school policymaking on generative AI in schools have increased, but the biggest risks remain largely unaddressed. Teachers report that both they and their students are making increasing use of generative AI, and a majority indicate their schools now have a policy in place and provide teacher training on the technology. However, schools are giving teachers little guidance on what responsible student use looks like, how to respond if they suspect a student is using generative AI in ways that are not allowed, and how to detect AI-generated work.
  • Teachers are becoming heavily reliant on school-sanctioned AI content detection tools. A majority of teachers report using school-endorsed AI content detection tools, despite research showing that these tools are ineffective. The proliferation of these tools could lead to negative consequences for students, given the tools' known efficacy issues and teachers' reports of receiving little school guidance on how to respond when they suspect a student has used generative AI in ways they should not.
  • Student discipline due to generative AI use has increased. Even though schools are further along in setting generative AI policies and the technology has been in use longer, more teachers report students experiencing disciplinary consequences than last school year. Historically marginalized students, such as students with disabilities and English learners, are at particular risk of disciplinary action.
  • Teacher distrust in students' academic integrity remains an issue and is more pronounced in schools that ban generative AI. A majority of teachers still report that generative AI has made them more distrustful of whether their students' work is actually theirs, and teachers at schools that ban the technology say they are even more distrustful. This is especially concerning because teachers from schools that ban generative AI are more likely to report students at their school experiencing disciplinary action.

Read the full report.

Read the slide deck on the research findings.