

Report – Off Task: EdTech Threats to Student Privacy and Equity in the Age of AI 

Graphic for CDT report, entitled “Off Task: EdTech Threats to Student Privacy and Equity in the Age of AI.” A browser with a warning symbol.

This report is co-authored by Hugh Grant-Chapman, Independent Consultant.

In schools across the country, the use of educational data and technology (edtech) remains nearly ubiquitous. In addition to supporting instruction, schools have used edtech to respond to the pressing safety threats they face daily, from gun violence to the youth mental health crisis. However, long-standing technologies such as content filtering and blocking, and student activity monitoring, pose well-documented privacy and equity risks to students. Nonetheless, schools continue to deploy these technologies on a mass scale, and the rapid integration of generative artificial intelligence (AI) into the education space is introducing many new risks for students.

The Center for Democracy & Technology (CDT) conducted surveys of high school students and middle and high school parents and teachers from July to August 2023 to understand how edtech used by schools is tangibly affecting those it claims to serve. The research focuses on student privacy concerns and schools’ capacity to address them; emerging uses of AI-driven technology such as predictive analytics; and deep dives into content filtering and blocking, student activity monitoring, and generative AI, encompassing both well-established and emerging technology. These surveys build on CDT’s previous research, which revealed that student activity monitoring is adversely affecting all students, especially historically marginalized and under-resourced students.

Whether old or new, the technologies deployed across schools have negative impacts on students, and schools are failing to keep pace with rising concerns:

  • Schools are not adequately engaging and supporting students, parents, and teachers in addressing concerns about school data and technology practices: Students, parents, and teachers report a lack of guidance, information, and training on privacy, student activity monitoring, content filtering and blocking, and generative AI. They want more support from their schools, as well as a voice in decisions about whether and how these technologies are used.
  • Content filtering and blocking is stifling student learning and growth: Students and teachers agree that this technology is a barrier to learning, often making it hard to complete school assignments and access useful information.
  • Student activity monitoring continues to harm many of the students it claims to help: Disciplinary action, the outing of students, and the initiation of law enforcement contact are still regular outcomes of this technology’s use, even though schools procure it to help keep students safe.
  • Schools have provided little guidance about generative AI, leaving students, parents, and teachers in the dark: Students, parents, and teachers report a collective state of confusion about policies and procedures for responsible generative AI use in the classroom. Meanwhile, students are getting in trouble for using this technology.

Even more disheartening, in all of these areas, at-risk communities of students are still experiencing disproportionate negative impacts from these technologies, both old and new:

  • Schools are filtering and blocking LGBTQ+ and race-related content, with Title I and licensed special education teachers more likely to report such practices: Although filtering and blocking technology was originally intended primarily to target explicit adult content, more school administrators are using it to restrict access to other content they deem inappropriate, including LGBTQ+ and race-related content. In key respects, this finding parallels the broader trend in education of removing books and curricular content on these subjects.
  • Student activity monitoring is disproportionately harming students with disabilities and LGBTQ+ students: Students with individualized education programs (IEPs) and/or 504 plans as well as licensed special education teachers report higher rates of discipline arising from student activity monitoring. LGBTQ+ students are also still being disciplined more than their peers and outed without their consent.
  • Title I and licensed special education teachers report higher rates of students receiving disciplinary action for using, or being accused of using, generative AI: Despite having little guidance from schools on generative AI use, Title I teachers, licensed special education teachers, and parents of students with IEPs and/or 504 plans report higher rates of their students getting in trouble compared to their peers.

Previous CDT research and this year’s findings continue to document the risks and harms of edtech for all students, especially those in vulnerable communities. As uses of edtech, particularly AI-driven technology, continue to expand, education leaders across the country should focus not only on privacy concerns but also on identifying and preventing discrimination. Fortunately, they already have the tools to do so: well-established civil rights laws that apply to discriminatory uses of technology.

Read the full report here.

Read the summary brief here.

Explore the research slide deck here.

Read the press release here.