
Student Privacy Report Gives Guidance on Algorithms in K-12 Education

Algorithmic systems (systems that take in data and output some sort of decision based on that data) are becoming pervasive in our society, and the K-12 education space is no exception. Schools, districts, and states are turning to algorithmic systems to help them make all sorts of decisions, from assigning students to schools, to keeping schools and students safe, to intervening to prevent students from dropping out or falling behind academically.

In an ideal world, these tools help educators and administrators by identifying patterns or warning signs that humans might miss so students don’t fall through the cracks, or by making processes faster and more efficient so educators have more time to focus directly on their students. Sometimes they do just that. But if they aren’t implemented and managed carefully, algorithmic systems can harm the very students they are intended to help. These harms can take many forms, but they tend to fall into three overarching categories.

  • The system doesn’t actually solve the problem schools expect it to solve. A major concern with algorithmic systems is that they are simply ineffective at the task they were adopted for, which can lead to wasted resources or, more concerning, over-reliance on a faulty system.
  • The system exhibits bias. It’s easy to think of algorithmic systems as purely rational arbiters immune to human frailties like bias. But algorithms are just as capable of bias as humans; more precisely, humans often end up inadvertently designing biased algorithmic systems.
  • The system infringes on students’ rights. Algorithmic systems rely heavily on access to data, and sometimes that means the system collects, processes, or monitors students’ personal information in ways that invade their privacy, chill their First Amendment-protected activities, or inhibit their ability to learn.

All of these concerns can translate into real damage to students and families, so it’s important that schools are cautious and thoughtful about when and how they use algorithmic systems. To that end, we’ve written an issue brief to help educators navigate these concerns. It discusses the issues above in more detail and, more importantly, the steps schools and districts can take to protect their students when they use these systems, such as:

  • Keeping humans involved in decision processes;
  • Implementing data governance;
  • Regularly auditing the system (a simple sketch of one such check follows this list);
  • Creating plans for what to do if the system does cause harm;
  • And more!
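
To make the auditing step concrete, here is a minimal sketch of one check an audit might include: comparing how often a system flags students across subgroups. Everything here is an illustrative assumption on our part (the hypothetical records, the subgroup labels, and the 0.8 “four-fifths” threshold sometimes used as a rough disparate-impact screen); the brief itself does not prescribe a specific method.

    from collections import defaultdict

    # Hypothetical audit log: (student subgroup, whether the system flagged the student).
    records = [
        ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
        ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
    ]

    flagged = defaultdict(int)
    totals = defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += int(was_flagged)

    # Flag rate per subgroup, compared against the highest-rate subgroup.
    rates = {g: flagged[g] / totals[g] for g in totals}
    baseline = max(rates.values())

    for group, rate in sorted(rates.items()):
        ratio = rate / baseline
        status = "OK" if ratio >= 0.8 else "REVIEW: possible disparate impact"
        print(f"{group}: flag rate {rate:.0%}, ratio to highest group {ratio:.2f} -> {status}")

A real audit would cover much more, such as accuracy, data quality, and drift over time, but even a simple rate comparison like this can surface problems worth investigating.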

We hope this brief will help educators leverage some of the benefits of algorithmic systems while protecting their students from harm.

Read the Report