Protecting Student Privacy and Ensuring Equitable Algorithmic Systems in Education

Using student data responsibly is about more than adhering to legal requirements — it also requires schools and their partners to use data in ways that help students and to guard against using technology to discriminate against, stigmatize, or otherwise harm them and their families. 

As the technology and data used in education continue to evolve, discrimination and bias have taken on new forms, including in the use of algorithms. This summer, the Center for Democracy & Technology (CDT) submitted two sets of comments (here and here) to the U.S. Department of Education (ED), asking it to address the discriminatory effects of some algorithmic systems on marginalized groups of students.

An algorithm is a process performed by a computer to answer a question or carry out a task, such as sorting students into schools, analyzing social media posts, or flagging students at risk of dropping out. Algorithms, however, are not neutral decision-makers. Subjective human judgments dictate an algorithm’s purpose, design, and function, and shape its outcomes. Moreover, the data used to train an algorithm may itself embed bias.
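To make this concrete, the toy sketch below (invented groups, rates, and numbers; no real product or school system) shows how bias baked into training labels carries through to whatever model is trained on them: if past human reviewers flagged one group’s posts more often, a model fit to those decisions reproduces the gap even though the underlying signal is identical across groups.

```python
# A deliberately toy illustration -- every group, rate, and number here is
# invented for this sketch and does not describe any real school system.
import random

random.seed(42)
GROUPS = ["A", "B"]

def historical_label(severity: float, group: str) -> bool:
    """Simulated past human decisions: group B was over-flagged at the
    same severity, so the bias lives in the labels themselves."""
    bias = 0.25 if group == "B" else 0.0
    return random.random() < min(1.0, 0.3 * severity + bias)

# Assemble a hypothetical training set of (severity, group, label).
train = []
for _ in range(20_000):
    group = random.choice(GROUPS)
    severity = random.random()  # identical distribution for both groups
    train.append((severity, group, historical_label(severity, group)))

# "Train" the simplest possible model: per-group flag frequency. A real
# classifier fit to these labels would learn the same gap, through the
# group feature directly or through any proxy correlated with it.
for g in GROUPS:
    labels = [flagged for _, grp, flagged in train if grp == g]
    print(g, round(sum(labels) / len(labels), 2))  # ~0.15 for A, ~0.40 for B
```

The point is not the arithmetic but the mechanism: the model never “decides” to discriminate; it faithfully learns from decisions that already did.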

Algorithmic bias has very real effects on students, especially those in marginalized groups such as Black and LGBTQ students and students with disabilities. Consider:

  • Schools are using algorithmic systems to scan students’ messages, documents, and social media posts for signs of self-harm or bullying. However, some of these systems have been shown to disproportionately flag terms associated with certain groups of students, such as LGBTQ students and Black and Muslim students, exposing them to increased scrutiny and surveillance.
  • Remote proctoring software used to monitor students taking exams online often uses algorithmic technology to detect students’ gaze, movements, or sounds in the room. These technologies struggle to recognize students of color, especially Black students, and disproportionately flag students with disabilities, whose movements or accommodations the software may interpret as suspicious.
  • Schools are increasingly using facial recognition technology, which relies on algorithmic software, with the hope of protecting student safety, monitoring unusual behavior, or enforcing health and safety measures such as social distancing. Facial recognition, however, disproportionately misidentifies Black people and transgender or gender non-conforming people, and may further marginalize them by subjecting them to increased interactions with police and school disciplinary systems.

To combat these harms, CDT is calling on ED to begin addressing the discriminatory effects of some algorithmic systems. Algorithmic bias may run afoul not only of the principles of responsible data use, but also of students’ legal rights to non-discrimination under Title VI of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, and Title II of the Americans with Disabilities Act. Those laws broadly protect students from discrimination due to their “race, color, [or] national origin,” sex, gender identity, or disability status. Those protections apply not only to explicit discrimination but also to an “otherwise neutral policy or practice” that has a “disproportionate impact” due to race, sex, gender identity, or disability.

Because algorithmic systems are increasingly used throughout education and can both benefit and harm students and families, it is important for ED to examine questions such as which types of algorithmic systems can have disparate impacts on marginalized students, what categories of training data can lead to discriminatory outcomes, and what mitigating steps can reduce the potential for discrimination. Informed by research and fact-finding, ED should consider providing resources for schools, creating guidance, and/or engaging in rulemaking to help detect, mitigate, and avoid algorithmic bias. The scope of any guidance or rules should be appropriately tailored to the harms algorithmic systems pose.
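One concrete mitigating step, for example, is routine auditing of outcome rates across student groups. The minimal sketch below illustrates such a check using a selection-rate ratio modeled on the “four-fifths” heuristic from employment law; the data, group names, and threshold are illustrative assumptions, not ED guidance or any agency’s actual methodology.

```python
from collections import defaultdict

def flag_rate_ratios(records):
    """records: iterable of (group, was_flagged) pairs.
    Returns each group's flag rate and its ratio to the lowest group rate."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    rates = {g: flagged[g] / total[g] for g in total}
    baseline = min(rates.values())
    return {g: (rate, rate / baseline) for g, rate in rates.items()}

# Hypothetical audit of a proctoring tool's "suspicious behavior" flags.
# Being flagged is a harm, so the usual four-fifths framing is inverted:
# a flag rate more than 1.25x the lowest group's rate signals disparity.
sample = (
    [("disability", True)] * 30 + [("disability", False)] * 70
    + [("no_disability", True)] * 12 + [("no_disability", False)] * 88
)
for group, (rate, ratio) in flag_rate_ratios(sample).items():
    warn = "  <-- exceeds 4/5-style threshold" if ratio > 1.25 else ""
    print(f"{group}: flag rate {rate:.0%}, {ratio:.2f}x baseline{warn}")
```

A check like this only surfaces a disparity; interpreting and remedying it still requires the kind of research and fact-finding the comments ask ED to lead.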

CDT applauds the Department of Education’s efforts to protect students’ rights to non-discrimination and ethical data use, and we look forward to working with the Department to ensure all students have the opportunity for an equitable education.