

Responsible Use of Data and Technology in Education: Managing Equity and Bias in Algorithmic Systems

PDF version here.

As schools incorporate more technology, they are also adopting more tools that rely on algorithmic systems. These tools are used for things like assigning children to schools and flagging students who are at risk of dropping out of school before graduating. During the COVID-19 pandemic, they have been used for tasks like enforcing social distancing requirements and assigning grades, as discussed below. As with all technology, it is critical that these systems be used responsibly by minimizing harm to students while supporting the educational mission of schools and education agencies.

What are Algorithmic Systems?

Algorithmic systems[1] are tools that rely on algorithms. Algorithms are processes performed by a computer to answer a question, make a decision, or carry out a task, often in domains that would traditionally have been handled by humans. For instance, a dropout early warning system might aim to determine whether a student is off-track academically. Automating these decisions can improve efficiency by freeing up educators’ time, and could benefit students if the systems make better decisions or catch patterns that humans may have missed. However, schools cannot disregard the risks that stem from automating decisions. These tools may further entrench existing biases, muddy questions of accountability, or miss signals humans would have noticed.
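
To make this concrete, many early warning systems are, at their core, a set of rules applied to each student’s records. The sketch below is purely illustrative; the field names, thresholds, and flag logic are assumptions invented for this example, not a description of any real product.

```python
# Illustrative sketch of a simple rules-based dropout early warning check.
# All field names and thresholds here are hypothetical examples.

def flag_dropout_risk(student: dict) -> bool:
    """Return True if the student should be flagged for follow-up by an educator."""
    attendance_rate = student.get("attendance_rate", 1.0)  # fraction of days attended
    gpa = student.get("gpa", 4.0)                          # current grade point average
    failed_courses = student.get("failed_courses", 0)      # courses failed this year

    # Each rule is a judgment call embedded in code: who gets flagged depends
    # entirely on the thresholds someone chose to write down.
    return attendance_rate < 0.90 or gpa < 2.0 or failed_courses >= 2


# Example use: the output is a signal for an educator to review, not a final decision.
student = {"attendance_rate": 0.85, "gpa": 2.6, "failed_courses": 1}
if flag_dropout_risk(student):
    print("Flag for counselor review")
```

Even a system this simple raises the questions discussed below: who chose the thresholds, what data feeds them, and who reviews the flags.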

How are They Being Used in Education During COVID-19?

Two examples of COVID-specific uses of algorithmic systems are enforcing social distancing requirements and assigning grades.

  • Social distancing: As schools seek safer ways to reopen, some are turning to systems that use computer vision—in which computers analyze images or videos and draw information from them—to determine whether people are adhering to social distancing guidelines and wearing masks. Others are considering a “welcome center” tool that scans people’s faces to check for a mask and takes their temperature as they enter a building. While these systems may seem like a useful way to stem the spread of COVID-19 in schools, they are also a potential vector for bias. Facial detection systems are known to exhibit racial bias and are less effective at detecting the faces of people with darker skin tones, particularly women of color. In this case, a higher error rate for students of color could result in disproportionate discipline of those students, or denial of entry into classrooms or school buildings at higher rates.
  • Grading: During the pandemic, some schools have relied on algorithmic systems to conduct grading in place of holding in-person exams. The International Baccalaureate (IB) used an algorithmic system that assigned students grades based in part on the grades of prior students at the candidate’s school, a factor that can end up serving as a proxy for other information like race, disability status, or income (a toy illustration of this dynamic follows this list). The model produced lower-than-expected scores for some students, leaving them frustrated and angry and, in some cases, jeopardizing their college admissions. The UK also used an algorithmic system to predict how students would have performed on its A-level exams, but ultimately reversed course after an outcry from students and families about worse-than-expected grades that disproportionately affected students from lower socioeconomic backgrounds.
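
To see how anchoring grades to a school’s past results can override an individual student’s work, consider a toy model that blends a teacher’s predicted grade with the school’s historical average. The weighting and all numbers below are invented for illustration; this is not the actual IB or UK formula.

```python
# Toy illustration of how weighting a school's historical results can pull down
# strong students at historically lower-scoring schools.
# The 50/50 weighting and all numbers are hypothetical, not the real models.

def blended_grade(teacher_prediction: float, school_historical_average: float,
                  weight_on_history: float = 0.5) -> float:
    """Blend an individual prediction with the school's historical average."""
    return (1 - weight_on_history) * teacher_prediction + weight_on_history * school_historical_average


# Two students with identical teacher predictions but different school histories:
print(blended_grade(teacher_prediction=6.5, school_historical_average=6.8))  # 6.65
print(blended_grade(teacher_prediction=6.5, school_historical_average=4.0))  # 5.25
```

The second student’s grade drops purely because of where they go to school, which is the embedded bias described in the best practices below.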

As these examples demonstrate, algorithmic systems have the potential to do real harm to students if they are not correctly evaluated, implemented, and monitored. 

What are Best Practices in Managing Equity and Bias in Algorithmic Systems During COVID-19?

School officials considering adopting algorithmic tools should think carefully about the goals they are trying to achieve, and whether and how the tool can address those goals while avoiding unintended consequences. Building a structure around how and whether algorithmic systems should be incorporated into the educational context can help avoid harm to students and families. The steps below can help issue-spot potential problems and ensure you have appropriate governance measures in place.

  • Before using a system, consider the potential impacts the system could have and whether it will be effective for the intended use: First, consider whether the algorithmic system is fit for the intended purpose. For instance, facial detection technology designed for low-stakes tasks (like tagging photos) may not have been tested rigorously enough for high-stakes uses like the mask detection system discussed above, where errors can affect students’ access to education. Document any risk of harm to students’ well-being and rights.
  • Govern appropriate uses of algorithmic systems: Because many algorithmic systems are only effective in the specific domain they were designed for — and even then may have limitations school officials should be aware of and account for — it is important to document the use cases for the system so it is not used in unintended ways. Avoid inaccurate or harmful results by understanding what the system is designed to do and how it works.
  • Engage stakeholders early and throughout use: Engaging stakeholders like students and families, teachers, and administrators will help ensure any concerns about the system are raised and addressed before the system is put into use. 
  • Implement data governance: Algorithmic systems typically consume and produce lots of data, such as grades consumed by a dropout early warning system and the assessment the system produces for a given student. This data, including the assessment score itself, may include sensitive personal information, and should be subject to data governance protections such as limited access.
  • Examine input data for bias: Ensure that any data used by the system for decision-making (including training data) is evaluated for bias, since biased data will produce biased results. One of the concerns with the IB algorithm was its reliance on past scores from a given school, meaning that students from historically lower-performing schools could receive lower grades regardless of their own performance, a classic example of embedded bias.
  • Keep humans in the loop: Keeping humans involved in algorithmic systems can help maintain nuance in decision-making when needed and provide a channel for accountability for the decisions. For a mask detection system, that might mean alerting a human when the system detects a missing mask, giving the human the ability to override an incorrect reading without requiring that person to clear every student coming into the classroom (a simple sketch of this pattern follows this list).
  • Conduct regular audits: Because many algorithmic systems evolve over time, as does the data that they use, it is important to test systems before using them, and to regularly audit the system to ensure that it is producing accurate and unbiased results (a minimal audit sketch also follows this list).
  • Create protocols for accountability and redress: Because students will be affected by algorithmic systems, it is important to have accessible structures in place for students and families to seek redress if they feel a decision is unfair or wrong. The IB process for appealing scores requires a fee and thus may be inaccessible to some students.
  • Ensure legal compliance: Of course, systems will need to comply with any applicable laws such as FERPA, civil rights laws, and state student privacy laws.
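
As an illustration of the “keep humans in the loop” practice above, a mask detection alert can be routed to a staff member for review rather than triggering an automatic consequence. Everything in this sketch (the function names, the detection result, the review queue) is a hypothetical example, not a real product’s interface.

```python
# Hypothetical sketch: route an automated "no mask detected" result to a human
# reviewer instead of acting on it automatically.
from dataclasses import dataclass


@dataclass
class DetectionResult:
    student_id: str
    mask_detected: bool
    confidence: float  # detector's self-reported confidence, 0.0 to 1.0


def handle_detection(result: DetectionResult, review_queue: list) -> None:
    """Queue negative results for a staff member to confirm or override."""
    if not result.mask_detected:
        # No automatic denial of entry or discipline: a person checks first.
        review_queue.append(result)


review_queue: list = []
handle_detection(DetectionResult("s-001", mask_detected=False, confidence=0.62), review_queue)
for item in review_queue:
    print(f"Staff review needed for {item.student_id} (detector confidence {item.confidence:.0%})")
```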
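
A basic audit of the kind described above can start with something as simple as comparing the system’s error rate across student groups on a regular schedule. The records and group labels below are fabricated for illustration; a real audit would need to be designed and interpreted with much more care.

```python
# Minimal sketch of an audit comparing a system's error rate across student groups.
# The records below are fabricated for illustration only.
from collections import defaultdict

records = [
    # (student_group, system_was_correct)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", True),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, correct in records:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")

# A persistent gap between groups is a signal to pause and investigate the
# system, not something to explain away.
```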

For more information, see CDT’s policy brief on algorithmic systems in education.

This is one in a series of information sheets designed to give practitioners clear, actionable guidance on how to most responsibly use technology in support of students. Find more of our work at cdt.org/student-privacy.

[1] “Algorithmic systems” is a broad umbrella term that includes systems that rely on technologies such as artificial intelligence (AI) or machine learning (ML), as well as more straightforward rules-based systems.
