AI Policy & Governance, Equity in Civic Technology, Privacy & Data
Algorithmic Systems in Education: Incorporating Equity and Fairness When Using Student Data
Some K-12 school districts are beginning to use algorithmic systems to assist in critical decisions affecting students’ lives and education. Districts have already integrated algorithms into decision-making processes for assigning students to schools, keeping schools and students safe, and intervening to prevent students from dropping out, and a growing industry of artificial intelligence startups is marketing products to educational agencies and institutions. These systems stand to significantly affect students’ learning environments, well-being, and opportunities. Without appropriate safeguards, however, some algorithmic systems could pose risks to students’ privacy, free expression, and civil rights.
This issue brief provides key information and guidance about algorithms in the K-12 context for education practitioners, school districts, policymakers, developers, and families, with the goal of helping all stakeholders make informed and rights-respecting choices. It also discusses important considerations around the use of algorithmic systems, including accuracy and limitations; transparency and explanation; and fairness and equity.
To address these considerations, education leaders and the companies that work with them should take the following actions when designing or procuring an algorithmic system:
- Assess the impact of the system and document its intended use: Consider and document the intended outcomes of the system and the risk of harm to students’ well-being and rights.
- Engage stakeholders early and throughout implementation: Algorithmic systems that affect students and parents should be designed with input from those communities and other relevant experts.
- Examine input data for bias: Bias in input data will lead to bias in outcomes, so it is critical to understand and eliminate or mitigate those biases before the system is deployed.
- Document best practices and guidelines for future use: Future users need to know the appropriate contexts and uses for the system and its limitations.
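The bias-examination step above can be partially automated with a simple pre-deployment check of outcome rates across demographic groups. The sketch below is illustrative only: the field names (`group`, `flagged_at_risk`), the sample records, and the 0.8 threshold (a common "four-fifths" rule of thumb borrowed from employment-discrimination guidance) are assumptions, not drawn from any particular district's data or system.

```python
# Sketch: checking a dataset for outcome-rate gaps across demographic
# groups before an algorithmic system is trained or deployed.
# Field names, sample data, and the 0.8 threshold are hypothetical.
from collections import Counter

def outcome_rates_by_group(records, group_key, outcome_key):
    """Return the fraction of positive outcomes for each group."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += bool(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical records: which students a prior process flagged "at risk"
records = [
    {"group": "A", "flagged_at_risk": 1}, {"group": "A", "flagged_at_risk": 0},
    {"group": "A", "flagged_at_risk": 0}, {"group": "A", "flagged_at_risk": 0},
    {"group": "B", "flagged_at_risk": 1}, {"group": "B", "flagged_at_risk": 1},
    {"group": "B", "flagged_at_risk": 0}, {"group": "B", "flagged_at_risk": 0},
]
rates = outcome_rates_by_group(records, "group", "flagged_at_risk")
ratio = disparate_impact_ratio(rates)
if ratio < 0.8:  # illustrative threshold; flags the gap for human review
    print(f"Potential bias: ratio {ratio:.2f} is below the 0.8 threshold")
```

A check like this does not establish fairness on its own; it surfaces disparities that practitioners and domain experts then need to investigate before deployment.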
Once an algorithmic system is created and implemented, the following actions are critical to ensuring these systems are meeting their intended outcomes and not causing harm to students:
- Keep humans in the loop: Algorithmic decision-making systems still require human involvement to preserve nuance and context in decision-making processes.
- Implement data governance: Because algorithmic systems consume and produce large amounts of data, a governance plan is needed to address issues like retention limits, deletion policies, and access controls.
- Conduct regular audits: Audits of the algorithmic system can help ensure that the system is working as expected and not causing discriminatory outcomes or other unexpected harm.
- Ensure ongoing communication with stakeholders: Regular communication with stakeholders can help the community learn about, provide feedback on, and raise concerns about the systems that affect their schools.
- Govern appropriate uses of algorithmic systems: Using an algorithm outside of the purposes and contexts for which it was designed and tested can yield unexpected, inaccurate, and potentially harmful results.
- Create strategies for accountability and redress: Algorithmic systems will make errors, so the educational institutions employing them will benefit from having plans and policies to catch and correct errors, receive and review reports of incorrect decisions, and provide appropriate redress to students or others harmed by incorrect or unfair decisions.
- Ensure legal compliance: While legal compliance alone is not enough to ensure that algorithmic systems are fair and appropriate, these systems must meet the same legal standards and processes that govern other types of decision-making, including FERPA and civil rights protections.
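Some of the operational steps above can be expressed as simple automated checks. As one example, the retention limits called for in a data governance plan can be enforced by a routine scan that flags records past their retention period for deletion review. This is a minimal sketch assuming a hypothetical three-year retention period and record layout; real districts would need to align any such period with their own policies and legal obligations.

```python
# Sketch: a minimal retention check for a data-governance plan.
# The 3-year retention period and record fields are illustrative assumptions.
from datetime import date, timedelta

RETENTION = timedelta(days=365 * 3)  # hypothetical retention limit

def records_to_delete(records, today):
    """Return ids of records whose last activity exceeds the retention limit."""
    return [r["id"] for r in records if today - r["last_activity"] > RETENTION]

# Hypothetical student records with last-activity dates
records = [
    {"id": "s-001", "last_activity": date(2019, 5, 1)},
    {"id": "s-002", "last_activity": date(2023, 9, 1)},
]
expired = records_to_delete(records, date(2024, 1, 15))
# "expired" now lists records due for deletion review under this policy
```

In practice a check like this would feed a human-reviewed deletion workflow rather than delete records automatically, consistent with the keep-humans-in-the-loop recommendation above.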