U.S. Department of Education’s AI Toolkit and Nondiscrimination Resources Provide Lasting Guidance for Educators on AI and Civil Rights
In October 2024, the U.S. Department of Education (ED) released its Toolkit for Safe, Ethical, and Equitable AI Integration, pursuant to its obligation under President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Order mandated that ED create resources, policies, and guidance to address safe, responsible, and nondiscriminatory uses of AI in education. Drilling further into that mandate, ED’s Office for Civil Rights (OCR) then released additional guidance, titled Avoiding the Discriminatory Use of Artificial Intelligence, to further address the intersection of AI and civil rights in schools. Together, these resources provide much-needed guidance that CDT has strongly advocated for over the course of several years, particularly around: 1) reinforcing the intersection of AI and civil rights; 2) addressing deepfake nonconsensual intimate imagery (NCII); and 3) rebuilding trust in student work in the wake of the widespread availability of generative AI.
AI and Civil Rights
Although federal and state civil rights laws have existed for decades, school leaders have lacked clarity on how those laws apply to edtech. The toolkit discusses civil rights and algorithmic bias, describing the civil rights laws that bear on a school’s use and implementation of AI, while OCR’s guidance provides illustrative examples of such uses. These acknowledgements of the existing legal obligations that school leaders must fulfill echo CDT research and analysis published last year, which discussed specific edtech and AI use cases within the existing legal frameworks of the Civil Rights Act (including Titles IV and VI), Title IX of the Education Amendments of 1972, Section 504 of the Rehabilitation Act, and the Americans with Disabilities Act, which together aim to prevent discrimination on the basis of race, sex, and disability, among other characteristics. The resources specifically acknowledge the risks of bias and discrimination for use cases including student activity monitoring software, content filtering and blocking software (content moderation), facial/movement recognition software that relies on biometric information, generative AI detectors, and remote proctoring software.
Deepfake NCII
ED has previously found that certain online conduct can be actionable under its Title IX rule, including when non-consensual intimate imagery (NCII) creates a hostile learning environment on the basis of sex. In the toolkit, ED cited the recommendations to school leaders in CDT’s report, In Deep Trouble, which focuses on the issue of NCII (both authentic and deepfake) in K-12 schools. These recommendations include: 1) instituting a trauma-informed approach to reports of deepfake NCII; 2) ensuring privacy and confidentiality for the parties involved; and 3) creating a mechanism for supporting victims after the fact (e.g., counseling, resources about having content removed from online platforms, and resources on how to report the conduct to law enforcement). OCR’s nondiscrimination guidance also offered a hypothetical on deepfake NCII to illustrate what an insufficient response from a school might look like. Because this is a growing issue in schools that shows no signs of slowing down, ED’s guidance on this topic is particularly meaningful.
Academic Integrity and Generative AI
In addressing the equity risks posed by generative AI, ED calls out the widening trust gap between teachers and students driven by outsized fears of generative AI-facilitated academic dishonesty, and states that education leaders should seek evidence to ensure that protected groups of students are not disproportionately impacted. The toolkit also refers to CDT’s polling research on educator experiences with generative AI in the classroom, which touched on the use of AI detection tools, student discipline for suspected generative AI use, and the distrust these practices have caused between teachers and students. In a similar vein, the OCR resource highlights the potential discriminatory impact of generative AI detectors on English Learners, spelling this out as a potentially actionable scenario under Title VI.
Conclusion
For years, civil society has called on ED to provide policies, guidance, and enforcement around schools’ approaches to edtech and AI, especially as they relate to civil rights and marginalized students. The toolkit was a helpful first step; now, paired with ED’s resource on avoiding discrimination, it provides much-needed clarity on how civil rights laws apply to a variety of AI uses in schools. These resources also make clear that ED should take enforcement action against such uses, consistent with its commitment, made through the Department of Justice’s pledge, to enforce civil rights, fair competition, consumer protection, and equal opportunity laws in automated systems.
As we navigate a recent change in Administration, we hope to see these issues remain a priority for the Department in the years ahead. CDT remains committed to advocating for students’ privacy and civil rights in decisions about whether and how AI is used in schools. These resources, coupled with tailored guidance (e.g., model policies and best practices for preventing and responding to deepfake NCII) and active enforcement efforts, will give states and families a model to help ensure that all students have the opportunity to learn and grow in an environment free from discrimination and harassment.