AI Policy & Governance, Equity in Civic Technology
Looking Back at AI Guidance Across State Education Agencies and Looking Forward
This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and local AI governance efforts.
Artificial intelligence (AI) has shaken up the education sector, particularly since the public release of ChatGPT and other generative AI tools. School administrators, teachers, students, and parents have grappled with whether and how to use AI, amid fears about diminished student academic integrity and even more sinister concerns, such as the rising prevalence of deepfake non-consensual intimate imagery (NCII).
In response to AI taking classrooms by storm, the education agencies of over half of states (Alabama, Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawaii, Indiana, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, New Jersey, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Utah, Virginia, Washington, West Virginia, Wisconsin, Wyoming) and Puerto Rico have released guidance for districts and schools on the responsible use of AI in public education. These pieces of guidance vary in the types of AI systems they cover, with some focusing solely on generative AI and others encompassing AI more broadly. Analysis of current state education agencies’ (SEAs’) guidance reveals four primary trends:
- There is alignment on the potential benefits of AI in education.
- Education agencies acknowledge the base risks of AI use in schools.
- Across the board, states emphasize the need for human oversight and investment in AI literacy/education.
- As a whole, SEA guidance is missing critical topics related to AI, such as how to meaningfully engage communities on the issue and how to approach deepfakes.
Below, we detail these trends; highlight what SEAs can do to advance responsible, rights-respecting use of AI in education in light of these trends; and explore a few particularly promising examples of SEA AI guidance.
Trends in SEAs’ AI Guidance
- Alignment on the potential benefits of AI in education
Guidance out of SEAs consistently recognizes the following four benefits of using and teaching AI in the classroom:
- Personalized learning: At least 17 SEAs cite personalized learning for students as a benefit of AI in education. Colorado’s AI roadmap, for instance, states that AI can support students by “tailor[ing] educational content to match each student’s learning pace and style and helping students learn more efficiently by offering individualized resources and strategies that align with their learning goals, styles, and needs.” Another example is Arizona’s generative AI guidance document, which highlights three different methods of personalized learning opportunities for students: interactive learning, AI coaching, and writing enhancement.
- Expediting workflow and streamlining administrative processes: Roughly 13 SEAs mention AI’s potential benefit of speeding up or even automating tasks, such as writing emails or creating presentations. Washington mentions “streamlin[ing] operational and administrative functions” as an opportunity for AI use in education, and similarly, Oklahoma states that educators can use AI to “increase efficiency and productivity” through means like automating administrative tasks, thus freeing up time to focus on teaching.
- Preparing students for the future workforce: Around 11 states discuss teaching AI and AI literacy to students now as essential in equipping them for future career opportunities, often predicting that AI tools will revolutionize the workforce. Indiana’s AI in education guidance states that “the ability to use and understand AI effectively is critical to a future where students will enroll in higher education, enlist in the military, or seek employment in the workforce.” Similarly, Delaware’s generative AI in education guidance explains that “students who learn how AI works are better prepared for future careers in a wide range of industries,” due to developing the skills of computational thinking, analyzing data critically, and evaluating the effectiveness of solutions.
- Making education more accessible to underrepresented groups: At least 11 of the AI in education guidance documents tout AI as making education more accessible, especially for student populations like those with disabilities and English learners. For example, California’s Department of Education and Minnesota’s Department of Education both note that AI can improve access for marginalized populations through functions such as language translation assistance and generating audio descriptions for students with disabilities. In addition to these communities of students, North Dakota’s Department of Public Instruction also mentions that AI tools can make education more accessible for students in rural areas and students from economically disadvantaged backgrounds.
- Acknowledgement of the base risks of AI use in schools
The majority of SEA guidance documents enumerate commonly recognized risks of AI in education, namely:
- Privacy harms: Roughly 20 states explicitly mention privacy harms as a risk or concern related to implementation of AI in education, especially as it pertains to personally identifiable information. For example, Hawaii’s AI in education guidance geared towards students urges them to be vigilant about protecting their privacy by avoiding sharing sensitive personal information with AI tools, such as their address and phone number. Another example is Mississippi’s Department of Education, which highlights that AI can “increase data privacy and security risks depending on the [vendor’s] privacy and data sharing policies.”
- Inaccuracy of AI-generated outputs: At least 16 SEAs express concerns about AI tools’ ability to produce accurate information, often citing the common generative AI risk of hallucination. North Dakota’s Department of Public Instruction encourages high schoolers to learn about the limitations of AI and to have a “healthy skepticism” of tools due, in part, to the risk of inaccuracies in information. Along the same lines, Wyoming’s AI in education guidance affirms that students are always responsible for checking the accuracy of AI-generated content, and that school staff and students should critically evaluate all AI outputs.
- Reduction of students’ critical thinking skills: Around 10 SEAs discuss the risk of students becoming overreliant on AI tools, thus diminishing their necessary critical thinking skills. Puerto Rico’s Department of Education cites the risk of students and staff becoming dependent on AI tools, which can reduce skills such as critical thinking, creativity, independent decision-making, and quality of teaching. Another example is Arizona’s generative AI guidance, stating that overreliance on AI is a risk for both students and teachers – technology cannot replace the deep knowledge teachers have of their students, nor can it “improve student learning if it is used as a crutch.”
- Perpetuation of bias: At least 22 states cite perpetuating bias as a risk of AI tools in the classroom. One of the ethical considerations that Louisiana puts forth is “avoiding potential biases in algorithms and data” when possible and placing safeguards during AI implementation to address bias. Virginia’s AI guidelines also affirm that the use of AI in education should do no harm, including “ensuring that algorithms are not based on inherent biases that lead to discriminatory outcomes.”
- Unreliability of AI content detection tools: Many states also express skepticism about the use of AI content detection tools by educators to combat plagiarism, in part due to their unproven efficacy and risk of erroneously flagging non-native English speakers. For example, West Virginia’s Department of Education recommends that teachers do not use AI content detectors “due to concerns about their reliability,” and North Carolina’s generative AI guidance notes that AI detection tools “often create false positives, penalizing non-native speakers and creative writing styles.”
- Emphasis on the need for human oversight and investment in AI literacy/education
Across the board, SEAs also stress the importance of taking a human-centric approach to AI use in the classroom – emphasizing that AI is just a tool and users are still responsible for the decisions they make or work they submit. For example, the Georgia Department of Education’s AI guidance asserts that human oversight is critical and that “final decision-making should always involve human judgment.” Similarly, the Kentucky Department of Education emphasizes how vital having a human in the loop is, especially when AI makes decisions that could have significant consequences for individuals or society.
To equip school stakeholders with the skills necessary to be responsible users of AI, many SEA guidance documents also highlight the need for AI literacy and professional development and training for teachers. Colorado’s AI roadmap frequently mentions the need for both teachers and students to be given AI literacy education so that students are prepared to enter the future “AI-driven world.” The Oregon Department of Education’s AI guidance continually mentions the need for educators to be trained to address the equity impacts of generative AI, including training on topics like combating plagiarism and spotting inaccuracies in AI outputs.
- Exclusion of critical topics, such as meaningful community engagement and deepfakes
Creating mechanisms for robust community engagement allows districts and schools to make more informed decisions about AI procurement to ensure systems and their implementations directly respond to the needs and concerns of those the tools impact most. Some pieces of guidance mention including parents in conversations about AI adoption and implementation, but only in a one-way exchange (e.g., the school provides parents resources/information on how AI will be used safely in the classroom). North Carolina, West Virginia, Utah, Georgia, Connecticut, and Louisiana are the only states that talk about more meaningful engagement, like obtaining parental consent for students using AI tools at school, or including parents and other external stakeholders in the policymaking and decision-making processes. For example, Connecticut’s AI guidance states that parents and community members may have questions about AI use in their children’s school, so, “Leaders may consider forming an advisory around the use of technology generally and AI tools specifically to encourage a culture of learning and transparency, as well as to tap the expertise that community experts may offer.”
One of the most pernicious uses of AI that has become a large issue in schools across the country is the creation of deepfakes and deepfake NCII. CDT research has shown that in the 2023-2024 school year, around 40 percent of students said that they knew about a deepfake depicting someone associated with their school, and 15 percent of students reported that they knew about AI-generated deepfake NCII that depicted individuals associated with their school. The harms of using AI for bullying or harassment, including the creation of deepfakes and deepfake NCII, are mentioned in only roughly four of the guidance documents – those from Utah, Washington, West Virginia, and Connecticut. Utah’s AI in education guidance expresses that schools should prohibit students from “using AI tools to manipulate media to impersonate others for bullying, harassment, or any form of intimidation,” and in the same vein, Washington’s Office of Superintendent of Public Instruction explicitly mentions that users should never utilize AI to “create misleading or inappropriate content, take someone’s likeness without permission, or harm humans or the community at large.”
What SEAs Can Do to Advance Responsible AI Use in Education
Our analysis of the strengths and weaknesses of current SEAs’ AI guidance documents points to the following priorities for effective guidance:
- Improve the form of the guidance itself
- Tailor guidance for specific audiences: School administrators, teachers, students, and parents each have unique roles in ensuring AI is implemented and used responsibly, thus making it necessary for guidance to clearly define the benefits, risks, risk mitigation strategies, and available resources specific to each audience. Mississippi’s guidance serves as a helpful example of segmenting recommendations for specific groups of school stakeholders (e.g., students, teachers, and school administrators).
- Ensure guidance is accessible: SEAs should ensure that guidance documents are written in plain language so that they are more accessible generally, but also specifically for individuals with disabilities. In addition, guidance released online should be in compliance with the Web Content Accessibility Guidelines as required by Title II of the Americans with Disabilities Act.
- Publish guidance publicly: Making guidance publicly available for all school stakeholders is key to building accountability mechanisms, strengthening community education on AI, and building trust. It also allows states, districts, and schools to learn from one another’s approaches to AI policymaking, thus strengthening efforts to ensure responsible AI use in classrooms across the country.
- Provide additional clarity on commonly mentioned topics
- Promote transparency and disclosure of AI use and risk management practices: Students, parents, and other community members are often unaware of the ways that AI is being used in their districts and schools. To strengthen trust and build accountability mechanisms, SEAs should encourage public sharing about the AI tools being used, including the purposes for their use and whether they process student data. On the same front, guidance should also include audience-specific best practices to ensure students’ privacy, security, and civil rights are protected.
- Include best practices for human oversight: The majority of current SEA guidance recognizes the importance of having a “human in the loop” when it comes to AI, but few get specific on what that means in practice. Guidance should include clear, audience-specific examples to showcase how individuals can employ the most effective human oversight strategies.
- Be specific about what should be included in AI literacy/training programs: SEAs recognize the importance of AI literacy and training for school administrators, teachers, and students, but few pieces of guidance include what topics should be covered to best equip school stakeholders with the skills needed to be responsible AI users. Guidance can identify priority areas for these AI literacy/training programs, such as training teachers on how to respond when a student is accused of plagiarism or how students can verify the output of generative AI tools.
- Address important topics that are missing entirely
- Incorporate community engagement throughout the AI lifecycle: Beyond school staff, students, parents, and other community members hold vital expertise, including concerns and past experiences, that should be considered during the AI policymaking and decision-making process.
- Articulate the risks of deepfake NCII: As previously mentioned, this topic was missing from most SEA guidance. This should be included, with a particular focus on encouraging implementation of policies that address the largest gaps: investing in prevention and supporting victims.
Promising Examples of SEA AI Guidance
Current AI guidance from SEAs contains strengths and weaknesses, but three states stand out in particular for their detail and unique approaches:
North Carolina Department of Public Instruction
North Carolina’s generative AI guidance stands out for five key reasons:
- Prioritizes community engagement: The guidance discusses the importance of community engagement when districts and schools are creating generative AI guidelines. It points out that having community expertise from groups like parents establishes a firm foundation for responsible generative AI implementation.
- Encourages comprehensive AI literacy: The state encourages local education agencies (LEAs) to develop a comprehensive AI literacy program for staff to build a “common understanding and common language,” laying the groundwork for responsible use of generative AI in the classroom.
- Provides actionable examples for school stakeholders: The guidance gives clear examples for concepts, such as how teachers can redesign assignments to combat cheating and a step-by-step academic integrity guide for students.
- Highlights the benefit of built-for-purpose AI models: It explains that built-for-education tools, or built-for-purpose generative AI models, may be better options for districts or schools concerned with privacy.
- Encourages transparency and accountability from generative AI vendors: The guidance provides questions for districts or schools to ask vendors when exploring various generative AI tools. One example of a question included to assess “evidence of impact” is, “Are there any examples, metrics, and/or case studies of positive impact in similar settings?”
Kentucky Department of Education
Three details of Kentucky’s AI guidance make it a strong example to highlight:
- Positions the SEA as a centralized resource for AI: It is one of the only pieces of guidance that positions the SEA as a resource and thought partner to districts that are creating their own AI policies. As part of the Kentucky Department of Education’s mission, the guidance states that the Department is committed to encouraging districts and schools by providing guidance and support and engaging districts and schools by fostering environments of knowledge-sharing.
- Provides actionable steps for teachers to ensure responsible AI use: Similar to North Carolina, it provides guiding questions for teachers when considering implementing AI in the classroom. One sample question that teachers can ask is, “Am I feeding any sensitive or personal information/data to an AI that it can use or share with unauthorized people in the future?”
- Prioritizes transparency: The guidance prioritizes transparency by encouraging districts and schools to provide understandable information to parents, teachers, and students on how an AI tool being used is making decisions or storing their data, and what avenues are available to hold systems accountable if errors arise.
Alabama State Department of Education
Alabama’s AI policy template stands out for four primary aspects:
- Promotes consistent AI policies: Alabama takes a unique approach by creating a customizable AI policy template for LEAs to use and adapt. This allows for conceptual consistency in AI policymaking, while also leaving room for LEAs to include additional details necessary to govern AI use in their unique contexts.
- Recognizes the importance of the procurement process: The policy template prioritizes the AI procurement process, by including strong language about what details should be included in vendor contracts. The policy template points out two key statements that LEAs should get written certification from contractors that they will comply with: that “the AI model has been pre-trained and no data is being used to train a model to be used in the development of a new product,” and that “they have used a human-in-the-loop strategy during development, have taken steps to minimize bias as much as possible in the data selection process and algorithm development, and the results have met the expected outcomes.”
- Provides detailed risk management practices: It gets very specific about risk management practices that LEAs should adhere to. A first key detail included in the template is that the LEA will conduct compliance audits of data used in AI systems, and that if changes need to be made to a system, the contractor will be required to submit a corrective action plan. Another strong detail included is that the LEA must establish performance metrics to evaluate the AI system procured to ensure that the system works as intended. Finally, there is language included that, as part of their risk management framework, the LEA should comply with the National Institute of Standards and Technology’s AI Risk Management Framework (RMF), conduct annual audits to ensure they are in compliance with the RMF, identify risks and share them with vendors to create a remediation plan, and maintain a risk register for all AI systems.
- Calls out the unique risks of facial recognition technology in schools: Alabama recognizes the specific risks of cameras with AI systems (or facial recognition technologies) on campuses and in classrooms, explicitly stating that LEAs need to be in compliance with federal and state laws.
Conclusion
In the past few years, seemingly endless resources and information have become available to education leaders, aiming to help guide AI implementation and use. Although more information can be useful to navigate this emerging technology, it has created an overwhelming environment, making it difficult to determine what is best practice and implying that AI integration is inevitable.
As SEAs continue to develop and implement AI guidance in 2025, it is critical to first be clear that AI may not be the best solution to the problem that an education agency or school is attempting to solve, and second, affirm what “responsible” use of AI in education means – creating a governance framework that allows AI tools to enhance children’s educational experiences while protecting their privacy and civil rights at the same time.