
State Government Use of AI: The Opportunities of Executive Action in 2025

Following the release of ChatGPT in 2022, when artificial intelligence (AI) – and generative AI more specifically – captivated the public consciousness, state legislatures and governors across the country moved to regulate its use in government in the absence of Congressional action. Efforts to regulate state government use of AI have primarily taken the form of public sector-specific legislation (which CDT analyzes here) and executive orders (EOs).

So far, thirteen states (Alabama, California, Maryland, Massachusetts, Mississippi, New Jersey, Oklahoma, Oregon, Pennsylvania, Rhode Island, Virginia, Washington, Wisconsin) and D.C. have issued EOs that primarily address whether and how AI is or should be used in state government. Analysis of these EOs reveals four main trends:

  1. States do not have a consistent definition of AI.
  2. Current state EOs acknowledge the potential harms of AI in the delivery of public services.
  3. The majority of these EOs suggest pilot projects as a starting point for government agencies.
  4. States are prioritizing AI governance and planning prior to implementation.

Digging Into the Trends of State AI EOs

Lack a consistent definition of AI

State EOs vary in their focus — many address only generative AI rather than AI more broadly. But regardless of focus, states largely utilize their own definitions of AI, independent of the federal government. Maryland and Massachusetts are the only states with EOs that draw from an established federal definition of AI, using text directly from the National Artificial Intelligence Initiative (NAII) Act of 2020.

Acknowledge the potential harms of AI to individuals

The majority of state EOs recognize that, although AI holds promise to deliver public services more efficiently, AI systems pose risks to individuals’ privacy, security, and civil rights given the highly sensitive nature of their training data and the high-stakes decisions they affect. To this end, many state EOs include language about using AI to deliver services or benefits more efficiently, but responsibly. For example, California’s EO states that the state “seeks to realize the potential benefits of [generative AI] for the good of all California residents, through the development and deployment of [generative AI] tools that improve the equitable and timely delivery of services, while balancing the benefits and risks of these new technologies.” 

Almost all state EOs also incorporate concepts associated with protecting individuals’ civil rights, but only three states explicitly name civil rights as a priority — Washington, Oregon, and Maryland. Maryland’s EO sets out principles that must guide state agencies’ use of AI, including “fairness and equity,” and states that “the State’s use of AI must take into account the fact that AI systems can perpetuate harmful biases, and take steps to mitigate those risks, in order to avoid discrimination or disparate impact to individuals or communities based on their [legally protected characteristics].”

Suggest AI pilot projects as a starting point for agencies

Another major element seen across most state EOs is the encouragement of pilot projects to test how AI can best serve state government. In many cases, however, EOs don’t explicitly identify desired outcomes — Alabama and California are the only states to specify what the goals of agencies’ pilot projects should be: projects should show how generative AI can improve citizens’ experience with and access to government services and support state employees in the performance of their duties.

Prioritize AI governance and strategy

Finally, many of the state EOs create task forces to understand the current uses of AI in state government, effectively creating a centralized body to guide each state’s approach to AI exploration and implementation. EOs that establish task forces define who should be included, but each state varies in its approach. For example, individuals in senior roles across agencies, such as the Chief Technology Officer or the Secretary of Labor, make up the bulk of task force members in Maryland, New Jersey, and Pennsylvania, while the remaining states leave it to the Governor or members of the State House to appoint the majority of their task forces.

The goal of these AI task forces generally is to provide recommendations for how agencies should proceed in a number of areas related to AI implementation. One example is Pennsylvania’s EO, which tasks its AI Governing Board with making recommendations when agencies request to use a generative AI tool “based upon a review process that evaluates the technology’s bias and security, and whether the agency’s requested use of generative AI adheres to the values set forward” in the EO. Another example is New Jersey’s EO, which mandates that the task force study emerging AI tools and give “recommendations to identify government actions appropriate to encourage the ethical and responsible use of AI technologies” in a final, publicly available report to the Governor.

Promising Examples of State AI EOs

While each EO has strengths and weaknesses, a few stand out for their scope, specificity, and focus on protecting individuals from AI harms:

Washington EO 24-01

Five primary aspects of Washington’s EO stand out:

  • First, it defines a “high-risk” generative AI system to give agencies a common understanding of what use cases may most acutely impact the privacy and civil rights of individuals. 
  • Second, Washington’s EO uses federal guidance as a starting point — it directs that guidelines for public sector use, procurement, and ongoing monitoring draw from the Biden Administration’s AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. It also mandates that vendors who are providing AI systems for high-risk uses certify that they have implemented a governance program consistent with NIST’s framework. 
  • Third, the EO prioritizes protecting marginalized communities, who may be most impacted by AI harms in the delivery of public services. Washington’s AI governance body, Consolidated Technology Services (WaTech), is required to develop and publish guidelines for agencies to analyze the impact that generative AI may have on vulnerable communities, and the Office of Equity is assigned to develop and implement a framework for accountability on this topic. 
  • Fourth, Washington’s EO recognizes the power of government AI procurement — the Department of Enterprise Services is required to update the state’s procurement and contract templates to meet the generative AI moment. 
  • And finally, the EO requires WaTech to produce guidance on risk assessments for the deployment of high-risk generative AI systems, including evaluations of a system’s fitness for its intended purpose and the likelihood of discriminatory outcomes.

California EO N-12-23

Four details of California’s EO make it a particularly strong example:

  • First, the EO specifically directs government agencies to ensure ethical outcomes for marginalized communities when using AI. The Government Operations Agency, the California Department of Technology, and the Office of Data and Innovation, in partnership with other state agencies, are required to develop guidelines for how agencies should analyze the impact of generative AI on marginalized communities, “including criteria to evaluate equitable outcomes in deployment and implementation of high-risk use cases.” 
  • Second, California’s EO recognizes the importance of AI procurement, requiring an update to existing procurement terms, which has already been released. 
  • Third, the EO takes positive steps towards transparency and documentation by requiring state agencies to create and share inventories of all high-risk uses.
  • And finally, California’s EO uniquely requires the Government Operations Agency, the California Department of Human Resources, and the Labor and Workforce Development Agency, in partnership with state government employees or organizations that represent them, to develop criteria for measuring the impact of generative AI on the government workforce. Essentially, agencies must provide evidence that acquiring new generative AI tools will add value to operations and the delivery of public services.

Pennsylvania EO 2023-19

Pennsylvania’s EO stands out for four primary reasons:

  • First, it uniquely states that the development of generative AI policies should not “overly burden end users or agencies,” meaning that guardrails put on the use of generative AI must be reasonable and not detract from the goal of responsibly and more efficiently delivering public services. 
  • Second, Pennsylvania’s EO prioritizes transparency by requiring state agencies to publicly disclose when generative AI is used in a service and if bias testing on the tool has been completed. 
  • Third, as in Washington and California, the EO recognizes the importance of procurement by obligating the AI Governing Board to work with the Office of Administration to develop procurement recommendations for generative AI products. 
  • Lastly, Pennsylvania’s EO specifically identifies community engagement as a vital tool for feedback on the government’s use of generative AI.

What Governors Can Do To Advance Responsible AI Use in the Public Sector

Based on our analysis of current state AI EOs, Governors should incorporate several priorities into their actions on this issue:

  • Align definitions of AI with cross-state government bodies/agencies: Developing consistent definitions of AI across government provides clarity and a common understanding of which tools or systems are subject to the guidelines set forth by an EO.
  • Define clear priorities and goals for the adoption and use of AI within state government: Providing a uniform vision for agencies in their exploration and implementation of AI ensures that these tools are deployed with clear objectives that align with constituent needs from the outset. These priorities should align with existing state programs, laws, and regulations.
  • Include robust risk management practices: Use of AI in the delivery of public services carries significant risks due to the sensitive data used and the consequences of potential errors. State agencies should be required to implement appropriate risk management measures, such as pre- and post-deployment monitoring.
  • Promote transparency and disclosure by requiring AI inventories: To build trust with constituents and ensure adequate internal and external visibility into the scope of government AI use, EOs should require annual, publicly available inventories of how state agencies are using AI regardless of the use cases’ risk level.
  • Ensure pilot projects have clear goals and appropriate safeguards: If pilot projects are part of the broader AI strategy, state agencies should have a clear understanding of the desired outcomes and necessary safeguards, with requirements such as not inputting sensitive data and implementing periodic monitoring to discern whether the system is working as intended.
  • Ensure task forces contain senior level and cross agency members: Individuals in senior technology, privacy, accessibility, and civil rights positions (such as Chief Data Officers, Chief Privacy Officers, Chief Accessibility Officers, and Attorneys General) have the necessary expertise to provide input. Having these senior individuals on a task force can help ensure that decisions and recommendations made by the task force are appropriately incorporated by agencies. Including representatives of cross governmental agencies ensures that different perspectives and voices are heard in the important process of AI governance and planning.
  • Incorporate community engagement requirements: Hearing directly from experts and impacted groups strengthens public trust and ensures that government use of AI is directly responsive to the needs and concerns of the people it serves.

With the rapid evolution of AI and the frenzied push for governments to adopt AI systems, EOs are an important lever for governors to establish responsible practices across agencies. In 2025, governors have an unprecedented window of opportunity to determine whether and how AI is integrated in the public sector in ways that protect individuals and their families.