
Regulating Public Sector AI: Emerging Trends in State Legislation

In light of Congressional inaction and the increasing use of artificial intelligence (AI) by public agencies, states have an important role to play in ensuring that government uses of AI are effective, safe, responsible, and rights-protecting. AI offers potential benefits such as improved customer service and increased efficiency, and legislation is often designed to promote such uses in a manner that is trustworthy and transparent. Many state lawmakers have already acknowledged this as an important subject of legislation, and governors have taken on the issue through executive action.

During the 2024 state legislative session alone, state legislatures introduced over 40 bills specifically focused on public sector uses of AI, 12 of which were passed into law (California’s SB 896, Delaware’s HB 333, Florida’s SB 1680, Indiana’s SB 150, Maryland’s SB 818, New Hampshire’s HB 1688, New York’s SB 7543, Pennsylvania’s HR 170, Tennessee’s HB 2325, Virginia’s SB 487, Washington’s SB 5838, and West Virginia’s HB 5690). This trend builds on legislation passed during the 2023 legislative session, which included several state bills that require public agencies to inventory their uses of AI systems (e.g., California’s AB 302 and Connecticut’s SB 1103). To date, at least 16 states, including Maryland, Vermont, and Connecticut, have passed legislation that specifically addresses the use of AI by government agencies. This number reflects only legislation focused solely on government agencies’ use of AI and does not include sector-specific bills, automated decision-making bills, or comprehensive AI governance bills, so the total number of laws regulating public sector uses of AI is likely larger. Moreover, many private sector bills that ostensibly exempt public agencies nonetheless indirectly address government uses of AI by imposing requirements on the private companies that provide services to those agencies.

Legislative proposals on government uses of AI generally aim to promote the transparent, responsible, and safe use of these tools. Some proposals from 2024 include strong, substantive guardrails on the use of AI in the public sector. For instance, some bills require agencies to implement risk management practices for high risk uses of AI, appoint chief AI officers to oversee the use and management of AI in government, and publicly document and disclose how they are using AI.

However, the vast majority of public sector AI bills introduced in 2024 do not impose binding requirements on government agencies. Instead, these bills simply require reporting, establish pilot projects to study the use of AI in state government, or create task forces to issue recommendations on the issue.

Analysis of public sector-specific AI legislation from 2024 reveals several common themes across states, as well as key areas for improvement that should inform state lawmakers’ efforts going into the 2025 legislative session. 

Trends in Public Sector AI Legislation

Among the 43 public sector AI legislative proposals from the 2024 session, six themes emerge. These bills would:

  • Create task forces and studies
  • Implement risk management practices
  • Publish AI inventories
  • Impose new procurement requirements
  • Establish pilot programs
  • Hire or appoint chief AI officers

Create Task Forces and Studies

Twenty-one proposed bills would establish task forces or commissions to study or oversee the use of AI within the state and issue recommendations on potential safeguards. This is by far the largest category of state-level public sector AI legislation.

The roles and responsibilities of these task forces, however, vary significantly between proposals. Some bills, like New York’s SB 8755, would confer a significant degree of oversight and regulatory authority on an AI task force, including the responsibility to assess and report on all public sector uses of AI within the state. Other bills, like Virginia’s SB 487, would afford task forces much less power, making these bodies solely advisory in nature. Task force composition also varies significantly between proposals, with some reserving seats for community members, academics, and civil society, and others largely excluding these constituents.

In addition, five proposed bills would initiate studies to examine the role of AI within states. The studies commissioned under these proposals differ in focus, from assessing current uses of AI within state government (e.g., New Jersey’s AB 4399) to considering the potential risks and benefits of the technology more broadly (e.g., California’s SB 398). Some of these studies would broadly cover all types of AI, while others would focus more narrowly on generative AI (e.g., California’s SB 896).

Implement Risk Management Practices

Fifteen proposed bills would require state agencies to implement risk management practices when using AI. This is the second largest category of state-level public sector AI legislation. While it is an encouraging sign that risk management practices occupy such a significant area of focus among state policymakers, only three of these proposals passed in 2024: Maryland’s SB 818, New Hampshire’s HB 1688, and New York’s SB 7543.

The nature and scope of these practices differ from proposal to proposal but generally share core requirements such as impact assessments and public notice obligations. Some proposals would impose holistic risk management requirements, similar to those established by OMB’s guidance on federal agencies’ use of AI, including impact assessments, human oversight, notice and appeal, and ongoing monitoring (e.g., Alaska’s SB 177 and Maryland’s SB 818). Within this group of proposals, some would directly specify agency obligations, while others, like Illinois’ HB 4836, would direct agencies to comply with existing federal frameworks such as NIST’s AI Risk Management Framework.

Some proposals, however, are more narrowly focused. For example, California’s SB 896 would specifically impose transparency requirements for state agencies that use generative AI, and Kentucky’s HB 734 would prohibit agencies from solely relying on AI to identify fraud or discontinue benefits.

Publish AI Inventories

Twelve proposed bills would require state agencies to create inventories of AI use cases. These proposed requirements, however, vary in important areas such as scope, frequency, and detail. For example, some proposals, such as Hawaii’s HB 2152, would only require public agencies to inventory high risk use cases, while others, such as Indiana’s SB 150, would require public agencies to inventory all use cases regardless of risk. The amount of detail required as part of these inventories also differs significantly between proposals, with some requiring agencies to document their testing and mitigation practices and others imposing only minimal reporting requirements about the tools that agencies use. Some proposals would require public agencies to update these inventories annually (e.g., Illinois’ HB 4705), while others would require less frequent updates (e.g., biennially, under Alaska’s SB 177), and still others would not require updates at all (e.g., Idaho’s HB 568). Importantly, only a subset of these legislative proposals would require AI inventories to be made publicly available.

Impose New Procurement Requirements

Seven proposed bills would establish specific requirements for AI systems procured by state agencies. Some of these bills are solely focused on procurement, while others include procurement requirements within a broader set of obligations for public sector AI. Some of these proposals would establish affirmative obligations for AI systems procured by state agencies (e.g., New York’s AB 5309). Others, like California’s SB 892, would create a process for the state to develop and adopt procurement requirements through consultation with the public, experts, state and local employees, and other stakeholders. Still other proposals, like Illinois’ HB 5228, are more narrowly focused, requiring vendors to disclose when AI is used to fulfill a government contract but imposing no other obligations.

Establish Pilot Programs

Three proposed bills — Hawaii’s HB 2152, Hawaii’s HB 2245, and Maryland’s SB 818 — would establish pilot programs for state agencies to test and evaluate the use of AI in government benefits and services. In general, these programs are designed to identify potential AI use cases within state government and test these at a smaller scale to assess performance and feasibility. Only one of these proposals, Hawaii’s HB 2245, would require state agencies to retrospectively examine and report on any findings or recommendations arising from such pilots.

Hire or Appoint Chief AI Officers

Three proposed bills — Illinois’ HB 4705, New Jersey’s SB 1438, and New York’s AB 10231 — would establish chief AI officer positions within state government. Some of these proposals would designate one overarching CAIO position (New York’s AB 10231), while others would require state agencies to individually appoint their own CAIOs instead (Illinois’ HB 4705). Several of these proposals (New Jersey’s SB 1438 and New York’s AB 10231) would include detailed competencies and responsibilities for CAIOs, seeking to ensure that the individuals appointed to these positions would have sufficient experience and authority to successfully carry out their duties.

Strong Examples

Lawmakers should consider several promising examples from the 2024 session as potential resources for their own state. Of all the bills that passed in 2024, Maryland’s SB 818 and New York’s SB 7543 are clear standouts. Maryland’s bill imposes strong guardrails on state agencies’ uses of AI, requiring agencies to conduct impact assessments and publicly report on any high risk AI systems. Similarly, the New York bill requires all state agencies to conduct impact assessments prior to the deployment of any automated decision-making system and prohibits the use of such systems in public benefits without human oversight. Maryland’s bill also establishes the Governor’s Artificial Intelligence Subcabinet, which is responsible for developing policies and procedures for state agencies to conduct ongoing monitoring of AI systems.

There were also several promising examples of bills that did not pass, including California’s SB 892, which would establish AI procurement standards; Illinois’ HB 4836, which would require every state agency to implement NIST’s AI Risk Management Framework; and Washington’s SB 5356, which would require the State Chief Information Officer to issue guidance on the development, procurement, and use of AI by public agencies.

Recommendations for State Lawmakers

As state lawmakers look to develop legislation to regulate the use of AI in the public sector during the 2025 session, several key considerations should form the basis of such proposals:

  • Ensure robust transparency through AI inventories that are conducted annually, released publicly, and required for all AI systems regardless of risk level;
  • Implement substantive, robust guardrails on high risk uses by requiring risk management practices for any system used or procured by an agency, including pre- and post-deployment assessments and independent oversight;
  • Establish AI governance processes by requiring every agency to implement AI governance and oversight practices and providing sufficient funding and resources for agencies to do so;
  • Prioritize meaningful public engagement by requiring agencies to consult the public before deploying high risk AI systems and including substantive representation from civil society, academia, and impacted communities in state-wide task forces;
  • Avoid unintended consequences by ensuring that prohibitions are narrowly tailored — for instance, prohibiting the use of AI without human oversight for benefits determinations as opposed to a blanket prohibition on any AI use related to public service delivery — so as to avoid impeding routine service delivery.

Conclusion

As state lawmakers return to state capitols across the country for the 2025 legislative session, AI is poised to be a significant area of focus. Public sector uses of this technology should remain a top priority for lawmakers as state and local governments increasingly use these tools to deliver critical services to individuals and their families. Crafting legislation that creates real protections for people and is specifically tailored to the unique needs of the public sector is more important now than ever.