AI in Local Government: How Counties & Cities Are Advancing AI Governance
This blog is part of a series of pieces highlighting AI regulation trends across states. See CDT’s other blogs on state AI executive orders, public sector AI legislation, and state education agencies’ AI guidance.
Introduction
While much attention has been paid to the use of AI by state and federal agencies, city and county governments are also increasingly using AI and should implement safeguards around public sector uses of these tools. These governments administer a wide range of public services – including transportation, healthcare, law enforcement, veterans services, and nutrition assistance, to name only a few – that have significant impacts on individuals’ health and safety. AI systems can help local governments provide such services more efficiently and effectively, but without proper guardrails these same tools can also harm constituents and impede the safe, dignified, and fair delivery of public services.
In response to both the benefits and risks of using AI in local government, an increasing number of cities and counties have released AI policies and guidance. Organizations like the GovAI Coalition and the National Association of Counties are helping local governments craft and implement their own policies. In particular, the GovAI Coalition, a group of state and local public agencies working to advance responsible AI, created several template AI policies that a number of local agencies have since adopted as part of their own AI governance strategies.
To understand local trends, we analyzed public-facing policy documents from 21 cities and counties. Because most cities and counties do not make their internal IT policies publicly available, the following analysis may be skewed toward jurisdictions that take proactive steps to disclose their AI policies. Our analysis of these publicly available AI policies and guidance documents reveals five common trends in AI governance. These policies generally:
- Draw from federal, state, and other local AI governance guidance;
- Emphasize that use of AI should align with existing legal obligations;
- Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security;
- Prioritize public transparency of AI uses; and
- Advance accountability and human oversight in decision-making that incorporates AI.
AI Policy and Guidance at the County and City Level
Within the past several years, county and city governments across the country have published AI use policies and guidance to advance responsible AI use and place guardrails on how they use the technology. Counties and cities regulate government AI use through a variety of instruments, including policies, guidelines, and executive orders. In addition, at least two cities – New York, NY, and San Francisco, Calif. – have enacted ordinances requiring agencies to create public inventories of their AI use cases.
While many of these documents are not publicly accessible, several counties – Haines Borough, Alaska; Alameda County, Calif.; Los Angeles County, Calif.; Santa Cruz County, Calif.; Sonoma County, Calif.; Miami-Dade County, Fla.; Prince George’s County, Md.; Montgomery County, Md.; Washington County, Ore.; and Nashville and Davidson County, Tenn. – and city governments – Baltimore, Md.; Birmingham, Ala.; Boise, Idaho; Boston, Mass.; Lebanon, NH; Long Beach, Calif.; New York City, NY; San Francisco, Calif.; San Jose, Calif.; Seattle, Wash.; and Tempe, Ariz. – have publicly released their policies, providing important insight into key trends across jurisdictions. These jurisdictions span states that already have statewide AI policies and states that do not. Regardless of state-level policy, however, additional county- and city-level guidance can help clarify the roles and obligations of local agencies.
Trends in County and City AI Policies and Guidance
- Draw from federal, state, and other local AI governance guidance
At both the county and city level, governments are building on other local, state, and federal guidance as a starting point, often borrowing language directly. Some of the most commonly cited or used resources are Boston’s AI guidelines, San Jose’s AI guidelines, the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework, and the Biden Administration’s since-rescinded AI Executive Order and Blueprint for an AI Bill of Rights.
For example, the City of Birmingham, Ala.’s generative AI guidelines acknowledge that the authors drew inspiration from the City of Boston’s guidelines. Likewise, Miami-Dade County’s report on AI policies and guidelines draws from several other government resources, including the cities of Boston, San Jose, and Seattle, the state of Kansas, the White House, and NIST.
- Emphasize that use of AI should align with existing legal obligations
At least 15 of the guidance documents we analyzed explicitly state that public agencies must ensure their use of AI tools adheres to existing laws on topics such as cybersecurity, public records, and privacy. On the city front, San Jose, Calif.’s AI guidelines state that “users will need to comply with the California Public Records Act and other applicable public records laws” for all city uses of generative AI, and Tempe, Ariz.’s policy requires all city employees to “comply with applicable laws, standards and regulations related to AI and data protection.” Several counties similarly affirm public agencies’ obligations to use AI systems in compliance with existing laws. Nashville and Davidson County’s guidance states that “all AI and GenAI use shall comply with relevant data privacy laws and shall not violate any intellectual property use,” and Los Angeles County’s technology directive affirms that AI systems must be used in “adherence to relevant laws and regulations.”
Some cities and counties take an additional step by creating access controls to prevent unauthorized use and disclosure of personal information. Santa Cruz County, for example, prohibits the use of AI systems without authorization, and New York City specifies that employees can only use tools that have been “approved by responsible agency personnel” and are “authorized by agency-specific and citywide requirements.” Likewise, Haines Borough requires employees to have specific authorization to use any AI systems that handle sensitive information.
- Identify and prioritize mitigation of risks, like bias, reliability, privacy, and security
Cities and counties commonly recognize the following three main risks of using AI:
- Perpetuating bias: About 12 of the guidelines mention the potential for AI tools to produce biased outputs. One example at the city level is Lebanon, NH’s AI policy, which identifies the different types of bias that can arise in AI systems – biased training data, sampling bias, and stereotyping/societal biases – and states that “any biases that are identified must be addressed and corrective actions should be taken.” Alameda County, Calif., similarly highlights these issues, stating that “GenAI models can inadvertently amplify biases in the data the models are trained with or that users provide AI.”
- Accuracy and unreliable outputs: At least 15 cities and counties discuss the unreliability of AI tools (due to issues such as hallucination), often addressing this risk by requiring employees to double-check or verify outputs before using AI-generated information in their work. For instance, Baltimore, Md.’s generative AI executive order prohibits city employees from using generative AI outputs without fact-checking and refining the content, especially if used for decision-making or in public communications. Guidance published by Washington County, Ore., directs county employees to “fact check and review all content generated by AI,” noting that “while Generative AI can rapidly produce clear prose, the information and content might be inaccurate, outdated, or entirely fictional.”
- Privacy and security concerns: Roughly 18 city and county AI guidelines and policies stress the importance of protecting privacy and security. These policies emphasize the potential privacy- and security-related harms if employees, for example, input personally identifiable or other sensitive information into an AI tool. The City of San Francisco, Calif., explains that one risk of using generative AI is “exposing non-public data as part of a training data set” and recommends that employees not enter information that should not be public into non-enterprise generative AI tools. Long Beach, Calif., also recommends that city employees opt out of generative AI tools’ data collection and sharing whenever possible, and even provides a step-by-step guide for doing so in ChatGPT. Sonoma County, Calif., notes that “there can be risks in using this technology, including… security and privacy concerns with inputting proprietary or confidential information about an employee, client, operations, etc. when interacting with the AI technology.”
- Prioritize public transparency of AI uses
Roughly 17 city and county guidelines and policies encourage, or even require, employees to publicly disclose use of AI tools. The City of Boise, Idaho, states that “disclosure builds trust through transparency,” encouraging employees to cite their AI usage in all cases, but especially in significant public communications or other important contexts. Santa Cruz County, Calif., requires employees to include a notice “when Generative AI contributed substantially to the development of a work product” that “indicate(s) the product and version used.” Seattle, Wash.’s generative AI policy goes even further on the principle of transparency, committing to make documentation related to city use of AI systems publicly available.
- Advance accountability and human oversight in decision-making that incorporates AI
About 14 of the guidance documents stress that responsibility ultimately falls on city and county employees, whether they are using AI outputs or making decisions with AI tools. Some local governments take this a step further by including enforcement mechanisms for non-compliance with their AI policies, up to and including employee termination. One example is guidance issued by Alameda County, Calif., which directs all employees to “thoroughly review and fact check all AI-generated content,” emphasizing that “you are responsible for what you create with GenAI assistance.” Another is the City of Lebanon, NH, whose policy states that employee non-compliance with the guidelines “may result in disciplinary action or restriction of access, and possibly even termination of employment.”
Conclusion
Regardless of the level of government, responsible AI adoption should follow the principles of transparency, accountability, and equity to ensure that AI tools are used to serve constituents in ways that improve their lives. Taking steps to responsibly implement and oversee AI will not only help local governments use these tools effectively but will also build public trust.
Just as state governors and lawmakers can take steps to advance public sector AI regulation, cities and counties should consider these components of AI governance:
- Promote transparency and disclosure by documenting AI uses through public-facing use case inventories, such as those maintained by New York, NY, and San Jose, Calif., and through direct notices to individuals impacted by AI systems (a sketch of what a machine-readable inventory entry might contain follows this list).
- Implement substantive risk management practices for high-risk uses by requiring pre- and post-deployment testing and ongoing monitoring of systems that significantly affect individuals’ rights, safety, and liberties. While many local guidance documents do not include specific risk management practices, a growing number of state governments have issued requirements for measures like AI impact assessments, which can serve as valuable resources for city and county governments to draw from.
- Ensure proper human oversight by training government employees about the risks, limitations, and appropriate uses of AI, and empowering employees to intervene when potential harms are identified.
- Incorporate community engagement by seeking direct public feedback about the design and implementation of AI. Some cities, like Long Beach, Calif., have already developed innovative approaches to engaging community members around the use of technology by public agencies.
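To make the inventory recommendation above concrete, here is a minimal sketch of what a single machine-readable use case inventory entry could contain. The structure and field names are illustrative assumptions only – the AIUseCaseEntry class and the example values are hypothetical and do not reflect the actual schemas published by New York, NY, San Jose, Calif., or any other jurisdiction.

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry in a public AI use case inventory.
# Field names are assumptions, not any jurisdiction's actual schema.
@dataclass
class AIUseCaseEntry:
    agency: str                # department operating the system
    name: str                  # short name of the use case
    purpose: str               # what the AI system is used for
    vendor_or_tool: str        # underlying product or model
    uses_personal_data: bool   # whether personal data is processed
    affects_individuals: bool  # whether outputs inform decisions about individuals
    human_oversight: str       # how employees review and verify outputs
    status: str = "active"     # e.g., "piloting", "active", "retired"

# Example entry with made-up values, purely for illustration.
example = AIUseCaseEntry(
    agency="Department of Public Works",
    name="Service request triage",
    purpose="Prioritize resident-submitted maintenance requests",
    vendor_or_tool="Hypothetical vendor model",
    uses_personal_data=False,
    affects_individuals=True,
    human_oversight="Staff review every prioritization before work is scheduled",
)
print(example)
```

Publishing entries in a consistent, structured format along these lines makes it easier for residents, journalists, and researchers to search and compare AI uses across agencies, which supports the transparency goals these inventories are meant to serve.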