

To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence

This report was authored by Sahana Srinivasan

Graphic for a CDT report, entitled “To AI or Not To AI: A Practice Guide for Public Agencies to Decide Whether to Proceed with Artificial Intelligence.” Falling dark blue gradient of 1s and 0s.

Executive Summary

Public agencies have significant incentives to adopt artificial intelligence (AI) in their delivery of services and benefits, particularly amid recent advancements in generative AI. In fact, public agencies have already been using AI for years in use cases ranging from chatbots that help constituents navigate agency websites to fraud detection in benefit applications. Agencies’ resource constraints, as well as their desire to innovate, increase efficiency, and improve the quality of their services, all make AI and the potential benefits it often offers — automation of repetitive tasks, analysis of large swaths of data, and more — an attractive area to invest in. 

However, using AI to solve a given problem or serve any other agency use case should not be a foregone conclusion. There are limits both to AI’s capabilities in general and to its fit for a particular situation. Thus, agencies should engage in an explicit decision-making process before developing or procuring AI systems to determine whether AI is a viable option to solve a given problem and a stronger solution than non-AI alternatives. If an agency initially decides to proceed with an AI system, it should then repeatedly reevaluate that decision throughout the AI development lifecycle. Vetting the use of AI is critical because inappropriate use of AI in government service and benefit delivery can undermine individuals’ rights and safety and waste resources.

Despite the emergence of new frameworks, guidance, and recommendations to support the overall responsible use of AI by public agencies, there is a dearth of guidance on how to decide whether AI should be used in the first place, including how to compare it to other solutions and how to document and communicate that decision-making process to the public. This brief seeks to address that gap by proposing a four-step framework that public administrators can use to determine whether to proceed with an AI system for a particular use case:

  • Identify priority problems for the public agency and its constituents: Agencies should identify and analyze specific problems they or their constituents face in service or benefit delivery to ensure that any new innovations are targeted to the most pressing needs. Agencies can identify problems and pain points in their service and benefit delivery through mechanisms such as existing agency data, news reports, and constituent engagement and feedback. Agencies should then assess the severity of each problem and set specific, measurable goals and baselines for what they hope an eventual solution will accomplish.
  • Brainstorm potential solutions to priority problems: Agencies should identify a slate of solution options for their problem. These options may include AI systems but should also consider non-AI and nontechnological alternatives. Desk research, landscape analyses, consultation with other government agencies, and preliminary conversations with vendors can help agencies ensure that they have identified all options at their disposal before potentially focusing on AI. This report will detail preliminary options for solutions to common agency problems, including AI-based and non-AI options. 
  • Evaluate whether AI could be a viable solution before comparing alternatives: Agencies need to evaluate each potential solution on a set of criteria tailored to that solution before deciding on one with which to proceed. This guidance presents an AI Fit Assessment: four criteria that agencies can use to evaluate any solution that involves an AI-based system. Agencies can use the resulting analysis to decide whether proceeding with an AI-based solution is viable. Agencies should adopt rubrics, no-go criteria, green flags, or other signals to determine how their evaluations of solutions on these four criteria correspond to proceeding with or forgoing a solution (a minimal scoring sketch follows this list). They should also reevaluate the AI Fit Assessment, their analysis of alternatives, and their decision to use AI throughout the development process, even if they initially decide to proceed with an AI-based solution. The criteria of the AI Fit Assessment are the following:
    • Evidence base: the level of evidence demonstrating a particular AI system’s capabilities, effectiveness, and appropriateness, specific to the use case and including evidence of its strengths over alternative solutions. 
    • Data quality: the availability and quality of data, from either the vendor or the agency, used to power the solution as well as the ethics of using that data. 
    • Organizational readiness: the agency’s level of preparedness to adopt and monitor AI, including its infrastructure, resources, buy-in, and technical talent. 
    • Risk assessments: the results of risk and/or impact assessments and any risk mitigation plans. 
    The results of the AI Fit Assessment will provide agencies with an analysis of an AI solution, which they can then weigh against separate analyses of non-AI alternatives to determine which solution to proceed with initially. While non-AI solutions can be evaluated using the AI Fit Assessment, not all of its questions will apply, and additional analysis may be needed.
  • Document and communicate agency decision-making on AI uses to the public: For at least all use cases in which they decide to proceed with an AI-based solution, agencies should document the analysis from the preceding three action steps — including their analysis of AI-based solutions, analysis of non-AI alternative solution options, and comparison of the options — and communicate these insights to the public. Communicating the rationale behind their AI use cases to the public helps agencies build constituents’ trust in both the agency itself and in any AI systems constituents interact with. For the sake of transparency and to help others navigate similar use cases, agencies can also consider documenting situations in which they decided against AI. 
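To make the third step concrete, the sketch below shows one way an agency might encode a rubric with a no-go floor over the four AI Fit Assessment criteria. It is a minimal illustration in Python, not part of the guidance itself: the 0-to-3 scale, the threshold values, and names like FitScores and assess are assumptions chosen for demonstration.

    from dataclasses import dataclass

    # Hypothetical rubric for the AI Fit Assessment. The four criteria come
    # from this guidance; the 0-3 scale, no-go floor, and thresholds are
    # illustrative assumptions only.

    CRITERIA = ("evidence_base", "data_quality",
                "organizational_readiness", "risk_assessment")

    @dataclass
    class FitScores:
        evidence_base: int             # 0 (no use-case-specific evidence) to 3 (strong evidence)
        data_quality: int              # 0 (unavailable or unethical data) to 3 (high quality, ethically sourced)
        organizational_readiness: int  # 0 (unprepared) to 3 (infrastructure, buy-in, talent in place)
        risk_assessment: int           # 0 (unmitigated high risk) to 3 (low or well-mitigated risk)

    def assess(scores: FitScores, no_go_floor: int = 1, proceed_avg: float = 2.0) -> str:
        values = [getattr(scores, name) for name in CRITERIA]
        # No-go criterion: any single criterion below the floor halts the use case.
        if min(values) < no_go_floor:
            return "do not proceed: a criterion falls below the no-go floor"
        # Green flag: a strong average supports proceeding, pending comparison
        # against non-AI alternatives.
        if sum(values) / len(values) >= proceed_avg:
            return "viable: weigh against non-AI alternatives before committing"
        return "revisit: strengthen the weak criteria and reevaluate"

    # Example: strong evidence and data, but limited organizational readiness.
    print(assess(FitScores(evidence_base=3, data_quality=3,
                           organizational_readiness=1, risk_assessment=2)))

However an agency tunes such thresholds, a passing score signals only that an AI solution is viable; the agency still needs to weigh it against non-AI alternatives and to reevaluate its decision throughout development.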

This brief uses “AI” to refer to any form of AI system, including algorithms that predict outcomes or classify data, so the guidance can be applied when considering whether to proceed with any type of AI use case.

Most importantly, these action steps should assist public administrators in making informed decisions about whether the promises of AI can be realized in improving agencies’ delivery of services and benefits while still protecting individuals, particularly their privacy, safety, and civil rights. Navigating this decision-making process responsibly is especially critical when public agencies are considering moderate- or high-risk AI uses that affect constituents’ lives and could implicate their safety or human rights.

Read the full report.