Taken for Granted: Where’s the Oversight of AI and Federal Funding?

Debates over the use of artificial intelligence (AI) are inescapable this summer. But the conversation around government use of AI – often focused on how federal agencies acquire and use AI – is missing a critical piece: federal grants backing state and local AI uses. 

That’s a blunder. Some agencies are already cutting checks without sufficient safeguards around the use of emerging technologies, and those funds can be consequential: when states spend money to deliver services to the public, be it the Covid-19 response or education, federal dollars make up a significant share of that spending. To understand which levers could bring better governance to federal grants, let’s walk through how grantmaking can go wrong, how agencies make grants, and how federal policymakers might rethink the grantmaking process to help state and local governments use AI responsibly and ethically.

HUD’s Mistake

Let’s start with one agency that has already invited tech abuses through its grants: the Department of Housing and Urban Development (HUD).

Earlier this year, news broke that public housing authorities used HUD dollars to buy surveillance cameras, not only to monitor crime – as HUD intended – but also to monitor residents. The investigation uncovered that authorities harassed residents over minor housing violations caught on camera and evicted residents using surveillance footage as evidence. Raising more alarms, some cameras came equipped with facial surveillance tools. Facial surveillance has often been shown to falsely identify people – indeed, at least one camera vendor’s accuracy claims failed under testing. Legislators took notice of this grant gone awry. In an oversight letter, Representatives Maxine Waters and Ayanna Pressley flagged how such technologies risked furthering housing instability and discrimination against people of color, who are more likely to be subject to false matches. HUD eventually made a simple change: it excluded automated surveillance from eligibility in the next funding round. Problem solved?

Not really. HUD has been awarding these security grants for at least a decade, but the agency’s shift came only in April 2023, when HUD changed its funding notice to list facial recognition and automated surveillance as “non-eligible uses.” This change neither rolls back the harm already done to tenants nor removes the cameras already being misused. Even for future grants, it is unclear how HUD will identify which proposals are ineligible. Indeed, even if a grant passes scrutiny at the time of the award, vendors can later update cameras with disallowed facial recognition software.

HUD is unlikely to be alone in its missteps. Take the $100 million in grant funds made available through the bipartisan gun safety bill that may be spent on surveillance and identification technologies, potentially jeopardizing student privacy. Or take a recent competition hosted by the Department of Veterans Affairs, in which the agency funded a large language model-based tool that purports to spot suicidal ideation in veterans, despite little independent evidence that the tool works.

The risk of grants going wrong and the challenge of preventing future mistakes underscore the need to think about the governance of grants that could be used to procure AI-driven technology.

How Federal Grantmaking Works

It’s first important to understand where policy-setting and oversight could happen in grantmaking. Agencies can exert power at three stages: 1) the policy stage, 2) the adjudication stage, and 3) the enforcement stage. Let’s take these in turn.

In the policy stage, the government sets the conditions grantees must meet to receive funding. Congress has the primary role here, given its power to direct spending and to expand or contract agencies’ grantmaking authority through law. The President can also issue executive orders, within constitutional limits, directing agencies on the administration’s priorities.

For federal agencies, the picture is more complicated. How much discretion an agency has over grant conditions varies: agencies must look to their authorizing statute, the type of grant being made, and any cross-cutting statutes that may affect their grant decisions. In practice, when agencies have discretion, they exercise it in a few ways. One is articulating policy commitments through agency-wide announcements. Another is setting the judging standards the agency will use to select grants. Once priorities are set, agencies can communicate them to grant applicants in a Notice of Funding Availability (NOFA).

In the adjudication stage, federal agencies decide which applicants will receive grants. Agencies can evaluate applications using strict eligibility requirements and/or more interpretive criteria, such as how well the application matches the intended goal of the grant. Reviewers, within and/or external to the agency, make initial recommendations before grants are approved and distributed.

In the enforcement stage, federal agencies can take action against grantees that fail to comply with the fiscal, substantive, or other policies governing the grant. The most severe options include litigating against the grantee, cutting its funding, or even barring it from future funding opportunities, but intermediate steps are also available to resolve disputes. A host of enforcement mechanisms are available to agencies, many of them found in OMB’s Uniform Guidance.

Recommendations and What’s Next

Federal policymakers can take a number of steps to avoid the tangible tech harms that may result from existing and future grants funding surveillance, automated decision-making, and AI in the delivery of public services by state and local agencies.

Policymakers and researchers can look to emerging rules around federal procurement of AI for useful policies. OMB will soon release guidance on the federal purchase and use of AI systems that will likely hold lessons, if not direct advice, for the grantmaking process. For example, as some have suggested in the procurement context, agencies may need to retool the grantmaking process to set aside money to fix issues throughout an AI system’s lifecycle.

Agencies can set grant policies that follow tech and civil rights policy principles. When agencies have the discretion to do so, they should follow the White House’s Blueprint for an AI Bill of Rights, which sets clear principles to advance civil rights and equal opportunity. Agencies can put these principles into practice. For example, an agency could ask for assurances, or independent verification, that systems are safe, effective, and nondiscriminatory toward protected classes – although what would make those assurances credible is still an open question. Agencies could ask applicants proposing automated technologies whether they offer notice and an opt-out to those affected. Agencies could also deliberately exclude certain technologies from grants where those technologies are known to be out of step with the grant’s goals or pose too high a risk to people’s rights, much like HUD’s exclusion of “facial recognition technology.”

Agencies can bring AI expertise to grant review, drawing on existing examples. Agencies can recruit grant reviewers who have a background in AI or experience with its improper uses. Some agencies already require subject matter expertise in evaluating potential grant recipients; for example, the Health Resources and Services Administration brings on health experts and people with health conditions relevant to a specific grant. Although not without potential downsides, this kind of “peer review” can also bring agencies closer to the AI Bill of Rights principle of “engaging diverse impacted communities.”

Agencies can apportion funds to projects based on the evidence behind them, using approaches such as “tiered evidence grantmaking.” This policy tool gives grant applicants a strong incentive to show independent evidence that the technology in their proposal works. Agencies can also evaluate the quality of funded programs after the fact. Some agencies, like HUD, set aside funding to evaluate the efficacy of programs and publicly disclose findings; agencies should consider taking a similar approach for grants funding emerging technologies.

Policymakers can mandate better federal grantmaking and provide AI guidance. Besides the grantmaking agencies themselves, a number of other bodies have policy-setting power. The Office of Management and Budget (OMB) and its Office of Information and Regulatory Affairs (OIRA) play a major role in guiding other agencies’ grant policies. Just as Congress directed OMB to issue procurement guidance, it could do the same for grantmaking. These bodies could also draft internal guidance for reviewing grant applications implicating “AI” or automated technologies; OMB, OIRA, the Office of Science and Technology Policy, and the Administrative Conference of the United States are just some of the agencies that could be consulted on such guidance. Internal-facing materials like these could prompt reviewers with questions to think about and ask when faced with novel technologies in a grant application.

Agencies should be more transparent about federal grantmaking and AI. Knowing when public funding is going toward the use of emerging technologies like AI can be difficult for officials and the public alike. Current agency inventories of AI uses are “inconsistent and unclear,” and this lack of transparency is even more pronounced in the grantmaking context. It is not always clear who wins federal grant awards, much less whether any funding is going to develop or procure AI. Similar to how EO 13960 requires federal agencies to post their uses of AI, agencies should disclose the recipients and amounts of grant funding used to develop or procure AI, and the companies supplying those systems.
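
Such disclosures are most useful when they are machine-readable. As a purely illustrative sketch (the schema, class name, and field names below are hypothetical, not drawn from any existing federal requirement), a grant-level AI disclosure record might capture something like the following:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GrantAIDisclosure:
    """Hypothetical record for a public, grant-level AI disclosure.

    Field names are illustrative only; no current federal schema mandates them.
    """
    awarding_agency: str       # e.g., "HUD"
    program: str               # funding program or notice identifier
    recipient: str             # the state or local grantee
    award_amount_usd: int      # federal dollars awarded
    ai_vendor: Optional[str]   # company supplying the AI system, if known
    ai_use: str                # what the system does and whom it affects
    develops_or_procures: str  # "develop", "procure", or "both"
```

Publishing records along these lines alongside existing award data (for example, on USAspending.gov) could let researchers and the public trace which federal dollars fund AI systems, and through which vendors.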

Given the potential for grantmaking to fund misuse of emerging technologies, the time is right for policymakers and researchers in the AI space to think through grant governance that centers the public interest.