
AI Policy & Governance, Equity in Civic Technology, Privacy & Data

2021 State Legislative Sessions Off to a Slow Start on AI Oversight, but Offer 3 Models for Auditing

As artificial intelligence (AI) becomes more ubiquitous in every part of life, state governments have begun to adopt AI systems, too. State colleges and universities may use automated test proctoring software. State personnel management offices may consider adopting AI hiring tools. State and local court agencies often use AI-based risk assessments in bail determinations. State and local health departments may use AI tools for contact tracing as well as for detecting and diagnosing symptoms of COVID-19. And state and local law enforcement may adopt predictive policing tools, including AI-supported facial recognition software. In response to the growing use of these tools, state legislators have begun to introduce bills aimed at requiring accountability, transparency, and ethical decision-making in the use of algorithmic or automated decision systems. Such bills can help lawmakers establish clear standards and processes for accountability and transparency, along with enforcement mechanisms, that can prevent the procurement, development, or implementation of harmful AI systems.

CDT’s recent report on litigation against harmful algorithm-driven decision-making systems in public benefits determinations illustrates what can go wrong when state governments adopt AI tools that rely on poor data, biased assumptions, and unreliable processes. The cases in our report include numerous instances in which states adopted new AI tools to determine disabled people’s eligibility for, and approved budgets within, Medicaid-funded disability services programs. For example, Idaho’s state health department built the algorithm it used to set disability services budgets on a very limited data sample. In Arkansas, the state’s health department implemented an algorithm that made arbitrary and significant cuts to disabled people’s benefits. Both algorithms were ultimately challenged over constitutional and statutory due process violations, as well as violations of the notice-and-comment rulemaking process that requires public input on policy changes.

Last year, legislation relating to ethics, transparency, and accountability in government use of AI was introduced but failed to pass in multiple states. So far this year, legislators in three states have introduced bills addressing accountability and transparency in their state governments’ procurement, development, and use of AI systems.

  • Maryland HB1323 would require any state entity purchasing a product or service that “is or contains an algorithmic decision system” to meet responsible AI standards. These standards include minimizing harm, avoiding unjustified disclosure of information, ensuring transparency about the system, and requiring that systems “give primacy to fairness” by eliminating discrimination and providing an avenue for feedback to redress harms. The bill also requires impact assessments of algorithmic decision systems to gauge their potential risks.
  • Washington SB5116 would prohibit public agencies from developing, procuring, or using automated decision systems that discriminate against people. The bill also prohibits agencies from using automated tools to make decisions that impact an individual’s constitutional or legal rights, to deploy a weapon, to profile people in public spaces, or to profile people when making decisions on matters such as educational enrollment, employment, or government services and support. The bill would further require algorithmic accountability reports, auditing and transparency measures, and public disclosure of any use of automated decision systems. Because these measures would offer significant protection against harmful uses of AI tools, CDT has asked the Washington Senate State Government and Elections Committee and Ways and Means Committee to consider this bill favorably.
  • Massachusetts SD457 would create a statewide commission to investigate, report on, and issue recommendations about government use of automated decision-making systems. The bill specifically addresses issues related to data retention, privacy, transparency, disclosure, and bias or disparate impact.

These three bills each address critical aspects of regulating state government use of AI, and each is a welcome step toward establishing minimum essential protections for state government applications going forward. Now more than ever, state governments must carefully consider what types of AI tools they are already using or considering adopting, and how those tools can unwittingly perpetuate discrimination, rely on unreliable or insufficient data, or obscure important processes from the public.

Efforts to assess algorithms for bias must account for the full range of ways in which discrimination can occur. Effective, meaningful auditing will require proactive monitoring for potentially discriminatory use, reliance on unreliable data, and lack of transparency. That includes analyzing for disability bias and discrimination, which CDT has found is often overlooked in discussions of bias in AI. For instance, CDT’s reports have found that disability bias can arise in AI used for predictive policing, automated test proctoring, AI hiring tools, and public benefits determinations.
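To make that kind of monitoring concrete, here is a minimal sketch of one common starting point for a bias audit: comparing approval rates across groups and flagging disparities under the “four-fifths” rule of thumb drawn from U.S. employment-discrimination guidance. This is purely illustrative and not a method prescribed by any of the bills above; the group labels, records, and threshold are hypothetical, and a real audit would need to examine much more, including disability status, intersectional effects, and the reliability of the underlying data.

```python
from collections import Counter

# Hypothetical audit records: (group, approved) pairs. In practice these
# would come from an agency's decision logs, and "group" could reflect
# race, sex, disability status, or other protected characteristics.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

FOUR_FIFTHS = 0.8  # EEOC "four-fifths" rule of thumb for disparate impact

def approval_rates(records):
    """Compute the per-group approval rate from (group, approved) records."""
    totals, approvals = Counter(), Counter()
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved  # True counts as 1, False as 0
    return {g: approvals[g] / totals[g] for g in totals}

def flag_disparate_impact(rates, threshold=FOUR_FIFTHS):
    """Flag any group whose rate falls below threshold * the best group's rate."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

rates = approval_rates(decisions)
print("Approval rates:", rates)                        # {'group_a': 0.75, 'group_b': 0.25}
print("Flagged groups:", flag_disparate_impact(rates)) # {'group_b': 0.25}
```

Passing a check like this is necessary but far from sufficient: statistical parity across groups says nothing about data quality, individual due process, or the notice-and-comment failures highlighted in the Idaho and Arkansas cases.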

Ultimately, transparency and auditing requirements for state use of AI will not prevent all harmful uses of AI tools; however, they will go a long way toward limiting the most potentially damaging uses and effects of unregulated AI tools. As governments consider increased adoption of AI systems, they will also need to consider limitations on the types of applications and processes for which AI systems can be used, as well as what types of AI systems can be procured or developed.