
Colorado’s Artificial Intelligence Act is a Step in the Right Direction. It Must be Strengthened, Not Weakened.

On Friday, Colorado broke new ground in the fight to bring transparency and accountability to AI-driven decision systems, which increasingly make crucial decisions that can alter the course of our lives and careers. 

A newly passed law, Colorado Senate Bill 24-205 (SB 205), will equip Coloradans with some basic information and safeguards when companies use AI to make high-stakes decisions about them, such as whether a worker gets a job, a consumer qualifies for a loan or lease, a patient gets medical coverage, or a student is admitted to a school. Right now, companies often make AI-driven decisions in these crucial spheres without informing consumers or workers that they are doing so. This lack of transparency obscures errors and biases, and prevents civil rights and consumer protection laws from working effectively; Colorado’s new law is thus an important basic step for AI accountability.

Industry groups conducted a concerted veto campaign the week before Governor Jared Polis signed the bill, and the Governor’s signing statement repeated some of those criticisms, suggesting that the bill should be watered down before it goes into effect. That would be a mistake. For the reasons outlined below, SB 205 is both manageable for companies and an important first step to help workers and consumers understand how AI systems might affect them. While the bill falls short of the protections that CDT and other public interest groups have called for, Colorado legislators should defend their work and ensure the full adoption of the law.

1. What SB 205 Does

SB 205 ensures consumers receive basic but essential disclosures about AI-driven decisions

SB 205’s disclosure and explanation provisions would help alleviate the near-monopoly on information about AI-driven decisions that businesses currently enjoy, and often exploit, to the detriment of consumers and workers. AI developers would have to publish a basic statement summarizing what AI systems they sell and how they test them for bias. Companies that use AI systems to help decide whether a person gets a job or a house would have to tell the person that they are using AI and for what purpose, provide a “plain language description” of the system, give the person a basic explanation if it rejects them, offer an opportunity to correct inaccurate personal information, and, in some cases, allow an appeal.

Finally, companies whose AI systems interact with consumers (including, but not limited to, the AI decision systems that are the subject of the rest of the bill) must “ensure the disclosure to each consumer who interacts with the AI system that the consumer is interacting with an AI system.” In other words, companies must tell you at the outset when you are speaking with a machine rather than a human.

SB 205 requires companies to do simple due diligence before marketing or using AI systems that can alter the course of consumers’ lives and careers

Under SB 205, deployers of AI decision systems would have to conduct annual impact assessments, including assessing whether an AI decision system creates a risk of algorithmic discrimination. Deployers must also describe the steps they take to mitigate those discrimination risks, though the bill does not require them to actually implement those steps before using the AI system.

Beyond that, the impact assessment is really a recordkeeping obligation: it must include “overviews,” “statements,” or “descriptions” of the AI decision system’s purpose, intended uses, data used and produced, performance, and post-deployment monitoring. Contrary to public interest advocates’ recommendations, the impact assessment need not be conducted by an independent third party. While this reduces the burden on businesses, it also raises the risk of impact assessments that are compromised by conflicts of interest.

Public interest advocates have called for stronger auditing requirements than those outlined in SB 205, and even industry-driven proposals, such as the Better Business Bureau’s Principles for Trustworthy AI and the Future of Privacy Forum’s Best Practices for AI-driven hiring technologies, include more detailed auditing or assessment requirements. SB 205’s impact assessment provision is better described as a requirement that companies do basic due diligence and retain documentation provided by vendors than as a true impact assessment. Nevertheless, that due diligence and recordkeeping is valuable: it will ensure that companies deploying AI systems better understand the risks and benefits of the tools they use.

(SB 205 also requires developers and deployers to take “reasonable care” to prevent algorithmic discrimination. As I will explain in a future blog post, however, this does not create significant additional rights or obligations because companies already have an absolute duty under civil rights laws to avoid making discriminatory decisions in nearly all the contexts that the bill covers.)

2. What SB 205 Doesn’t Do

In the days before SB 205 was signed into law, industry groups sent letters and issued statements mischaracterizing key elements of the law. Governor Polis’s own signing statement echoed some of this messaging, including points that suggest a profound misunderstanding of the basic tenets of civil rights laws. These arguments—which seem to be a prelude to efforts to weaken a bill that, in fact, needs strengthening—all ring hollow.

SB 205 doesn’t change civil rights laws

Governor Polis’s signing statement for SB 205 included a troubling mischaracterization of how the bill interacts with existing civil rights laws. The signing statement said:

Laws that seek to prevent discrimination generally focus on prohibiting intentional discriminatory conduct. Notably, this bill deviates from that practice by regulating the results of AI system use, regardless of intent, and I encourage the legislature to reexamine this concept as the law is finalized before it takes effect in 2026.

This is simply wrong: laws that seek to prevent discrimination do not focus only on intentional conduct. The Supreme Court rejected this argument for Title VII (the most influential federal employment discrimination law) long ago, holding in 1971 that an employer can be liable for discrimination if it assesses workers using criteria that have a discriminatory impact on members of a protected class. Congress codified this disparate impact theory into the text of Title VII in 1991. Colorado’s antidiscrimination laws also recognize disparate impact discrimination in employment, housing, and other decision settings that SB 205 covers.

The signing statement’s mischaracterization of antidiscrimination laws is not only badly wrong but dangerous. A rule requiring a showing of intent to discriminate arguably would make AI-driven discrimination impossible to prove. After all, can an AI system even have intent? Would courts look to the intent of the deployer or the tool’s creator, and what evidence could ever support a successful case? The main concern about algorithmic discrimination is not that developers and deployers will nefariously engage in a conscious effort to screen out consumers or workers from vulnerable groups. Rather, it is that they will recklessly market or use AI decision systems that are deeply biased or error-prone due to flaws in their design, training, testing, or implementation.

Recognizing this, SB 205’s definition of algorithmic discrimination does not impose a new, higher standard on companies that make or use AI decision systems. It reflects a central tenet of our civil rights laws: that courts should also look to the disparate impact of a practice or system in determining whether it violates the law. Revising SB 205 to require a showing of discriminatory intent would break with decades of civil rights precedent and weaken longstanding definitions of discrimination in the context of AI-driven decisions. That certainly would not benefit workers or consumers.

SB 205 does not impose complex or unusual obligations and will not burden small businesses

Business and tech industry lobbying groups sent letters to Governor Polis urging him to veto SB 205, mainly claiming that it would impose undue burdens, especially on small businesses. These arguments are flawed.

First, as described above, the obligations that SB 205 imposes are not complex. The law requires large developers and deployers to disclose basic information about their AI decision systems that is already in their possession and to perform due diligence that companies seeking to comply with civil rights laws should already be doing as a best practice.

Second, opponents falsely state that the bill would require “online platforms” to “disclose data used to train their AI systems and services on their website.” In fact, the bill merely requires companies that use AI to make life-altering decisions to post a simple “statement summarizing … the nature, source, and extent of the information” that the company collects and uses in making those decisions. The bill’s other transparency requirements (summarized in the first section of this Insight) similarly require companies to provide basic information already in their possession.

Third, the bill contains a broad exemption allowing companies to withhold any information that they consider a trade secret. Consumer advocates advised that this exemption is unnecessary; the bill does not call for companies to reveal source code, training data, or any other “secret sauce” that could plausibly be considered a trade secret. Regardless, its inclusion underscores the hollowness of objections to the bill’s transparency requirements.

Finally, the bill exempts small businesses (defined as companies with fewer than 50 full-time employees) from most of the modest obligations it places on AI deployers. That exemption is not “narrow”: nearly half of private-sector employees work for small businesses, so exempting those employers from many of the bill’s already modest requirements is a broad carve-out, especially since Colorado’s civil rights laws cover all companies, regardless of how many employees they have.

3. What Happens Next

SB 205 takes effect in February 2026. A task force is reportedly set to examine potential changes to the bill before then, and policymakers elsewhere will almost certainly look to SB 205 as a model for legislation in their own states. Those looking to amend or adapt SB 205 should look past overstated claims about the bill and prioritize the needs of the constituents they are in office to protect. In part because SB 205 was negotiated on a short timeline, labor and consumer voices were largely absent from its development. Going forward, policymakers must hold firm on SB 205’s foundational protections and obligations, and the additional protections sought by consumers, workers, and the public interest groups that represent them should take center stage.

In a statement released Saturday, Consumer Reports laid out some of the improvements that need to be made:

There are several loopholes that ought to be closed and provisions that must be updated over the course of Colorado’s next legislative session. For example, the bill exempts AI technology that performs “narrow procedural task[s]” from its definition of high-risk AI. This term is undefined, and companies may argue that all manner of high-stakes decisions – screening out resumes, scoring college applicants – are “narrow procedural tasks.” The bill’s trade secret protections are overbroad. Companies should not be able to unilaterally withhold crucial information or hide evidence of discrimination by claiming that such information is a trade secret. The enforcement provisions must be strengthened.

I’ll add a couple more:

  • Pre-decision notice provisions should be expanded so that disabled workers and consumers who face potential accessibility barriers receive more detailed disclosures before a decision is made, giving them an opportunity to request an accommodation or to be assessed through an alternative process.
  • Right now, SB 205 would allow companies to avoid compliance with the law if a federal standard includes “substantially similar” requirements. This is a vague standard that companies could exploit to evade the law. Companies should only be exempted from following Colorado’s law if there’s a federal law that preempts SB 205; they should not be able to simply pick and choose which laws and standards they comply with.

Colorado policymakers should work with public interest advocates to address these issues and ensure that the bill’s impact lives up to its groundbreaking potential.