FAQ on Colorado’s Consumer Artificial Intelligence Act (SB 24-205)

In 2024, Colorado passed its Artificial Intelligence Act (“AI Act”), which provides some basic but critical transparency and accountability for AI-driven decisions. Since the AI Act’s passage, opponents have raised a series of arguments seeking to weaken or even repeal the law. This FAQ provides a detailed explanation of what the AI Act covers and requires, why the basic protections it provides are necessary, why the most common arguments against the law are unpersuasive, and why the law should be strengthened to ensure consumers, workers, and companies alike are properly protected.

Overview

Why do we need legislation on AI-driven decisions? 

What types of AI does the law apply to? Does it cover generative AI systems like ChatGPT? 

What rights does the Colorado AI Act give workers and consumers?

More on the Law’s Transparency Requirements

Does the law require companies to publish their source code, training data, or intellectual property?

What type of explanation does the law require companies to give consumers and workers who are rejected in an AI-driven decision? And why is that explanation necessary? 

The law requires companies to give some information to consumers proactively, without the consumer having to request it. Why is that proactive disclosure necessary?

Isn’t proactive disclosure unusual? 

But we usually don’t require this level of transparency for decisions made by humans. Why do we need it now for automated decisions?

More on the Law’s Other Requirements

What other obligations does the Colorado AI Act place on companies? 

Rebutting Other Arguments Against the Law

Does this law target Colorado companies? Would this give companies a reason to move to another state?

Doesn’t Colorado’s law risk creating a “patchwork” of different state laws that companies will have to comply with? Why shouldn’t we wait for Congress to act so the same rules apply everywhere?

Where the Law Should Be Strengthened

Is the Colorado AI Act enough? Are any other protections needed to ensure workers and consumers know about AI-driven decision systems that affect them and are protected from any harmful effects?

Why prohibit the use or sale of discriminatory AI decision systems? Isn’t it enough that the law gives companies a duty to take reasonable care to prevent algorithmic discrimination? 

What additional disclosures are needed?

What about the post-decision explanation? What more is needed there?

What should be changed with the law’s exemptions and defenses?

What needs to be improved about the law’s enforcement provisions? 

Overview

Why do we need legislation on AI-driven decisions?

AI decision systems play a growing role in deciding whether you get a job, an apartment, a mortgage, or health care, as well as how much you earn and how much you pay for a product or service. The AI tools used in these circumstances have the potential to increase efficiency by processing much more information much faster than a human can.

But that promise is accompanied by considerable risk and potentially masks a number of harmful effects. Current AI decision systems are incapable of reasoning and generally appear quite bad at predicting human behavior and social outcomes. These tools have the potential to violate people’s privacy by collecting and analyzing massive amounts of sensitive personal information without individuals’ knowledge or consent. Consumers and workers may not be aware that an AI decision system is being used, much less what AI-driven decisions are based on. In the worst cases, algorithmic systems have also been used to drive up rents and prices, drive down wages, and keep workers from forming unions.

These harmful effects are possible because companies have no incentive or obligation to tell consumers or workers when they use AI to make crucial decisions. This lack of transparency can hide errors and biases and interferes with longstanding labor, civil rights, and consumer protection laws. Colorado’s new law is an important step to ensure basic accountability for AI systems making decisions that affect our lives.

What types of AI does the law apply to? Does it cover generative AI systems like ChatGPT?

Nearly all of the law’s provisions apply only to AI systems that make or could alter the outcome of “consequential decisions” about consumers or workers—meaning decisions that have a “material legal or similarly significant effect” on a consumer’s or worker’s access to, cost of, or terms of major economic and life opportunities, specifically employment, education, financial or lending services, health care, housing, insurance, legal services, or essential government services. In other words, most of the law’s provisions only apply to AI systems that are used to make certain key decisions that have major impacts on ordinary people’s lives.

The law’s transparency, impact assessment, and duty-of-care requirements do not apply to any other AI systems, including generative AI systems like ChatGPT—unless the system is used in making one of those key decisions. Otherwise, the law requires only that companies tell consumers when they are interacting with an AI system. And even that disclosure isn’t necessary “if it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.”

What rights does the Colorado AI Act give workers and consumers?

The Colorado AI Act ensures consumers receive basic but essential disclosures about AI-driven decisions and requires companies to do simple due diligence before marketing or using AI systems that can alter the course of consumers’ lives and careers.

Under the law, AI developers are required to publish a simple statement summarizing what AI systems they sell and how they test them for bias. Developers will also have to provide companies deploying their tools with certain information about how the AI system works and whether it poses any reasonably foreseeable risks of algorithmic discrimination. Companies that use AI systems to help decide whether a person gets a job, a house, or similar economic opportunities will have to tell the person that AI is being used and what the AI system’s purpose is; provide a “plain language description” of the AI system; give the person a basic explanation if the AI system leads to an adverse decision; offer them an opportunity to correct inaccurate personal information; and in some cases, provide an opportunity to appeal the decision.

More on the Law’s Transparency Requirements

Does the law require companies to publish their source code, training data, or intellectual property?

No. Before a decision is made, the AI Act requires companies to disclose only the purpose of the AI system, the decision it will affect, the deployer’s contact information, and a “plain-language description” of the system. (Even these requirements leave significant leeway for companies to decide how much information to disclose.) This type of information is already routinely shared by developers and sellers of AI-driven decision systems on their websites and in their marketing materials and pitches. 

The AI Act will require company websites to list what types of AI decision systems the company makes or uses, how the company tests them for bias, and, if a company uses the AI system to make decisions about consumers, “the nature, source, and extent” of information about consumers that the company collects. That last requirement is just a subset of the information companies already have to provide consumers under Colorado’s Privacy Act.

Developers would have to provide deployers of AI decision systems (that is, companies that use AI systems to make key decisions about consumers and workers) with “high-level summaries of the type of data used to train” their systems, but not the training data itself. Knowing the general types of data an AI system was trained on is essential to ensure deployers know how the system works and to comply with their obligations under Colorado’s data privacy law. Moreover, even these high-level summaries need only be provided to deployers, not to consumers, regulators, or the general public.

What type of explanation does the law require companies to give consumers and workers who are rejected in an AI-driven decision? And why is that explanation necessary?

Companies would have to tell consumers or workers rejected through an AI-driven decision the “principal reasons” for the decision, the degree to which and manner in which AI played a role in that decision, and what personal data was processed in making the decision (which, again, is already required under Colorado’s privacy law).

Giving consumers explanations of adverse decisions is not a new idea. Congress adopted very similar rules for consumer finance decades ago during the earliest wave of automated decision-making. In the 1960s, banks began using computer-generated credit scores to decide which consumers could get loans and what interest rates each would pay. It became clear that many consumers were being denied credit without any explanation or based on inaccurate information, and consumers were outraged. Congress responded by passing the Fair Credit Reporting Act (FCRA) in 1970, giving consumers the right to know what factors go into credit decisions and a right to review and correct the underlying information that those decisions were based on.

It makes sense for consumers and workers to receive similar transparency for AI-driven decisions so that they know when companies use AI to make key decisions about them, what factors those decisions are based on, and how they can correct inaccurate information.

The law requires companies to give some information to consumers proactively, without the consumer having to request it. Why is that proactive disclosure necessary?

Proactive disclosure obligations are essential in areas where consumers and workers wouldn’t otherwise have a full picture of important situations they face. That is certainly the case with respect to many AI-driven decisions. Without proactive disclosure, most consumers don’t even know when, how, or why companies use AI to make key decisions about them, and thus wouldn’t even know which companies they might want to request information from. 

Because data is spotty, we don’t have a clear picture of how many companies use AI-driven decision systems, but there’s lots of anecdotal evidence that the practice is already widespread. Surveys of companies indicate that anywhere from a third to the vast majority already use AI in recruitment and hiring alone. Research shows that experts on AI hiring tools have serious concerns that many, if not most, such systems have a disparate impact based on race, sex, or ethnicity. But we don’t know which companies are using biased or otherwise flawed tools or which workers and consumers are being affected by them.

Stories about harmful uses of AI-driven decisions have occasionally become public thanks to whistleblowers and dogged investigative journalism. ProPublica has published a trio of reports on how the health care giant Cigna secretly used an algorithm to mass-reject policyholders’ claims—and then threatened to fire a physician who pushed back. But consumers and workers should not be forced to rely on whistleblowers and nonprofit news outlets to bring these issues to light.

Isn’t proactive disclosure unusual?

Not at all. Many laws impose disclosure obligations on businesses in their dealings with consumers and workers. Banks must provide truth-in-lending statements. Appliance manufacturers have to disclose the expected energy consumption of an appliance. Food manufacturers have to label food packages with ingredients and nutritional information. Sellers of real property have to disclose their knowledge of asbestos and lead pipes in a building. Medical providers must disclose their HIPAA privacy policies. Car companies have to disclose the fuel efficiency of their cars. Employers in Colorado and many other states have to list deductions and withholdings on workers’ paystubs. And so on.

But we usually don’t require this level of transparency for decisions made by humans. Why do we need it now for automated decisions?

Laws often require companies to be more transparent when processes normally done by humans are automated or digitized. Take privacy law. Until the Digital Age, there were no laws limiting someone’s ability to go up to any consumer on the street and ask them to reveal their shopping preferences, stores visited, personal interests, and buying habits. Such a law probably seems unnecessary—after all, there are only so many consumers a human can talk to in a day. Each consumer would know they are being asked to reveal private information, would have repeated opportunities not to provide it, and would know exactly who to blame if their information was leaked or used inappropriately.

But Colorado, like many other states, decided in 2021 to restrict the digital collection of such data about consumers. Why? Because using modern technology, companies can collect massive amounts of information about numerous people in a short amount of time, without the consumers ever knowing it was happening.

The same dynamic is at play with AI-driven decisions. A biased AI video interview platform can reject more candidates in an hour than a biased human recruiter can in a year. Moreover, the role of AI in companies’ decision-making processes is often hidden, so consumers and workers might be subject to critical AI tools without ever knowing it. The scale raises the risk of harm for individuals and companies alike, and the lack of transparency undermines accountability for erroneous or biased decisions. There is no way to attack that problem without strong, proactive disclosures.

More on the Law’s Other Requirements

What other obligations does the Colorado AI Act place on companies?

The AI Act requires deployers of AI decision systems to conduct annual “impact assessments,” the most significant component of which is a requirement that companies analyze whether an AI decision system creates a risk of algorithmic discrimination. (Notably, in the sphere of employment decisions, longstanding federal regulations already require most companies to conduct such analyses.)

This assessment is otherwise essentially a set of recordkeeping obligations. Deployers must describe the steps they take to mitigate discrimination risks, though the AI Act does not require them to implement those steps before using the AI system. Additionally, companies must describe the AI decision system’s purpose, intended uses, data used and produced, performance, and post-deployment monitoring. Developers also have to conduct impact assessments, which are comparable in scope to deployers’ impact assessments.

Rebutting Other Arguments Against the Law

Does this law target Colorado companies? Would this give companies a reason to move to another state?

No, because the law does not distinguish between companies based in Colorado, another state, or another country. What matters is whether a company sells or uses AI systems that make crucial decisions about Colorado’s consumers and workers. If it does, the company has to provide the information to the consumer and conduct the impact assessment that the law requires, regardless of whether it is based in Colorado or elsewhere.

Doesn’t Colorado’s law risk creating a “patchwork” of different state laws that companies will have to comply with? Why shouldn’t we wait for Congress to act so the same rules apply everywhere?

Congress hasn’t passed significant consumer protection legislation for years, and Colorado shouldn’t have to wait for Congress to take action on behalf of the whole country to protect its own residents. Waiting for Congress to act may well lead to years of inaction even as AI decision systems become more widely used.

States are supposed to be the laboratories of democracy, testing out different approaches to legislation on new issues so that Congress and other states can evaluate which approaches are most and least effective. Holding Colorado hostage to other states’ attempts to address this critical issue so as to avoid a “patchwork” would undermine the legislature’s role and lead to a race to the bottom. Only the weakest forms of regulation would survive because the states willing to push for stronger regulation would have to water down their bills to match states that are more cautious or that oppose regulation altogether. (And that’s actually a best-case scenario; in the worst-case scenario, trying to find a bill that a broad swath of states all find acceptable would lead to gridlock and complete inaction.)

In the past, the “patchwork” argument has been used to push back against proposed regulations on everything from smoking to data privacy. To use the latter example, states have passed a number of different models of data privacy laws over the past half-decade. Despite similar arguments that a “patchwork” of data privacy laws would stifle innovation, the digital economy has continued to thrive as companies quickly adapted their operations to address legal compliance. 

Companies will have more than a year to prepare before the Colorado law’s February 2026 effective date. They won’t be starting from scratch, either. Most of the disclosure requirements pertain to information already in developers’ and deployers’ possession. Industry and government standards organizations, nonprofits, and academics have been working on explainability, testing, and compliance tools for AI systems for years, and there is already a rapidly expanding ecosystem of companies offering those services.

Colorado’s AI law is carefully constructed to minimize the burden it places on companies; its requirements aren’t onerous, but they can go a long way towards addressing the potentially serious impacts that biased or error-prone AI systems can have on ordinary people.

Where the Law Should Be Strengthened

Is the Colorado AI Act enough? Are any other protections needed to ensure workers and consumers know about AI-driven decision systems that affect them and are protected from any harmful effects?

The labor, consumer protection, and civil rights organizations that have weighed in on the Colorado law unanimously think that while it is a step in the right direction, more is needed to protect workers and consumers. The specific asks are laid out in a statement that public interest groups released on the law, which reads:

Policymakers should also strengthen the law and further protect Coloradans by:

  • Building on existing civil rights protections by prohibiting the sale or use of discriminatory AI decision systems;
  • Expanding the law’s transparency provisions so that consumers understand why companies are using AI decision systems and what and how these tools measure, including requiring explanations to be actionable;
  • Strengthening impact assessment provisions to require companies to test AI decision systems for validity and the risk that they violate consumer protection, labor, civil rights, and other laws;
  • Eliminating the many loopholes that exclude numerous consumers, workers, and companies from the law’s protections and obligations, as well as unnecessary and overbroad rebuttable presumptions and affirmative defenses that allow companies to escape accountability; and 
  • Strengthening enforcement by giving consumers and local district attorneys the right to seek redress in court when companies fail to comply with the law.

Why prohibit the use or sale of discriminatory AI decision systems? Isn’t it enough that the law gives companies a duty to take reasonable care to prevent algorithmic discrimination?

A duty of “reasonable care” is not enough. Civil rights laws do not contain exemptions based on tort-like “reasonable care” standards—they simply make discrimination illegal, full stop. It makes little sense to create a different, lower standard for AI-driven decisions.

What additional disclosures are needed?

It is important for the law to be clear and to ensure disclosures are useful and meaningful to consumers and workers who are subjected to AI-driven decisions. The AI Act’s requirement that deployers provide impacted individuals with a pre-evaluation notice that includes “a plain language description of” the system should be clarified. The Act’s current notice provisions would not provide consumers or workers with crucial information about what AI decision systems are supposed to measure and how they measure it. At a minimum, consumers and workers should know what types of decisions AI systems are used to make, what personal data and attributes are considered, and how systems use that information to generate outputs. That information is essential for consumers and workers to make an informed choice about whether to subject themselves to such an assessment. In the employment context, robust notices are particularly critical for individuals with disabilities, who otherwise will not know whether they may need to seek accommodation. 

What about the post-decision explanation? What more is needed there?

Here too, the law should be made more specific; the AI Act’s explanation requirement is, for instance, far less specific than the explanations consumers are entitled to under the Fair Credit Reporting Act. Specific explanations are even more critical now because AI systems may scrape information about people off the internet. Such information may be inaccurate, confidential, or harmful — for example, where the information was tied to a data breach or non-consensual intimate images (NCII). There are also documented instances of applicants to rental units being rejected because algorithmic screening software mistakenly confused them with someone else who has a similar name and a criminal record. Companies that use algorithmic systems to make consequential decisions about consumers should provide explanations that are clear enough to ensure that consumers understand the output, what information it is based on, and whether it is based in whole or in part on inaccurate or improper information or inferences.

Additionally, an explanation should be provided for all decisions in which an AI system’s output was a substantial factor rather than just “adverse” decisions. In many instances, such as decisions involving premiums, pricing, or educational placement, it is impossible to draw clear lines between favorable and adverse decisions. Moreover, all consumers benefit from understanding the factors that go into consequential decisions.

What should be changed with the law’s exemptions and defenses?

Most of the AI Act’s dozens of exemptions and loopholes were included without any explanation as to why they are needed. Some appear to be aimed at types of AI that the Act does not cover. Others are worryingly vague, such as an exception for health care decision systems that “are not considered high risk,” a loophole that exempts any system “approved, authorized, certified, cleared, developed, or granted by” any federal agency, and an ambiguous carve-out that allows companies not to comply with the AI Act so long as they follow federal standards that impose “substantially equivalent or more stringent” obligations. There’s nothing wrong with tailored exemptions (e.g., protecting developers from liability if a deployer makes major changes to the AI system without authorization). But companies seeking exemptions should come forward with proposed exceptions and justifications in writing so that they can be the subject of public discussion.

The law also contains a right to cure that could allow bad actors to flout the law and wait until they are caught before fixing problems. That defense should be narrowed so that it is available only if violations were inadvertent and affected only a small number of consumers.

What needs to be improved about the law’s enforcement provisions?

Many laws purporting to protect workers, consumers, or the public at large have failed to accomplish their goals because they lacked strong enforcement provisions. If a company can write off potential fines, penalties, or other consequences under a law as a mere inconvenience or as the cost of doing business, then companies may simply ignore that law.

The current enforcement provisions threaten a similar fate for the AI Act. The text would make violations of the Act an “unfair trade practice” under Colorado’s business laws, but it then strips away the most effective enforcement mechanism: a consumer’s right to pursue a civil action. This limitation will likely result in an overburdened Attorney General’s office and an underenforced law.

Consumers harmed by violations of the AI Act should have access to all remedies that Colorado’s consumer protection laws allow, including a private right of action. Individual consumers are in the best position to know when a company has used an AI tool improperly or unfairly to make an important decision about their lives, and they should have the ability to take action to vindicate their rights under this law.