AI Policy & Governance

Politico – Are These States About to Make a Big Mistake on AI?

This op-ed – authored by CDT’s Matt Scherer and Grace Gedye, policy analyst at Consumer Reports – first appeared in Politico on April 30, 2024. An excerpt appears below.

At first glance, these bills seem to lay out a solid foundation of transparency requirements and bias testing for AI-driven decision systems. Unfortunately, all of the bills contain loopholes that would make it too easy for companies to avoid accountability.

For example, many of the bills would cover only AI systems that are “specifically developed” to be a “controlling” or “substantial” factor in a high-stakes decision. Cutting through the jargon, this means a company could evade the law entirely simply by adding fine print to its technical documentation or marketing materials stating that its product wasn’t designed to be the main reason for a decision and should only be used under human supervision.

Sound policy would also address the fact that we often have no idea if a company is using AI to make key decisions about our lives, much less what personal information and other factors the program considers.

Solid regulation would require businesses to clearly and directly tell you what decision an AI program is being used to make, and what information it will employ to do it. It would also require companies to provide an explanation if their AI system decides you aren’t a good fit for a job, a college, a home loan or other important benefits. But under most of these bills, the most a company would have to do is post a vague notice in a hidden corner of their website.

Read the full op-ed in Politico.