On November 8, CDT VP of Policy Samir Jain spoke at the U.S. Senate's Artificial Intelligence Insight Forum on privacy and liability. His major point: the best way to protect against AI-related privacy harms is to pass legislation that places the privacy burden on the companies that collect and profit from data, rather than continuing to place that burden on individuals. AI presents a variety of privacy harms, including the rampant collection of data to train AI models, and its power will supercharge harms related to targeting and personalized harmful content.
Our testimony offered a few specific proposals: first, pass legislation that requires data minimization, heightened protections for sensitive data, limits on targeted ads, limits on using data in discriminatory ways, and impact assessments of AI systems; second, require companies to take steps to reduce the privacy risks of AI training datasets, conduct red team testing, and ensure that any federally funded AI research and models promote the development of privacy-protecting technologies and methods; and last, ensure there is a robust liability and enforcement mechanism that both compensates people who are harmed and creates appropriate incentives to prevent such harm in the first place.