2023 Annual Report: A Vision for Responsible, Rights-Respecting AI
In 2023, no topic in technology was hotter than artificial intelligence. Generative AI products like ChatGPT, Bard, Midjourney, and a suite of tools powered by large language models entered widespread public use, prompting speculation about how AI technologies would shape every aspect of society.
Teams across CDT weighed in on how AI is already impacting people’s rights and our democracy, and how highly capable foundation models will cause further change – in areas as wide-ranging as employment; housing and lending; government surveillance; elections; the administration of government programs; the use of AI in schools; and more. We testified in Congress four times, including in hearings before the Senate Judiciary Committee (on AI and human rights), Senate Committee on Homeland Security & Government Affairs (on government use of AI), and two of the Senate’s bipartisan AI Insight Forums. We served as a civil society delegate to the United Kingdom’s AI Safety Summit, the U.S.-EU Trade & Technology Council meeting in Luleå, Sweden, and the 2023 Summit for Democracy.
Meanwhile, on both sides of the Atlantic, we brought together civil society organizations to ensure that regulatory efforts focus on AI’s effects on people’s rights, social inequality, and our democracy. In the U.S., we led a coalition of over 50 organizations calling on the Biden Administration to ensure the federal government’s use of AI is safe, effective, and free from discrimination. Over 85 civil society groups joined our call for Congress to prioritize civil rights, consumer protection, and other public interest considerations in U.S. AI legislation. In Europe, we convened a high-level roundtable for the Spanish Presidency of the EU and senior representatives from 15 Member States to meet with civil society advocates on the EU’s AI Act. We followed these high-level engagements with action, sharing detailed plans for how policymakers could address AI harms while advancing responsible innovation.
We were pleased to see the Biden Administration’s AI Executive Order address many of CDT’s priorities. The EO launched efforts by the Department of Education, Department of Labor, Department of Housing and Urban Development, Department of Health and Human Services, and other agencies to address algorithmic discrimination and other harms within their respective sectors. The Administration also took significant action on the federal government’s own use of AI, including in guidance from the Office of Management & Budget. CDT commended the Administration for addressing high-risk government uses of AI across a range of areas, while urging OMB to issue more detailed risk-mitigation guidance for agencies and to strengthen transparency requirements so the public can better understand how agencies are using AI and addressing areas of concern.
In the EU, the European Parliament and the Council of the EU reached a final agreement on the AI Act in the waning days of 2023. Civil society as a whole faced challenges getting regulators to hear its concerns, but CDT Europe fought for the regulation to account for harms and human rights considerations, particularly for AI systems that process biometric data or that intersect with EU equality laws, such as those prohibiting discrimination in hiring.
As generative AI broke into mainstream use, CDT devoted particular attention to the technology. At CDT’s seventh annual Future of Speech Online event, we explored how to build a rights-respecting future where people benefit from generative AI. Our Elections & Democracy team rapidly launched work on how generative AI companies, social media companies, and election officials should address deepfakes and election-related mis- and disinformation. Education faced particular shockwaves as well, and our Civic Technology team quickly issued guidance on how schools could support teachers and students while avoiding the risk of over-disciplining students for AI use under unclear policies.
As AI continues to transform sectors across the economy, it’s more important than ever to develop effective ways to evaluate and govern the technology. In October, CDT was proud to launch our AI Governance Lab, which develops and promotes the adoption of robust, technically informed solutions for the effective regulation and governance of AI systems. Led by experts experienced in guiding the responsible development of AI products, the Lab provides a strong public interest voice in engagements with AI companies and multistakeholder initiatives around best practices, and offers expertise to policymakers and civil society.
By the end of 2023, CDT had partnered with the National Telecommunications & Information Administration to launch its work on the benefits and risks of open foundation models, and helped the National Institute of Standards and Technology kick off its AI Safety Institute Consortium, which convenes over 200 organizations to develop guidelines and standards for AI policy. As AI and the policies to help govern it continue to evolve, we’re poised to help policymakers, practitioners, and public interest partners push strong ideas over the finish line and make sure policies are implemented effectively.