
CDT CEO Alexandra Reeve Givens’ Remarks as Civil Society Delegate to U.S.-EU Trade & Technology Council

CDT President & CEO Alexandra Givens spoke as an invited civil society representative at the fourth meeting of the U.S.-EU Trade & Technology Council in May 2023.

Further information about the meeting is available here.

Video of the panel is here.

***

Secretary Blinken, Secretary Raimondo, Vice President Vestager and other representatives, thank you for the opportunity to speak today. My nonprofit organization, the Center for Democracy & Technology, works to protect human rights and democratic values in the digital age. 

Others have spoken today about the positive uses of generative AI, and there is no question that such uses exist. Generative AI tools are allowing people to express themselves in new ways: to bring images from their imagination to life, and, for some, to find new efficiencies and opportunities in how they work – and these tools are only in their earliest stages.

But as Mr. Amodei described, the risks are real, and they are manifesting already. We are already seeing the professional, reputational, and potentially physical harms that arise when people rely on generated text as accurate, unaware of the likelihood of “hallucinations,” or fabricated results. There is the risk that generative AI tools will supercharge fraud, as they make it easier to quickly generate personalized scams or to trick people by impersonating a familiar voice. There are risks of deepfakes that misrepresent public figures in ways that threaten elections, national security, or public order, and risks of fake images being used to harass, exploit, and extort people.

None of these harms are new, but they are made cheaper, faster, and more effective by the ease and accessibility of generative AI tools. In this context it is gratifying to see the U.S. and EU continuing their dialogue through the TTC. I will briefly share four priorities for your consideration.

The first is to recognize that while generative AI is grabbing headlines around the world, there are other uses of AI that are directly impacting people’s rights, freedoms and access to opportunity today, from the use of AI to determine who gets a job or receives public benefits, to the use of AI surveillance tools by law enforcement and more. Policymakers cannot lose sight of those core issues even as they expand their focus to generative AI.

Second is the important work that must happen on measuring AI harms and the effectiveness of mitigation strategies. Whether as part of TTC efforts, in assessing new voluntary initiatives, or in passing regulations, policymakers must be crystal clear that efforts to evaluate and manage AI risk must meaningfully address the full spectrum of real-world harms. For example, policymakers must ensure AI audits are rigorous, comprehensive, and free from capture. Policymakers must address the danger that frameworks for “measuring risk” often address only those harms that can be easily measured – which privileges economic and physical harms over equally important harms to people’s privacy, dignity, and right not to be stereotyped or maligned. As policymakers in the U.S. and the EU look to industry efforts or technically oriented standards bodies to consider questions of measuring and managing risk, they must ensure these fundamental rights-based concerns are appropriately addressed.

Third, the TTC roadmap calls for joint tracking of emerging risks and incidents of harms. This valuable work should be lifted up and expanded, because lack of transparency is a significant barrier for policymakers and civil society as we assess the potential risks of AI tools. Whether we are discussing generative AI or AI decision-making tools, the U.S. and EU should enhance their information gathering strategies and share findings with domestic policymakers and the public sphere. Robust information about incidents of harm will help us all work from a common foundation.

Finally, and perhaps most critically, the TTC’s work must more deeply engage civil society voices – a point that also holds for domestic AI efforts and the standards processes I referenced earlier. The participants in “AI governance” conversations are often self-selecting: people who feel comfortable in a technical realm. Impacted communities and other civil society voices must have a seat at the table – which is to say, they must be invited in, or met where they are, to share their perspectives and expertise. Such diversity of thought is needed to identify risks in the first place, to consider trade-offs and red lines, and to weigh which approaches to explainability, accountability, and governance work for real people in the real world. Not only will these conversations benefit from such engagement, but the TTC’s reach and impact will be that much greater if communities on both sides of the Atlantic can benefit from the pathways that have been forged.
