

CDT Hosts Roundtable on Generative AI and Elections with U.S. Department of State Bureau of Cyberspace and Digital Policy

On Wednesday, March 13th, CDT hosted a roundtable with the State Department’s Bureau of Cyberspace and Digital Policy on the sidelines of the bilateral US-France Cyber Dialogue. The event, which focused on the risks and opportunities of generative AI in elections in 2024, featured U.S. Special Envoy and Coordinator for Digital Freedom Eileen Donahoe and French Ambassador for Digital Affairs Henri Verdier, alongside over 20 civil society experts and representatives of major technology companies. 

The roundtable was moderated by Tim Harper, Senior Policy Analyst for Elections and Democracy at CDT. Tim’s opening remarks for the roundtable are summarized below.

***

2024 will be the largest single election year since the advent of the internet – roughly 60 countries with over 2 billion people will go to the polls. Generative AI has become a major issue in the news – from the DeSantis campaign releasing fake images of Trump hugging Dr. Fauci last year to a deepfake robocall of President Biden discouraging voting in New Hampshire.

But this is a global problem that requires global solutions. We are seeing similar instances emerge around the world – a deepfake of Ukrainian President Zelensky told soldiers to lay down their arms. The PRC used entirely artificial news avatars to interfere with Taiwan’s elections this year. Synthetic audio of a leading parliamentary candidate in Slovakia, in which he appeared to discuss rigging the election, was released two days before the vote.

And the problem is not only deepfakes. Generative AI poses both information manipulation and cybersecurity risks. Among these, I want to highlight two specific risks that CDT is deeply concerned about.

First, generative AI facilitates the generation and spread of hyperlocal misinformation at scale. Take the example of polling conditions at individual precincts on Election Day. Bad actors could use publicly available precinct location data and phone numbers to distribute AI-generated content with highly specific claims about named polling places. A message like “FYI: voting at Peace Auditorium is paused due to fire sprinklers” may be especially persuasive, as voters are likely to trust information that contains recognizable details about their local polling place.

Second, generative AI can also be used to more persuasively target language-minority communities in their own languages. A traditional limitation on influence operations has been language expertise – in the past, translating misinformation into other languages was expensive, labor-intensive, and difficult without people familiar with the targeted community. Generative AI can target language communities with bespoke persuasion and misinformation content to discourage or misdirect voting, and it can create and distribute many variations of in-language “news” coverage or advocacy, especially in swing jurisdictions.

Generative AI can also be used in ways that pose both cybersecurity and information security risks to campaigns and election officials. AI capabilities could be used to generate convincing fake election records. Phishing campaigns can be made more personalized and persuasive, and generative AI can produce FOIA requests at a volume that overwhelms an elections office. Election officials’ voices could be cloned to send inauthentic communications about election results, or to direct staff to give bad actors access to their systems. All of this can be done more affordably and at a larger scale with generative AI.

Given these risks, we can’t simply admire the problem (or wait for legislation); we need concrete action now. During this conversation, we’ll talk about actions that key stakeholders in this space – AI developers, distribution platforms, political campaigns and parties, and election officials – can take to make an impact this year.

But first, the “how”: these actions will need to be developed and maintained with accountability. Commitments need to be developed with civil society experts at the table. Many of the commitments made to date have been developed behind closed doors – from the White House Voluntary Commitments to the Munich Tech Accords to the content policies announced by OpenAI and other developers. It’s understandable that these companies have been moving quickly, but they need to learn from the field of trust and safety, which emerged to address similar concerns in social media. Efforts like the Santa Clara Principles spelled out how companies can develop policies and programs with transparency and public consultation.

In addition to public consultation in developing content policies, a key lesson from trust and safety is that commitments don’t work without mechanisms for oversight and accountability. It’s not enough for companies to have usage policies about political content. They need meaningful infrastructure for enforcement, and we need transparency to see how well that enforcement is working. Here again, there are lessons from the field of trust and safety about norms for transparency reporting and mechanisms for allowing independent researchers to understand trends in usage and spot emerging harms.

Finally, allow me to provide some context for this conversation. These risks exist under new conditions that make this a challenging election cycle. On the company side, developments like widespread layoffs of policy and enforcement teams and changes to usage policies that have loosened protections against harmful election mis- and disinformation suggest that many companies may be less ready for elections this year than they were in 2020. In the U.S., partisan court cases and politically motivated Congressional investigations are working to stifle independent election misinformation researchers. In the wake of Murthy v. Missouri, the U.S. Cybersecurity and Infrastructure Security Agency has confirmed it is no longer providing intelligence on election interference operations to social media companies.

Add to this that the information ecosystem has changed quickly since 2020. Newer and smaller sites with less trust and safety experience, like Twitch and Discord, have grown large audiences and balkanized the information environment, while studies have shown that alt social media sites like Truth Social and Parler have played a role in increasing election misinformation. All of this has made tracking coordinated off-platform interference campaigns more difficult, and it means the threats generative AI poses to information integrity are arriving just as the infrastructure to address them is itself under attack.

I will open the conversation up to the group to hear more about the risks and opportunities presented by generative AI in elections this year, but I want to conclude by emphasizing that technology companies and government leaders – both in the room today – must play an active role in rejecting this trend and calling for meaningful investment in the trust and safety work needed to protect a healthy information environment.