CDT Comments to President’s Council of Advisors on Science and Technology Working Group on Generative AI
Also written by CDT Intern Clare Mathias
The Center for Democracy & Technology welcomes the opportunity to provide comments to the President’s Council of Advisors on Science and Technology on the potential risks and benefits of generative artificial intelligence systems. We focus our comments in particular on the impact of generative AI on our electoral processes.
Rapid advances in generative AI are spurring creativity and innovation, but they also pose significant threats to human rights. The threats to elections and democratic discourse are worth highlighting. In previous elections, operatives used robocalls to spread incorrect information about mail-in voting in an effort to suppress Black voter turnout, and used (sometimes illegal) micro-targeted social media campaigns to prevent people from voting. Operatives also used deceptive text messages to spread intentionally misleading voting instructions for a Kansas ballot initiative in 2022. It is easy to imagine bad actors using AI to exponentially scale and personalize voter suppression or other targeting efforts, increasing their harmful impact. Today, consumers can often spot a scam email, text, or robocall because it uses non-personalized language, contains grammatical or other language errors, or, in the case of robocalls, has a noticeably automated voice. Generative AI tools will make it easier to create tailored, accurate, realistic messages that draw victims in.
Generated and manipulated media can also distort public understanding of political figures and events. Recordings of public figures’ voices have been manipulated to trick senior government officials into thinking they are speaking with government leaders. Videos and images have been digitally altered to make public officials appear incompetent or compromised, or to misrepresent their policy positions. Experts have warned that deepfakes, which are difficult to authenticate or rebut, could sway an election if released just before a debate or in the closing days of voting, when there is little time to set the record straight. More generally, the growth of inauthentic content makes it harder for people to know what news and content they can trust, such that even authentic content is undermined. Journalists, whistleblowers, and human rights defenders are experiencing these effects already, facing higher hurdles than ever before to establish and defend their credibility.
While the rise of affordable AI-generated content poses new threats to public discourse, policy interventions must be approached with care. There are many legitimate reasons people use software to generate and alter content: laypeople and artists using AI to make creative works, people engaging in parody, actors being de-aged in a movie, voices being sampled for a music track, or researchers altering images of North American and European cities to show what they would look like if they faced the same bombardment as the cities attacked in the Syrian war. Barring or heavily restricting such activities would harm free expression, creativity, and innovation, and would quickly run afoul of the First Amendment.
Efforts to restrict or condition the distribution of AI-generated images may also suppress protected expressive activities. To give one example, in recent years a number of companies and stakeholders have come together in the Content Authenticity Initiative, an impressive undertaking that allows photographers and other content creators to attach immutable provenance signals showing the authenticity of their work (such as details of the image’s creator, date/time/location, tracked edits, and more). This is a creative solution to help newspapers, human rights watchdogs, and others reassure the public about the authenticity and provenance of images they create and display. But mandating the use of such an authenticity standard (or prohibiting the distribution of materials without such standards) would be deeply problematic, because it would suppress the posting and sharing of lawful images whose creators lack the resources or awareness to use a provenance tool, face safety risks if their work can be traced back to them, or simply do not want to do so.
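To make the general idea concrete, the sketch below shows what a simplified provenance record might look like when bound to an image file with a cryptographic hash. This is an illustrative assumption only, not the Content Authenticity Initiative’s actual manifest format or the C2PA specification; all field names and functions here are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical, simplified provenance record -- NOT the CAI/C2PA manifest format.
# Field names and structure are illustrative assumptions only.
def build_provenance_record(image_path: str, creator: str, location: str, edits: list) -> dict:
    with open(image_path, "rb") as f:
        image_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "creator": creator,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": location,
        "edits": edits,               # e.g., ["crop", "exposure adjustment"]
        "image_sha256": image_hash,   # binds the record to these exact image bytes
    }

def verify_provenance(image_path: str, record: dict) -> bool:
    # Any alteration to the image bytes after the record was created breaks the hash.
    with open(image_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["image_sha256"]

if __name__ == "__main__":
    # Create a stand-in image file so the example runs end to end.
    with open("photo.jpg", "wb") as f:
        f.write(b"\xff\xd8\xff\xe0 example image bytes")

    record = build_provenance_record("photo.jpg", "Jane Photographer", "Nairobi, Kenya", ["crop"])
    print(json.dumps(record, indent=2))
    print("Image matches record:", verify_provenance("photo.jpg", record))
```

A real provenance standard would go further, for example by cryptographically signing the record and embedding it in the file so that tampering is detectable; the point of the sketch is simply to show the kind of metadata and binding involved, and why producing it requires tools, effort, and a willingness to be identified.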
The challenges of regulating deepfakes do not mean policymakers must sit idle. To the contrary, the PCAST Working Group on Generative AI can recommend concrete steps to increase transparency and accountability in the design, development, and use of generative AI tools. The Working Group should consider how federal agencies and executive actions can advocate for and employ best practices and novel innovations to address potential harms.
Read the full comments here.