AI Policy & Governance, Elections & Democracy, Privacy & Data

Brief – Generating Confusion: Stress-Testing AI Chatbot Responses on Voting with a Disability

Introduction

Even as the “year of elections” draws to a close, the United States’ elections loom. From cyberattacks to mis- and disinformation spread on social media by foreign and domestic actors, digital technology has shaped the discourse, information environment, and perceived legitimacy of American elections in recent cycles. In 2024, the growing popularity and availability of chatbots powered by artificial intelligence (AI) introduce a new and largely untested vector for election-related information and, as our research found, misinformation.

Many communities, including the disability community, are concerned that online misinformation will undermine their members’ ability to vote. To date, however, there has been little research on the integrity of the online information environment for voters with disabilities, and even less on the quality and integrity of the information about voting with a disability that generative AI chatbots provide.

Voters, both with and without disabilities, may use chatbots to ask about candidates or pose practical questions about the time, place, and manner of voting. An inaccurate answer to a simple question, such as how to vote absentee, could impede the user’s exercise of their right to vote. There are numerous opportunities for error, including potentially misleading information about eligibility requirements, instructions for how to register to vote or request and return one’s ballot, and the status of various deadlines – all of which may vary by state. Similarly, misleading or biased information about voting rights or election procedures, including the role of election officials and what accessibility measures to expect, could undermine voters’ confidence in the election itself. Both of these concerns – diminishing an individual’s ability to or likelihood of voting, and reducing perceptions of election integrity – can be amplified for voters with disabilities, particularly because the laws surrounding accessible voting are even more complex and varied than those regulating voting more generally.

This report seeks to understand how chatbots, given the range of ways they interact with the electoral environment, could impact the right to vote and election integrity for voters with disabilities. In doing so, we tested five chatbots on July 18, 2024: Mixtral 8x7B v0.1, Gemini 1.5 Pro, ChatGPT-4, Claude 3 Opus, and Llama 2 70b. Across 77 prompts, we found that:

  • 61% of responses had at least one type of insufficiency. Over one third of answers included incorrect information, making it the most common problem we observed. Incorrect information ranged from relatively minor issues (such as broken web links to outside resources) to egregious misinformation (including incorrect voter registration deadlines and falsely stating that election officials are required to provide curbside voting).
  • Every model hallucinated at least once. Each one provided inaccurate information that was entirely constructed by the model, such as describing a law, a voting machine, and a disability rights organization that do not exist.
  • A quarter of responses could dissuade, impede, or prevent the user from exercising their right to vote. Every chatbot gave multiple responses to this effect, including inaccurately describing which voting methods are available in a given state, and all five did so in response to prompts about internet voting and curbside voting.
  • Two thirds of responses to questions about internet voting were insufficient, and 41% included incorrect information. Inaccuracies about internet voting ranged from incorrect information about assistive technology to erroneously saying electronic ballot return is available in states where it is not (like Alabama) and, conversely, that it is not available in states where it is (like Colorado and North Carolina).
  • Chatbots are vulnerable to bad actors. They often rebuffed queries that simulated use by bad actors, but in some cases responded helpfully, providing information about conspiracy theories and arguments for why people with intellectual disabilities should not be allowed to vote.
  • Responses often lacked necessary nuance. Chatbots did not provide crucial caveats about when polling places would be fully accessible, and misunderstood key terms like curbside and internet voting.
  • When asked to provide authoritative information, a positive use case for chatbots, almost half of the answers included incorrect information. Inaccuracies included incorrect webpage names and links, and a recommendation that users seek assistance from an organization that does not exist. This is particularly concerning because using chatbots as a starting point for finding other sources of information is an important and frequently recommended use case.
  • Outright bias or discrimination was exceedingly rare, and models often used language that was expressly supportive of disability rights.

Read the full report.

Check out the full Chatbot Responses on Disability Rights and Voting Dataset as a .CSV file (download).
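
For readers who want to explore the dataset programmatically, here is a minimal sketch in Python that loads the CSV and tallies the share of insufficient responses per model. The filename and column names (`model`, `insufficient`) are assumptions for illustration, not the dataset’s actual schema; check the headers of the downloaded file and adjust accordingly.

```python
# Minimal sketch for exploring the downloaded dataset with pandas.
# NOTE: the filename and the "model" / "insufficient" column names are
# assumptions for illustration -- check the actual CSV headers first.
import pandas as pd

df = pd.read_csv("chatbot_responses_disability_voting.csv")

# Share of responses flagged as insufficient, per model, assuming the
# "insufficient" column is a 0/1 flag (so the mean gives a proportion).
summary = (
    df.groupby("model")["insufficient"]
      .mean()
      .sort_values(ascending=False)
)
print(summary)
```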