

Combatting Identity Fraud in Government Benefits Programs: Government Agencies Tackling Identity Fraud Should Look to Cybersecurity Methods, Avoid AI-Driven Approaches that Can Penalize Real Applicants

Across the country, states are trying to manage a large number of fraudulent applications for unemployment benefits. This, coupled with a surge of legitimate applications due to the pandemic, has left many states overwhelmed and struggling to keep up with the volume. As a result, some states are turning to automated systems to help verify the identity of applicants or otherwise detect fraud in an effort to prevent incorrectly disbursing funds.

Unfortunately, these methods can come at a severe cost for legitimate applicants, for example if the system incorrectly rejects their applications or requires them to have access to modern technology like smartphones. Given how much money states are losing to fraud and the devastating effects that ineffective systems can have on legitimate applicants, the Department of Labor has announced plans to dedicate funding to improve the nation’s unemployment insurance (UI) system. This is welcome aid for an important component of the social safety net, but the Department and state agencies need to ensure that any attempts to improve the system leave it accessible to legitimate users, particularly when it comes to detecting and preventing fraud. For the types of large-scale, organized fraud attacks that many states are seeing, solutions grounded in cybersecurity methods may be far more effective than creating or adopting the sort of automated systems that have proven incredibly damaging in the UI context.

For instance, systems that rely on facial recognition, often by comparing a selfie uploaded by the applicant in real time to existing government documentation like a driver’s license, have been adopted in at least 20 states. However, the risks of facial recognition systems are well documented: facial recognition systems often exhibit racial and gender bias, resulting in higher error rates for women and people of color; applicants with cheaper or older phones are likely to have lower-quality cameras, resulting in higher error rates; and users unfamiliar with the technology may struggle to use the systems effectively, if at all. The stakes for applicants seeking unemployment insurance and other benefits are exceptionally high, making these shortcomings deeply concerning.

Facial recognition is not the only way that ill-considered AI-based systems can wreak havoc with citizens’ lives in the unemployment context. For example, inference-based systems that seek to discover characteristics of applications that would indicate fraud can cause significant harm if they are deployed without appropriate protections or remediation channels. A particularly egregious example of such a system is the Michigan Integrated Data Automated System, or MiDAS, used by Michigan’s Unemployment Insurance Agency (UIA) to process unemployment claims. Between 2013 and 2015, MiDAS wrongly classified between 20,000 and 40,000 people’s applications as fraudulent. Frequently, MiDAS, and thus the UIA itself, was unable to explain why the system had classified an application as fraudulent. In many cases, these errors destroyed applicants’ credit and financial security, and restitution and remediation have often been ineffective in repairing the damage done. (MiDAS is still in use by the UIA, and although it reportedly has more human oversight now, there are still concerns about its use.) Unfortunately, MiDAS is hardly unique – there are numerous examples of automated systems denying benefits and support to people in need.

Regardless of the specific type of automated system being used, such systems can let down the very people they are meant to serve in a number of ways:

  • Lack of transparency about how the system works, what sort of data it takes in, and how it makes determinations limits citizens’ ability to either provide the right input to make the system perform as expected or to challenge erroneous results.
  • These systems are often developed by external vendors rather than built in-house. Unfortunately, many government agencies lack the expertise to vet their vendors appropriately, leaving them ill-equipped to determine if the system will behave as expected or to audit the system effectively once it is in use.
  • Agencies often fail to provide resources, customer service, and avenues for redress when a user struggles with the system or the system fails to perform as expected. This can be exacerbated when over-reliance on the system leads agencies to neglect hiring and training human workers with the knowledge to help applicants navigate it.

Ultimately, however, these types of identity verification systems present a more systemic issue: they aren’t fit for purpose for the most pressing problem facing unemployment agencies right now. Much of the surge in fraud appears to be coming from organized criminal operations or nation-state attackers that use stolen identity information (often from prior data breaches), bots, and other techniques to engage in large-scale fraud. Workforce agencies are less accustomed to dealing with that specific brand of attack and would be better served by relying on measures akin to those used in cybersecurity rather than individualized fraud detection measures.

The key tension that agencies must resolve is how to introduce measures that make it harder for attackers to commit fraud without undermining the ability of legitimate users to obtain benefits in a timely and straightforward way. The decision to use facial recognition illustrates this tension: while it may prevent some fraud, it can also make a critical system practically unusable for at least some legitimate users at a particularly vulnerable time in their lives. Given those risks, policymakers should consider whether there are ways to more easily differentiate attackers from legitimate users, including by relying on measures similar to cybersecurity protections designed to combat large-scale attacks.

Some approaches are relatively straightforward and well known. For instance, agencies should consider using email or text message verification for applicants, such as emailing a code to a previously known email address, perhaps one already on file with another state department like a DMV, and asking the user to enter the code. For many modern email providers, email accounts offer relatively robust security against indiscriminate attacks. A motivated attacker could likely get into any given email account, but doing so at the scale needed to make unemployment benefits fraud financially worthwhile for a large organization is far more challenging. An approach like this has the additional benefit of being easier for most users than the more novel requirements of facial recognition verification, which requires a user to take a sufficiently clear and well-lit selfie and assumes the user owns the technology required to do so. The friction introduced into the system (e.g., accessing a specific, previously known email account and entering a code) is significantly more problematic for an organization trying to submit fraudulent claims at scale than it is for a legitimate user.
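
To illustrate the mechanics, here is a minimal sketch of one-time code verification. It assumes a hypothetical in-memory store and omits the actual email or text message delivery; the function names, six-digit code format, and ten-minute expiry are illustrative assumptions, not a description of any particular state’s system.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical in-memory store of pending codes, keyed by application ID.
# A real deployment would use a database and an email or SMS delivery service.
PENDING_CODES = {}
CODE_TTL_SECONDS = 10 * 60  # codes expire after ten minutes


def issue_verification_code(application_id: str) -> str:
    """Generate a one-time code and record its hash and expiry."""
    code = f"{secrets.randbelow(1_000_000):06d}"  # six-digit numeric code
    PENDING_CODES[application_id] = {
        "code_hash": hashlib.sha256(code.encode()).hexdigest(),
        "expires_at": time.time() + CODE_TTL_SECONDS,
    }
    # The code itself would be emailed or texted to the address or number
    # already on file; only its hash is kept server-side.
    return code


def verify_code(application_id: str, submitted_code: str) -> bool:
    """Check a submitted code against the stored hash, if it has not expired."""
    record = PENDING_CODES.pop(application_id, None)
    if record is None or time.time() > record["expires_at"]:
        return False
    submitted_hash = hashlib.sha256(submitted_code.encode()).hexdigest()
    return hmac.compare_digest(record["code_hash"], submitted_hash)
```

Each step is trivial for a legitimate applicant who controls the email address on file, but an attacker filing claims in bulk would have to compromise a separate inbox for every stolen identity.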

In addition, there are a number of technical indicators of fraud that agencies could consider in an effort to stymie attackers without unduly impacting legitimate applicants. For example:

  • IP addresses geolocated to a foreign country could indicate an attack by a foreign actor (although agencies should provide technical assistance to users, for example so that users connecting through a VPN are not improperly flagged).
  • Multiple applications from the same IP address, physical address, phone number, or device may indicate an organized attack (although agencies should have procedures in place to handle legitimate instances of shared resources like group homes, libraries, or families sharing IP addresses or phone numbers).
  • Multiple applications with the same Social Security Number or bank account number may also indicate an organized attack (although, again, it is important to allow for circumstances like family members sharing a bank account).
  • Application forms completed in an unreasonably short amount of time can indicate a bot, and thus a likely attack.

As with all cybersecurity interventions, it is important to consider the totality of the situation when assessing an application, since each individual indicator may be explicable on its own, but reviewing a range of factors can allow agencies to detect likely instances of fraud and respond in an efficient way.
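
To make that concrete, the sketch below shows one way the indicators above could be combined into a referral for human review rather than an automatic rejection. The field names, thresholds, and the rule of requiring at least two co-occurring indicators are purely illustrative assumptions, not recommendations for specific values.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Application:
    """Minimal stand-in for the data an agency might see for one claim."""
    ip_country: str           # country geolocated from the applicant's IP address
    ip_address: str
    ssn: str
    bank_account: str
    form_fill_seconds: float  # time between form load and submission


def fraud_indicators(app: Application, all_apps: List[Application]) -> List[str]:
    """Collect technical indicators of possible organized fraud for one application."""
    indicators = []

    # A foreign IP address may indicate a foreign actor (legitimate users on
    # VPNs can also trip this, so it should never be disqualifying by itself).
    if app.ip_country != "US":
        indicators.append("foreign_ip")

    # Many applications sharing an IP address, SSN, or bank account can signal
    # an organized attack, but also shared housing, libraries, or family
    # members sharing resources, so the thresholds here are permissive.
    if sum(other.ip_address == app.ip_address for other in all_apps) > 5:
        indicators.append("shared_ip")
    if sum(other.ssn == app.ssn for other in all_apps) > 1:
        indicators.append("duplicate_ssn")
    if sum(other.bank_account == app.bank_account for other in all_apps) > 3:
        indicators.append("shared_bank_account")

    # Forms completed faster than a person could plausibly type suggest a bot.
    if app.form_fill_seconds < 20:
        indicators.append("bot_speed_fill")

    return indicators


def needs_human_review(app: Application, all_apps: List[Application]) -> bool:
    """Refer for review only when several indicators co-occur, not on any single one."""
    return len(fraud_indicators(app, all_apps)) >= 2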

By recognizing that this new type of benefits fraud requires new solutions, workforce agencies may be able to design defenses that are effective against these attacks, while still enabling individuals to access critical, life-sustaining benefits.