
AI Policy & Governance, Equity in Civic Technology, Privacy & Data

What Happens When Computer Programs Automatically Cut Benefits That Disabled People Rely on to Survive

Bradley Ledgerwood and Tammy Dobbs both live in Arkansas and have cerebral palsy. They both need help with most tasks of daily life – repositioning their bodies, eating, dressing, using the bathroom, and going out. Bradley’s parents have provided his care for his entire life; his mother quit a well-paying job to have more time to help, and the family relies on funding from Medicaid’s ARChoices program. Tammy used the same funding to pay professional support workers. Before 2016, they each received funding to pay for 56 hours of care at home each week.

In 2016, Arkansas started using a new algorithm to calculate how many hours of care each person in the ARChoices program should receive. Instead of nurses using their discretion to make the decision, the computer program made it automatically. And instead of funding 56 hours of care each week, the state told Bradley and Tammy it would reimburse them for only 32 hours – a cut of nearly half. Both Bradley and Tammy asked Legal Aid of Arkansas for help, sued the state, and won important victories.

In our new report, Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities, we examine the arguments that people with disabilities and their lawyers have made when challenging tools that reduce or terminate their benefits, like the one in Arkansas. While the case law is still evolving and plaintiffs do not always win, advocates are challenging programs under three main theories:

  • Algorithm-driven benefits determinations can violate important constitutional and statutory due process protections. Under the Constitution and federal statutes, states must give people advance notice before reducing or terminating their benefits. The notice must also explain enough that people can understand how and why the state made the decision in the first place. That gives people the opportunity to request a review or appeal before the cuts take effect – including making sure they receive any support they need to craft an appeal and have a fair chance of winning it. Due process also requires the government to make decisions that are as fair as possible; if states or private vendors do not design algorithm-driven decision-making tools properly, they may not be able to make fair decisions at all.

    In K.W. v. Armstrong, an Idaho judge found that the state government erred by using an algorithm that relied on limited, unreliable, and inaccurate data, and by failing to audit it regularly despite knowing it needed to do so. The judge also said that the state had to explain to people why it decided to cut their benefits, including a description of how and why an evaluator decided a person’s needs had changed. Likewise, in Arkansas, the judge in Jacobs v. Gillespie said that the state must explain the specific factors that led to a cut in someone’s benefits, including the person’s relevant health assessments and the factors the algorithmic system relied on – though the state does not have to explain in detail how it designed and applies the algorithm overall.
  • States may violate statutory rulemaking requirements when they start using new algorithm-driven decision-making tools without engaging in a public notice-and-comment process. Federal and state laws governing “notice-and-comment” rulemaking require states to explain the rule they want to make and to consider public input about whether, how, and when they should use any tool that will change people’s access to benefits. In Arkansas Department of Human Services v. Ledgerwood, the judge found that the state had failed to meet its obligations because it did not tell the public that it planned to adopt the algorithm-driven decision-making tool, and plaintiffs secured an injunction against its use. Ultimately, after continued litigation and public pressure, the state decided to stop using the algorithm that had affected Bradley and Tammy’s lives.
  • Finally, algorithm-driven decision-making tools may violate the community integration mandate of the Americans with Disabilities Act when they cut people’s benefits so deeply that people risk being forced into an institution to receive necessary care instead of being able to stay at home. Plaintiffs in both Idaho and Oregon have argued that algorithmic cuts to their benefits jeopardize their ability to stay at home and out of institutions, but courts have not yet ruled on those claims.

These court cases have sometimes led to important victories, though those victories have not always resulted in long-term relief – and in other cases advocates have not succeeded at all. We hope this report offers useful guidance for state decision makers as they decide whether and how to develop algorithm-driven tools for benefits determinations. We also aim to help litigators, advocates, and other community members develop strategies to successfully challenge algorithm-driven decision-making in future cases, and to uplift the priorities and aims of disabled people ourselves.

Find the full report, as well as a plain language version, here.