
Big Brother Meets Bedlam: Resisting Mental Health Surveillance Tech

People with mental health disabilities are routinely not trusted, and, as a result, they are increasingly surveilled by caretakers, doctors, family, and others. For example, in 2017, the FDA approved Abilify MyCite, a psychotropic medication that tracks whether a person has actually taken their pills and can send that data to their doctor. The pill contains a sensor that signals a wearable patch, and the patch sends data about when pills are swallowed to the company’s app. Unfortunately, policymakers have joined in, routinely advocating for expanded surveillance and institutionalization of people with mental health disabilities.

People with psychosocial disabilities have long advocated for self-determination, community-based services and treatment, and resistance to surveillance as a means of exercising control or advancing public safety. Despite the pushback, there has been a growing use of predictive policing tools and other surveillance technologies at school, at home, and in the workplace, which has led to a rush among researchers and developers to create new systems that can assess or predict a diagnosis of mental illness. 

In October, CDT and the Kiva Centers hosted a panel of experts to discuss emerging mental health surveillance technologies. The panel featured Karen Nakamura, an anthropologist at UC Berkeley who works on disability, technology, and access related to AI, in conversation with psychiatric abolition activist Leah Harris and activist Vesper Moore, chief operating officer of an Indigenous-led organization.

What does the title mean?

“Big Brother” is the name of the controlling government in George Orwell’s novel 1984, about a dystopian society where people are oppressed, afraid, and controlled. In the book, Big Brother controls every part of people’s lives, and people who try to fight back can be tortured or killed. Big Brother also uses technology to watch and listen to everyone at all times.

“Bedlam” refers to a former mental hospital in England, St. Mary of Bethlehem, which was notorious for poor living standards and inhumane treatment of residents, many of whom were there against their will. Today, the name Bedlam is used to talk about any institution for people with mental illnesses. 

Using “Bedlam” and “Big Brother” together connects the abuse of mentally ill people with surveillance technology.

What terminology and language should we use when discussing mental health issues? 

Karen shared that people may use a variety of terms to describe their experiences. Some terms that people – including Leah and Vesper – use are “mad,” “psychosocial disabilities,” “neurodivergent,” and “psychiatric survivors.” Other people talk about having “lived experience.”

The term mad is often used to insult people with mental illnesses. But some people are taking back (reclaiming) this word for themselves. They use the word “mad” with pride. They say they are mad to call attention to social issues. Mad people are dealing with discrimination, unfair treatment, and abuse. Many mad people do not believe there is anything wrong with them. They do not need to be fixed. Read more about reclaiming the word “mad.”

The term psychosocial disabilities is another term used to talk about mental illness. This language shows that mental illness is about how people’s brains work and is also affected by society. People’s brains do not work the same way. People think, feel, and remember in different ways. Sometimes, people experience distress because of how their brains work. At the same time, mentally ill people are treated differently in society than people who are considered “normal.” People often experience distress because of how they are treated. Some examples of psychosocial disabilities or mental illnesses are depression, schizophrenia, dissociative identity disorder, anxiety, bipolar, and borderline. Read more about the term “psychosocial disability.”

The term neurodivergent describes someone whose brain isn’t considered “normal” or “neurotypical.” This can relate to how someone thinks, learns, remembers, communicates, and senses. Some examples of neurodivergence are bipolar, autism, borderline, post-traumatic stress response, and attention deficit disorder (ADD). All people with psychosocial disabilities or mental illnesses are also neurodivergent, but not all neurodivergent people have mental illnesses. Neurodivergent people often face discrimination and oppression. Read more about neurodivergence.

The term psychiatric survivor refers to a person who has been harmed or abused by psychiatry. That harm can come from specific doctors, hospitals, or programs. A psychiatric survivor might also use other language to talk about their experiences or identity. Some psychiatric survivors might want support or treatment. Others do not want medical treatment at all. Read more about what it means to survive psychiatric trauma.

How does surveillance tech affect people with psychosocial disabilities? 

New surveillance technologies are a continuation of past discrimination. Before the current age of surveillance technologies, Leah’s mother, a first-generation psychiatric survivor, spent a lifetime dealing with surveillance. She was diagnosed with schizophrenia and died in 2001. Leah’s mother lived in a different time. She was forced to go to psychiatric hospitals, a type of disability institution, and forced to take psychiatric medications she did not want to take. People with psychosocial disabilities are still forced into hospitals and onto medications against their will. The specific treatments are different now, but the coercion is not.

Today, doctors usually don’t perform surgeries like lobotomies (cutting connections inside the brain). And electroshock therapy (also called electroconvulsive therapy or ECT) for depression is not as common. Doctors know that ECT can be dangerous and that lobotomies do not help people. But judges can still order people with psychosocial disabilities to take medications and go to the hospital against their will. And new surveillance technologies cause many of the same harms that Leah’s mother dealt with when she was alive.

Technology is changing faster than policy. Mad people have always dealt with surveillance, but new technology will create new types of surveillance and potentially new types of harm, like a pill that tracks whether someone has taken it. And mad people in Black and Brown communities will face compounded harms that reflect existing discrimination.

Algorithms are not value-neutral; they carry baked-in bias. Some professionals think that algorithms will be better than humans at figuring out what kinds of care disabled and mad people might need, and that algorithms might even replace support and care workers. But they overlook how economic policies and the unfair treatment of care workers drive those worker shortages in the first place. Advocates should not fall for the false promises of techno-optimism (the idea that technology will always be good and will help solve problems). Instead, advocates should embrace techno-skepticism: understand where technology will fall short while recognizing where it can be helpful.

Companies can use algorithms to track people and make dangerous predictions about their lives. People don’t always know who controls their mental health-related data or what happens to it, and companies often use it for purposes far beyond treatment. Amazon’s algorithm suggested that people who looked at books about suicide might be interested in buying rope for a noose or poisonous chemicals. BetterHelp offers convenient online therapy but shares people’s data with other companies. Crisis Text Line, a hotline for people thinking about suicide, was caught sharing extremely personal information with other companies to increase revenues.

Algorithms used to predict whether someone has a mental health disability constitute surveillance. Predictive analytics can use data about the groups people belong to, their health data, or the zip code they live in. Suicide prediction tools are one example: when a tool flags a person, that person is at risk of being sent to a hospital or another institution. Threat assessment tools are also dangerous. Employers, schools, and police departments already use them to predict who could become violent or dangerous, but the tools can rely on stereotypes and biases about mentally ill people. They may even be used to send someone to jail.

Tracking people’s location can be dangerous. The new 988 crisis line does not collect geolocation information that can show where a caller is. But some people wanted 988 to collect that information and are still pushing for it, arguing that it would facilitate timely intervention in dangerous situations. Yet police often respond to mental health crisis calls, and sometimes they hurt and kill people in crisis. People in a crisis might not feel safe or comfortable asking for help if they are worried about being tracked.

Surveillance technologies can enforce neurotypicality. People with psychosocial disabilities spend a lot of energy and do a lot of labor to present a more “neurotypical” demeanor. For example, exam-monitoring software tracks where an exam-taker is looking and makes sure they aren’t doing anything “atypical.” This has intersectional impacts, too. If the software has trouble recognizing darker skin tones, an exam-taker with darker skin is forced to shine a bright light on their face so the software recognizes them, as if they were in an interrogation room. All of the extra effort people have to put in to keep these technologies from punishing them is draining.

What kinds of surveillance technologies are particularly concerning?

Abilify MyCite medication surveillance puts people at risk of forced treatment. Knowing that their compliance is being tracked leaves people with little choice: they may feel forced to take the pills even if they would prefer not to. Additionally, the system gives false reports about 6% of the time, meaning that even when people take their pills as prescribed, it can report that they aren’t taking their medication. For someone taking one pill a day, that could mean a couple of false “missed dose” reports every month despite perfect adherence. False reports can put people at risk of forced treatment. They could be taken to a hospital against their will.

Other pharmaceutical (medication) companies might follow the Abilify MyCite example. Companies might want to cash in on mentally ill people by treating “noncompliance” (people not following instructions) as a problem. Not all mad people take psychiatric medications, and taking medications isn’t always helpful for everyone. Disability rights and disability justice approaches hold that people should be able to make their own decisions about their bodies, including whether they want to take medication. If companies create more pills that track “compliance,” it will limit people’s ability to make decisions for themselves.

Digital phenotyping can be dangerous, too. The word “phenotype” refers to what people can observe about a person: physical appearance, behavior, or other characteristics. Digital phenotyping means tracking how people use their devices to make predictions about them. For example, a company might guess that people who are depressed turn their phone brightness down more, so it tracks screen brightness to predict who is depressed. Predicting what kind of mental illness someone has could also lead to forced treatment.
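To make the concern concrete, here is a deliberately oversimplified sketch of how a digital phenotyping score might be computed. The signals, weights, and example person are all hypothetical and were not part of the panel discussion; the point is only that ordinary device habits can be turned into a mental health label without any context.

```python
from dataclasses import dataclass

# All signals and weights below are hypothetical, chosen only to illustrate
# how device habits can be converted into a "risk" score.

@dataclass
class DeviceSignals:
    avg_screen_brightness: float    # 0.0 (dimmest) to 1.0 (brightest)
    nightly_phone_use_hours: float  # hours of use between midnight and 6 a.m.
    messages_sent_per_day: int

def naive_depression_score(signals: DeviceSignals) -> float:
    """Hypothetical score: higher means the model assumes 'more likely depressed'."""
    score = 0.0
    score += (1.0 - signals.avg_screen_brightness) * 0.5                     # assumes dim screens signal low mood
    score += min(signals.nightly_phone_use_hours / 8.0, 1.0) * 0.3           # assumes late-night use signals insomnia
    score += (1.0 - min(signals.messages_sent_per_day / 50.0, 1.0)) * 0.2    # assumes few messages signal withdrawal
    return score

# Someone who simply prefers a dim screen and rarely texts gets a high score anyway.
night_reader = DeviceSignals(avg_screen_brightness=0.15,
                             nightly_phone_use_hours=2.0,
                             messages_sent_per_day=5)
print(round(naive_depression_score(night_reader), 2))  # 0.68 -- flagged without any real context
```

In this toy example, the model has no way to know why someone uses their phone the way they do, which is exactly the kind of context-free prediction the panelists warned about.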

Threat assessment tools are also dangerous. Employers, schools, or police might use threat assessment tools to predict who could become violent or hurt themselves. Usually, professionals talk about threat assessment tools as a way to stop mass violence, like mass shootings. Automated threat assessment technology can profile people with mental illnesses as dangerous. These assessments can put mentally ill people at risk for arrest and forced treatment. When models use demographic data to predict suicide, we need to consider what assumptions are tied to that data. Some models might flag someone in a low-income or predominantly Black or Brown neighborhood because people in those neighborhoods are assumed to be more likely to need human intervention. 

But the data can’t tell you whether that person is actually living a richer, safer life than the model assumes. Predictive models might also take data out of context. Some models match words in a chat message or post against a list of words that are supposed to signal a crisis, no matter how those words are used. And trying to add more context doesn’t always help: companies might collect even more data to get better at understanding context, which increases the risk of privacy violations. For example, they might store voice samples collected through voice-activated assistants to try to improve their products.
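As an illustration, here is a minimal sketch of the kind of context-blind keyword matching described above. The word list and messages are invented for this example; real systems are more elaborate, but the failure mode (matching words regardless of how they are used) is the same.

```python
# Hypothetical word list; real systems use larger lists and scoring, but the
# basic context-blindness shown here is the failure mode described above.
CRISIS_KEYWORDS = {"die", "kill", "hopeless", "end it"}

def flag_message(message: str) -> bool:
    """Flags a message if any listed word appears, with no sense of context."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

print(flag_message("This deadline is going to kill me lol"))     # True: a joke gets flagged
print(flag_message("I could die laughing at this meme"))         # True: clearly not a crisis
print(flag_message("I've been feeling really alone this week"))  # False: real distress is missed
```

The matcher sees words, not meaning, so jokes get flagged while genuinely distressed messages that avoid the listed words pass unnoticed.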

What should developers, policymakers, and researchers do to avoid dangerous or risky surveillance of disabled people?

Bring in the most impacted folks at the beginning. People with lived experience are often asked to give advice or guidance when policymakers or researchers already know what they want to do. They need to be involved as partners from the beginning of a project or process. As Vesper highlighted, data privacy is key to protecting people from data-based discrimination. Data privacy policymaking should include impacted people throughout the policymaking process.

Informed consent has to be meaningful, not just a one-time opt-in. Given how intrusive and sensitive companies’ handling of disabled people’s data can be, there should be ongoing consent check-ins with real choices about what happens to people’s data. People need to be able to understand their options and to change their minds. And people should be able to decide whether they want the app or company to delete their data.

Hold researchers accountable when they have conflicts of interest. Researchers who might profit from surveillance technology should disclose their interests. 

Advocate for changing the narratives in the media. Journalists who report on surveillance technologies should be more responsible in their reporting and should listen to the concerns of people who are directly impacted. Good reporting can drive necessary change.

Read more

About the Kiva Centers

Kiva Centers are peer-led and trauma-informed communities that offer training, technical assistance, and networking opportunities statewide across Massachusetts. Kiva Centers focus on developing, promoting, and delivering healing communities for people experiencing the impacts of trauma, mental health challenges, and substance use. The Kiva Centers’ mission is to support individuals who have lived experience with trauma, mental health diagnoses, substance use, and/or extreme states. This mission applies to the Kiva Centers, RLCs, and to any other workshops, trainings, classes, groups, or individual interactions that occur under the Kiva Centers’ umbrella. Learn more about the Kiva Centers.

About CDT

CDT is a 27-year-old 501(c)3 nonpartisan nonprofit organization that fights to put democracy and human rights at the center of the digital revolution. It works to promote democratic values by shaping technology policy and architecture, with a focus on equity and justice. CDT’s Elections and Democracy team fights election disinformation, supports technology that bolsters a fair and secure vote, and works to build a trusted and trustworthy democracy.  

###