
Healgorithms: Understanding the Potential for Bias in mHealth Apps

Technology is putting health data directly in the hands of individuals, allowing them to gain new insight into their bodies and minds. The mobile health (“mHealth”) industry is at the forefront of this new health frontier, offering a wealth of tools for improving personal health and wellness. These tools include fitness and nutrition trackers, chronic disease management platforms, and cognitive behavioral therapy (CBT) apps for goals such as smoking cessation and weight loss. The health data generated by consumer-facing apps, devices, and platforms, however, raises important questions about personal privacy and inclusion, particularly when this data is the basis for statistical analysis and prediction that may directly affect an individual’s health, and feeds into public health research that informs broader medical practice.

Trends in data-driven personalization have placed an emphasis on applying machine learning techniques to available data to create tailored and adaptable health interventions. Some mHealth apps use automated interventions to mediate access to relevant health information, advocate for a particular course of treatment, or identify patterns indicating the regression or progression of disease.

mHealth apps have the potential to improve access to quality care and information, and there is some evidence that they can help support positive behavioral changes through self-management strategies, such as tracking symptoms, receiving timely medication reminders, or adhering to treatment recommendations. However, they also carry risks of producing interventions based on biased data, historical discrimination, or incorrect assumptions. These risks are disproportionately borne by historically marginalized groups, including people of color, women, and the LGBTQ community, as well as those who are in poverty, unhoused, or disabled. There is some evidence that longstanding systemic health disparities might be ameliorated through accessible health technologies like mHealth apps, which offer mHealth companies the opportunity to improve community health and establish a market advantage. Perhaps adding to the appeal of entering this market, for most mHealth apps in the U.S. there are few, if any, regulations governing the fairness or inclusivity of automated interventions, lowering the compliance costs that can otherwise create a high barrier to entry for companies. Responsible and ethical app development practices are currently one of the few backstops available to prevent automated discrimination that might discourage user engagement from different communities.
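To make this failure mode concrete, the minimal sketch below illustrates one common way bias enters an automated intervention: a model trained on data dominated by one group can look accurate in aggregate while serving an underrepresented group poorly. This is a hypothetical illustration only; the groups, data, and model are invented for this example and assume NumPy and scikit-learn are available. It is not drawn from any real mHealth product or from the analysis in this report.

```python
# Hypothetical illustration: a classifier trained on skewed data can report
# strong overall accuracy while failing an underrepresented subgroup.
# All groups, data, and numbers here are simulated for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulated training data: group A is heavily overrepresented (900 vs. 100),
# and the outcome depends on a *different* feature in each group.
n_a, n_b = 900, 100
X_a = rng.normal(size=(n_a, 2))
y_a = (X_a[:, 0] + 0.1 * rng.normal(size=n_a) > 0).astype(int)  # driven by feature 0
X_b = rng.normal(size=(n_b, 2))
y_b = (X_b[:, 1] + 0.1 * rng.normal(size=n_b) > 0).astype(int)  # driven by feature 1

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

# The model mostly learns group A's pattern because group A dominates the data.
model = LogisticRegression().fit(X, y)

# Aggregate accuracy looks acceptable; a simple per-group audit reveals the gap.
print("overall accuracy:", accuracy_score(y, model.predict(X)))
print("group A accuracy:", accuracy_score(y_a, model.predict(X_a)))
print("group B accuracy:", accuracy_score(y_b, model.predict(X_b)))
```

Disaggregating performance by subgroup, as in the last three lines, is one inexpensive check developers can run before deploying an automated intervention; an aggregate metric alone can mask exactly the disparities described above.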

This report explores the potential for harmful bias in mHealth interventions and considers the impact of such bias on individuals, companies, and public health, ultimately providing recommendations for app developers to ensure that the tools they build are inclusive and nondiscriminatory. This report seeks to advance the conversation about — and implementation of — equity and inclusivity in automated decisions in the health sector in ways that benefit both the public and the companies using data to make decisions by: (a) providing a landscape of the mHealth ecosystem; (b) synthesizing research and investigations to draw out key issues and concerns related to bias in automated decision-making in the commercial health context; and (c) making recommendations that advance identification and mitigation of bias and discrimination in processes that produce commercial health app content.

Part II of this report provides an overview of the mHealth marketplace, covering the types of mHealth apps available, how data flows in and out of these apps, who uses them, and how they are regulated. Part III discusses the efficacy of mHealth and suggests that reducing bias is vital to delivering effective health interventions with these tools. Part IV examines how and when bias can be introduced into mHealth interventions. Part V provides a recommended roadmap of inquiry for developers and others involved in mHealth to identify and mitigate bias. Part VI reviews areas for future research, and Part VII offers a brief conclusion.