

Report – Fostering Responsible Tech Use: Balancing the Benefits and Risks Among Public Child Welfare Agencies

CDT report, entitled "Fostering Responsible Tech Use: Balancing the Benefits and Risks Among Public Child Welfare Agencies." Illustration of an adult hand and a child's hand, palms up, with a digital image of a home floating above them.
CDT report, entitled “Fostering Responsible Tech Use: Balancing the Benefits and Risks Among Public Child Welfare Agencies.” Illustration of an adult hand and a child’s hand, palms up, with a digital image of a home floating above them.

Across the country, child welfare agencies work with over 390,000 youth in foster care each year by temporarily placing them in foster homes, facilitating adoption if parental rights are terminated, and managing their cases. These agencies are tasked with the high-stakes responsibility of ensuring the safety and wellbeing of youth in their care, but they face many challenges, such as a lack of coordination across agencies that work with foster youth, insufficient or biased data about a child’s environment, and heavy administrative burdens that contribute to high rates of social worker turnover.

To address these issues, child welfare agencies are using, or considering, data and technology systems, including artificial intelligence (AI) tools. However, despite the promise that data and technology hold, these systems risk entrenching racial and socioeconomic disparities, stigmatizing foster youth based on social and academic achievement, and compromising the privacy and security of personal data.

This report highlights the ways that data and technology can mitigate some of the problems that child welfare agencies face, while also recognizing their inherent risks. The Center for Democracy & Technology (CDT) offers recommendations to maximize the benefits and mitigate the harms, including:

  • Identifying the problems that data or technology may solve, and the potential harms they could introduce;
  • Engaging affected stakeholders, from caseworkers to foster youth;
  • Establishing and/or enhancing inter- and intra-agency data and technology governance to guide decision making;
  • Implementing and managing AI tools safely and responsibly; and
  • Being diligent in vetting third-party vendors.

Though these recommendations pertain to all uses of data and technology, they are especially important as more and more public agencies look to take advantage of AI-powered tools.

Read the full report here.