

More Tools, More Control: Lessons from Young Users on Handling Unwanted Messages Online

CDT Research report, entitled “More Tools, More Control: Lessons from Young Users on Handling Unwanted Messages Online.” Illustration of a chat bubble, secured in a room with active controls (a door, menu options, notifications and action options, and more) for the (young) user sending and receiving messages online.

Executive Summary

According to reports, about 95% of U.S. teens have access to smartphones and use the internet daily, which for most includes social media and messaging platforms (Vogels et al., 2022; Nesi et al., 2023). This ubiquity has raised concern among parents, educators, and legislators about its contribution to a teen mental health crisis (Lima, 2023b). Research paints a more complex picture of the impacts of social media and messaging, finding both benefits and risks for young people (Weinstein, 2018; Valkenburg, 2022). As a result, it is difficult to determine the most effective protective measures for online services and legislators to take, even as the pressure to act grows.

Recent work has highlighted the importance of considering both platforms and users’ activity on them to better understand potential impacts, as well as to create safer online environments for young people (Griffioen et al., 2020; Valkenburg, 2022). In this study, we aim to understand young people’s experiences with direct messaging, both in dedicated messaging apps (e.g., WhatsApp, iMessage) and in private messaging on social media apps (e.g., Instagram DMs, Snapchat DMs). Messaging is one of the most popular activities for young people online, but it can also be a channel for concerning interactions, such as harassment (Lwin et al., 2012; Copp et al., 2021) and sexual solicitation (Jones et al., 2012; Wolak et al., 2018).

We conducted a month-long, mixed-methods study working directly with 32 U.S. participants who use direct messaging regularly: teenagers (ages 14-17, n=18) and young adults (ages 18-21, n=14). The study included an online survey, a 3-week diary study, and a semi-structured interview. We aimed to understand young people’s experiences when interacting through digital messages and to identify key opportunities to increase their sense of safety, agency, and control.

Summary of Findings

  • Some American teenagers and young adults, especially young African American men and boys, regularly receive unwanted content through direct messaging.
  • Participants are aware of bad actors online and proactively try to assess risks and strategize about how to handle unwanted content received in private messages.
  • Participants mostly defined “unwanted, unpleasant, or concerning” messages as unsolicited messages from strangers, frequently containing sexual content. Such messages were not equally distributed: some participants received as many as 7 unwanted interactions over the course of 3 weeks, while 6 participants received a single message and 7 received none at all.
  • The unequal distribution of unwanted messages might be partially explained by some participants’ strategies for minimizing their exposure, including setting their accounts to private, interacting primarily with close circles, and keeping their online participation to a minimum. Unfortunately, the latter may also contribute to a “chilling effect,” leading young people to self-censor their activity and expression online.
  • Participants had strategies for dealing with unwanted messages after receiving them, including assessing and escalating them as needed. When they decided to escalate a message (rather than block or ignore it), they primarily reported it and occasionally shared it with friends and family.
  • Participants identified several tools they would like platforms to offer to better assess and address unwanted interactions, surfacing opportunities for platforms to better support users’ needs.

These findings lead to several trauma-informed baseline guidelines for service providers, alongside areas for additional consideration and research that could improve the safety features available to users on messaging platforms.

Baseline guidelines for service providers: 

  • Platforms should provide users with basic tools for responding to unwanted messages, such as the ability to delete, block, and report them.
  • Platform accounts should default to private settings with limited discoverability, with the option to switch to a public profile if desired.
  • Platforms should introduce more friction or “speed bumps” in interactions with unknown profiles or potential strangers.
  • Platforms should include more “just-in-time” notices that inform and educate users about potential risks on messaging platforms and tools for mitigating those risks.
  • Platforms should be more transparent about reported cases by allowing users to track the outcome of a message or a person they have previously reported.

Areas for additional consideration:

  • Platforms could introduce more user-side filtering by allowing users to define their own filtering and blocking criteria for private messages. 
  • Platforms might support interoperable blocklists that can be applied across different platforms, or allow third-party applications to apply a single blocklist.
  • More effort should be invested in educating young users and adults (e.g., parents and educators) about the platforms they choose for communicating with others, and the possible benefits of selecting safer messaging spaces (e.g., phone number-based applications or end-to-end encrypted (E2EE) messaging platforms).

In line with the larger body of literature investigating social media use (e.g., Whiting & Williams, 2013; Scott et al., 2017; Anderson & Jiang, 2018; Valkenburg et al., 2022; Vogels, 2022; Scott et al., 2023), our recommendations focus on equipping young people with tools and knowledge that can help them deal with a range of messaging risks while preserving their access to the many positive aspects and interactions that direct messaging platforms offer. Rather than attempting to fully control and shape users’ exposure to messages, which is both impossible and not necessarily desirable, we encourage platforms to hand some of that control over to users. In line with trauma-informed approaches, providing all users, not just young people, with tools and knowledge preserves their sense of control and agency while helping them help themselves.

Read the full report here.

Read the summary brief here.