Date: October 28, 2021
Time: 12:00 PM to 1:00 PM ET
People who use messaging services need to be able to exercise agency in how they communicate. This includes being able to manage privacy trade-offs and to address unwanted or abusive content such as spam, mis- and disinformation, harassment, and sexually exploitative content.
In the current debates around child sexual abuse material (CSAM) in end-to-end encrypted (E2EE) environments, technical experts have proposed a variety of approaches to addressing abusive content, including user reporting, metadata analysis, and automated scanning of user-generated content.
Given that there are many different kinds of users with unique needs and perceived risks to their online communications, how can we enable meaningful user choice and control around E2EE communications to address unwanted or abusive content?