Content Moderation

CDT has worked on content moderation and related issues, including intermediary liability and the role of artificial intelligence and automated decision-making systems, since its inception. In recent years, as interest in these topics has grown, CDT has continued to lead both domestic and global conversations about the intersection of content moderation with transparency, accountability, and human rights. We collaborate with a range of partners, including academics and advocates, through efforts such as the Santa Clara Principles, and we engage directly with tech companies and policymakers. Through these formal and informal engagements, we provide input, shape policy, and push for greater transparency around platform practices that affect user rights. We are also conducting research on the mechanics of content moderation at different platforms and developing educational opportunities that bring the realities and challenges of content moderation to different audiences.

Recent Content

EU Tech Policy Brief: December 2024

Trusted Flaggers in the DSA: Challenges and Opportunities

Beyond English-Centric AI: Lessons on Community Participation from Non-English NLP Groups

CDT Joins Amicus Brief Urging Rehearing in Anderson v. TikTok

Insights from a Child Safety Online Symposium: Bridging Research and Policy

Moderating Maghrebi Arabic Content on Social Media
