Related Posts

De-Identification Should Be Relevant to a Privacy Law, But Not an Automatic Get-Out-of-Jail-Free Card

Stakeholders recommend exempting de-identified data, which includes anonymized, pseudonymized, and aggregated information, from the scope of privacy legislation. However, completely exempting these types of data is not just untenable; it is dangerous. De-identification sometimes fails to hide individual identities, and even when it succeeds, it does not always prevent harms to groups of people. So what is the policy solution? In this post, we make three key recommendations.

Read More

The American AI Initiative: A Good Start, But Still A Long Way to Go

This week, President Trump signed an executive order titled the “American AI Initiative.” While this order lays out some useful first steps toward a larger national policy and course of action for artificial intelligence, the administration will need to do more to achieve its goal of maintaining American leadership in AI technologies. Although the order’s broad “policies and principles” section includes calls to preserve civil liberties, privacy, and American values, it is not entirely clear what those values are or whether they might conflict with the other priorities listed in the order. In this post, we’ll talk about what the order does and does not do.

Read More

A “Smart Wall” That Fails to Protect Privacy and Civil Liberties Is Not Smart

Congress needs to be smart about this “smart wall.” CBP’s history of grossly mismanaging technology projects, and its liberal use of surveillance tools beyond the physical border, caution against a hands-off approach. Any funding Congress provides for invasive border surveillance technologies should be conditioned on efficacy requirements and limitations on use designed to preserve the human and civil rights of those subjected to them.

Read More

Double Dose of FTC Comments Discusses Remedial Measures and Algorithms in Advertising

The FTC sought comment on a wide range of issues, and for this initial go-around, CDT submitted comments on two key questions: (1) the FTC’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters; and (2) the implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics.

Read More

On Managing Risk in Machine Learning Projects

Written by CDT summer intern Galen Harrison. The white paper “Beyond Explainability,” published by the Future of Privacy Forum (FPF) and Immuta, attempts to sketch out how an organization can manage risk in a machine learning (ML) project. The FPF template seems appropriate for most, but not all, ML projects. Before adopting a process modeled on this template, practitioners should carefully consider the scope and setting of their ML operations and whether they share the template’s main concerns.

Read More

Tech Talk: Teaching Data Ethics and Defending Nonprofits Against Cyber Attacks

CDT’s Tech Talk is a podcast where we dish on tech and Internet policy, while also explaining what these policies mean to our daily lives. In this episode, we talk about Cloudflare’s Project Galileo and Google’s Project Shield, which both offer nonprofits and journalists free services to defend against cyber attacks. We also talk to a data scientist about her course on data ethics.

Read More

Tech Talk: Privacy Past and Present

CDT’s Tech Talk is a podcast where we dish on tech and Internet policy, while also explaining what these policies mean to our daily lives. In this episode, we look at the impact the EU’s General Data Protection Regulation will have on global privacy, and we hear from a historian about how the concept of privacy in the U.S. has evolved over time.

Read More