CDT Welcomes Encryption-Protecting Updates to Apple’s Child Safety Features

(WASHINGTON) — The Center for Democracy & Technology (CDT) applauds Apple for reversing its plans to build surveillance capabilities into its messaging app in iPhones, iPads, and other products, reaffirming the company’s commitment to protecting its users with end-to-end encryption. These changes come after CDT and more than 90 U.S. and international organizations dedicated to civil, digital, and human rights urged Apple to protect children’s safety on its services without risking the privacy, security, and agency of all of its users.

“We pressed Apple to adopt an approach to child safety that preserves the encryption that protects all users, and fortunately, the company listened,” said CDT President and CEO Alexandra Reeve Givens. “These changes put the focus on improving the user experience and keeping children safe, instead of punishing or surveilling them.”

The Messages app will now provide warnings and information only to children when they receive or send images containing nudity, in contrast to the child safety features Apple previously announced, which also included a notification to parents. Incoming images will be blurred as a result of on-device analysis, and children will be given the option not to view them. When sending or receiving such images, children will be presented with the option to notify someone they trust and to request help. The set of features, first available for developers to beta test, requires parents with a Family Sharing plan to opt in.

Because Apple will never have access to the images, and the company is no longer building a parental notification feature, the new approach does not create the risk — present in the previous approach — that governments could compel Apple to extend notifications to other accounts or to detect images deemed objectionable for reasons other than being sexually explicit.

CDT has previously advocated for further research on systems like the one Apple is testing, in which on-device, user-controlled machine-learning classifiers are trained to detect unwanted content on behalf of the user. Because no information about the message is disclosed to a third party, this approach can help preserve the guarantees of end-to-end encryption.