
What Could Go Wrong? Apple’s Misguided Plans to Gut End-to-End Encryption

Last week, Apple dropped a bombshell, announcing that it plans to introduce backdoors into its encrypted Messages and iCloud file storage services. CDT, along with many other human rights advocates and security experts, is urging Apple to halt these changes and stop the roll-out of surveillance and censorship infrastructure to iOS devices worldwide.

These changes are proposed as a way to address the distribution of child sexual abuse material (CSAM) and to interrupt “grooming” or other types of child exploitation that can happen via Messages. Although these are laudable goals, Apple is taking the wrong approach to meeting them. Apple is relying on several policy choices designed to limit the reach of the new features, but policy choices can give way to government mandates that expand and repurpose the new technical capabilities. Apple’s changes will create new risks for children, teens, and other vulnerable people who depend on end-to-end encryption to communicate safely online, and will threaten the privacy and security of all users of Apple’s products.

Here are six major risks that Apple is creating for its users:

  1. Governments will demand that Apple expand its image scanning tool to include other types of content. Apple intends to enable client-side scanning of images to detect potential CSAM before allowing users to upload those images to iCloud. To do this, it plans to use hashes (the digital fingerprints of files) of content that has been previously reported to the National Center for Missing and Exploited Children (NCMEC) and unidentified “other child safety organizations.” Apple’s update will mean that every photo a user wants to upload to iCloud will first be scanned and evaluated as a potential match for CSAM.

    While Apple says it has “steadfastly refused” “government mandated changes that degrade the privacy of users” before, with this update to iOS it is voluntarily introducing technical architecture that could enable significant additional censorship. Governments around the world are pressuring tech companies to take more proactive steps to combat everything from alleged terrorist propaganda to copyright infringement — and to use filtering and hash databases to do it. Hash-based identification of problematic content is a blunt tool: hashes can only indicate a likely match, and reveal nothing about the broader context of an image or the reasons a user possesses it. While there is no contextual justification for possession of CSAM, which is a crime in most countries, other types of images must be considered with more nuance and understanding of the legitimate creative, research, educational, archival, or journalistic purposes for which a person might store them. (A simplified illustration of how this kind of hash matching works appears after this list.)
  2. Governments will push Apple to expand the use of its hash database beyond scanning content in iCloud. In addition to the demands for substantive scope-creep that Apple will surely face, it will also confront pressure from governments to use client-side hash-matching in other apps and services. Though Apple is clear that it does not currently intend to use the hash database built into iOS to detect or block images sent through Messages, it will undoubtedly face pressure to make that tool more widely available.

    Once Apple introduces the capability of client-side scanning for CSAM, it will open the door to demands for such scanning to occur on all images stored on the phone, or sent through any messaging app, or shared through any type of file upload to a user-generated content site. It will also face pressure to go beyond reporting CSAM images to NCMEC (as it is required by US law to do) and to share information it obtains with law enforcement. (The EU’s draft Digital Services Act, for example, includes a provision that would require service providers to notify law enforcement of information that appears to relate to “serious criminal offenses”.) By framing its client-side scanning as a “privacy protective” way to inspect images, Apple is obscuring the real privacy implications of its decision and inviting government demands for widespread use of this technology.
  3. Apple’s “sexually explicit images” classifier will make mistakes. As part of its changes to Messages, Apple plans to install a machine learning classifier on users’ devices that can scan all images sent to and from a “child” account in Apple’s “Family Sharing” service. The “parent” account can enable this feature, which will detect “sexually explicit” images, send a warning to the “child” account about the image, and, for children 12 and under, notify the “parent” account when the child sends or views the image.

    Unlike hash databases, which seek to identify near-exact matches of content that the provider already knows to look for, a classifier is designed to identify never-before-seen examples of whatever it has been trained to detect. This makes these classifiers potentially powerful, but also prone to error: when Tumblr rolled out its classifier designed to detect “adult content”, it mistakenly flagged art, advocacy, memes, photos of people’s dogs, and posts about design patents. Facebook also uses classifiers to enforce its ban on Adult Nudity and Sexual Activity, mistakenly blocking news, journalism, and health information. (A simplified illustration of how this kind of threshold-based classification works, and fails, appears after this list.)

    Apple’s classifier will also inevitably make errors, meaning that parents will be told that someone is attempting to send their young child “sexually explicit photos” when in reality, it’s just grandpa trying to share a photo of the beach. This kind of mistaken identification could cause significant strife within a family. Teenagers who receive mistaken warnings about the images they’re sending or receiving are also at risk, whether it’s the shame a girl might feel when her innocuous selfie is flagged as “sensitive” and she’s told it might hurt the person who receives it, or the trouble she might get into when the friend she sends it to receives the same warning and shows the false accusation to a parent or teacher. These mistaken warnings could also discourage teens from sharing or reading useful information about art, biology, and sexual and reproductive health.
  4. Governments will demand that Apple use the Messages backdoor to block political messages. Once Apple launches client-side scanning in Messages, that tool can be repurposed. Although this feature is currently opt-in and only available on family accounts, Apple will push out the software that runs the tool to all Apple devices, where it will be ready to be activated. And once the surveillance tool is on the device, it will be a small step to add new categories of content. China could demand that Apple scan for memes involving Winnie the Pooh or other imagery deemed insulting to Xi Jinping. Or governments could demand that Apple scan Messages for “extremist” content or memes that demonstrate a disfavored political perspective.

    In addition to expanding the types of content for which Apple scans, governments could further demand that Apple not only flag or provide notice to users, but outright block these messages. Once Apple has built the detection tool, it will be much harder for Apple to argue that it lacks the technical ability to comply with such government demands. Even if Apple “only” provides a warning to users that they are about to send or receive inappropriate content, this would still send the strong message that their communications are being watched, thereby chilling speech, discouraging dissent, and inhibiting engagement between activists for fear of government detection.
  5. Abusive parents will use Apple’s Messages tool to surveil and punish their children. Apple’s changes to Messages will also enable one user to surveil another user’s account, which will have disproportionate impacts on vulnerable groups like LGBTQ+ children or kids with mental health challenges. There’s no guarantee that the “parent” in a set of linked Apple accounts is actually the parent vested with decision-making authority over a child’s online activity — or is an adult with the child’s best interests at heart.

    A child may live with one parent (or another family member) while another parent continues to pay their phone bill. Imagine, as an example, a 12-year-old who lives full time with a supportive parent, and comes out as trans. Her supportive parent may want to send health and wellness information to the child, some of which could get flagged by Apple’s “sexually explicit content” classifier. The child may click through the notification’s warning screen, which mentions only that her “parents” will get a notification, not realizing that her phone account is actually managed (and now surveilled) by her non-custodial parent.

    Even in the best-case scenario, where the other parent is equally supportive, this child has just had deeply personal information about herself unintentionally revealed to someone else. In much worse scenarios, where the other parent is transphobic or otherwise abusive, this inadvertent disclosure of information could be weaponized in future custody disputes, or directly endanger the child’s safety.
  6. There’s no guarantee that only “parents” and “children” will use Apple’s Messages surveillance feature. Apple has no way of ensuring that only actual parents of actual children under the age of 13 will use this new tool. Apple is connecting this new feature to its “Family Sharing” service, which enables up to six accounts to share media and subscriptions purchased through Apple services, as well as to contribute photos to a shared family album. Those features may not currently inspire many people to abuse the “Family Sharing” setting, but if Apple introduces its spyware feature, that will change.

    By adding the ability to surveil Messages, Apple will convert “Family Sharing” into a tempting form of “stalkerware,” or technology that abusers can use to conduct surveillance of their victims. Controlling parents could manipulate the age field of their child’s account in order to maintain surveillance capabilities past the 13-year age cutoff. Abusive domestic partners often seek to control their victims’ access to mobile phones and the Internet, and will have a new tool from Apple that allows them to monitor whether their partner may be attempting to seek help by sending images documenting their abuse.

    Apple says that this Messages backdoor doesn’t break end-to-end encryption because “Apple never gains access to communications as a result of this feature”, but that completely misses the point—and utterly fails to appreciate how appealing their account surveillance feature will be to a host of bad actors.
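
To make the hash-matching concerns in risk 1 more concrete, here is a minimal, hypothetical sketch of client-side fingerprint matching. It is not Apple’s NeuralHash or its private set intersection protocol; the toy hash function, blocklist, and threshold below are invented for illustration. It shows only the general pattern: reduce an image to a short fingerprint on the device and compare it against a database of previously reported fingerprints.

```python
# Hypothetical sketch of client-side perceptual-hash matching.
# Not Apple's system: the hash, blocklist, and threshold are invented.
from typing import List

def average_hash(pixels: List[List[int]]) -> int:
    """Toy 'average hash': one bit per pixel, set when the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical on-device blocklist of previously reported fingerprints.
BLOCKLIST = {0b1100110000110000}

def flag_before_upload(pixels: List[List[int]], threshold: int = 3) -> bool:
    """Return True when the image is a *likely* match to a known fingerprint.
    The threshold trades false negatives against false positives, and a
    match reveals nothing about why the user has the image."""
    fingerprint = average_hash(pixels)
    return any(hamming_distance(fingerprint, known) <= threshold
               for known in BLOCKLIST)

# Example: a 4x4 grayscale image (values 0-255) checked before "upload".
image = [
    [200, 190,  30,  20],
    [210, 180,  40,  10],
    [ 25,  35, 220, 230],
    [ 15,  45, 240, 250],
]
print(flag_before_upload(image))  # True: within 3 bits of a listed fingerprint
```

Nothing in this pipeline depends on the flagged content being CSAM: swapping in a different set of fingerprints repurposes the same code for whatever category of image a government demands be found, which is exactly the scope-creep described in risks 1 and 2.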
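
The classifier concerns in risk 3 can be illustrated the same way. The scores, file names, and threshold below are hypothetical; Apple has not published how its Messages classifier works. The point is only that a classifier reduces a never-before-seen image to a confidence score and a cutoff, and every cutoff produces both false positives and false negatives.

```python
# Hypothetical sketch of threshold-based image classification.
# The scores and threshold are invented and do not describe Apple's model.

def classify(score: float, threshold: float = 0.8) -> str:
    """Turn a model's confidence score into a yes/no decision."""
    return "sexually explicit" if score >= threshold else "ok"

# Scores a trained model might plausibly assign (all values hypothetical):
examples = {
    "beach_photo_from_grandpa.jpg": 0.83,  # skin and swimwear: false positive
    "health_class_diagram.png":     0.81,  # educational content: false positive
    "family_dog.jpg":               0.12,  # correctly ignored
}

for name, score in examples.items():
    print(f"{name}: {classify(score)}")
```

Lowering the threshold catches more genuinely explicit images but flags more beach photos and health diagrams; raising it does the reverse. No setting eliminates the kinds of mistaken warnings described above.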

By making the announced changes, Apple, which had been a leader in providing secure messaging and cloud storage services, is putting users of those services at risk. 

Apple is also opening itself, and other tech companies, up to increased pressure from governments to further assist with censorship of various types of information. CDT is calling on Apple to refrain from making these proposed changes to Apple devices and return to offering its users truly end-to-end encrypted messaging and secure file storage.