
Apple’s new solution to combat child abuse imagery could radically shift encryption debate

Privacy experts say the scanning system raises a number of concerns for ordinary users.

(Attendees at Apple's Worldwide Developers Conference in San Francisco. Photo by Justin Sullivan/Getty Images)

Apple announced Thursday it will introduce a feature to detect child sexual abuse images being uploaded to iCloud Photos from iPhone devices in the United States. The company has framed the feature as a privacy-preserving way to combat the spread of sexually explicit images of children shared online.

It’s a radical shift in approach to device privacy by Apple, which has often found itself at the forefront of the clash between tech companies and law enforcement over encrypted technologies. Security researchers and privacy experts say that the company’s decision could create a slippery slope toward government abuse and has radically shifted the debate over encrypted technologies.

“They’ve really changed the rules around what the debate around encryption is,” said Christopher Parsons, a senior research associate for Citizen Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto.

Most major cloud services, including Dropbox, Google and Microsoft, already scan for child sexual exploitation materials. What makes Apple’s new system distinct is that the search begins on the user’s phone, not on a cloud server.


Apple compares photos on a device to known child exploitation material using a hashing technology called NeuralHash, which assigns each photo a unique number specific to that image. It then compares those numbers to hashes the company created from a database of known child sexual abuse material (CSAM) provided by the National Center for Missing & Exploited Children and other child-safety organizations.

If two photos appear nearly identical, they will produce the same hash number. A matching image is then flagged when it’s uploaded to iCloud Photos. Once an account accrues a certain number of matches, it is escalated to Apple for review. The company will then share the flagged images with NCMEC for review before they are passed to law enforcement.
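To illustrate the general shape of such a system (not Apple’s actual implementation, whose NeuralHash model, database format and match threshold are not public), here is a minimal sketch in Python of on-device hash matching against a blocklist with an escalation threshold. The `neural_hash` stand-in, the `THRESHOLD` value and the blocklist format are all hypothetical.

```python
# Minimal sketch of threshold-based hash matching. NOT Apple's implementation:
# neural_hash, THRESHOLD and the blocklist format are hypothetical stand-ins.
import hashlib
from typing import Iterable, Set

THRESHOLD = 30  # hypothetical number of matches before an account is escalated


def neural_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash. A real perceptual hash (like NeuralHash)
    maps visually similar images to the same value; a cryptographic hash does not."""
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(photos: Iterable[bytes], known_csam_hashes: Set[str]) -> int:
    """Hash each photo on-device and count how many match the known-hash set."""
    return sum(1 for photo in photos if neural_hash(photo) in known_csam_hashes)


def should_escalate(photos: Iterable[bytes], known_csam_hashes: Set[str]) -> bool:
    """Escalate the account for human review only once the match count
    crosses the threshold, mirroring the escalation step described above."""
    return count_matches(photos, known_csam_hashes) >= THRESHOLD
```

The threshold step reflects the design described above: no single match triggers review; an account is only surfaced to human reviewers after a certain number of matches accumulate.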

Apple disputed in a call with reporters that the comparison system amounts to scanning, as some experts have suggested. The company said that unless a photo is a match and is uploaded to the cloud, the hash assigned to the photo will never be interpreted into an image by Apple.

The technology is designed to prevent Apple from learning about every single photo on your phone — something that it already has the power to do if it wanted.

“I think Apple’s system very effectively addresses the technical privacy properties you would want,” said Jonathan Mayer, an assistant professor of computer science and public affairs at Princeton University.


Mayer co-wrote a forthcoming paper on the technical feasibility of preserving privacy while using hash matching for end-to-end encryption services.

“What Apple’s system doesn’t address is many societal problems associated with implementing the system,” he added.

These include weighing the privacy of ordinary users against combating child sexual abuse material, as well as the potential for the technology to be abused for government surveillance.

Apple has long been the voice of privacy in an ongoing debate between Silicon Valley and the government. Apple famously refused a series of government requests between 2015 and 2016 to unlock the phone of a San Bernardino shooter who killed 14 people. The company said there was no way to bypass the phone’s encryption without developing software that could be used to violate other customers’ privacy.

By expressing willingness to begin search processes on user devices, the company has signaled that it’s willing to retreat from those principles, Parsons said.


“Client-side scanning isn’t just a privacy or data security issue, either; it’s a freedom of expression issue,” Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, wrote in an email. “Once the door is opened to client-side scanning, there’s no way it stops with CSAM.”

Both Parsons and Pfefferkorn agree that the move will only redouble efforts from governments to get the company to comply with surveillance requests.

“Apple’s move won’t deflect government pressure; it will only attract more,” Pfefferkorn wrote. “And not just from authoritarian states like China, a huge market where Apple has compromised its stated values in the past.”

Parsons also raised the possibility that governments could use spyware to place abuse imagery on the phones of targets to attract law enforcement attention.

In a call with reporters, Apple disputed that the new system will be seen by authoritarian regimes as a potential new form of surveillance.


Apple’s change comes in response to a building wave of pressure from governments and child advocacy groups for tech platforms to better address child sexual abuse material. Apple has, in the past, lagged behind peers in detecting such material. In 2020, Facebook reported 20,307,216 pieces of child sexual abuse material and Google reported 546,704 across all of their platforms, according to NCMEC. Apple reported just 265.

The dangers of child exploitation material have long been used by law enforcement as an argument against widespread encryption. When Facebook announced its plans to introduce encryption across its messaging platforms in 2019, it was besieged with criticism from the Justice Department. The DOJ and law enforcement in the United Kingdom and Australia urged the company to halt its plans.

Last year, Sens. Richard Blumenthal, D-Conn., and Lindsey Graham, R-S.C., introduced legislation that threatened to take away platforms’ liability protections if users shared CSAM — unless they followed guidelines set by a government task force. Privacy advocates and members of the tech industry alleged the legislation was a clear attack on encrypted technologies.

“Apple’s plans to combat child sexual exploitation are a welcome, innovative, and bold step,” Blumenthal tweeted Thursday. “This shows that we can both protect children & our fundamental privacy rights.” A number of child safety groups, including NCMEC, Thorn and the Family Online Safety Institute, as well as former Attorney General Eric Holder, have come out in support of the technology.

Technical questions about the new technology remain. Experts say that Apple has done very little to show how it will safeguard its hash set from outside interference. It also hasn’t provided any technical insights into the effectiveness of its image-matching technology, which will be just as important as the cryptography component, Mayer said.


In its technical paper, Apple says there is an “extremely low” probability, one in one trillion, of incorrectly flagging an account. The NeuralHash algorithm has not been made available to independent researchers.

Apple is also introducing an on-device feature that will detect explicit imagery on the phones of minors in order to inform their parents that they may be sending or receiving such material. Siri and iPhone search will also flag when a user is searching for CSAM content.

Parsons warns that there’s no reason those same technologies couldn’t be expanded to identify other content.

“It begins with child sexual assault material but we are one bad moment away [from] these sorts of tools being repurposed,” said Parsons. “The worry isn’t just that this is being monitored for child sexual abuse material now … But beyond that, how might this be used in the future, what other kinds of content might be monitored.”

Updated 8/6/2021: This story was updated to include comments from Apple.

Written by Tonya Riley

Tonya Riley covers privacy, surveillance and cryptocurrency for CyberScoop News. She previously wrote the Cybersecurity 202 newsletter for The Washington Post and before that worked as a fellow at Mother Jones magazine. Her work has appeared in Wired, CNBC, Esquire and other outlets. She received a BA in history from Brown University. You can reach Tonya with sensitive tips on Signal at 202-643-0931. PR pitches to Signal will be ignored and should be sent via email.