Apple’s plan to scan users’ private photos and messages branded “mass surveillance for the entire world”

Privacy experts fear that Apple’s new surveillance tech will inevitably be expanded and misused.

This week, Apple, the company that touted “what happens on your iPhone, stays on your iPhone” in one of its ad campaigns, revealed that it would be pushing new surveillance tech to its iPhones, iPads, and Macs, introducing on-device scanning of user photos and iMessages.

Apple has positioned this new tech as “expanded protections for children” but privacy advocates and rights groups are sounding the alarm over the potential expansion and misuse of this technology.

“No matter how well-intentioned, @Apple is rolling out mass surveillance to the entire world with this,” National Security Agency (NSA) whistleblower Edward Snowden tweeted. “Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow. They turned a trillion dollars of devices into iNarcs—without asking.”

There are currently 1.65 billion active Apple devices worldwide, and Apple will initially roll out this surveillance tech to US devices later this year before expanding to other regions over time.

There are two main components to Apple’s new surveillance tech – photo scanning and iMessage scanning.

For now, the photo scanning technology will be coming to iPhones and iPads only. It involves a database of known Child Sexual Abuse Material (CSAM) image hashes being transformed into “an unreadable set of hashes that is securely stored on users’ devices.” The hashes are provided by child safety organizations such as the National Center for Missing and Exploited Children (NCMEC), which works in collaboration with US law enforcement agencies.

When a user uploads a photo to Apple’s cloud photo backup and sharing service, iCloud Photos, an on-device matching process will be performed for that image against the known CSAM hashes. Once an undisclosed threshold of matches is exceeded, Apple can see the photos flagged by the technology, and conduct a manual review. If a match is confirmed by manual review, Apple disables the user’s iCloud account and sends a report to NCMEC.
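To make those mechanics concrete, here is a minimal sketch of threshold-based hash matching. It is illustration only: Apple’s actual pipeline uses its NeuralHash perceptual hash and a cryptographic private set intersection protocol, neither of which is reproduced here, and every name, hash, and threshold below is a placeholder.

```python
# A minimal sketch of threshold-based hash matching, for illustration only.
# All names, hashes, and threshold values are hypothetical placeholders.
import hashlib

# Stand-in for the "unreadable set of hashes" stored on the device.
KNOWN_CSAM_HASHES = {"placeholder-hash-1", "placeholder-hash-2"}

MATCH_THRESHOLD = 10  # placeholder; Apple has not disclosed the real value


def image_hash(image_bytes: bytes) -> str:
    """Placeholder hash. A real perceptual hash digests visual features so
    that near-duplicates of a known image still match; SHA-256 is used here
    only to keep the sketch runnable and has none of those properties."""
    return hashlib.sha256(image_bytes).hexdigest()


def on_icloud_upload(image_bytes: bytes, match_count: int) -> int:
    """On-device check run when a photo is uploaded to iCloud Photos;
    returns the account's updated match count."""
    if image_hash(image_bytes) in KNOWN_CSAM_HASHES:
        match_count += 1
    if match_count >= MATCH_THRESHOLD:
        # Only past the threshold can Apple inspect the flagged photos
        # and begin the manual review described above.
        print("account flagged for manual review")
    return match_count
```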

The iMessage scanning technology will be coming to iPhones, iPads, and Macs and will use on-device machine learning to scan all iMessage images sent or received by child accounts. If a photo is determined to be sexually explicit, the photo will be automatically blurred and the child will be warned. Depending on the age of the child, their parents may also be notified if their child views a sensitive photo or sends a sensitive photo to another contact after being warned.
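Sketched in the same hypothetical style, the decision flow for a child account might look like the following; the classifier, the age cutoff, and all function names are placeholders rather than Apple’s implementation.

```python
# A rough sketch of the iMessage flow described above; the classifier,
# the age cutoff, and every name here are hypothetical placeholders.

PARENT_NOTIFY_MAX_AGE = 12  # placeholder cutoff for parental notification


def is_sexually_explicit(image_bytes: bytes) -> bool:
    """Placeholder for Apple's on-device machine-learning classifier."""
    return False  # a real classifier would score the image content


def handle_child_account_image(image_bytes: bytes, child_age: int,
                               proceeds_after_warning: bool) -> None:
    """Applied to every iMessage image sent or received by a child account."""
    if not is_sexually_explicit(image_bytes):
        return  # image is displayed normally
    print("photo automatically blurred; child warned")
    if proceeds_after_warning and child_age <= PARENT_NOTIFY_MAX_AGE:
        print("parents notified")
```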

Privacy advocates’ and rights groups’ concerns about this surveillance tech fall into three broad areas – the potential for errors, the potentially endless expansion, and the lack of transparency.

Apple promises that its photo scanning technology provides “an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”
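Apple has not published the parameters behind that figure, but a back-of-the-envelope calculation shows how a match threshold can drive an account-level error rate that low on paper. Every number below is an assumption, not a disclosed value.

```python
# Back-of-the-envelope estimate of an account-level false-flag rate.
# Every number here is an assumption; Apple has disclosed neither its
# per-image false-match probability nor its match threshold.
from math import comb

p = 1e-6    # assumed chance that one innocent photo falsely matches
n = 20_000  # assumed photos uploaded by the account in a year
t = 10      # assumed match threshold

# Probability of at least t false matches among n photos (binomial tail);
# terms beyond t + 50 are vanishingly small and are ignored.
tail = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(t, t + 50))
print(f"chance of falsely flagging this account: {tail:.2e}")  # ~2.8e-24
```

Under assumptions like these the arithmetic does produce a vanishingly small number, but the result is only as trustworthy as the assumed per-image error rate.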

But Matthew Green, a cryptography professor at Johns Hopkins University, has warned that the perceptual hash functions that power this type of on-device scanning are “imprecise” “on purpose” and could result in harmless photos being flagged and reported.

“Depending on how they work, it might be possible for someone to make problematic images that ‘match’ entirely harmless images,” Green said. “Like political images shared by persecuted groups. These harmless images would be reported to the provider.”
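A toy example shows how easily such collisions can arise. “Average hashing,” sketched below, is a real but deliberately simple perceptual-hashing technique, not the one Apple uses: it reduces an image to a coarse bright/dark pattern, so visually different inputs can produce identical hashes.

```python
# Toy demonstration of a perceptual-hash collision. "Average hashing" is a
# real but very simple perceptual-hash technique; it is not Apple's hash,
# and the tiny flattened "images" below are contrived for illustration.

def average_hash(pixels: list[int]) -> int:
    """One bit per pixel: set when the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits


photo_a = [200, 10, 190, 20]  # one hypothetical grayscale image, flattened
photo_b = [255, 90, 140, 0]   # a different image, same coarse pattern

assert average_hash(photo_a) == average_hash(photo_b)  # collision
print("different images, identical hash:", average_hash(photo_a))
```

The imprecision is deliberate, since it is what lets a perceptual hash match slightly edited copies of a known image, but it is also the property Green warns could be exploited.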

The next major concern critics have raised is that, as with most types of surveillance tech, once introduced, there’s nothing stopping the scope of the surveillance from being expanded beyond its original purpose.

Steven Murdoch, Professor of Security Engineering and a Royal Society University Research Fellow in the Information Security Research Group at University College London’s Department of Computer Science, noted that in 2014, UK internet service providers (ISPs) were forced to expand the scope of the system they had created to block child abuse images so that it could also be used to block access to websites that advertised and sold counterfeit goods.

The court justified the decision by arguing that once the blocking system had been built, the cost of implementing additional blocking orders was “modest.”

And with governments around the world constantly pushing for access to private communications, it’s almost inevitable that similar arguments will be presented to Apple as justification for expanding the scope of its on-device scanning and flagging system.

This possibility becomes even more alarming when you factor in how Apple operates in China. In 2018, Apple handed over its iCloud operations in China to a state-owned company. And earlier this year, a report alleged that Apple had compromised iCloud security in China and provided encryption keys to the Chinese government. In addition to these iCloud reports, Apple has consistently censored for China.

“All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts,” digital civil liberties group the Electronic Frontier Foundation (EFF) wrote in response to Apple’s announcement of this surveillance tech. “That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change…The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.”

The kicker is that most other forms of Big Tech surveillance happen on files that users have already shared to the company’s servers; Apple is scanning private photos on-device. For now, this on-device surveillance will only be triggered when users upload photos to Apple’s servers via iCloud Photos but, as the EFF points out, these parameters can be changed by Apple at any time.

Furthermore, unlike most other Big Tech companies, Apple has sold its hardware on the promise of privacy. Not only did the company run its “what happens on your iPhone, stays on your iPhone” ad campaign, but CEO Tim Cook has also repeatedly asserted that “privacy is a fundamental human right.”

Finally, the lack of transparency compounds these concerns. Only Apple and its partners know the true error rate and what’s in the hashes that are saved to user devices. If the error rate changes or the hashes are expanded to scan a wider range of content categories, users will only find out when and if Apple decides to tell them.

We only need to look to the Global Internet Forum to Counter Terrorism (GIFCT), a Big Tech censorship alliance, to see how a lack of transparency around similar hashing technology has manifested. Researchers have no access to the hash database that dictates which content will be censored by GIFCT members. And despite the stated purpose of GIFCT being to block terrorist content, digital rights groups claim that it has expanded far beyond this scope and now blocks content opposing terrorism, satire, media reports, and other types of legal content.

While numerous privacy advocates and rights groups have raised these concerns, the initial response from Apple and NCMEC has been dismissive. In an internal memo distributed to the Apple teams that worked on this surveillance tech, Apple included a message from NCMEC’s executive director of strategic partnerships, Marita Rodriguez, which referred to pushback against the new surveillance measures as “the screeching voices of the minority.”

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.
