What Apple’s New Image Monitoring Technology Means for Your Data Privacy

August 14, 2021

Apple has announced that its iOS 15 update will actively scan text messages and images on iPhones for sexually explicit materials involving minors in an effort to combat the spread of child pornography.

The spread of child exploitation materials on the Internet is a growing problem, and victims face years of internet cleanup to repair their online reputations. For victims of child pornography, including high school-aged victims of “revenge porn,” this technology is a significant development in preventing the spread of exploitative content. But Apple’s application of this technology raises major privacy concerns and highlights the increasing power that big technology companies have over our lives.

What Will Apple’s Monitoring Technology Do?

Apple’s new software update will include technology called NeuralHash, which will actively scan texts and photos for sexually explicit content. Apple has stated that this monitoring is automated and will occur via “on-device machine learning,” meaning that the software will be stored and run on each individual Apple user’s iPhone.

While social platforms and technology companies already monitor user content, this monitoring power has been largely limited to the scanning of content published on public platforms and company servers. Apple’s new policy would expand monitoring from company servers to the personal device of every iPhone user. Apple has announced that its new technology will work in two ways:

  • Monitoring Messages of Minors for Sexually Explicit Photos: Messages will use “on-device machine learning” to analyze images sent or received by designated minors. Sexually explicit photos will be blurred and accompanied by a warning message, which will include “helpful resources” recommended by Apple and a notice that, if the child views or sends the image, the child’s parents will be alerted.

  • Monitoring Images for Child Sexual Abuse Material (CSAM): Sexually explicit images will be scanned before they are uploaded to iCloud Photos. If an image has a hash value that matches the hash value of reported CSAM content on file at the National Center for Missing and Exploited Children (NCMEC), Apple will assign that image a cryptographic safety voucher. Apple will then report the match to NCMEC and disable the user’s account. Apple’s original announcement was not clear on whether the on-device scanning would cover all photos stored on the device or only images about to be uploaded to iCloud. This confusion sparked outrage among Apple users and employees over data privacy concerns, prompting Apple to clarify in a subsequent announcement that images will only be scanned for CSAM when they are uploaded to iCloud.
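Apple has not published its matching code, but the basic hash-lookup idea can be sketched roughly as follows. This is a simplified illustration under stated assumptions: the hash value and function names are hypothetical, and SHA-256 stands in for Apple’s NeuralHash, which is a perceptual hash designed so that visually similar images produce matching fingerprints.

```python
import hashlib

# Hypothetical stand-in for NCMEC's list of known CSAM hash values.
KNOWN_CSAM_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def fingerprint(image_bytes: bytes) -> str:
    """Compute a hash fingerprint of an image's raw bytes (SHA-256 here,
    purely to keep the sketch self-contained)."""
    return hashlib.sha256(image_bytes).hexdigest()


def should_flag_before_upload(image_bytes: bytes) -> bool:
    """Return True if the image matches a known hash value, meaning the
    upload should be flagged for review and a possible report to NCMEC."""
    return fingerprint(image_bytes) in KNOWN_CSAM_HASHES
```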

The Issues with the Increased Monitoring of Private User Data

No one supports the distribution of child pornography. CSAM content is disturbing and preventing its dissemination on the internet is laudable. But Apple’s decision to monitor the billions of devices that will run iOS 15 is unprecedented, will unintentionally hinder removal efforts by CSAM victims, and has serious implications for iPhone users.

Apple’s Technology Will Unintentionally Hinder CSAM Victims from Securing Its Removal

At first glance, Apple’s attempt at preventing the spread of child pornography seems admirable. After all, CSAM is sexual abuse content that has perpetually damaging effects on victims. While documenting and reporting this content and banning Apple users who possess it will probably slow the spread of CSAM, it will not prevent it. The unfortunate reality is that the people who spread CSAM on the internet can likely sidestep Apple’s process simply by altering an image so that its hash value no longer matches the values flagged by NCMEC. This is something that leaked pornography websites already do to prevent revenge porn victims from monitoring and removing explicit content depicting them.
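To see why hash matching is easy to sidestep, consider the toy example below, in which the image bytes are hypothetical and an exact SHA-256 hash stands in for the real fingerprint: even a trivial, visually meaningless alteration to a file produces a completely different hash, so an exact-match lookup would miss the altered copy. A perceptual hash like NeuralHash tolerates some edits, but sufficiently aggressive alterations can still change the fingerprint.

```python
import hashlib

# Hypothetical image bytes; in practice these would be read from a file.
original = b"\xff\xd8\xff\xe0" + b"example image data" * 100
altered = original + b"\x00"  # append a single, invisible byte

# The two digests share nothing in common, so a lookup against a
# database of known hash values would not flag the altered copy.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
```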

Perhaps more disturbingly, Apple’s actions will likely hinder removal efforts by victims. While NCMEC maintains a database of CSAM and works with Google to remove reported images from search results, the agency does not submit takedown requests to the underlying websites publishing CSAM. CSAM tends to spread rapidly among pornography sites, and while FBI agents work hard to prosecute some of the perpetrators, they often do not have the time or resources to submit takedowns to every website publishing the offending content. As a result, victims often have no option but to monitor and remove CSAM images of themselves by conducting reverse image searches and sending takedown requests. Those victims now face the harsh reality of being banned by Apple and reported to NCMEC for attempting to monitor and remove CSAM images of themselves. Apple’s technology therefore has the potential to hinder victims from submitting takedowns.

Discretionary Application of Technology

Commercial moderation of content already occurs on platforms like Facebook and Google. Each platform monitors content published on its site based on the application of community guidelines or terms of use, and each has different methods for reviewing infringing content, ranging from human review teams to bots. Because each social platform is governed by its own guidelines and uses its own methods for analyzing and removing infringing content, moderation varies significantly from platform to platform. Likewise, technology companies will have discretion to apply private monitoring software in accordance with their own terms of use. Beyond creating a situation in which the accuracy of monitoring technology varies from company to company, this also gives technology companies the power to determine what types of private content they will monitor. Ultimately, if a technology company wants to monitor content beyond CSAM and report users to government agencies (and subsequently law enforcement), it could, barring any action by Congress or state legislatures to enact privacy laws. While Apple recently announced that it does not intend to monitor other types of content beyond CSAM, that choice is within Apple’s sole discretion and places a lot of pressure on Apple users to keep up to date with Apple’s Terms and Conditions.

Fourth Amendment Implications

Apple’s decision to monitor on-device actions and report them to NCMEC (and ultimately law enforcement) has the potential to sidestep the Fourth Amendment’s protection against unreasonable search and seizure. The Fourth Amendment requires that state actors, like police, obtain a warrant, court order, or subpoena to search private communications on cell phones. While a private company can be considered a state actor if it is acting on behalf of the government, courts have held that technology companies and internet service providers (ISPs) that inspect files and report them to NCMEC are not operating as state actors, even if NCMEC then reports the files to police. United States v. Gregory, No. 8:18CR139, 2018 U.S. Dist. LEXIS 206644, at *6-7 (D. Neb. Dec. 7, 2018); United States v. Reddick, 900 F.3d 636, 638-39 (5th Cir. 2018). This means that ISPs and technology companies can bypass Fourth Amendment protections simply by reporting suspect activity to a government agency that in turn reports it to police, instead of reporting it directly to police. Thus, Apple’s new monitoring policy amounts to a decision to expand its monitoring power to include the policing of cell phones.

New Evidentiary Data Trails for Civil Lawsuits

When an infringing account is flagged, Apple will have to document the potential CSAM content in order to conduct human review and potentially submit a report to NCMEC. This documentation constitutes a data trail that could be subpoenaed in civil lawsuits. When a technology company is subpoenaed for user data, it has discretion to disclose the data or object to its production to protect user privacy. While evidence of being flagged for possession of CSAM is unlikely to implicate someone in a civil lawsuit, Apple’s decision to monitor and flag other kinds of activities could.

The Breakdown of End-to-End Encryption

A secure messaging system is one in which the sender and receiver can communicate without interference from a third party. An end-to-end encrypted message can pass through a server without that server ever learning the content of the message. When a server, or the device-side software it controls, has the capacity to reveal the contents of private messages, the system is no longer end-to-end encrypted. Apple’s update effectively breaks down the end-to-end encryption previously provided to its users.
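As a rough sketch of what end-to-end encryption promises, the example below uses the PyNaCl library with two hypothetical parties: only the sender and recipient hold private keys, so a server that relays the message sees nothing but ciphertext. Scanning content on the device before it is encrypted (or after it is decrypted) sits outside that guarantee, which is the concern raised here.

```python
from nacl.public import PrivateKey, Box

# Each party generates a key pair; only the public keys are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at noon")

# A messaging server only relays `ciphertext`; without a private key,
# it cannot recover the plaintext.

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at noon"
```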

As monitoring technology expands, so do the privacy issues that come with it. KJK’s attorneys will continue to monitor developments with big technology companies and how those developments impact data privacy. If you have any questions about the implications of Apple’s announcement, please contact Ali Arko at ala@kjk.com or 216.716.5642, or a member of KJK’s Cyber Security & Data Breach team.