
Apple defers its plan to detect child abuse images on devices
Apple had previously planned to scan devices for child sexual abuse material (CSAM) but has now paused the rollout after a backlash over concerns that the tool could be used for mass surveillance and erode user privacy.
Apple stated on its website that, based on feedback from customers, advocacy groups, researchers, and others, it has decided to take additional time over the coming months to collect input and make improvements before releasing these child safety features.
The statement, however, does not specify what kind of input the company will gather, what changes it intends to make, or how it plans to implement the system in a way that resolves the concerns raised about its deployment.
In August 2021, Apple announced new features intended to limit the spread of CSAM on its platforms. These included scanning users' iCloud Photos libraries to detect illicit content, warning users about sexually explicit photos through Communication Safety in the Messages app, and expanded guidance in Siri and Search for CSAM-related queries.
The changes were to roll out with iOS 15 and macOS Monterey later in 2021, starting in the United States.
When introduced, the proposal drew immediate backlash, with the Electronic Frontier Foundation (EFF) accusing Apple of building a surveillance system. The Center for Democracy & Technology (CDT) also warned that once this capability is built into Apple products, the company and its competitors will face enormous pressure from governments around the world to scan photos not only for CSAM but for any other images a government finds objectionable.
Apple clarified that the technology was limited to detecting CSAM stored in iCloud and that it would refuse government requests to expand it. Still, this did little to allay fears that such scanning could invade privacy, be extended to further abuses, and offer a blueprint for breaking end-to-end encryption. Researchers also noted that by reverse-engineering the algorithm they could create "hash collisions": two different images producing the same hash value, deceiving the system into treating them as identical when they were not.
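To illustrate the idea behind a hash collision, here is a minimal Python sketch using a toy "average hash" (this is not Apple's NeuralHash, and real attacks on NeuralHash were adversarially crafted rather than accidental): each bit of the hash records whether a pixel is brighter than the image's mean, so two visibly different images can map to the same hash value.

```python
def average_hash(pixels):
    """Hash a grayscale image (a list of rows of 0-255 ints):
    each bit records whether a pixel is above the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

# Two clearly different 4x4 "images" with the same bright/dark layout
# but very different pixel values.
img_a = [
    [200, 200,  10,  10],
    [200, 200,  10,  10],
    [ 10,  10, 200, 200],
    [ 10,  10, 200, 200],
]
img_b = [
    [255, 255,  90,  90],
    [255, 255,  90,  90],
    [ 90,  90, 255, 255],
    [ 90,  90, 255, 255],
]

assert img_a != img_b                               # the images differ
assert average_hash(img_a) == average_hash(img_b)   # yet the hashes collide
```

A perceptual hash deliberately tolerates small changes so that resized or recompressed copies of a photo still match, and that same tolerance is what makes collisions between unrelated images possible.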
Johns Hopkins professor and security researcher Matthew D. Green urged Apple to consult the technical and policy communities before making any changes, and to engage the public, since the system would affect 1 billion users.