Apple's scanning software already cracked

Security researchers have managed to fool Apple's scanning software. Within hours of NeuralHash's source code appearing online, they produced a hash collision: the software thinks two photos are identical when in reality they are two different images.

Apple wants to do its part to combat child pornography. For this, the company has developed a program called NeuralHash. The moment someone downloads child abuse photos and syncs these images with their iCloud account, the software computes a hash, a kind of digital fingerprint, of each image and compares it against a database of hashes of known abuse material. Apple does not compile this database itself but receives it from organizations such as the National Center for Missing and Exploited Children (NCMEC). Apple's scanning software is able to recognize variations of the supplied image material, such as black-and-white or mirrored versions.
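To give a feel for how such hash matching works, here is a minimal Python sketch of a generic perceptual "average hash". This is a deliberately simple stand-in, not Apple's neural-network-based algorithm, and the known-hash value is a made-up placeholder:

```python
# Minimal sketch of perceptual-hash matching (illustrative, not Apple's code).
# A perceptual hash maps visually similar images to the same short fingerprint,
# so mild edits such as grayscale conversion can still produce a match.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale, grayscale, then threshold each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

# In Apple's design the hash list is supplied by external parties such as
# NCMEC; the value below is a placeholder, not a real entry.
KNOWN_HASHES = {0x8F3C6A91D2B4E070}

def matches_known(path: str) -> bool:
    return average_hash(path) in KNOWN_HASHES
```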

To prevent the software from producing false positives, Apple has built two safeguards into the process. First, every match is manually reviewed by an employee. Second, the alarm bells only go off once an account accumulates at least thirty hits, a mechanism Apple calls Threshold Secret Sharing.
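The name refers to a cryptographic technique in which a secret is split into shares that only become useful once a threshold number of them is collected. The toy Shamir-style scheme below illustrates the principle; it is a sketch of the general technique, not Apple's actual protocol:

```python
# Toy t-of-n threshold secret sharing (Shamir-style). The idea: the server
# receives one share per matching image and can only reconstruct the secret
# needed to inspect the material once it holds at least t = 30 shares.
import random

PRIME = 2**127 - 1  # field modulus for the toy demo

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, t=30, n=100)
assert reconstruct(shares[:30]) == 123456789   # 30 shares suffice
assert reconstruct(shares[:29]) != 123456789   # 29 almost surely do not
```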

After Apple introduced NeuralHash to the outside world, the American tech company received fierce criticism. Human rights organizations fear that the software is the first step towards a surveillance system. In theory, Apple could build a backdoor into its products that makes it possible to listen in on encrypted messages. The Electronic Frontier Foundation, along with other organizations, published an open letter to change Apple's mind, and an online petition against Apple's plans has been signed about 7,000 times.

Internally, Apple is also coming under fire. Employees are normally reluctant to comment on their employer, yet more than eight hundred messages about NeuralHash have been spotted on an internal Slack channel. In them, workers express concern that the system could be abused by authoritarian regimes to monitor dissidents and activists and to impose government censorship. Apple's technology could also be repurposed to scan for other material, such as anti-government pamphlets, content about homosexuality, calls to demonstrate, or insults aimed at a head of state.

Arda Gerkens, director of the Online Child Abuse Expertise Agency (EOKM), questioned Apple's initiative earlier this week. She argues that Apple's scanning software will not rescue any victims and fears the program will mean a great deal of extra work for detectives. "Imagine what tens of thousands of reports per year mean for the workload of the police. They have to conduct an investigation, search the house and seize data carriers. And it is not the case that one detective is working on that," she said in an interview with NU.nl. According to Gerkens, cleaning up the internet should be Apple's main goal.

But how well does NeuralHash work? Apple is convinced that it is an effective tool in the fight against child pornography. Security researchers have their doubts. Developer Asuhariet Ygvar reverse-engineered Apple's NeuralHash into a Python script and published the code on GitHub. He said on Reddit that the code has been hidden in iOS since version 14.3, which is how he was able to reconstruct it before its official rollout in iOS 15.
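The published workflow converts the extracted model into the open ONNX format so it can be run with standard tools. The sketch below shows how such an exported model might be queried; the file name, input size, normalization, and the final bit extraction are assumptions for illustration rather than the exact published script:

```python
# Hedged sketch: querying a reverse-engineered model exported to ONNX.
# Model path, input layout, and binarisation are illustrative assumptions.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("neuralhash_model.onnx")  # hypothetical file
input_name = session.get_inputs()[0].name

def neural_hash_bits(path: str) -> str:
    img = Image.open(path).convert("RGB").resize((360, 360))
    arr = np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0  # scale to [-1, 1]
    arr = arr.transpose(2, 0, 1)[np.newaxis, ...]                # NCHW batch of 1
    (embedding,) = session.run(None, {input_name: arr})
    bits = (embedding.flatten() > 0).astype(int)  # toy binarisation of the output
    return "".join(map(str, bits))
```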

Cory Cornelius, a security researcher at Intel Labs, then went to work with Ygvar's code. He managed to create a so-called hash collision: two different pictures that are incorrectly assigned the same hash value, so the algorithm thinks they match. In other words, a harmless photo can be flagged as a false positive. He documented the finding on GitHub, and sometime later Ygvar confirmed the collision.
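The principle behind such a collision can be demonstrated on a toy example: treat the hash as a differentiable function and nudge the pixels of an unrelated image by gradient descent until its bit pattern matches the target's. In the sketch below, a random linear projection stands in for the real neural network; the actual attack on NeuralHash works along the same lines but against the extracted model:

```python
# Toy hash-collision attack: force an unrelated input to produce the same
# sign-bit hash as a target input. Purely synthetic stand-in for NeuralHash.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(96, 1024))                  # stand-in for a trained model
hash_bits = lambda x: (W @ x > 0).astype(int)    # 96-bit sign hash

target = rng.normal(size=1024)                   # the "known" image
attack = rng.normal(size=1024)                   # visually unrelated image
goal = hash_bits(target)
signs = 2 * goal - 1                             # bits {0,1} -> signs {-1,+1}

for _ in range(200):
    z = W @ attack
    violated = z * signs < 1                     # hinge margin not yet met
    grad = W.T @ (-signs * violated)             # gradient of the hinge loss
    attack -= 0.01 * grad

print((hash_bits(attack) == goal).mean())        # 1.0 means a full collision
```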

Normally, it takes months for such an error to come to light; now it was a matter of hours. TechCrunch asked an Apple spokesperson for clarification, but the company declined to comment. Apple did say that measures have been taken to prevent such errors, and that Ygvar reportedly reverse-engineered a 'generic version' of NeuralHash rather than the final one. The technology company is therefore not worried.

Craig Federighi, Apple's head of software engineering, admitted last weekend that communication about NeuralHash was lacking. "We wish this had come out a little more clearly because we're very supportive of what we're doing here, and we can see it's been completely misunderstood. The message was picked up last week as 'Apple is scanning my phone for images'. That's not what we're doing," the Apple executive told The Wall Street Journal.

He also emphasized to the newspaper that Apple's method is far more privacy-friendly than the competition's: rivals check all files in the cloud, whereas Apple only looks at photos that are being uploaded and leaves the rest untouched. Furthermore, the system only monitors images that are already known and supplied by external parties.
