Apple recently announced that it will soon begin scanning photos on iPhones in the U.S. for known images of child sexual abuse.
Apple stated that the new tool, called "neuralMatch," allows it to detect known child sexual abuse images without decrypting people's messages. Separately, Apple's messaging app will use on-device machine learning to warn users about possible sexually explicit images without making private communications readable by the company.
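Conceptually, this kind of detection works by comparing a compact "fingerprint" of each photo against a list of fingerprints of already-identified abuse images, so the photos themselves never need to be read by a person. The minimal Python sketch below illustrates that hash-list idea using an exact SHA-256 digest and a made-up hash value; Apple's actual system reportedly relies on a perceptual hash combined with cryptographic protocols that hide non-matching results, so this is an illustration of the general concept only, not Apple's implementation.

```python
import hashlib

# Fingerprints of known abuse images, of the kind companies share as "hash lists".
# The value below is a made-up placeholder, not a real entry.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_fingerprint(path: str) -> str:
    """Return a hex digest of the image file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def matches_known_image(path: str) -> bool:
    """True if the image's fingerprint appears on the shared hash list."""
    return image_fingerprint(path) in KNOWN_HASHES
```

One reason real systems do not use a plain cryptographic digest like this sketch: changing a single pixel changes the digest completely, so production matching uses perceptual hashes that tolerate resizing, recompression, and other minor edits.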
If neuralMatch flags a known image of child sexual abuse, an Apple employee will review the image and notify law enforcement if the match is confirmed.
Some researchers have voiced concern that the tool could be used by authoritarian governments to surveil citizens, particularly dissenters and protesters. Governments and law enforcement agencies have long pressured Apple for access to encrypted data, leaving the company to balance child safety against user privacy.
Matthew Green, a cryptography researcher at Johns Hopkins University, said he worries that the technology could be used to frame innocent people by sending them seemingly harmless images engineered to trigger matches with known child sexual abuse images.
On the other hand, the president and CEO of the National Center for Missing and Exploited Children has called Apple's new tool "a game changer" with "lifesaving potential for children who are being enticed online."
Tech companies such as Microsoft, Google, and Facebook have been sharing "hash lists" of known child sexual abuse images for years. Apple also already scans user files stored in iCloud, which is less securely encrypted than iPhone messages, for child sexual abuse images. Barbara Ortutay and Frank Bajak, "Apple to scan U.S. iPhones for images of child sexual abuse," apnews.com (Aug. 6, 2021).