
Apple Kills Its Plan To Scan Your Photos for CSAM (wired.com)

Apple plans to expand its Communication Safety features, which aim to disrupt the sharing of child sexual abuse material at the source. From a report: In August 2021, Apple announced a plan to scan photos that users stored in iCloud for child sexual abuse material (CSAM). The tool was meant to be privacy-preserving and allow the company to flag potentially problematic and abusive content without revealing anything else. But the initiative was controversial, and it soon drew widespread criticism from privacy and security researchers and digital rights groups who were concerned that the surveillance capability itself could be abused to undermine the privacy and security of iCloud users around the world. At the beginning of September 2021, Apple said it would pause the rollout of the feature to "collect input and make improvements before releasing these critically important child safety features." In other words, a launch was still coming. Now the company says that in response to the feedback and guidance it received, the CSAM-detection tool for iCloud photos is dead.

Instead, Apple told WIRED this week, it is focusing its anti-CSAM efforts and investments on its "Communication Safety" features, which the company initially announced in August 2021 and launched last December. Parents and caregivers can opt into the protections through family iCloud accounts. The features work in Siri, Apple's Spotlight search, and Safari Search to warn if someone is looking at or searching for child sexual abuse materials and provide resources on the spot to report the content and seek help. Additionally, the core of the protection is Communication Safety for Messages, which caregivers can set up to provide a warning and resources to children if they receive or attempt to send photos that contain nudity. The goal is to stop child exploitation before it happens or becomes entrenched and reduce the creation of new CSAM.

  • The bigger news is that they're finally rolling out E2E encryption for iCloud Backups, which have—far and away—been the biggest privacy hole in Apple's lineup for years. A lot of people read the tea leaves awhile back to infer that the CSAM protection was a precursor to E2E encryption eventually rolling out. Turns out, those folks were right, and E2E is rolling out even without the CSAM protection at this point.

    https://www.macrumors.com/2022... [macrumors.com]

    E2E encrypted iCloud Backups is the big one, but it

    • Is the encryption tool open source? If not, why would you trust it?

    • by dgatwood ( 11270 )

      Now if only we could get non-cloud backups. That's the biggest reason not to use iOS for anything serious, IMO.

      You can back up your Mac with Time Machine to a local NAS, replicate it across a private link to another site, etc., and none of your data leaves the corporate network. And if something gets compromised, you can roll back to a backup from a week earlier, a month earlier, a year earlier, etc.

      With iOS, you have only the most recent full backup. No history. No end-to-end encryption. No on-premises storage. No backups of your backups. It's downright sad by comparison.

      • by tlhIngan ( 30335 )

        With iOS, you have only the most recent full backup. No history. No end-to-end encryption. No on-premises storage. No backups of your backups. It's downright sad by comparison.

        You can have on-premise backups of your iOS device. It involves using an awful tool on Windows, but it exists, and it can be encrypted to capture everything in the backup.

        Apple has information on how to back up on a Mac [apple.com] and on Windows [apple.com].

        Apple has loads of information on why you'd want to back up iOS to your computer over iCloud [apple.com].

        This has

        • by dgatwood ( 11270 )

          With iOS, you have only the most recent full backup. No history. No end-to-end encryption. No on-premises storage. No backups of your backups. It's downright sad by comparison.

          You can have on-premise backups of your iOS device. It involves using an awful tool on Windows, but it exists, and it can be encrypted to capture everything in the backup.

          Apple has information on how to back up on a Mac [apple.com] and on Windows [apple.com].

          The missing key words are "automatic" and "wireless". Prior to Catalina, I'm pretty sure you could do syncing and backups over Wi-Fi using iTunes; I'm assuming that the new Finder-based backup lacks that ability. But either way, it is still a manual process, and it is still backing up to your local drive, so you still have to provide additional automation yourself to take those backup files and move them to a more useful network-attached drive or whatever.
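
          For what it's worth, the glue automation being described can be fairly small. A minimal sketch, assuming the default macOS backup location under ~/Library/Application Support/MobileSync/Backup and a purely hypothetical NAS mount at /Volumes/nas-backups; you would run it from cron or launchd after each manual backup:

            import shutil
            from datetime import datetime
            from pathlib import Path

            # Default Finder/iTunes iOS backup location on macOS; the NAS mount
            # point is a placeholder for whatever network share you actually use.
            SRC = Path.home() / "Library/Application Support/MobileSync/Backup"
            DEST = Path("/Volumes/nas-backups/ios")   # hypothetical NAS mount

            def archive_backups():
                """Copy the current backup snapshots into dated folders on the NAS."""
                stamp = datetime.now().strftime("%Y-%m-%d")
                for snapshot in SRC.iterdir():
                    if snapshot.is_dir():
                        target = DEST / stamp / snapshot.name
                        if not target.exists():
                            shutil.copytree(snapshot, target)

            if __name__ == "__main__":
                archive_backups()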

          Also, because you get the entire backup as a blob,

      • What you’re describing is already possible and has been from the start via the original backup method that you can still do via iTunes (and its successors). Physically connect the device, initiate a backup (optionally encrypted) via iTunes, then back up those snapshots as you please. iCloud Backups were introduced as a more convenient option, but iTunes backups never went away and they have the properties you described.

        • by dgatwood ( 11270 )

          What you’re describing is already possible and has been from the start via the original backup method that you can still do via iTunes (and its successors). Physically connect the device, initiate a backup (optionally encrypted) via iTunes, then back up those snapshots as you please. iCloud Backups were introduced as a more convenient option, but iTunes backups never went away and they have the properties you described.

          See my comments above. Comparing iTunes backups of iOS devices to Time Machine is like comparing Carbon Copy Cloner to Time Machine. It isn't incremental, and if you have a 1TB iPhone that's full, thanks to iPhone hardware still being limited to USB 2.0 speeds, it would take almost five HOURS to back up just one time.
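
          For reference, the rough math behind that figure, assuming the USB 2.0 theoretical maximum of about 60 MB/s (real-world sustained throughput is lower, so it would likely take even longer):

            # 1 TB over USB 2.0 at its ~480 Mbit/s (~60 MB/s) line rate
            backup_bytes = 1_000_000_000_000
            usb2_bytes_per_s = 60_000_000
            hours = backup_bytes / usb2_bytes_per_s / 3600
            print(f"{hours:.1f} hours")   # -> about 4.6 hours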

          And of course, it ties up a computer for that entire time, plus the time to move the backup from the computer to long-term storage on top of that.

          Using iTunes backups is completely impractical.

          • I agree that it’s absolutely nothing like Time Machine (I’m a huge fan of Time Machine too), but the aspects of Time Machine you specifically cited (e.g. encrypted, on-site, multiple backups, has a history, can be stored on NAS, can itself be backed up, etc.) are all present in iTunes backups, so it seemed as if you didn’t care about the massive differences that separate them. It’s also why I specifically called out the convenience factor, because, as is apparently the case for you as well, that’s the biggest differentiator to me.

            • by dgatwood ( 11270 )

              I agree that it’s absolutely nothing like Time Machine (I’m a huge fan of Time Machine too), but the aspects of Time Machine you specifically cited (e.g. encrypted, on-site, multiple backups, has a history, can be stored on NAS, can itself be backed up, etc.) are all present in iTunes backups, so it seemed as if you didn’t care about the massive differences that separate them. It’s also why I specifically called out the convenience factor, because, as is apparently the case for you as well, that’s the biggest differentiator to me.

              Yeah, it's a combination of things. I don't mind having to start a backup manually, but the "tethered to a computer thing" is a giant pain, and the lack of incremental backups is probably also the reason why Apple's iCloud backup doesn't keep more than one backup at a time.... :-/ If keeping multiple backups is too much of a pain for even Apple to do, then it's way too painful to do in a corporate environment, or for individuals to realistically do on their own.

              BTW, I'm pretty sure iOS is the only UNIX-bas

  • by Kiddo 9000 ( 5893452 ) on Wednesday December 07, 2022 @03:14PM (#63111560)
    During the protests in China recently, several people reported that photos and videos of the protests that were stored on their iCloud accounts disappeared. Considering that Apple likely has the technology for scanning iCloud content implemented and deployed, it wouldn't be much of a stretch to believe that Apple is only cancelling their plans for CSAM specifically, and plans to continue using the scanning tech for other purposes. The privacy and surveillance implications of this are worrying, but it seems that these companies don't share the same concerns as us.
    • by Anubis IV ( 1279820 ) on Wednesday December 07, 2022 @03:20PM (#63111590)

      It's likely much simpler than that: the CCP has direct access to that data in the Chinese iCloud data centers. All of the iCloud servers for mainland Chinese users are geographically located within China (Apple allowed China to twist its arm into doing so), and we have every reason to believe that the CCP has the keys to access that data due to local laws that Apple says they are complying with.

      That said, Apple has said they plan to roll out the new E2E encryption for photos worldwide, including China, and the executive they had talking about it joked about not consulting with China for that plan, so that's an encouraging sign.

      • by AmiMoJo ( 196126 )

        I'm sure Apple will continue to comply with Chinese laws. They will have some backdoor into those photos and messages; otherwise they will get booted out of the country.

        Microsoft is the other major Western tech company that operates in China. Stuff stored by Chinese residents in Azure cloud (e.g. OneDrive) is likely subject to the same monitoring.

    • by gweihir ( 88907 )

      Indeed. Also, I believe Apple, Dropbox, etc. have been scanning for suspected copyrighted material forever, so the infrastructure is already in place.

      • Google, Facebook, Twitter, Microsoft, Amazon, Box, and others have been scanning for such material for years: Facebook for at least eight, Google for at least five. It's pretty standard across cloud services. With Congress pushing to allow such providers to be held accountable for the content their users upload, they really have to act to protect themselves, when a single violation could potentially bring them down.

        • by gweihir ( 88907 )

          Sure. And they did it quietly, and hence followed the time-honored strategy for eroding civil rights and privacy. Apple tried virtue-signalling and got noticed.

          • People seemed far more upset that Apple was doing the scanning on the phone rather than in the cloud. They didn't understand that it's MUCH more secure on-phone; scanning in the cloud doesn't allow E2EE.

            • by gweihir ( 88907 )

              Well, yes. But it makes it much clearer what is going on, because people are under the illusion that an Apple phone they bought is _theirs_.

  • by MindPrison ( 864299 ) on Wednesday December 07, 2022 @03:23PM (#63111602) Journal

    I'm actually kinda fine with that.

    What I'd worry about, though, is cases like the guy who sought medical help for his kid and got flagged and reported for ...well.. you know that story by now. And who says it will stop there? Next it will be people who hold opinions against their government, and images of protests. Where does it stop?

    I'd personally LOVE it if they managed to clean the entire internet of the "known" images that flourish on the dark web, and to find and detect real kids in trouble and people with known cases of abuse; there are plenty of identifiers in the databases for that.

    But again, where does it stop? At work we have a special agent installed in our Windows software that searches our work computers for "known" images, the names of those images, etc. I've had it on my work computer for years but only became aware of it in the last two years, and no one at our big corporation told us it was installed. Still, I'm fine with it; it hasn't bothered me in any way.

    But the monitoring of what we say, what we think, and what opinions we have can sometimes go really wrong. And this is my worry: who's to say it won't be searching for protests, people who go against certain governments, or certain people identified as troublemakers? Remember when it was supposedly people with terrorist backgrounds who "run Linux"? Yeah, it's that stupid.

    You have the right to privacy. Your thoughts and opinions are yours, not something you should be in a register for; you have the right to discuss matters without being a suspect, and the right to voice your opinions any way you like. But those rights are starting to fade away, and everyone becomes a suspect if they deviate from the common consensus of whatever is politically correct today and wasn't yesterday.

    • by gweihir ( 88907 ) on Wednesday December 07, 2022 @03:29PM (#63111618)

      There is a nice attack based on this: get some of this stuff (supposedly not difficult), get it to your target, which could be as simple as sending them an email, and watch the fireworks. And that is why this is an exceptionally bad idea.

      • by AmiMoJo ( 196126 )

        You don't even have to get the stuff. When Apple announced this and published the algorithm, people quickly found they could generate images with the same signature in fairly short timeframes.
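
        To see why that works, here is a toy stand-in: this is not NeuralHash, just a simple 8x8 average hash with a made-up blocklist entry, but it shows that the matching side is nothing more than a set-membership test on the hash, so any crafted image that lands on a blocklisted value gets flagged regardless of what it actually depicts:

          # Toy 'average hash' over an 8x8 grayscale grid (0-255 values).
          def average_hash(pixels):
              flat = [p for row in pixels for p in row]
              mean = sum(flat) / len(flat)
              bits = 0
              for p in flat:
                  bits = (bits << 1) | (1 if p >= mean else 0)
              return bits

          # Hypothetical blocklist entry (the real lists are opaque to users).
          blocklist = {average_hash([[200] * 8] * 4 + [[10] * 8] * 4)}

          def is_flagged(pixels):
              return average_hash(pixels) in blocklist

          # A visually different image whose above/below-the-mean pattern happens
          # to line up collides with the blocklisted hash and gets flagged too.
          colliding = [[255] * 8] * 4 + [[0] * 8] * 4
          print(is_flagged(colliding))   # True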

    • by WankerWeasel ( 875277 ) on Wednesday December 07, 2022 @03:39PM (#63111656)

      For more than a decade the standard in computer forensics in law enforcement has been lists developed and maintained by NIST. They contain the hashes of known bad files. Images of children are sadly the vast majority of what's flagged.

      Thus far (over the 15 years that I've been part of the forensic community) we haven't seen it abused as many believe it is. That's not to say it couldn't be. But it's still the standard used in criminal investigations to more easily locate such files.

      It's generally used to make finding such content easier. Without them, an examiner would have to look at every single image and file on a machine. That's not realistic these days, when that can be millions of files on a single drive. With these hash lists, the computer can scan and identify suspect images that may be of interest to the investigation. Then they can be verified by the investigator. They'll also use other tools to further minimize the manual review required in identifying potentially relevant files. For instance, we developed a skin tone filter that identified images that contain a set amount of skin tone. This means removing things like photos of a green grassy hill from the pile of photos someone may have to go through. We've seen this remove hundreds of thousands or even millions of images from the group that needs to be reviewed by a human investigator.
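
      A minimal sketch of both ideas, with a made-up hash value and a deliberately crude skin-tone rule; the real tooling, thresholds, and NIST lists are far more sophisticated and are not reproduced here:

        import hashlib
        from pathlib import Path

        # Hypothetical set of known-bad SHA-1 hashes (placeholder value only).
        KNOWN_HASHES = {"da39a3ee5e6b4b0d3255bfef95601890afd80709"}

        def sha1_of(path, chunk=1 << 20):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        def scan(root):
            """Yield files whose hash appears in the known-hash list."""
            for p in Path(root).rglob("*"):
                if p.is_file() and sha1_of(p) in KNOWN_HASHES:
                    yield p

        def skin_fraction(pixels):
            """Crude skin-tone heuristic over a list of (r, g, b) tuples --
            the kind of pre-filter that drops green grassy hills from the
            pile a human has to review."""
            skin = sum(1 for r, g, b in pixels
                       if r > 95 and g > 40 and b > 20 and r > g > b)
            return skin / max(len(pixels), 1)

        # An image might only be queued for human review if, say,
        # skin_fraction(pixels) > 0.3 -- the threshold is a tuning choice.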

      • So with your experience, would you say that it does more good than bad?

        • Does a lot of good in catching child predators. There's still a huge backlog of cases, because the number of examiners is dwarfed in comparison. But it really helps quickly identify files of interest for an examiner to then review.

          We'll certainly see AI help with this in the future. Much the way Google is training AI to identify items in images or how your iPhone now allows you to search something like "beer" in the Photos app and it'll attempt to show all photos that have a beer in them. But for the time

        • by Anonymous Coward

          I'll say this... the bio father of my kids was caught this way.

          I've always leaned heavily toward the privacy/freedom side of things, but that experience tends to make a person step back and re-evaluate their position a bit. If not for a cloud provider scanning images, he likely would have never been caught (particularly as it seems he was a consumer, not a producer).

          I still very strongly value privacy, but... I also value having people involved in this stuff caught. I think having cloud providers check ha

  • The tool was meant to be privacy-preserving

    And as even the most devout Apple fanboi now knows, Apple's word on privacy is really trustworthy...

    The other problem is, "Think of the children" is literally THE stereotypical counter-example used to explain how that kind of thinking is a slippery slope towards tyranny.

    I'd say Apple couldn't have worked on a more densely packed minefield than with their CSAM initiative. It was never going to be anything but controversial.

  • The original report was that your iPhone would use their latest chips to scan every photo that you took. It would have a sophisticated algorithm for detecting nudity, and it would also check a database (using machine learning, not just exact matches). If your phone saw something offensive, you would automatically be reported to the authorities. The database of offensive content would be provided by the government and would be opaque to Apple. (Nobody but some feds knows what's in the database.)

    This still

    • by cstacy ( 534252 )

      Clarifying: The scanning is NOT DONE IN THE CLOUD. It's done locally on your phone. Uses their fancy new chips.

  • by RightwingNutjob ( 1302813 ) on Wednesday December 07, 2022 @03:52PM (#63111702)

    Remember the guy who had his Google accounts closed down without warning and was reported to the cops because he sent a picture of his child's genitals to his pediatrician via a monitored channel.

    That got flagged to Google and triggered a thorough search of the contents of his account, which turned up a picture of his wife naked in proximity to his kid.

    Then they called the cops and terminated his account.

    Now I do question the wisdom of putting nudie pix of your spouse up in the cloud, but I have myself needed to send pictures of my kids' genitals to their pediatrician. I didn't go through the cloud to do it, but given the amount of transparency in these magic gizmos in our pockets, it wouldn't surprise me if at some point uploading a photo to a website (any website, including my pediatrician's) would trigger an automated search for pr0n through Google or Amazon Web Services.

    The problem of detection has three complications, quantified by three quantities:

    Detection rate and false alarm rate are familiar to all, with the latter referring to flagging as kiddie pr0n a picture of, for instance, a nude sculpture or painting in a museum. That's easy enough.

    The third quantity, which is less well known, is the *clutter rate*, which refers to true images of kids' junk used in a valid, legal context. The clutter rate, which in general is a function of the environment you're observing, quantifies detections that are there and real, but irrelevant.

    CSAM search needs to quantify all three, not just the first two.
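
    To make the three quantities concrete, here is a toy bookkeeping sketch. These are illustrative definitions only (a radar-style clutter rate would normally be normalized per unit of data scanned), not formulas or figures from any actual CSAM pipeline:

      def rates(true_hits, misses, false_alarms, clutter_hits, benign_total):
          """Illustrative definitions:
          - detection rate: fraction of actual abuse images that get flagged
          - false alarm rate: fraction of benign images wrongly flagged
            (the museum-sculpture case)
          - clutter rate: fraction of flagged images that are real nudity
            but in a legitimate context (the pediatrician case)
          """
          flagged = true_hits + false_alarms + clutter_hits
          return {
              "detection_rate": true_hits / (true_hits + misses),
              "false_alarm_rate": false_alarms / benign_total,
              "clutter_rate": clutter_hits / flagged,
          }

      # e.g. 90 real hits, 10 missed, 5 museum photos, 40 pediatrician-style
      # photos, out of 100,000 benign images scanned:
      print(rates(90, 10, 5, 40, 100_000))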

  • Obviously if you are against Big Tech scanning your files you are for loliphilia and a bad person.
  • So they've killed their plans to scan for child abuse images, but I'm absolutely certain that they've still got all the infrastructure in place, and I'm sure it will be exploited heavily by authoritarian and "free" nations alike to punish those the state dislikes.

    They already built the system. Don't forget that.
    Do you really think they'll just not use it at all? A company with Apple's ethics record?

    If you really care about privacy and security, don't upload your shit to giant tech companies. Keep it local.
  • I don’t believe many of Apple’s consumers called or wrote requesting that their photos potentially be reported to law enforcement. Who did Apple think the customer was in this case? What were they thinking?

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud

Working...