[HN Gopher] The Deceptive PR Behind Apple's "Expanded Protection...
       ___________________________________________________________________
        
       The Deceptive PR Behind Apple's "Expanded Protections for Children"
        
       Author : arespredator
       Score  : 337 points
       Date   : 2021-08-12 19:52 UTC (3 hours ago)
        
 (HTM) web link (piotr.is)
 (TXT) w3m dump (piotr.is)
        
       | jliptzin wrote:
       | There is something you can do about it: don't use Apple products
        
         | blairbeckwith wrote:
         | That strategy will last ~15 minutes until Google is doing the
         | same thing.
         | 
         | Then what? I would argue that what Google is doing already is
         | way more privacy-compromising than this.
        
           | hirundo wrote:
           | That's a great argument for a Linux phone or de-googled
           | Android build.
        
       | zug_zug wrote:
       | Good thing this didn't exist in 1776, or I'd be living in Great
       | Britain.
        
       | atbpaca wrote:
       | I doubt Apple has not thought about the PR & policy consequences
       | of such an iPhone backdoor. For me, it's even more sad to see
       | Apple using the fight against CSAM, a noble cause, as a shield
       | and a way to convince the masses that breaking its promise to
       | protect privacy is OK. "What happens in your iPhone stays on your
       | iPhone [no longer]"
        
       | farmerstan wrote:
       | Whoever controls the hash list controls your phone from now on.
       | Period. End of sentence.
       | 
       | Apple has not disclosed who gets to add new hashes to the list of
       | CSAM hashes or what the process is to add new hashes. Do
       | different countries have different hash lists?
       | 
        | Because if the FBI or CIA or CCP or KSA wants to arrest you, all
        | they need to do is inject the hash of one of your photos into the
        | "list" and you will be flagged.
        | 
        | Based on the nature of the hash, they can't even tell you which
        | photo is the one that triggered the match. Instead, they get to
        | arrest you, make an entire copy of your phone, etc.
        | 
        | It's insidious. And it's stupid. That Apple is agreeing to do this
        | is disgusting.
        
         | fortenforge wrote:
         | There are numerous incorrect statements in your comment.
         | 
         | First: Apple has disclosed who gets to curate the hash list.
         | The answer is NCMEC and other child safety organizations.
         | https://twitter.com/AlexMartin/status/1424703642913935374/ph...
         | 
         | Apple states point-blank that they will refuse any demands to
         | add non-CSAM content to the lists.
         | 
          | Second: Why can't the FBI / CCP inject a hash into the list?
          | Here's a tweet thread gaming out that scenario:
         | https://twitter.com/pwnallthethings/status/14248736290037022...
         | 
         | The short answer is that at some point an Apple employee must
         | visually review the flagged photo, and confirm that it does
         | represent CSAM content. If it does not, then Apple is under no
         | legal obligation to report it.
        
           | farmerstan wrote:
           | How do you know there aren't bad actors working at the NCMEC?
           | If I know that adding a hash to a list will get it flagged,
           | and I could conveniently arrest or discredit anyone I wanted,
           | I would certainly send people to work there.
           | 
           | How will Apple know whether a hash is for non-CSAM content?
           | Spoiler alert: they won't.
           | 
           | And Apple claims it will be reviewed by a human. Sure, just
           | like YouTube copyright claims? Or will it get automated in
            | the near future? And what about in China? Or Saudi Arabia or
            | other countries with weaker human rights protections?
            | 
            | The point is that this is an easy way to get tagged as a
            | pedophile by a government or bad actors. It's sickening that
            | Apple would let this "technology" into their products.
        
         | robertoandred wrote:
         | What on earth? To clear up some of your false statements:
         | 
          | - You need several hash matches to trigger a review
          | 
          | - The reviewer can of course see what triggered the review
          | (the visual derivative)
          | 
          | - The reviewer would see that the matches are not CSAM, and
          | instead of the report being sent on to the NCMEC it would
          | start an investigation of why these innocuous images were
          | matched in the first place
          | 
          | - If the CIA or FBI or CCP wanted to arrest you, there are
          | much easier ways than this
        
       | shmerl wrote:
       | This ad seems fitting in the context:
       | https://www.youtube.com/watch?v=tdVzboF2E2Q
        
       | tuatoru wrote:
       | Unless Apple can demonstrate that the techniques they are using
       | are _intrinsically_ specific to CSAM and to CSAM only--the
       | techniques do not work for any other kinds of photo or text--
       | slippery slope arguments are perfectly valid and cannot be
       | denied.
       | 
       | Apple is a private company and as such its actions amount to
       | vigilantism.
        
       | FabHK wrote:
       | Question:
       | 
       | Would Apple report CSAM matches worldwide to one specific US NGO?
       | That's a bit weird, but ok. Presumably they know which national
       | government agencies to contact.
       | 
       | Opinion:
       | 
       | If Apple can make it so that
       | 
       | a) the list of CSAM hashes is globally the same, independent of
       | the region (ideally verifiably so!), and
       | 
       | b) all the reports go only to that specific US NGO (which
       | presumably doesn't care about pictures of Winnie the Pooh or
       | adult gay sex or dissident pamphlets)
       | 
       | then a lot of potential for political abuse vanishes.
        
         | trynumber9 wrote:
         | Apple said they're only enabling it in the US for now.
        
       | Accacin wrote:
       | Eh, I completely agree that this is a step too far, but the
       | solution is so simple. Stop using Apple devices - luckily I
       | switched from iOS to CalyxOS when my iPhone 7 broke earlier this
       | year. Honestly, it wasn't so bad.
        
         | psychomugs wrote:
         | This is throwing the baby (pictures) out with the bathwater. I
         | am for better or worse deeply rooted in the Apple tree (phone,
         | laptop, tablet, and, recently, watch); for all its occasionally
          | infuriating and arguably stupidly designed warts, the fact that
          | so many features fade into the background and Just Work is
          | something you can rarely say for other ecosystems.
        
           | hypothesis wrote:
            | That's the thing: for years I had to tolerate those silly
            | issues from people who are supposed to be the best in the
            | industry. There is still no default calculator installed on
            | the iPad in 2021!
            | 
            | For some people, it's simply not worth it anymore, once the
            | primary commitment is gone.
        
       | atbpaca wrote:
        | When are they going to add this backdoor to macOS?
        
       | Componica wrote:
        | Imagine taking a photo, or having in your gallery a photo, that a
        | dear leader doesn't want to spread. Ten minutes later you hear a
        | knock at your door. That's what I'm most worried about: how is
        | this not creating the infrastructure to ensnare political
        | dissidents?
        
         | psyc wrote:
         | I am profoundly disappointed that almost all of the discussion
         | is about the minutiae of the implementation, and "Hmm.. Am I ok
         | with the minutiae of Apple's specific implementation at
         | rollout?" And almost nobody is discussing the basic general
         | principle of whether they want their own device to scan itself
         | for contraband, on society's behalf.
        
           | hypothesis wrote:
           | Maybe people realize that's not a winning strategy and thus
           | keep going back to technical details...
        
       | EugeneOZ wrote:
       | in "Photos" app, in the bottom right corner there is a "search"
       | icon. When I click it, and entering "beach", I can see photos
       | I've made on the beach (or in the sea, near the beach).
       | 
       | What does it mean? My (and your) photos are scanned and analyzed.
       | I've heard literally zero noise about this feature - nobody was
       | complaining (at least not loud enough to let me notice it).
       | 
       | So, why the hell all of that fuzz is being raised now? You're
       | (and mine) photos will be scanned and analyzed AGAIN. Not by
       | humans, by algorithms. In some really rare cases they might be
       | checked by humans, but you 100% will not have troubles with the
       | law if photos don't contain CSAM.
       | 
        | I have 2 kids and I'm not buying the argument "oh, my library of
        | naked photos of my child - I'm in danger". If you are uploading
        | naked photos of your child to iCloud, it's similar to publishing
        | them. Everything that is uploaded to the Internet will belong to
        | the Internet, and you don't have much control over it. If, for
        | some awkward reason, you have sets of naked photos of your child
        | and you want to keep them - never ever send them to the Internet.
       | 
        | If you think that not-so-experienced users shouldn't have to know
        | about this rule - I'm pretty sure they don't even know (or care)
        | about this "scandal". All of this FUD is being raised by
        | journalists and echoed on forums like this one.
        
         | kevinpet wrote:
         | What are those cases where they might be checked by humans? To
         | determine whether it's an innocent baby bath? If you have naked
         | photos of a partner which happen to hit a statistical match for
          | certain patterns that are similar to CSAM? These aren't far-
          | fetched scenarios; these are exactly the types of photos most
          | likely to be flagged. Are you okay with those
         | photos being passed around Apple's security review team for
         | entertainment? Leaked to the press if you later run for office?
         | 
         | How about in 15 years when your small children aren't small? Is
         | this the magical software that can tell the difference between
          | 18-year-old boobs and 17-year-old ones? The danger isn't to child
         | molesters, it's to people who get incorrectly flagged as child
         | molesters and need to fight to prove their innocence.
        
           | EugeneOZ wrote:
           | I'm not even sure if it's a joke or you are serious.
           | 
           | It is a check against existing hashes in a big database of
           | confirmed CSAM. What are the chances that photos of your
           | partner are in that database? If your partner is older than
           | 12 - it's 0%.
           | 
            | Who is taking the bigger risk of being sued if the photos
            | leak, you or Apple?
            | 
            | The last part isn't worth discussing because the children in
            | that DB are younger than 12.
        
         | shapefrog wrote:
         | It turns out people liked it when their phone scanned their
         | photos for 'selfie' or 'beach' for them.
         | 
         | Apparently tagging 'child porn' on your photos for searching
          | isn't the killer feature someone thought it might be.
        
           | EugeneOZ wrote:
            | Yeah :) Also, it's funny that Apple is the one taking the
            | bigger risks here: reputation, trust, all of this noise, plus
            | the risk of false accusations. And for what? To help stop
            | pedophile networks.
           | 
           | "But no, wait, they want to use algorithms to scan my photos,
           | it's a privacy violation..."
           | 
           | Just wake up.
        
             | shapefrog wrote:
             | They can have a full resolution copy of my photo, all 12
             | million pixels, along with the exact time, location and
             | direction I was facing when I took it... but I draw the
             | line firmly at a hash of it being taken.
        
           | shuckles wrote:
           | So what you're saying is if Apple had a 5 year plan to help
           | China disappear minorities, they should've just kept
           | improving photos search? Maybe this child safety effort isn't
           | aimed at satisfying some authoritarian wet dream after all!
        
             | [deleted]
        
             | shapefrog wrote:
             | Given that they classified a photo I took at a pool as
             | 'beach', they have an awfully long way to go. If their
              | disappearing algorithm doesn't improve they will be
              | disappearing Chinese majorities instead of minorities.
        
       | spoonjim wrote:
       | Any idea why Apple didn't just implement server side scanning
       | like everyone else?
        
         | pvarangot wrote:
         | This article speculates that that's because Apple is not
         | scanning on iCloud to respect their privacy policy:
         | https://www.hackerfactor.com/blog/index.php?/archives/929-On...
         | 
         | Apple's report count to the NCMEC is really low so it's
         | probably true that they are not scanning on iCloud unless they
         | receive a warrant.
        
         | lawkwok wrote:
          | In this TechCrunch interview, Apple says it believes this is
          | less invasive since no one can be individually targeted.
          | 
          | The hashes are hard coded into each iOS release, which is the
          | same for all iOS devices, so the database is not vulnerable to
          | server-side changes.
         | 
         | Additionally, FWIW, they do not want to start analyzing entire
         | iCloud photo libraries so this system only analyzes new
         | uploads.
         | 
         | https://techcrunch.com/2021/08/10/interview-apples-head-of-p...
        
           | Dah00n wrote:
           | >The hashes are hard coded into each iOS release
           | 
            | Do you have a source on that? It is illegal to share those
            | hashes in any form. Even people working in photo forensics
            | and at big photo sharing sites cannot get access to them. I
            | very much doubt Apple can incorporate them into the iOS
            | release without breaking multiple laws. The hashes themselves
            | can easily be reversed into (bad quality) pictures, so having
            | the hashes equals having child pornography.
           | 
           | Edit:
           | 
           | https://www.hackerfactor.com/blog/index.php?/archives/929-On.
           | ..
        
             | Engineering-MD wrote:
              | That was a very insightful article from a legal
              | perspective. I strongly recommend others read it to get a
              | more nuanced view.
        
             | FabHK wrote:
             | > Since it is illegal to share those hashes in any way or
             | form
             | 
             | Source? (The link you provide does not claim that, as far
             | as I could see.)
        
               | lixtra wrote:
                | The article claims that PhotoDNA is reversible to 26x26
               | images and _claims_ that the hashes are therefore CP.
        
             | lawkwok wrote:
             | From the interview I linked, Apple Privacy head Erik
             | Neuenschwander said, "The hash list is built into the
             | operating system, we have one global operating system and
             | don't have the ability to target updates to individual
             | users and so hash lists will be shared by all users when
             | the system is enabled."
             | 
              | Where did you hear that sharing hashes is illegal? How
              | would anybody detect CSAM at scale without those hashes?
             | 
             | Your hackerfactor source states, "In 2014 and 2015, NCMEC
             | stated that they would give MD5 hashes of known CP to
             | service providers for detecting known-bad files."
        
               | trynumber9 wrote:
               | NCMEC will share MD5 but not the hashes used for
               | perceptual matching.
        
         | wyager wrote:
         | Pessimistically: to allow them to (eventually) scan other
         | content that you don't upload. I think pessimism about Apple's
         | behavior is somewhat warranted at this point.
        
         | fortenforge wrote:
         | It's a good question. The only explanation that makes sense is
         | that this now allows them to begin end-to-end encryption of
         | iCloud photos. I see that many commentators are claiming that
         | they could already have started e2e encryption without
         | introducing this "backdoor." While this is true, Apple would
         | then be creating a perfect environment for child abusers to
         | house their CSAM content on Apple's own servers. You can
         | understand why Apple might not want to do that.
         | 
         | This allows Apple to get to what is in their mind the best of
         | both worlds: a truly private cloud for their valued users while
         | not creating a safe haven for child abusers.
        
           | abecedarius wrote:
          | Lawyerly word games. The e in e2e is the _user_, or a device
           | loyal to them; malware on the user's device has always been
           | understood as a subversion of e2e.
        
         | barbazoo wrote:
          | Potentially to be able to introduce e2e encryption later on.
        
         | mortenjorck wrote:
         | As covered in other articles, that is exactly what they were
         | doing previously.
        
           | rootusrootus wrote:
           | I'm not so sure. John Gruber's write-up said that Apple has
           | only sent over a couple hundred reports in the last year to
           | the gov't, compared to over 20 million from Facebook. This
           | suggests to me that Apple's scanning wasn't nearly so
           | widespread.
        
             | zionic wrote:
             | Because no one in their right mind uploads CP to a cloud
             | service, and apparently pedos abuse Facebook's easy sign up
             | process to bulk upload CP.
             | 
             | Not that it matters when those Facebook sign ups are
             | probably proxied with throwaway emails
        
           | tpush wrote:
           | No, they did not. That was erroneous reporting by the
           | Telegraph that a lot of outlets copied [0].
           | 
           | The correction:
           | 
           | > This story originally said Apple screens photos when they
           | are uploaded to iCloud, Apple's cloud storage service. Ms
           | Horvath and Apple's disclaimer did not mention iCloud, and
           | the company has not specified how it screens material, saying
           | this information could help criminals.
           | 
           | And from the interview with TechCrunch:
           | 
           | > This is an area we've been looking at for some time,
           | including current state of the art techniques which mostly
           | involves scanning through entire contents of users' libraries
           | on cloud services that -- as you point out -- isn't something
           | that we've ever done; to look through users' iCloud Photos.
           | 
           | [0] https://www.telegraph.co.uk/technology/2020/01/08/apple-
           | scan...
        
       | rvz wrote:
       | Yep, that's deceptive advertising on privacy and everyone bought
       | into it and walked straight into the reality distortion field.
       | 
       | Another innovative 'gotcha' by Apple. A reminder that they are
       | not your friends.
        
       | anko wrote:
       | From the article;
       | 
       | > You could of course say that it's "a slippery slope" sort of
       | argument, and that we should trust Apple that it won't use the
       | functionality for anything else. Setting aside the absurdity of
       | trusting a giant, for-profit corporation over a democratically-
       | elected government,
       | 
       | And then later it reads
       | 
       | > and has previously cancelled their plans for iCloud backups
       | encryption under the pressure of FBI.
       | 
       | Isn't the FBI in place because of the democratically elected
        | government? It seems like the for-profit organisation is trying
       | to do the right thing, and the government is stopping them.
       | 
       | This is the fundamental problem with arguments based on "trust" -
       | the government seems to be doing the wrong thing.
        
       | balozi wrote:
       | Dear tech users,
       | 
       | Associating with some of you has become a liability. One may be
       | smart enough to avoid iPhone and Alexa et al. but what to do when
       | one is surrounded by people who willingly expose themselves to
       | nefarious technology?
       | 
       | In short, I don't want pictures of me being hoovered up along
       | with your baby pics from your iPhone.
        
       | wpdev_63 wrote:
        | I used to always get the latest and greatest iPhone, but with the
        | politics and everything that's going on, why would I want to
        | spend more than the absolute minimum on my cellphone? There are
        | plenty of wholesome things to spend money on other than tech.
        
       | xg15 wrote:
       | > _In the world of computer security this technology has a name,
       | it's called "a backdoor." A well-documented and well-intended
       | backdoor, but still a backdoor. Installed and enabled by default
       | on millions of devices around the world._
       | 
       | Sorry, but that backdoor has already existed for a long time. It
       | exists in every IoT gadget, smart car and other connected device
       | that phones home to its vendor and can receive arbitrary firmware
       | updates. It exists for every app and every desktop software that
       | will automatically update itself in the name of "evergreen
       | software".
       | 
       | This is just the first time someone is publicly making use of the
       | backdoor.
        
       | querez wrote:
       | I have a newborn at home, and like every other parent, we take
       | thousands of pictures and videos of our newest family member. We
       | took pictures of the very first baby-bath. So now I have pictures
       | of a naked baby on my phone. Does that mean that pictures of my
       | newborn baby will be uploaded to Apple for further analysis,
       | potentially stored for indefinite time, shared with law
       | enforcement?
        
         | 908B64B197 wrote:
         | Wait until someone manages to create an image (white noise)
         | that's a hash collision for anything in that database. And then
         | starts spamming random strangers via airdrop.
         | 
         | Enjoy explaining why your mugshot and arrest record had these
         | charges attached to it!
         | 
         | (Actually, in this case the prosecution would probably use the
         | other pictures on the phone that were not detected by the
         | scanning tool as a way to get a guilty plea deal!)
        
           | GuB-42 wrote:
           | Assuming it is possible (I think it is), there is a manual
           | verification process if you have a match. And obviously, the
           | white noise will be rejected, like all pictures that do not
           | look remotely like the original.
           | 
           | But it can be a form of denial of service: saturate the
           | system with hash collisions so that people can't keep up.
        
           | FabHK wrote:
           | It would have to be a number of pictures that are flagged,
           | and after that threshold is exceeded, they (more precisely,
           | their "visual derivative") are reviewed by a human. So, no
           | mugshot and no arrest record, even if you choose to accept
           | any number of pictures sent from random strangers via
           | airdrop.
        
         | bouncycastle wrote:
         | My understanding is that you should not upload these photos to
         | the cloud anyway. The cloud is not your computer and who knows,
          | maybe Apple engineers might be snooping on them, or there could
          | be a hack, and so on. Putting them on the cloud is like sharing
          | them with Apple.
        
         | s5300 wrote:
         | So... this is the way I understand it, which the general public
         | will never have the attention span to understand, so it doesn't
         | fucking matter one bit.
         | 
          | LEOs/the FBI/every other institution/group that deals with
          | child pornography and abuse have teams that go through a near-
          | infinite number of pictures and videos of CP/etc.
         | 
         | These are then marked by said people as either - yes,
         | CP/Abuse/etc - or marked false positive.
         | 
         | Once marked as what they're after, they're uploaded to a shared
         | database between all groups involved.
         | 
          | _Only_ what is in these worldwide national databases is what's
          | going to be checked against. Your new pictures of your
         | children will have obviously never made their way to any of
         | these groups as they've never been shared/distributed in any
         | areas of the internet/etc these people work in to track down
         | trafficking rings (well, I'd hope you're not selling pictures
         | of your children to them).
         | 
         | This is the way I understand it. I admit I haven't looked into
         | it that much. If it's anything different than what I've said,
         | then yeah, it's probably fucked. I don't get what people don't
         | understand about checking against a database though. No, your
          | new pictures of whatever are not in this pre-existing database.
        
         | dev_tty01 wrote:
         | No. The CSAM (Child Sexual Abuse Material) scanning is
         | comparing hashes of photos about to be uploaded to iCloud
         | against a specific set of images at NCMEC (National Center for
         | Missing and Exploited Children) which are specific to missing
         | and exploited children. It is not machine learning models
         | looking for nudes or similar. It is not a generalized
         | screening. If enough matched images are found, the images are
         | flagged for manual verification. If the manual verification
         | confirms that the images match specific images in the NCMEC
         | database, law enforcement is informed.
         | 
         | Be aware that almost all cloud providers screen photos.
         | Facebook reported 20 million images in 2020, Google reported
         | half a million. Dropbox, Box, and many, many others report
         | images. See
         | https://www.missingkids.org/content/dam/missingkids/gethelp/...
         | to see a complete list of companies that screen and report
         | images.
         | 
         | The other thing Apple announced which is completely separate
         | from the CSAM photo scanning is additional parental controls
         | for the Messages app. If a parent opts in for their under-13
         | children, a machine learning model will look for inappropriate
         | material and warn the child prior to showing the image. The
         | child is also told that their parent will be flagged if the
         | child looks at it anyway. For 13-18 year olds whose parents
         | opted in, the teen is warned first about the content. If the
         | teen continues past the warning the image is shown and no
         | further action is taken. Parents are not flagged for children
         | 13 and over. As I said, this is a parental control for pre-
         | adult kids. It requires opt-in from the parents and has no law
         | enforcement implications.
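          | 
          | To make the CSAM flow concrete, here is a minimal sketch of
          | the matching logic described above. It is illustrative only:
          | the real system uses NeuralHash (a perceptual hash) plus a
          | threshold secret sharing scheme, and the hash list and
          | threshold value below are placeholders, not Apple's.
          | 
          |     # Illustrative sketch, not Apple's implementation.
          |     KNOWN_HASHES = set()     # hypothetical on-device list
          |     REVIEW_THRESHOLD = 30    # hypothetical value
          | 
          |     def count_matches(hashes):
          |         """Count photos whose hash is on the known list."""
          |         return sum(h in KNOWN_HASHES for h in hashes)
          | 
          |     def flag_for_human_review(hashes):
          |         # Below the threshold nothing is reported; above it,
          |         # a reviewer checks visual derivatives before anything
          |         # is sent on to NCMEC.
          |         return count_matches(hashes) >= REVIEW_THRESHOLD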
        
           | throwaway212135 wrote:
           | I am not sure the right questions are being asked.
           | 
            | 1. Who is adding these photos to NCMEC?
            | 
            | 2. How often are these photos added?
            | 
            | 3. How many people have access to these photos - both adding
            | and viewing?
            | 
            | Everyone is focused on Apple and no one is looking at NCMEC.
            | If I wanted to plant a Trojan horse, I would point everyone
            | towards Apple and perform all of the dirty work on the NCMEC
            | end of things.
        
             | jet_32951 wrote:
              | Exactly. An unknown mechanism adds hashes to an NGO subject
              | to exactly what conditions?
             | 
             | This initiative makes me extremely leery of black boxes, to
             | the extent that any algorithm between subject and
             | accusation had damned well better be explainable outside
             | the algorithm; else I as a jury member am bound to render a
             | "not guilty" verdict.
        
               | nicce wrote:
                | Their system needs real images in the training phase,
                | because they are building the system which produces the
                | hashes. Someone at Apple must confirm that the correct
                | photos are indeed flagged, at least in the beginning.
                | 
                | We don't really know how adding new hashes works. Does
                | NCMEC have the whole new algorithm and just drag-and-drop
                | new images in? Hopefully not like that.
        
           | ok123456 wrote:
            | The correct answer is a well-qualified "Maybe." The hashes
            | are fuzzy outputs of a neural network. It's impossible to
            | know what will cause a false positive.
        
           | aczerepinski wrote:
           | Comparing hashes reminds me of this announcement from a few
           | years ago that Google had produced a SHA1 collision:
           | https://security.googleblog.com/2017/02/announcing-first-
           | sha...
           | 
           | Can you imagine the chaos of a successful collision matching
           | some explicit material being sent as a prank or targeted
           | attack?
        
             | benlivengood wrote:
             | No chaos. The photos would be reported, reviewers would say
             | "that's weird" since the false positive was obviously
             | harmless and the industry would eventually switch to a
             | different hash method while ignoring the false positives
             | generated by the collision. If there were a flood of false
             | positive images being produced the agencies would work
             | faster to come up with a new solution, not perform mass
             | arrests.
        
               | farmerstan wrote:
               | Right. Kind of like how copyright violations on YouTube
               | are double checked and the humans say "that's weird" and
               | deny the request. Or maybe they will just report
               | everything and let the law work everything out. If
               | they're innocent they have nothing to worry about, right?
        
         | sandworm101 wrote:
         | Yes, if they wind up part of a child porn investigation. Your
         | cloud account gets hacked. Some perv gets your images. He is
         | then arrested and his "collection" added to the hash
         | database... including your family photos.
         | 
         | Context often matters more than the nature of the actual
          | content. Police acquire thousands of images with little hope of
         | ever knowing where they originated. If they are collected by
         | pervs, and could be construed as illegal in the hands of pervs,
         | the images become child porn and can be added to the databases.
        
           | snowwrestler wrote:
           | It's worth pointing out that this could happen with any
           | Internet-attached photo storage, and pre-dates Apple's
           | announcement.
           | 
           | What Apple announced is a new system for reading the existing
           | hash lists of known CSAM images and doing the comparison on
           | the device as part of the iCloud upload, rather than on the
           | server after upload.
        
           | nicce wrote:
            | Actually, we don't know yet whether you can access your
            | photos from the web anymore after this update, because of the
            | E2EE-"like" implementation.
            | 
            | The protocol is rather device-specific (while allowing
            | multiple devices), so hacking into an iCloud account might
            | not be enough to access the photos. So, things get
            | complicated.
        
         | dathinab wrote:
          | Unlikely, except if you send them to an iPhone which is
          | registered with a "child" account.
         | 
         | Apple uses two different approaches:
         | 
         | 1. Some way to try to detect _known_ child pornographic
         | material, but it's fuzzy and there is no guarantee that it
         | doesn't make mistakes like detecting a flower pot as child
         | porn. But the chance that your photos get "miss detected" as
         | _known_ child pornographic material shouldn't be too high. BUT
         | given how many parents have IPhones it's basically guaranteed
         | to happen from time to time!
         | 
          | 2. Some AI child porn detection on child accounts, which is not
          | unlikely to label such innocent photos as child porn.
        
           | fossuser wrote:
           | Even in the child account case it's not sent to Apple - it
           | alerts parent accounts in the family. It's also just nudity
           | generally, more akin to garden variety parental control
           | content filtering.
           | 
           | The child account iMessage thing is really entirely separate
           | from the CSAM related iCloud announcement. It's unfortunate
           | people keep confusing them.
        
         | fortenforge wrote:
         | Lots of people responding to this seem to not understand how
         | perceptual hashing / PhotoDNA works. It's true that they're not
         | cryptographic hashes, but the false positive rate is
         | vanishingly small. Apple claims it's 1 in a trillion [1], but
         | suppose that you don't believe them. Google and Facebook and
         | Microsoft are all using PhotoDNA (or equivalent perceptual
         | hashing schemes) right now. Have you heard of some massive
         | issue with false positives?
         | 
         | The fact of the matter is that unless you possess a photo that
         | exists in the NCMEC database, your photos simply will not be
         | flagged to Apple. Photos of your own kids won't trigger it,
         | nude photos of adults won't trigger it; only photos of already
          | known CSAM content will trigger it (and even then, Apple
          | requires a specific threshold of matches before a report is
          | triggered).
         | 
         | [1] "The threshold is selected to provide an extremely low (1
         | in 1 trillion) probability of incorrectly flagging a given
         | account." Page 4 of https://www.apple.com/child-
         | safety/pdf/CSAM_Detection_Techni...
        
           | ummonk wrote:
           | To be clear, it's 1 in 1 trillion per account. 1 in 1
           | trillion per photo would potentially be a more realistic
           | risk, since some people take tens of thousands of photos.
        
           | andrei_says_ wrote:
           | Who looks at the photos in that database? How do we know it
           | is a trustworthy source? That it doesn't contain photos of
           | let's say activists or other people of interest unrelated to
           | its projected use?
        
           | hsn915 wrote:
           | I think most people don't upload to facebook pictures of
           | their kids taking a bath? But they more than likely store
           | such pictures on their phones/laptops.
        
           | shawnz wrote:
           | The 1 trillion figure is only after factoring in that you
           | would need multiple false positives to trigger the feature.
           | It's not descriptive of the actual false positive rate of the
           | hashing itself.
        
           | farmerstan wrote:
           | How do new hashes get added to this database? How do we know
           | that all the hashes are of CSAM? Who is validating it and is
           | there an audit trail? Or can bad actors inject their own
           | hashes into the database and make innocent people get
           | reported as pedophiles?
        
           | akersten wrote:
           | This is all behind a huge, neon-flashing-lights asterisk of
           | "for now."
           | 
           | How long until they try to machine-learn based on that
           | database? The door's open.
        
             | fortenforge wrote:
             | Apple previously stored photo backups in their cloud in
             | cleartext. The door was always open. At some point if you
             | are providing your personal images for Apple to store, you
             | have to exercise a modicum of trust in the company. If you
             | don't trust Apple, I suggest you don't use an iPhone.
        
           | [deleted]
        
           | FabHK wrote:
           | Probability of a false positive for a given image = p
           | 
           | Probability of N false positives (assuming independence) =
           | p^N
           | 
           | Threshold N is chosen by Apple such that p^N < 10^-12, or N
           | log p < -12 log 10, or N > -12 log(10)/log(p) [since log(p) <
           | 0, since p < 1].
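            | 
            | A quick code sketch of that calculation (p is an assumed,
            | illustrative per-image false-positive rate; Apple has not
            | published one):
            | 
            |     def min_threshold(p, target=1e-12):
            |         """Smallest N with p**N below the target account-
            |         level false-positive probability (independence
            |         assumed)."""
            |         n = 1
            |         while p ** n > target:
            |             n += 1
            |         return n
            | 
            |     # e.g. min_threshold(0.02) == 8
            |     #      min_threshold(0.0005) == 4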
        
           | [deleted]
        
           | [deleted]
        
           | stickfigure wrote:
           | The false positive rate for any given image is not 1 in a
           | trillion. Perceptual hashing just does not work like that. It
           | also suffers from the birthday paradox problem - as the
           | database expands, and the total number of pictures expands,
           | collisions become more likely.
           | 
           | The parent poster does make the mistake of assuming that
           | other pictures of kids will likely cause false positives.
           | Anything could trigger a false positive - especially flesh
           | tones. Like, say, the naughty pictures you've been taking of
           | your (consenting) adult partner. I'm sure Apple's outsourced
           | low-wage-country verification team will enjoy those.
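            | 
            | To put rough numbers on the first point: under an
            | independence assumption, the expected number of false
            | matches grows linearly with both the number of photos
            | scanned and the size of the hash database. All figures below
            | are made up purely for illustration:
            | 
            |     photos_scanned  = 1.5e12   # photos taken per year (US)
            |     database_size   = 1.0e8    # hashes on the list
            |     per_compare_fp  = 1e-22    # assumed per-comparison rate
            | 
            |     # Scales linearly in both factors; ~0.015 with these
            |     # made-up numbers.
            |     expected_false_matches = (
            |         photos_scanned * database_size * per_compare_fp
            |     )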
        
         | darkhorn wrote:
          | Just don't criticize your government in any way. Otherwise they
          | will find anything illegal to arrest you for, from crossing the
          | street on a red light to I don't know what. You will be fine
          | because there is a legal system and no one can put you in jail
          | for crimes you have not committed. Just look at Julian Assange,
          | or the random Joe in Belarus who was arrested for wearing a red
          | and white hat. The justice system is always on the innocent
          | people's side, without exception.
        
         | [deleted]
        
         | stevenicr wrote:
          | From my current understanding, that does occur with Microsoft
          | OneDrive (which is a default in many systems), but not with the
          | hash-matching thing Apple is currently proposing.
        
         | baal80spam wrote:
         | No, it doesn't work like that.
        
           | zionic wrote:
           | Yes it does, it uses fuzzy perceptual hashes not crypto
           | hashes.
           | 
           | So if your innocent baby pic looks similar enough to a
           | previously tagged child abuse image then YES, it will flag
           | you and send a copy to the feds.
           | 
           | And before you correct me, the Apple employee will see a
           | picture of your naked baby and hit "forward to NCMEC",
           | which... upon investigation is actually just the feds
        
             | outworlder wrote:
              | IF there are multiple matches, IF it's going to iCloud,
              | THEN a 'derivative image' will be shown for screening and,
              | IF deemed warranted, sent to NCMEC.
        
         | migueldeicaza wrote:
          | No, those photos will never be caught.
         | 
         | This only catches ownership of illegal photos.
        
         | slownews45 wrote:
          | If you don't choose to upload to iCloud, there's no upload to
          | Apple at all.
          | 
          | If you do choose iCloud upload (most do), your photos were
          | already being uploaded and stored, and may be available to law
          | enforcement.
          | 
          | If you do upload to iCloud, NOW they will be screened for
          | matches with "known" images in a database, and if you have more
          | than a threshold number of hits, you may be reported. This will
          | happen on device.
          | 
          | Apple will also scan photos in their cloud system from what I
          | can tell (though once on-device scanning is working, less
          | should land in the cloud).
          | 
          | Note that it is HIGHLY likely that Google Photos / Facebook /
          | Instagram and others will be, or are already, doing similar
          | scanning and reporting. I've heard millions of reports go out
          | in a year.
        
           | zionic wrote:
            | Disabling iCloud does not remove the scanning system or its
            | database from your phone.
        
             | shapefrog wrote:
             | Not syncing your contacts to icloud does not remove the
             | uploading system and its components from your phone.
             | 
             | Disabling iCloud does not remove the uploading system from
             | your phone.
             | 
             | Pressing end recording on a video does not remove the video
             | capture system from your phone.
        
           | cwkoss wrote:
            | Aren't the perceptual hashes based on a chunk of the image?
           | 
           | I wonder what the false positive rates are for:
           | 
           | - A random image against the DB of perceptual hashes
           | 
           | - Images of a baby's skin against the DB of perceptual hashes
           | 
           | It seems like the second would necessarily have a higher
           | false positive rate: similar compositions (contains baby's
           | skin) would more likely have similar chunks. Is it just a
           | little higher or several orders of magnitude higher?
           | 
            | I know hash collisions are rare, but I wonder how much less
            | rare they become with perceptual hashes.
        
             | fossuser wrote:
             | It's two factors, both the match on an image hash and an
             | unknown threshold of matches at which point the data gets
             | sent up. If the threshold is not met then nothing gets
             | notified (even if there is a match). Arguably this is why
             | this approach is better for privacy. Cloud matches would
             | not be able to have this extra threshold (in addition to
             | this model allowing e2ee on the cloud in the future).
             | 
             | I'd also like to know more about the specifics here, my
             | guess is that threshold value is pretty high (their 'one in
              | a trillion' comment notwithstanding). It's probably
             | targeting large CSAM dumps of matches which would not get
             | flagged by different images.
        
             | slownews45 wrote:
              | Absolutely - I think this is one of two key questions for
              | me. That is why I put "known" in quotes. It can't be an
              | exact match because it has to handle cropping, rotation,
              | resizing, etc.
              | 
              | Images then do get a manual review before a report is made,
              | which is good and may help provide feedback on the algos
              | being used.
              | 
              | Going to be hard though for Apple to set the second factor
              | too high - I'd say 5 maybe? It's hard to say you had matches
              | on potential CSAM and ignored them, I'd think.
        
             | still_grokking wrote:
             | According to
             | 
             | https://rentafounder.com/the-problem-with-perceptual-
             | hashes/
             | 
              | the false-positive rate will likely be high. Given the
              | billions of pictures going through this system, there are
              | likely going to be a lot of false accusations of child porn
              | possession (and such an accusation alone can ruin lives).
             | 
             | HN discussion of that article from a few days ago:
             | 
             | https://news.ycombinator.com/item?id=28091750
        
               | slownews45 wrote:
               | This is where the thresholding and manual review come in,
               | but could be a bit scary for sure.
        
         | fossuser wrote:
         | It's worth reading this, which is basically the only good
         | reporting I've seen on this topic:
         | https://daringfireball.net/2021/08/apple_child_safety_initia...
         | 
         | There are legitimate things to be concerned about, but 99% of
         | internet discussion on this topic is junk.
        
           | joe_the_user wrote:
           | _99% of internet discussion on this topic is junk._
           | 
           | And how is that?
           | 
           | It seems like the Gruber article follows a common formula for
           | justifying controversial approaches. First, "most of what you
           | hear is junk", then "here's a bunch of technical points
            | everyone gets wrong" (but where the wrongness might not change
            | the basic situation), then go over the non-controversial parts
            | and finally get to the controversial parts and give the
            | standard "think of the children" explanation. But if you've
            | cleared away all other discussion of the situation, you might
            | make these apologetics sound like new insight.
           | 
           | Is Apple "scanning people's photos"? Basically yes? They're
           | doing it with signatures but that's how any mass surveillance
           | would work. They promise to do this only with CSAM but they
           | previously promised to not scan your phone's data at all.
        
             | madeofpalk wrote:
             | But some of those technical points are important. Parent
             | comment was concerned that photos of their own kids will
              | get them in trouble - it appears the system was explicitly
              | designed to prevent that.
        
               | joe_the_user wrote:
                | The Daring Fireball article actually is a little
                | deceptive here. It goes over a bunch of things that won't
                | get parents in trouble and gives a rather couched
                | justification of the fingerprinting approach.
                | 
                | The question is whether an ordinary baby photo is likely
                | to collide with one of the CSAM hashes Apple will be
                | scanning for. I don't think Apple can give a definite no
                | here (Edit: how could they guarantee that a system that
                | finds disguised/distorted CSAM won't tag a random baby
                | picture with a similar appearance? And given such a
                | collision, the picture might be looked at by Apple and
                | maybe law enforcement).
               | 
               | Separately, Apple does promise only to scan things going
                | to iCloud for now. But their credibility no longer appears
               | high given they're suddenly scanning users' photos on the
               | users' own machines.
               | 
               | Edited for clarity.
        
               | FabHK wrote:
                | > how could they guarantee that a system that finds
                | disguised/distorted CSAM won't tag a random baby picture
                | with a similar appearance?
               | 
               | Cannot guarantee, but by choosing a sufficiently high
               | threshold, you can make the probability of that happening
               | arbitrarily small. And then you have human review.
               | 
               | > And given such collision, the picture might be looked
               | at by Apple and maybe law enforcement
               | 
               | No, not "the picture", but a "visual derivative".
        
           | refulgentis wrote:
           | It's also not even wrong in so many ways that it really
           | highlights how far DF has fallen over the years. Really ugly
           | stuff, handwaving about hashing and nary a mention of
           | perceptual hashing and collisions. Not a technology analysis
           | of any sort.
        
           | ursugardaddy wrote:
            | There's still a non-zero chance it triggers a no-knock raid
            | by the police that kills your family or pets.
            | 
            | It happens all the time.
        
             | lawkwok wrote:
             | Non-zero being technically true because of the subject
             | matter, but I don't see how Apple's system increases the
             | risk of authorities killing family or pets more than
             | server-side scanning.
        
               | merpnderp wrote:
                | Their neural hashing is new, and they claim it has a one
                | in a trillion collision rate. There are 1.5 trillion
                | images created in the US per year and something like 100
                | million photos in the comparison database. That's a heck
                | of a lot of collisions. And that's just a single year;
                | Apple will be comparing everyone's back catalog.
               | 
               | A lot of innocent people are going to get caught up in
               | this.
        
               | lawkwok wrote:
               | We'll have to wait and see how good their neural hashing
               | is, but just to clarify the 1 trillion number is the
               | "probability of incorrectly flagging a given account"
               | according to Apple's white paper.
               | 
               | I think some people think that's the probability of a
               | picture being incorrectly flagged, which would be more
               | concerning given the 1.5 trillion images created in the
               | US.
               | 
               | Source: https://www.apple.com/child-
               | safety/pdf/CSAM_Detection_Techni...
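                | 
                | A back-of-envelope illustration of why the distinction
                | matters (the 1.5 trillion figure is from the parent
                | comment; the account count is an assumption, not an
                | Apple number):
                | 
                |     photos_per_year = 1.5e12
                |     accounts        = 1e9    # assumed for illustration
                |     rate            = 1e-12  # Apple's stated figure
                | 
                |     # If the rate were per photo: ~1.5 false flags/year.
                |     per_photo_reading = photos_per_year * rate
                | 
                |     # Per account (what Apple states): ~0.001/year.
                |     per_account_reading = accounts * rate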
        
               | fossuser wrote:
               | I think you're wrong about the risk (the paper says per
               | account), but even so you need to compare it to the
               | alternatives.
               | 
               | Photos in iCloud are unencrypted and Apple checks for
                | CSAM on the unencrypted photos server side; they know of
               | all matches.
               | 
               |  _OR_
               | 
               | Photo hashes are checked client side and only if a
               | certain threshold of matches is passed does Apple get
               | notified at all (at which point there's a sanity check
                | for false positives by a person). This would allow all
                | photos on iCloud to be encrypted e2e.
               | 
               | Both only happen when iCloud photo backup is enabled.
               | 
               | The new method reduces the risk.
        
           | samename wrote:
           | John Gruber is biased because his brand is closely tied to
           | Apple's brand. Ben Thompson wrote a better review on the
           | topic: https://stratechery.com/2021/apples-mistake/
           | 
           | There's also the Op-Ed by Matthew Green and Alex Stamos,
           | cyber security researchers:
           | https://www.nytimes.com/2021/08/11/opinion/apple-iphones-
           | pri...
        
             | fossuser wrote:
             | They have a podcast together called Dithering which is
             | pretty good (but not free) - they're friends.
             | 
             | I think John's article is better than Ben's, but they're
             | both worth reading.
             | 
             | Ben takes the view that unencrypted cloud is the better
             | tradeoff - I'm not sure I agree. I'd rather have my stuff
             | e2ee in the cloud. If the legal requirements around CSAM
             | are the blocker then Apple's approach may be a way to
             | thread the needle to get the best of both worlds.
        
               | samename wrote:
               | Friends can disagree. Everyone has their own biases -
               | good and bad. I think it's always good to keep people's
               | biases in mind when reading their work.
        
               | fossuser wrote:
               | I agree, but it doesn't necessarily mean what they say is
               | wrong.
               | 
               | I like that they disagree - the issue doesn't have an
               | obviously correct answer.
        
               | [deleted]
        
               | AlexandrB wrote:
               | One logical conclusion of systems like this is that
               | modifying your device in any "unauthorized" way becomes
               | suspicious because you might be trying to evade CSAM
               | detection. So much for jail-breaking and right to repair!
               | 
               | I think I'd rather have the non-e2ee cloud.
        
               | fossuser wrote:
               | I don't really buy that - you could just turn off iCloud
               | backup and it'd avoid their current implementation.
        
               | echelon wrote:
               | And you think this will be the ultimate implementation?
               | 
               | Let the devil in, and he'll treat himself to tea and
               | biscuits.
        
               | fossuser wrote:
               | I think it's possible to have nuanced policy in difficult
               | areas where some things are okay and others are not.
        
               | sa1 wrote:
               | For me it's the worst of both worlds - e2ee has no
               | meaning if the ends are permanently compromised - and
               | there's no local vs cloud separation anymore which you
               | can use to delineate what is under your own control -
               | nothing's under your control.
        
               | fossuser wrote:
               | The end isn't really compromised with their described
               | implementation.
               | 
               | The only thing sent is the hash and signature and that's
               | only if there are enough matches to pass some threshold.
               | 
               | I don't really view that as 'permanently compromised' -
                | at least not in any way more serious than Apple's current
               | capabilities to compromise a device.
               | 
               | I think e2ee still has meaning here - it'd prevent Apple
               | from being able to see your photo content on their
               | servers.
               | 
               | This is a nuanced issue, I don't think there's an
               | obviously better answer and both outcomes have different
               | risks. [0]
               | 
               | [0]:
               | https://www.lesswrong.com/posts/PeSzc9JTBxhaYRp9b/policy-
               | deb...
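               | 
               | For anyone skimming, here's a toy sketch of that flow.
               | The hash function, the plain set lookup, and the
               | threshold value below are stand-ins for NeuralHash, the
               | blinded PSI matching, and Apple's unpublished
               | parameters:
               | 
               |   import hashlib
               | 
               |   # stand-in for the blinded NCMEC hash table
               |   KNOWN_HASHES = {"0f3a9c1d"}
               |   THRESHOLD = 30    # assumed; the real value isn't public
               | 
               |   def toy_perceptual_hash(photo: bytes) -> str:
               |       # stand-in for NeuralHash; a real perceptual hash
               |       # survives resizing and re-encoding, which an
               |       # ordinary digest does not
               |       return hashlib.sha256(photo).hexdigest()[:8]
               | 
               |   def make_voucher(photo: bytes) -> dict:
               |       # in the real protocol the device never learns
               |       # whether it matched (the comparison happens
               |       # blindly, via private set intersection)
               |       h = toy_perceptual_hash(photo)
               |       return {"hash": h, "matched": h in KNOWN_HASHES}
               | 
               |   def server_can_review(vouchers: list) -> bool:
               |       # nothing is decryptable or human-reviewed until
               |       # the matching vouchers for an account exceed the
               |       # threshold
               |       matches = sum(v["matched"] for v in vouchers)
               |       return matches > THRESHOLD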
        
               | sa1 wrote:
               | Yeah, and as argued in one of the blog posts - that's
               | just a policy decision - not a capability decision -
               | malleable to authoritarian countries' requests.
        
               | fossuser wrote:
               | Yes - and I agree that that's where the risk lies.
               | 
               | Though I'd argue the risk has kind of always been there,
               | given companies can ship updates to phones. You could
               | maybe argue it'd be harder to legally compel them to do
               | so, but I'm not sure there's much to that.
               | 
               | The modern 'megacorp' centralized software and
               | distribution we have is dependent on policy for the most
               | part.
        
               | matwood wrote:
               | That's the problem I had with Ben's post - it's _always_
               | been policy since Apple controls and distributes iOS.
        
               | fossuser wrote:
               | Yeah - the sense I got was he just liked the cleaner cut
               | policy of a hard stop at the phone itself (and he was
               | cool with the tradeoff of unencrypted content on the
               | server).
               | 
               | It does have some advantages - it's easier to argue (see:
               | the disaster that is most of the commentary on this
               | issue).
               | 
               | It also could in theory be easier to argue in court. In
               | the San Bernardino case - it's easier for Apple to
               | decline to assist if assisting requires them to _build_
               | functionality rather than just grant access.
               | 
               | If the hash detection functionality already exists and a
               | government demands Apple use it for something other than
               | CSAM it may be harder for them to refuse since they can
               | no longer make the argument that they can't currently do
               | it (and can't be compelled to build it).
               | 
               | That said - I think this is mostly just policy all the
               | way down.
        
               | sa1 wrote:
               | Yup, we can agree on that.
        
               | echelon wrote:
               | > The end isn't really compromised with their described
               | implementation.
               | 
               | They've turned your device into a dragnet for content the
               | powers that be don't like. It could be anything. They're
               | not telling you. And you're blindly trusting them to have
               | your interests at heart, to never change their promise.
               | You don't even know these people.
               | 
               | You seriously want to cuddle up with that?
        
               | fossuser wrote:
               | > "They've turned your device into a dragnet for content
               | the powers that be don't like. It could be anything.
               | They're not telling you"
               | 
               | They're pretty explicitly telling us what it's for and
               | what it's not for.
               | 
               | > "And you're blindly trusting them to have your
               | interests at heart, to never change their promise. You
               | don't even know these people."
               | 
               | You should probably get to work building your own phone,
               | along with your own fab, telecoms, networks - basically
               | the entire stack. There's trust and policy all over the
               | place. In a society with rule of law we depend on it. You
               | think your phone couldn't be owned if you were important
               | enough to be targeted?
        
               | amelius wrote:
               | > The only thing sent is the hash and signature and
               | that's if there are enough matches to pass some
               | threshold.
               | 
               | Not true. If there are enough matches, someone at Apple
               | will have a look at your pictures. Even if they are
               | innocent.
        
               | fossuser wrote:
               | I think just the one thumbnail of the matching image?
               | Just to make sure there isn't a false positive (they
               | argue one in a trillion, but I don't know if I buy that).
               | 
               | That's only if there are enough matches to trigger the
               | threshold in the first place; otherwise nothing is sent
               | (even if there are matches below that threshold).
               | 
               | Alternatively this is running on all unencrypted photos
               | you have in iCloud and all matches are known immediately.
               | Is that preferable?
        
               | drenvuk wrote:
               | I really don't understand how you're arguing as if you
               | don't see the bigger picture. Is this a subtle troll?
               | 
               | They are now scanning on the device. Regardless of how
               | limited it is in its current capabilities, those
               | capabilities are only prevented from being expanded by
               | Apple's current policies. The policies enacted by the
               | next incoming exec who isn't beholden to the promises of
               | the previous can easily erode whatever 'guarantees' we've
               | been given when they're being pressured for KPIs or
               | impact or government requests or promotion season or
               | whatever. This has happened time and again. It's been
               | documented.
               | 
               | I really am at a loss how you can even attempt to be fair
               | to Apple. This is a black and white issue. They need to
               | keep scanning for crimes off our devices.
               | 
               | So to answer your question, yes, it is preferable to
               | have them be able to scan all of the unencrypted photos
               | on iCloud. We can encrypt things beforehand if need be.
               | It is lunacy to have crime detecting software on the
               | device in any fashion because it opens up the possibility
               | for them to do _more_. The people in positions to ask for
               | these things always want more information, more control.
               | Always.
               | 
               | The above reads like conspiracy theory but over the past
               | couple of decades it has been proven correct. It's
               | honestly infuriating to see people defend what's going on
               | in any way shape or form.
        
               | fossuser wrote:
               | Frankly the distinction seems arbitrary to me.
               | 
               | This is a policy issue in both cases - policy can change
               | (for the worse) in both cases.
               | 
               | The comparison is about unencrypted photos in iCloud or
               | this other method that reveals less user information by
               | running some parts of it client side (only if iCloud
               | photos are enabled) and could allow for e2e encryption on
               | the server.
               | 
               | The argument of "but they could change it to be worse!"
               | applies to any implementation and any policy. That's why
               | the specifics matter imo. Apple controls the OS and
               | distribution, governments control the legislation (which
               | is hopefully correlated with the public interest). The
               | existing 'megacorp' model doesn't have a non-policy
               | defense to this kind of thing so it's always an argument
               | about policy. In this specific implementation I think the
               | policy is fine. That may not hold if they try to use it
               | for something else (at which point it's worth fighting
               | against whatever that bad policy is).
               | 
               | Basically what I said here:
               | https://news.ycombinator.com/item?id=28162418
               | 
               | This implementation as it stands reveals less information
               | about end users and could allow them to enable e2ee for
               | photos on their servers - that's a better outcome than
               | the current state (imo).
        
               | amelius wrote:
               | > I think the one thumbnail of the matching hash?
               | 
               | So it _is_ sending pictures? That makes your argument
               | quite a bit weaker.
               | 
               | > Is that preferable?
               | 
               | Nope, E2EE without compromises is preferable.
        
               | matwood wrote:
               | > So it is sending pictures? That makes your argument
               | quite a bit weaker.
               | 
               | Important to note this is only run on images going to
               | iCloud, so they are already being sent.
        
               | fossuser wrote:
               | I think the thumbnail is only sent when the threshold is
               | passed _and_ there's a hash match. The reason for that is
               | an extra check to make sure there is no false positive
               | based on the hash match (they claim one in a trillion,
               | but even ignoring that it's probably pretty rare, and
               | strictly better than having everything unencrypted on
               | iCloud anyway).
               | 
               | > Nope, E2EE without compromises is preferable.
               | 
               | Well that's not an option on offer and even that has real
               | tradeoffs - it would result in less CSAM getting
               | detected. Maybe you think that's the acceptable tradeoff,
               | but unless government legislatures also think so it
               | doesn't really matter.
               | 
               | This isn't the clipper chip, this is more about enabling
               | _more_ security and _more_ encryption by default but
               | still handling CSAM.
               | 
               | The CSAM issue is a real problem:
               | https://www.nytimes.com/interactive/2019/09/28/us/child-
               | sex-...
        
               | drenvuk wrote:
               | >Well that's not an option on offer and even that has
               | real tradeoffs - it would result in less CSAM getting
               | detected. Maybe you think that's the acceptable tradeoff,
               | but unless government legislatures also think so it
               | doesn't really matter.
               | 
               | It should and can be an option. Who cares what they offer
               | us. Do it yourself.
        
               | FabHK wrote:
               | > someone at Apple will have a look at your pictures
               | 
               | No, but at a "visual derivative"
        
           | slownews45 wrote:
           | Even the HN reporting / article linking / comments have been
           | surprisingly low quality, tending to fulminate and declaim
           | with little interesting conversation and tons of sweeping
           | assertions.
           | 
           | Linked articles and comments have said Apple's brand is now
           | destroyed, or that Apple is somehow committing child porn
           | felonies with this (the logical jumps and twisting needed to
           | get to these claims are very far from a plausible
           | interpretation).
           | 
           | How do you scan for CSAM in an E2EE system? That is the
           | basic question Apple seems to be trying to solve.
           | 
           | I'd be more worried about the encrypted hash DB being
           | unlockable - is it clear this DOES NOT have anything that
           | could be recreated into an image? I'd actually prefer NOT to
           | have E2EE and have Apple scan stuff server-side, and keep
           | the DB there.
        
             | still_grokking wrote:
             | From https://www.hackerfactor.com/blog/index.php?/archives/
             | 929-On...
             | 
             | > The laws related to CSAM are very explicit. 18 U.S. Code
             | SS 2252 states that knowingly transferring CSAM material is
             | a felony. (The only exception, in 2258A, is when it is
             | reported to NCMEC.) In this case, Apple has a very strong
             | reason to believe they are transferring CSAM material, and
             | they are sending it to Apple -- not NCMEC.
             | 
             | > It does not matter that Apple will then check it and
             | forward it to NCMEC. 18 U.S.C. SS 2258A is specific: the
             | data can only be sent to NCMEC. (With 2258A, it is illegal
             | for a service provider to turn over CP photos to the police
             | or the FBI; you can only send it to NCMEC. Then NCMEC will
             | contact the police or FBI.) What Apple has detailed is the
             | intentional distribution (to Apple), collection (at Apple),
             | and access (viewing at Apple) of material that they
             | strongly have reason to believe is CSAM. As it was
             | explained to me by my attorney, that is a felony.
             | 
             | Apple is going to commit child porn felonies according to
             | US law this way. This claim seems actually quite
             | irrefutable.
        
               | FabHK wrote:
               | > This claim seems actually quite irrefutable.
               | 
               | I don't think so.
               | 
               | Apple transfers the images to iCloud, yes, but before the
               | threshold of flagged photos is reached, Apple doesn't
               | know that there might be CSAM material among them. When
               | the threshold is exceeded and Apple learns about the
               | potential of CSAM, the images have been transferred
               | already. But then Apple does not transfer them any
               | further, and has a human review not the images
               | themselves, but a "visual derivative" that was in a
               | secure envelope (that can by construction only be
               | unlocked once the threshold is exceeded).
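               | 
               | The "can by construction only be unlocked once the
               | threshold is exceeded" part is typically built on
               | threshold secret sharing; a toy Shamir-style sketch of
               | that general idea (illustrative only, not Apple's actual
               | construction or parameters):
               | 
               |   import random
               | 
               |   PRIME = 2**127 - 1   # toy field, chosen for the demo
               | 
               |   def share_key(key, threshold, n):
               |       # random degree-(threshold-1) polynomial whose
               |       # constant term is the key; one share per voucher
               |       coeffs = [key]
               |       for _ in range(threshold - 1):
               |           coeffs.append(random.randrange(PRIME))
               |       shares = []
               |       for x in range(1, n + 1):
               |           y = 0
               |           for i, c in enumerate(coeffs):
               |               y = (y + c * pow(x, i, PRIME)) % PRIME
               |           shares.append((x, y))
               |       return shares
               | 
               |   def recover_key(shares):
               |       # Lagrange interpolation at x = 0; with fewer than
               |       # `threshold` shares this yields an unrelated
               |       # value, i.e. the key stays hidden
               |       key = 0
               |       for i, (xi, yi) in enumerate(shares):
               |           num = den = 1
               |           for j, (xj, _) in enumerate(shares):
               |               if i != j:
               |                   num = num * -xj % PRIME
               |                   den = den * (xi - xj) % PRIME
               |           key = (key + yi * num * pow(den, -1, PRIME)) % PRIME
               |       return key
               | 
               |   secret = 123456789   # stands in for the unlock key
               |   shares = share_key(secret, threshold=5, n=20)
               |   assert recover_key(shares[:5]) == secret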
        
               | rootusrootus wrote:
               | Apple isn't looking at the actual image, but a
               | derivative. Presumably their lawyers think this will be
               | sufficient to shield them from accusations of possessing
               | child porn.
        
               | still_grokking wrote:
               | "But look, I've re-compressed it with JPEG 80%. It's not
               | THAT picture!".
               | 
               | It would be interesting to hear what a court would say if
               | a child porn consumer tried to defend himself with this
               | "argument".
        
               | rootusrootus wrote:
               | Love 'em or hate 'em, it is hard to believe Apple's
               | lawyers haven't very carefully figured out what kind of
               | derivative image will be useful to catch false positives
               | but not also itself illegal CP. I assume they have in
               | fact had detailed conversations on this exact issue with
               | NCMEC.
        
               | still_grokking wrote:
               | Firstly, the NCMEC doesn't make the laws, so it can't
               | grant Apple any exceptional allowance.
               | 
               | Secondly, any derivative that is clear enough for an
               | Apple employee to make a definitive judgment about
               | whether something is CP would be subject to my argument
               | above. Also, just collecting such material is a felony.
               | 
               | I don't see any way around that. Promising some checks
               | before stuff gets reported for real is just a PR move to
               | smooth over the first wave of pushback. PR promises
               | aren't truly binding...
        
               | madeofpalk wrote:
               | Don't you think that Apple has their own attorneys and
               | lawyers?
        
               | still_grokking wrote:
               | Don't you think that telling people now that there will
               | be a "check" at Apple before things get reported to NCMEC
               | could be a PR lie to keep people calm?
               | 
               | They can easily say afterwards that they're "frankly"
               | required to report any suspicion directly to enforcement
               | agencies because "that's the law", and that the earlier
               | promise was an oversight.
               | 
               | That would be just the usual PR strategy for "selling"
               | something people don't like: selling it in small batches
               | works best. (If the batches are small enough, people
               | often don't even realize what the whole picture looks
               | like. Salami tactics are a tried-and-true tool for
               | something like that; they're used in day-to-day politics,
               | for example.)
        
               | laserlight wrote:
               | IMHO, what Apple is doing is not _knowingly_ transferring
               | CSAM material. Very strong reason to believe is not the
               | same as knowing. Of course it's up to courts to decide
               | and IANAL.
        
               | mdoms wrote:
               | Go ahead and click on some google results after searching
               | for child porn and see if that defence holds up.
        
               | laserlight wrote:
               | Can you elaborate how your reply relates to Apple's case
               | and my comment?
        
               | slownews45 wrote:
               | Ahh - an "irrefutable" claim that apple is committing
               | child porn felonies.
               | 
               | This is sort of what I mean and a perfect example.
               | 
               | People imagine that Apple hasn't talked to the actual
               | folks in charge, NCMEC.
               | 
               | People seem to imagine Apple doesn't have lawyers?
               | 
               | People go to the most sensationalist least good faith
               | conclusion.
               | 
               | Most moderation systems at scale use similar approaches.
               | Facebook is reporting tens of MILLIONS of images to
               | NCMEC: these get flagged by users and/or systems, and in
               | most cases Facebook then copies them, checks them through
               | a moderation queue, and submits them to NCMEC.
               | 
               | Reddit uses the sexualization of minors flags. In almost
               | all cases, even though folks may have strong reasons to
               | believe some of this flagged content is CSAM, it still
               | gets a manual look. Once they know, they act
               | appropriately.
               | 
               | So the logic of this claim about Apple's late-to-the-party
               | arrival at CSAM scanning is weird.
               | 
               | We are going to find out that instead of trying to bring
               | some kind of child porn charges against Apple, NCMEC and
               | politicians are going to be THANKING Apple, and may start
               | requiring others with E2EE ideas to follow a similar
               | approach.
        
               | mdoms wrote:
               | Sorry under which of these other moderation regimes does
               | the organisation in question transmit CSAM from a client
               | device to their own servers? To my knowledge Apple is the
               | only one doing so.
        
               | matwood wrote:
             | Apple is attaching a ticket to images as the user uploads
             | them to iCloud. If enough of these tickets indicate CSAM
             | and allow an unlock key to be built, they will be unlocked
             | and checked. It's still the user who has turned on iCloud
             | and uploaded the images.
        
             | zionic wrote:
             | You don't. That's the entire point of E2EE, the data
             | transferred is private between you and the recipient party.
        
             | nonbirithm wrote:
             | Another reminder that many parts of HN have their own
             | biases; they're just different than the biases found on
             | other networks.
             | 
             | Instead of exclusively focusing on the authoritarian
             | slippery slope like it's inevitable, it's worth wondering
             | first: why do the major tech companies show no intention of
             | giving up the server-side PhotoDNA scanning that has
             | already existed for over a decade? CSAM is still considered
             | illegal by half of all the countries in the entire world,
             | for reasons many consider justifiable.
             | 
             | The point of all the detection is so that Apple _isn't_
             | found liable for hosting CSAM and consequently implicated
             | with financial and legal consequences themselves. And
             | beyond just the realm of law, it's reputational suicide to
             | be denounced as a "safe haven for pedophiles" if it's not
             | possible for law enforcement to tell if CSAM is being
             | stored on third-party servers. Apple was not the best actor
             | to look towards if absolute privacy was one's goal to begin
             | with, because the requests of law enforcement are both
             | reasonable enough to the public and intertwined with
             | regulation from the higher powers anyway. It's the nature
             | of public sentiment surrounding this issue.
             | 
             | Because a third party insisting that user-hosted content is
             | completely impervious to outside actors also means that it
             | is possible for users to hide CSAM from law enforcement
             | using the same service, thus making the service criminally
             | liable for damages under many legal jurisdictions, I was
             | surprised that this debate didn't happen earlier (to the
             | extent it's taking place, at least). The two principles
             | seem fundamentally incompatible.
        
             | belorn wrote:
             | The encryption on the hash DB has very little to do with
             | recreating images. It is pretty trivial to make sure that
             | it is mathematically impossible to do (there just aren't
             | enough bytes, and hash collisions mean there is a
             | practically unlimited number of false positives).
             | 
             | My own guess is that the encryption is there so that people
             | won't have access to an up-to-date database to test
             | against. People who want to intentionally create false
             | positives could abuse it, and sites that distribute images
             | could alter them to automatically bypass the check. There
             | is also always the "risk" that some security researcher
             | may look at the database, find false positives from the
             | original source, and make bad press, as has happened with
             | block lists (who can forget the bonsai tree website that
             | got classified as child porn).
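             | 
             | To put rough numbers on "not enough bytes" (the 64-bit
             | size below is just a typical perceptual-hash length, not
             | Apple's exact output size):
             | 
             |   hash_bits = 64               # typical perceptual hash size
             |   image_bits = 100 * 100 * 8   # a tiny 100x100 8-bit image
             |   # on average 2**(image_bits - hash_bits) distinct images
             |   # share each hash value, so "inverting" a hash back into
             |   # the original image is meaningless
             |   print(2 ** (image_bits - hash_bits) > 10 ** 20000)  # True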
        
           | echelon wrote:
           | Gruber practically (no, perhaps _actually_ ) worships Apple.
           | He'd welcome Big Brother into his house if it came with an
           | Apple logo, and he'd tell us how we were all wrong for
           | distrusting it. He's not the voice to listen to this time,
           | and you shouldn't trust him to have your best interests at
           | heart.
           | 
           | People are furious with Apple, and there's no reason to
           | discount the completely legitimate concerns they have. This
           | is a slippery slope into hell.
           | 
           | It's a good thing congress is about to start regulating Apple
           | and Google. Maybe our devices can get back to being devices
           | instead of spy tools, chess moves, and protection rackets.
           | 
           | (read: Our devices are supposed to be _property_. Property is
           | something we fully own that behaves the way we want. It
           | doesn't spy on us. Property is something we can repair. And it
           | certainly is not a machination to fleece the industry by
           | stuffing us into walled and taxed fiefdoms, taking away our
           | control. Discard anything that doesn't behave like property.)
           | 
           | [edit: I've read Gruber's piece on this. It's wishy-washy,
           | kind of like watching a moderate politician dance on the
           | party line. Not the direct condemnation this behavior
           | deserves. Let's not take his wait-and-see approach with
           | Dracula.]
        
             | mistrial9 wrote:
             | > regulating Apple and Google
             | 
             | this is not strong safety for citizens
             | 
             | source: political history
        
             | acdha wrote:
             | > Gruber practically (no, perhaps actually) worships Apple.
             | He'd welcome Big Brother into his house if it came with an
             | Apple logo, and he'd tell us how we were all wrong for
             | distrusting it.
             | 
             | You mean the same Gruber who described the situation as
             | "justifiably, receiving intense scrutiny from privacy
             | advocates."? The one who said "this slippery-slope argument
             | is a legitimate concern"?
             | 
             | I'm having a hard time reconciling your pat dismissal with
             | the conclusion of his piece which very clearly rejects the
             | position you're attributing to him as grounds for
             | dismissal:
             | 
             | > But the "if" in "if these features work as described and
             | only as described" is the rub. That "if" is the whole
             | ballgame. If you discard alarmism from critics of this
             | initiative who clearly do not understand how the features
             | work, you're still left with completely legitimate concerns
             | from trustworthy experts about how the features could be
             | abused or misused in the future.
             | 
             | I mean, sure, know where he's coming from but be careful
             | not to let your own loyalties cause you to make a bad-faith
             | interpretation of a nuanced position on a complex issue.
        
               | shapefrog wrote:
               | If iCloud backup works as advertised - it backs up your
               | device.
               | 
               | However, if we consider the slippery slope, under
               | pressure from a shadow government, the contents of your
               | phone could have been uploaded to the CIA every day,
               | including live recordings 24 hours a day.
        
               | fossuser wrote:
               | Thanks - I couldn't have said it better.
        
           | Dah00n wrote:
           | Yes, but so is much in that link - or at least it is very
           | biased. This one is far better:
           | 
           | https://www.hackerfactor.com/blog/index.php?/archives/929-On.
           | ..
        
           | montagg wrote:
           | "If it works as designed" is I think where Gruber's article
           | does it's best work: he explains that the design is pretty
           | good, but the _if_ is huge. The slippery slope with this is
           | real, and even though Apple's chief of privacy has basically
           | said everything everyone is worried about is currently
           | impossible, "currently" could change tomorrow if Apple's
           | bottom line is threatened.
           | 
           | I think their design is making some really smart trade offs,
           | given the needle they are trying to thread. But it shouldn't
           | exist at all, in my opinion; it's too juicy a target for
           | authoritarian and supposedly democratic governments to find
           | out how to squeeze Apple into using this for evil.
        
           | Spooky23 wrote:
           | The EFF wrote a really shitty hit piece that deliberately
           | confused the parental management function with the matching
           | against hashes of illegal images. Two different things. From
           | there, a bazillion hot takes followed.
        
             | rootusrootus wrote:
             | Yeah I found the EFF's piece to be really disappointing,
             | coming from an organization I'm otherwise aligned with
             | nearly 100% of the time.
        
               | shapefrog wrote:
               | The EFF today is really not the organisation it was just
               | a few years ago. I don't know who made the bad hires, but
               | the reasoned takedowns have been replaced with hysterical
               | screaming.
        
               | matwood wrote:
               | > hysterical screaming
               | 
               | Given the political/societal climate, it probably gets
               | them more donations.
        
             | washadjeffmad wrote:
             | The EFF article refers to a "classifier", not just matching
             | hashes.
             | 
             | So, three different things.
             | 
             | I don't know how much you know about them, but this is what
             | the EFF's role is. Privacy can't be curtailed uncritically
             | or unchecked. We don't have a way to guarantee that Apple
             | won't change how this works in the future, that it will
             | never be compromised domestically or internationally, or
             | that children and families won't be harmed by it.
             | 
             | It's an unauditable black box that places one of the
             | highest, most damaging penalties in the US legal system
             | against a bet that it's a perfect system. Working backwards
             | from that, it's easy to see how anything that assumes its
             | own perfection is an impossible barrier for individuals,
             | akin to YouTube's incontestable automated bans. Best case,
             | maybe you lose access to all of your Apple services for
             | life. Worst case, what, your life?
             | 
             | When you take a picture of your penis to send to your
             | doctor and it accidentally syncs to iCloud and trips the
             | CSAM alarms, will you get a warning before police appear?
             | Will there be a whitelist to allow certain people to "opt-
             | out for (national) security reasons" that regular people
             | won't have access to or be able to confirm? How can we know
             | this won't be used against journalists and opponents of
             | those in power, like every other invasive system that
             | purports to provide "authorized governments with technology
             | that helps them combat terror and crime[1]".
             | 
             | Someone's being dumb here, and it's probably the ones who
             | believe that fruit can only be good for them.
             | 
             | [1] https://en.wikipedia.org/wiki/Pegasus_(spyware)
        
               | FabHK wrote:
               | > When you take a picture of your penis to send to your
               | doctor and it accidentally syncs to iCloud and trips the
               | CSAM alarms, will you get a warning before police appear?
               | 
               | You would have to have not one, but N perceptual hash
               | collisions with existing CSAM (where N is chosen such
               | that the overall probability of that happening is
               | vanishingly small). Then, there'd be human review. But
               | no, presumably there won't be a warning.
               | 
               | > Will there be a whitelist to allow certain people to
               | "opt-out for (national) security reasons" that regular
               | people won't have access to or be able to confirm?
               | 
               | Everyone can opt out (for now at least) by disabling
               | iCloud syncing. (You could sync to another cloud service,
               | but chances are that then they're scanned there.)
               | 
               | Beyond that, it would be good if Apple built it
               | verifiably identically across jurisdictions. (If you
               | think that Apple creates malicious iOS updates targeting
               | specific people, then you have more to worry about than
               | this new feature.)
               | 
               | > How can we know this won't be used against journalists
               | and opponents of those in power, like every other
               | invasive system that purports to provide "authorized
               | governments with technology that helps them combat terror
               | and crime[1]".
               | 
               | By ensuring that a) the used hash database is verifiably
               | identical across jurisdictions, and b) notifications go
               | only to that US NGO. It would be nice if Apple could open
               | source that part of iOS, but unless one could somehow
               | verify that that's what's running on the device, I don't
               | see how that would alleviate the concerns.
        
             | dathinab wrote:
             | Two different things which are sold as one package _by
             | Apple_.
             | 
             | Two different things which are both known to be prone to
             | all kinds of misdetection.
        
             | merpnderp wrote:
             | Can you quote what you found confusing? I didn't see
             | anything that didn't agree with the Apple announcement
             | they linked in the piece.
        
           | bastardoperator wrote:
           | Like this part?
           | 
           | "The Messages feature is specifically only for children in a
           | shared iCloud family account. If you're an adult, nothing is
           | changing with regard to any photos you send or receive
           | through Messages. And if you're a parent with children whom
           | the feature could apply to, you'll need to explicitly opt in
           | to enable the feature. It will not turn on automatically when
           | your devices are updated to iOS 15."
        
           | vondur wrote:
           | I still don't understand how this is allowed. If the police
           | want to see the photos on my device, then they need to get a
           | warrant to do so. Full stop. This type of active scanning
           | should never be allowed. I hope that someone files a lawsuit
           | over this.
        
             | amelius wrote:
             | You agreed to the EULA :)
        
               | vondur wrote:
               | I'm not sure EULA's can effectively bargain away US
               | constitutional protections.
        
             | fossuser wrote:
             | Speculating (IANAL) - it's only when iCloud photos is
             | enabled. I'd guess this is akin to a third party hosting
             | the files; I think the rules around that are more complex.
        
           | mdoms wrote:
           | You must be joking. It would be hard to find anyone more
           | biased in favour of Apple than Gruber.
        
         | samename wrote:
         | Unless those pictures are also in the NCMEC database, there
         | won't be a match.*
         | 
         | * As addressed in the comments below, this isn't entirely true:
         | the hash looks for visually similar pictures and there may be
         | false positives.
        
           | gambiting wrote:
           | Absolutely not true. Apple is using a similarity-based hash,
           | so if the NCMEC database contains a picture that's _similar_
           | to one that you have, it could produce a match even if it's
           | not the same. Apple says this isn't an issue, because a
           | person will look at your picture (yes, a random person
           | somewhere will look at the pictures of your newborn) and
           | judge whether they are pictures of child abuse or not. If
           | this unknown person thinks your picture shows child abuse,
           | you will be reported to NCMEC, and what happens then is
           | unknown - but it would likely result in some legal action
           | against you.
        
             | samename wrote:
             | Good point, thanks, updated my comment.
        
             | Spooky23 wrote:
             | Where's your evidence on this?
             | 
             | The NCMEC database and this hashing have been around for
             | like 15 years. I'm curious as to how you know this.
        
               | mattigames wrote:
               | False positives have been found, not because the photos
               | hold any similarities but because the hashes match:
               | https://www.hackerfactor.com/blog/index.php?/archives/
               | 929-On...
        
               | gambiting wrote:
               | Apple literally said in their own FAQ that they are using
               | a perceptual (similarity-based) hash and that their
               | employees will review images when flagged. If that's not
               | good enough (somehow), then even the New York Times
               | article about it says the same thing. What other evidence
               | do you need?
        
             | lawkwok wrote:
             | Keep in mind, this manual review only happens after Apple's
             | system detects multiple occurrences of matches. Until that
             | point, no human is alerted of matches nor does anyone see
             | how many matches there have been.
             | 
             | In a TechCrunch interview Apple said that they are going
             | after larger targets that are worth NCMEC's time.
        
               | zionic wrote:
               | Parents take a lot of photos of their kid. Like, _lots_.
        
               | jamesu wrote:
               | "Multiple occurrences of matches" could have definitely
               | been an issue for a friend of mine. When they took a
               | picture, they'd often go for the "blitz the subject with
               | the camera" approach, then never ended up deleting all
               | the bad pictures because they had hoarding tendencies.
        
             | vxNsr wrote:
             | > _Apple is using a similarity based hash, so if the NCMEC
             | database contains a picture that 's similar to one that you
             | have, it could produce a match even if it's not the same_
             | 
             | It's actually worse than this: if the hashes are similar,
             | then they'll get sent for review. Your picture could be a
             | picture of an abstract painting[0] which has no visual
             | similarity to anything in the db, but whose hash happens
             | to come out similar, and it too will be flagged.
             | 
             | [0] The reason I use this example is because someone posted
             | a bunch of abstract art that was flagged by the algo.
        
           | josefx wrote:
           | As far as I understand, they use some kind of hash. I
           | suspect their paper on avoiding hash collisions is right
           | next to the Nobel prize-winning description of the world's
           | first working perpetual motion machine.
        
             | potatoman22 wrote:
             | https://en.wikipedia.org/wiki/Perceptual_hashing
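             | 
             | For a concrete sense of the idea, here is a toy
             | average-hash - the simplest member of this family, and
             | nothing like Apple's NeuralHash (which is a learned
             | model), but it shows why visually similar images land on
             | nearby hashes:
             | 
             |   def average_hash(pixels):
             |       # pixels: an 8x8 grid of grayscale values; a real
             |       # implementation first shrinks and grayscales the
             |       # image (e.g. with Pillow)
             |       flat = [p for row in pixels for p in row]
             |       avg = sum(flat) / len(flat)
             |       bits = ''.join('1' if p > avg else '0' for p in flat)
             |       return int(bits, 2)
             | 
             |   def hamming_distance(a, b):
             |       # number of differing bits; a small distance means
             |       # "perceptually similar"
             |       return bin(a ^ b).count('1')
             | 
             |   img = [[10 * (r + c) for c in range(8)] for r in range(8)]
             |   tweaked = [row[:] for row in img]
             |   tweaked[0][0] += 3          # a tiny edit
             |   h1, h2 = average_hash(img), average_hash(tweaked)
             |   print(hamming_distance(h1, h2))   # prints 0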
        
           | Dah00n wrote:
           | That is not true. If you read what experts who actually do
           | something to stop CP (unlike Apple) say, there are proven
           | false positives.
           | 
           | https://www.hackerfactor.com/blog/index.php?/archives/929-On.
           | ..
        
             | samename wrote:
             | Thanks, I updated my comment.
        
             | FabHK wrote:
             | Yes, and you can put a number on the probability of that
             | happening, say a fixed p << 1. And then you can choose the
             | number of required matches before flagging, say N. And then
             | (assuming independence [1]) you have an overall probability
             | of p^N, which you can make _arbitrarily small_ by making N
             | sufficiently large. (I'm pretty sure that's how Apple came
             | up with their "1 in a trillion chance per year".) And then
             | you still have manual review.
             | 
             | [1] you could "help" independence by requiring a certain
             | distance between images you simultaneously flag.
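             | 
             | Back-of-the-envelope, with made-up numbers (neither the
             | per-image rate nor the threshold is public):
             | 
             |   p = 1e-3       # assumed per-image false-positive rate
             |   N = 30         # assumed match threshold before flagging
             |   print(p ** N)  # ~1e-90 assuming independence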
        
         | [deleted]
        
       | deeblering4 wrote:
       | What would prevent someone from, for instance, printing off an
       | illegal photo, "borrowing" a disliked co-workers iCloud enabled
       | phone, and snapping a picture of the illegal picture with their
       | camera?
       | 
       | On iOS the camera can be accessed before unlocking the phone, and
       | wouldn't this effectively put illegal image(s) in the target's
       | possession without their knowledge?
        
         | [deleted]
        
         | randyrand wrote:
         | yes
        
         | fortenforge wrote:
         | These illegal photos are not trivial to obtain. Possessing (and
         | here, the printing step necessitates possession) these illegal
         | photos is in and of itself a crime in most relevant
         | jurisdictions.
         | 
         | But OK, let's say that you've found a way to get the photos and
         | you're comfortable with the criminal implications of that. At
         | that point why don't you just hide the printed photos in your
         | coworker's desk? My point is that if you have a disgruntled
         | coworker who's willing to resort to heinous crimes in order to
         | screw you over, there's many different things they could do
         | that are less convoluted.
        
           | tick_tock_tick wrote:
           | It takes one minute on Tor to find enough to get anyone
           | thrown in jail; don't make it sound harder than it really
           | is. As for photos vs. printing: taking a photo reports it
           | for you, so you're never involved.
        
       | cebert wrote:
       | As much as the tech and security community has concerns and
       | objections to this policy change on the part of Apple, I'm
       | skeptical there will be any notable impact to Apple's revenue and
       | future sales.
        
         | hypothesis wrote:
         | I remember people were saying the same thing about Linux on
         | the desktop, yet we have viable alternatives to proprietary
         | OSes.
         | 
         | Yes, someone will have to struggle to get us there, but we
         | will have an alternative if we don't give up.
        
       | tehjoker wrote:
       | I'd like to point out that the government (and by proxy Apple,
       | companies care even less) doesn't give a shit about children.
       | They are advocating a policy of mass infection, they didn't give
       | a crap about children in Flint drinking toxic water, etc. If they
       | cared about kids, they would care a lot about things that
       | physically hurt and kill them. This means we don't have to take
       | their stated reasons for this at all seriously.
       | 
       | Apple, if you care about children, you'll pay more than your
       | legally owed taxes and push for improved access to education,
       | nutrition, and free child care. They're only interested in the
       | avenue that coincidentally dramatically increases their
       | surveillance powers and the powers of the government.
       | 
       | Weird, can't figure that one out.
        
       | johnvaluk wrote:
       | My common sense is tingling, telling me that Apple's eventual
       | move will be one of malicious compliance, finally implementing
       | e2ee in a way that provides them with plausible deniability and
       | users with a much-desired privacy enhancement.
        
       | atbpaca wrote:
       | #NotUpdating to iOS15, also #NotUpgrading this time until further
       | notice.
        
       | ur-whale wrote:
       | > The Deceptive PR
       | 
       | Tautology
        
       | phkahler wrote:
       | I really don't get all the hype. This is not a backdoor as it's
       | called in TFA. It's not Apple "reaching into your device". It is
       | literally checking for specific images and reporting their
       | presence to Apply if found. It's not using AI to analyze your
       | photos or anything like that. It's looking for specific images,
       | and only prior to uploading them to iCloud. It won't even flag
       | your own nasty images because the hash won't match.
       | 
       | Note: The above assumes we're talking about a typical hash of
       | the data and not an image-analysis "hash" of what it thinks the
       | content is. This is supported by the language they use.
       | 
       | Yes, it's a bit big-brother. But I already assume the authorities
       | can fairly easily get ALL your iCloud data if they ask Apple the
       | right way.
       | 
       | You know what's creepy AF? Having a private conversation and
       | getting facebook ads the next day relating to the topic. Talk
       | about an acquaintance acting schizophrenic and get ads about
       | medications and treatment for that? Creepy as fuck. And that was
       | on the wife's iPhone - I have Android and didn't get that stuff,
       | but I seem to remember similar incidents where I got ads for
       | stuff talked about. That's serious voice analysis, not just
       | checking a file hash, and it happens when your phone is in your
       | pocket.
        
         | sureglymop wrote:
         | I think the perceived problem is the database of hashes to be
         | matched. If this is a database that can be contributed to by
         | other parties, it can be used to "frame" people for a crime.
         | This can then be used, for example, to get rid of political
         | dissidents etc. Since there are just hashes in the database,
         | this should be fairly easy.
         | 
         | This is what I THINK people are worried about. I don't have an
         | Apple device so I haven't really fact checked all of this.
        
         | refulgentis wrote:
         | It's a dam-break moment for Apple on privacy - until now,
         | people could be unaware of, or wave away, concerns about storing
         | iCloud data exclusively on government servers because it was
         | China, promoting ""antivirus"" software on boot because it was
         | Russia, not encrypting iCloud backups because who knows maybe
         | that's just been a complete oversight for years even after
         | detailed reporting on how Apple used it to escape arguing about
         | encryption after San Bernardino, and additionally made it super
         | easy to get the backups, just show me a warrant
         | 
         | Now it's on _your_ phone in _your_ country. No handwaving, no
         | sticking one's head in the sand. The only hopeful argument being
         | posited is that somehow this will "make iCloud more private in
         | the long run"
        
         | daveidol wrote:
         | Wait are you saying you get ads based on things you converse
         | about out loud in the real world because your phone is
         | listening to everything in your pocket? You know that is a myth
         | and isn't true, right?
        
           | shapefrog wrote:
           | You're saying one conspiracy is 100% true but another one is
           | laugh-out-loud impossible?
        
         | zepto wrote:
         | I agree with you, but I want to correct you - they _are_ using
         | an image analysis hash, not a cryptographic hash. However it
         | doesn't change the logic of your argument. They require
         | multiple positive matches, and they also require a visual
         | derivative of the CSAM images to match.
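         | 
         | A quick way to see why that correction matters - sha256 below
         | stands in for the "typical hash of data" reading, and the
         | perceptual behaviour is only described in the comments, since
         | NeuralHash isn't something you can casually import:
         | 
         |   import hashlib
         | 
         |   photo = b"the same photo bytes" + bytes(1000)
         |   edited = bytearray(photo)
         |   edited[42] ^= 1              # flip a single bit
         | 
         |   # a cryptographic hash changes completely on a 1-bit edit...
         |   print(hashlib.sha256(photo).hexdigest()[:16])
         |   print(hashlib.sha256(bytes(edited)).hexdigest()[:16])
         |   # ...while a perceptual (image-analysis) hash is designed to
         |   # stay nearly the same for re-encoded, resized, or lightly
         |   # edited images - which is what makes "visually similar"
         |   # matching, and the occasional collision, possible at all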
        
         | mustacheemperor wrote:
         | The 'hype' seems to me to be the valid-sounding concern that
         | this tool creates the ability to "look for specific images
         | prior to uploading them to iCloud" on Apple devices, and that
         | while today that capability is exclusively applied to save the
         | children, that tool could later be repurposed by authoritarian
         | regimes or other human rights abusers.
         | 
         | Speaking towards what we can assume the authorities can do, we
         | know the FBI cannot compromise an encrypted iPhone because they
         | attempted to force Apple to do that via court order. From what
         | I can tell, the objections to Apple's expanded protection
         | tooling are similar to the objections to adding backdoors to
         | iPhone encryption so the FBI can break into devices used by
         | criminals. It's great to stop the crime today, but how could
         | this be repurposed tomorrow.
        
       | 1vuio0pswjnm7 wrote:
       | "The hypothesis that I have is that Apple wishes to distance
       | itself from checking users' data. They've been fighting with the
       | FBI and the federal government for years, they've been struggling
       | with not reporting CSAM content to the NCMEC, they don't want to
       | be involved in any of this anymore."
       | 
       | However there is close to zero evidence to support this idea. I
       | was just reading something the other day that directly
       | contradicted this; it suggested the relationship has been
       | excellent save for a single, well-publicised dispute over
       | unlocking an iPhone. In other words, the publicly aired dispute
       | was an anomaly, not representative of the underlying
       | relationship.
       | 
       | Even more, unless the pontificator works for Apple or the
       | government, she is not in a good position to summarise the
       | relationship. Plainly put, it is not public information.
       | 
       | What does such baseless speculation achieve? Is it like spreading
       | a meme? I don't get it.
       | 
       | "The worst part is: how do I put my money where my mouth is? Am I
       | going back to using Linux on the desktop (2022 will be the year
       | of Linux on the desktop, remember), debugging wifi drivers and
       | tirelessly trying to make resume-from-suspend work? Am I getting
       | a Pixel and putting GrapheneOS on it like a total nerd? FUCK."
       | 
       | Is having a computer with closed-source wifi drivers and working
       | resume-from-suspend more important than having a computer with an
       | open OS that does not include an intentional backdoor?
       | 
       | Maybe the problem is not how to put your money where your mouth
       | is, it's how to put your mouth where your money is. What does
       | GrapheneOS cost? Maybe this is not about money.
       | 
       | Options like GrapheneOS, even the mere idea of GrapheneOS, i.e.,
       | that there can be alternatives to BigTech's offerings, get buried
       | underneath Apple marketing. Much of that marketing Apple gets for
       | free. It comes from people who do not work for Apple.
       | 
       | Bloggers and others who discuss computers can help change that.
       | They can also help Apple sail through any criticism (and they
       | do).
        
       | severak_cz wrote:
       | > The hypothesis that I have is that Apple wishes to distance
       | itself from checking users' data.
       | 
       | This is the best explanation of the whole situation I have
       | read.
        
         | hu3 wrote:
         | And in the process hand over more user data to tyrants.
         | 
         | Surely they know this will be abused to check user data before
         | it is uploaded to iCloud. All it takes is a willing government.
        
           | crummy wrote:
           | How is that different from how things work now?
        
       | [deleted]
        
       | rootsudo wrote:
       | https://www.missingkids.org/theissues/end-to-end-encryption
       | 
       | Geez.
        
         | still_grokking wrote:
         | The main statement on that site once more fails to explain why
         | real criminals just wouldn't use software that isn't backdoored.
         | 
         | Indeed, it just looks like another move in the current crypto-
         | wars.
        
       | seph-reed wrote:
       | I really don't see why the scanning would ever be done on the
       | phone instead of on iCloud if it only affects iCloud images.
       | 
       | But I do have guesses why.
        
         | pvarangot wrote:
         | This article speculates that it's because Apple is not
         | scanning on iCloud, in order to respect their privacy policy:
         | https://www.hackerfactor.com/blog/index.php?/archives/929-On...
         | 
         | Apple's report count to the NCMEC is really low so it's
         | probably true that they are not scanning on iCloud unless they
         | receive a warrant.
        
           | wyager wrote:
           | "In order to respect our privacy policy, we need to even more
           | egregiously violate your privacy in a weird lawyer-y way"
        
         | czzr wrote:
         | Only semi-good reason is it would enable E2E encryption in the
         | cloud while still allowing detection of CSAM.
        
           | roody15 wrote:
           | Except despite this being repeated over and over... Apple has
           | not said anything about E2E
        
             | zepto wrote:
             | It's also being repeated over and over that apple is doing
             | this so they can later do some more evil scanning. They
             | haven't said anything about that either.
        
             | macintux wrote:
             | Apple almost never talks about features like that until
             | they're ready, so while you're correct, it doesn't mean
             | much.
        
               | wyager wrote:
               | It's been leaked before that Apple folded under pressure
               | from the FBI not to add iCloud encryption for images.
        
               | ummonk wrote:
               | That would seem to back up the theory that they plan to
               | roll out E2EE and are adding on-device scanning first to
               | enable that.
        
             | czzr wrote:
             | As I said, the design enables this, if Apple chose to do
             | it. It remains to be seen if they will.
        
               | zionic wrote:
               | The design more plausibly enables total device
               | surveillance than questionable iCloud backups. (I refuse
               | to call a backdoored setup E2EE.)
        
               | zepto wrote:
                | That's silly. The design is so narrowly tailored to
                | scan for CSAM that nobody can use it for anything else.
        
               | FabHK wrote:
               | It all depends on what perceptual hashes you use. If
               | Apple can institute a process whereby those are tied to
               | the OS version, but not to the region, then it would be
               | impossible to impose jurisdiction-specific exceptions.
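                | 
                | One way to make that auditable (just a sketch under
                | that assumption, not something Apple has published;
                | PINNED_DB_DIGEST is hypothetical): pin a single digest
                | of the hash database to each OS build, identical
                | worldwide, so a region-specific list would be
                | detectable:
                | 
                |     import hashlib
                | 
                |     # Hypothetical digest published once per OS build.
                |     PINNED_DB_DIGEST = "0" * 64
                | 
                |     def db_matches_build(db: bytes) -> bool:
                |         # Any per-country database would produce a
                |         # different digest than the pinned one.
                |         d = hashlib.sha256(db).hexdigest()
                |         return d == PINNED_DB_DIGEST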
        
               | zepto wrote:
               | > It all depends on what perceptual hashes you use.
               | 
               | I'm talking about the mechanism as described, not a
               | hypothetical.
               | 
               | > If Apple can institute a process whereby those are tied
               | to the OS version, but not to the region, then it would
               | be impossible to impose jurisdiction-specific exceptions.
               | 
                | As it is, the mechanism they have built only works in
                | the US jurisdiction.
        
           | nullc wrote:
           | Apple is free to enable E2E encryption today, without the
           | backdoor.
        
         | outworlder wrote:
         | On-device scanning is used for the feature that warns teens
         | (and, if the child is 12 or younger, their parents).
         | 
         | The biggest mistake Apple has ever made was to roll out three
         | different features at once and announce them at the same
         | time. This is creating all sorts of confusion.
        
           | FabHK wrote:
           | And CSAM detection as well.
        
         | AlexandrB wrote:
         | That's the crux of it. Why bother with on-device
         | identification, unless one of:
         | 
         | a. Apple intends to E2E encrypt iCloud data.
         | 
         | b. This is intended to extend to _all_ photos on the device in
         | the future.
         | 
         | I'm hoping it's (a), but it's probably (b). And in either case
         | it sets a bad precedent for other companies to follow.
         | 
         | Edit: This also turns every jailbreak into a possible CSAM
         | detection avoidance mechanism, giving the government plausible
         | cover to treat them as serious, criminal actions. Apple would
         | probably love that.
        
           | still_grokking wrote:
           | Where is this stance coming from that Apple needs to break
           | E2E crypto to be "able" to "E2E encrypt iCloud data"?
           | 
           | That makes absolutely no sense. There is no such requirement
           | anywhere.
           | 
           | They could just E2E encrypt iCloud data. Period.
        
             | matwood wrote:
             | There is no requirement right now, but you only need to
             | look at what's happening in the US, UK, and EU to see the
             | battle setting up around E2EE. Apple may see this feature
             | as a way to quiet critics of E2EE. Hard to know if it will
             | be enough.
             | 
             | But I think it's safe to say that if Apple turned on E2EE
             | without any provision for things like CSAM detection, it
             | would help drive legislation that is likely more
             | heavy-handed.
        
             | FabHK wrote:
             | They could E2E encrypt iCloud, of course. The question is
             | whether they could do so while still staying on the right
             | side of the law.
        
               | still_grokking wrote:
                | Is there a law requiring device manufacturers to search
               | (without any warrant!) the devices of all their
               | customers?
               | 
                | How do hard drive manufacturers, for example, comply?
        
         | [deleted]
        
       | Barrin92 wrote:
       | >The worst part is: how do I put my money where my mouth is? Am I
       | going back to using Linux on the desktop (2022 will be the year
       | of Linux on the desktop, remember)
       | 
       | People really need to retire this meme. On the desktop,
       | particularly as a dev environment, Linux is completely fine at
       | this point. I can understand people not wanting to run a custom
       | phone OS, because that really is a ton of work, but for working
       | software developers Fedora, Ubuntu, or any other mainstream
       | distro is largely hassle-free at this point.
        
         | ajsnigrutin wrote:
         | I hate Ubuntu from the bottom of my heart for constantly
         | breaking and changing stuff that used to "just work", but
         | 99.999% of the time that means "background stuff" that "normal
         | users" never mess around with. For normal users, a "usb key ->
         | install -> next, next, next -> finish -> reboot" just works.
        
       ___________________________________________________________________
       (page generated 2021-08-12 23:00 UTC)