• Apple scanning for images on their storage system: OK...

    From John@21:1/5 to badgolferman on Fri Aug 6 15:15:25 2021
    XPost: comp.mobile.android, alt.comp.os.windows-10

    On Fri, 6 Aug 2021 12:57:36 -0000 (UTC), badgolferman wrote:
    > It makes me wonder how many of these images and videos were actually
    > watched by the people who developed these so-called "accurate hashes".
    > Also it seems these could easily be fooled by putting on a few articles
    > of clothing.

    It's a warrantless search, 24/7, on your own phone, without your permission!
    <https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf>

    Without you ever having committed a crime, without your consent, and without
    you ever logging into any Apple servers, Apple will forcibly and constantly
    perform warrantless surveillance, 24/7, on your own phone, of your own
    personal private data, without you ever needing to upload any of that data to
    Apple's servers.

    Before that unacceptable warrantless 24/7 surveillance of your private and
    personal data even begins, Apple will gladly accept from any government (now
    or in the future) whatever list of data that government deems it is
    interested in looking for, with no checks whatsoever on that data (Apple will
    accept anything, with no limits defined), and Apple will, always without your
    permission, put that limitless list on your phone.

    Completely without your consent, Apple is allowing multiple governments (of
    Apple's choosing!) to create lists of those governments' choosing, over which
    Apple will exercise absolutely no control (Apple can't even see inside the
    lists!). Apple will then, without your consent, scan your private and
    personal data against those lists, 24/7, on your own phone, even when you do
    not log into iCloud, and will automatically report you to those governments
    even if you have committed no crime (with absolutely no oversight
    whatsoever!).

    I agree with you that this is a disproportionate use of power. Trawling
    billions of iPhones, without consent and with absolutely zero oversight, just
    in case they find data some government wants to see, even when you have never
    committed any crime whatsoever, is criminal in and of itself.

  • From Bruce Horrocks@21:1/5 to Alan Baker on Mon Aug 9 00:57:07 2021
    XPost: misc.phone.mobile.iphone

    On 08/08/2021 19:04, Alan Baker wrote:

    > If you'd read up on the system Apple is planning to use, you'd learn
    > that false positives really aren't a problem...
    >
    > <https://www.apple.com/child-safety/pdf/Expanded_Protections_for_Children_Technology_Summary.pdf>
    >
    > ...and that's the opinion of people far more knowledgeable than you or I
    > on the subject:

    False positives will be a big problem for those that fall foul of them.

    Suppose, hypothetically, that the photos in the Julia Somerville case
    had been taken on an iPhone with this new scanning system in place.

    <https://en.wikipedia.org/wiki/Julia_Somerville>

    From the diagram on page 5 of the technology summary PDF linked above,
    her photos would be hashed and matched against the CSAM hashes. The hash mechanism is described as recognising scenes but makes no mention of recognising faces. So the pictures of her child in the bath could
    potentially match CSAM pictures of different children in baths.
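
    For illustration, the page-5 flow boils down to "hash the photo, then compare
    the hash against a list of known hashes". A minimal sketch of that idea in
    Python, using the open-source imagehash library's phash as a stand-in for
    Apple's unpublished NeuralHash and a plain Hamming-distance cutoff in place of
    Apple's private set intersection (every name and number below is assumed for
    illustration, not taken from the PDF):

        # Illustrative sketch only: phash stands in for NeuralHash, and an
        # in-memory list stands in for the blinded on-device hash database.
        from PIL import Image
        import imagehash

        MATCH_DISTANCE = 4  # max differing bits to call two hashes "the same picture" (arbitrary)

        def build_hash_database(known_image_paths):
            # In the real system only the hashes are distributed, never the images.
            return [imagehash.phash(Image.open(p)) for p in known_image_paths]

        def photo_matches(photo_path, hash_database):
            # True if the photo's hash is within MATCH_DISTANCE bits of any known hash.
            h = imagehash.phash(Image.open(photo_path))
            return any(h - known <= MATCH_DISTANCE for known in hash_database)

    A perceptual hash is designed so that re-encoded or resized copies of the same
    picture land within a few bits of each other; whether two different photos of
    similar scenes can also land that close is exactly the false-positive question
    at issue here.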

    This would flag up and an Apple reviewer would look at the photos. Now,
    please note carefully: the Apple reviewer **cannot see the original CSAM image** that the photo matched. Therefore the Apple reviewer cannot
    *compare* the photos. All the Apple reviewer can do is *evaluate* each
    flagged photo against written guidelines, based on their training. And
    since it's a picture of a child in a bath they can pretty much do only
    one thing, which is to agree and report it as a child abuse picture.

    At which point the phone's owner has their iCloud account suspended and
    is launched into an expensive and time-consuming legal process to prove
    their innocence.

    In one of the other posts someone said: "Well, what's your solution?"

    Well, I'd like to hear NCMEC's view on the system; understand from them
    how many referrals per year they expect to receive, what resources they
    are committing to reviewing cases, and what the process for reviewing
    cases is; and have them give a median timeline for resolving a case one
    way or another.

    A compromise might be that iCloud accounts aren't suspended until NCMEC
    staff have visually compared the flagged photos with the original CSAM
    image to confirm the match, and a criminal prosecution has been started.

    Suspending the iCloud account and thereby tipping off genuine
    paedophiles, giving them time to destroy evidence, is probably not what
    law enforcement would want anyway.

    --
    Bruce Horrocks
    Surrey, England

  • From nospam@21:1/5 to Horrocks on Sun Aug 8 20:06:41 2021
    XPost: misc.phone.mobile.iphone

    In article <dc2ebce0-b636-ea8d-7f97-25d14feb6264@scorecrow.com>, Bruce
    Horrocks <07.013@scorecrow.com> wrote:


    >> If you'd read up on the system Apple is planning to use, you'd learn
    >> that false positives really aren't a problem...
    >>
    >> <https://www.apple.com/child-safety/pdf/Expanded_Protections_for_Children_Technology_Summary.pdf>
    >>
    >> ...and that's the opinion of people far more knowledgeable than you or I
    >> on the subject:
    >
    > False positives will be a big problem for those that fall foul of them.

    1 in 1 trillion per account per year (apple's stated figure), so not very
    many, and a match must also get through a manual review before it's referred
    to law enforcement.
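
    to see why a threshold drives the per-account number down so far, here's a
    back-of-the-envelope binomial sketch; all three numbers are invented for
    illustration and are not apple's published parameters:

        # rough sketch: probability that an innocent account crosses the match
        # threshold, assuming each photo false-matches independently at some
        # tiny per-image rate.
        from math import exp, lgamma, log, log1p

        def log_binom_pmf(k, n, p):
            # log of C(n, k) * p**k * (1 - p)**(n - k), kept in log space to avoid underflow
            return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                    + k * log(p) + (n - k) * log1p(-p))

        def p_account_flagged(n_photos, p_false_match, threshold):
            # probability that `threshold` or more photos all false-match by chance;
            # terms shrink so fast that summing a little past the threshold is enough
            upper = min(n_photos, threshold + 200)
            return sum(exp(log_binom_pmf(k, n_photos, p_false_match))
                       for k in range(threshold, upper + 1))

        # invented numbers: a 10,000-photo library, a 1-in-a-million per-image
        # false-match rate, and a 30-match threshold
        print(p_account_flagged(10_000, 1e-6, 30))  # astronomically small

    one unlucky hash collision does nothing by itself; an account only surfaces
    for review when many independent collisions stack up.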

    > Suppose, hypothetically, that the photos in the Julia Somerville case
    > had been taken on an iPhone with this new scanning system in place.
    >
    > <https://en.wikipedia.org/wiki/Julia_Somerville>
    >
    > From the diagram on page 5 of the technology summary PDF linked above,
    > her photos would be hashed and matched against the CSAM hashes. The hash
    > mechanism is described as recognising scenes but makes no mention of
    > recognising faces. So the pictures of her child in the bath could
    > potentially match CSAM pictures of different children in baths.

    it's not recognizing scenes with kids in them.

    if she's just taking pics of her kid in the tub, then they won't be in
    the database and will not be matched.

    if pictures of her child are in the csam database, then she has much
    bigger problems than having photos flagged.
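
    using the open-source imagehash phash as a rough stand-in for the unpublished
    neuralhash (file names are hypothetical), the difference between "a copy of a
    known image" and "a new photo of a similar scene" looks like this:

        # hypothetical file names; phash standing in for the unpublished NeuralHash
        from PIL import Image
        import imagehash

        original   = imagehash.phash(Image.open("bath_photo.jpg"))
        re_encoded = imagehash.phash(Image.open("bath_photo_resized.jpg"))  # same picture, resized and re-saved
        different  = imagehash.phash(Image.open("other_bath_photo.jpg"))    # similar scene, different photograph

        print(original - re_encoded)  # typically a handful of bits apart: copies of the same image match
        print(original - different)   # typically tens of bits apart: a merely similar scene does not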

  • From Alan Baker@21:1/5 to John on Wed Aug 11 10:08:18 2021
    XPost: misc.phone.mobile.iphone

    On 2021-08-09 5:51 p.m., John wrote:
    > On Mon, 9 Aug 2021 00:57:07 +0100, Bruce Horrocks wrote:
    >> False positives will be a big problem for those that fall foul of them.
    >
    > You can ask them nicely to retract their mistakes after they ruin your life.


    For there to be a false positive that actually gets reported to LE, the following must all occur:

    1. A number of photos on your device must match by hash with known
    images of CSAM.

    2. That number must be large enough to allow the images to be decrypted
    at the server end.

    3. (And most importantly), when that threshold number of images matches,
    the images must then actually be seen by a human being who agrees that
    they are, in fact, images of CSAM.

    Then and only then would LE be informed.
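
    Put as code, that is a three-stage gate. The sketch below is deliberately
    simplified, with an invented threshold and invented names; it is not Apple's
    implementation (in the real design the matching happens on-device and the
    server cannot decrypt anything at all until the threshold is exceeded):

        THRESHOLD = 30  # illustrative number only; the real value isn't given here

        def process_account(matched_vouchers, human_review):
            # matched_vouchers: vouchers whose photos hash-matched known CSAM (step 1)
            # human_review: callable returning the subset a reviewer confirms (step 3)
            if len(matched_vouchers) < THRESHOLD:       # step 2: below the threshold,
                return None                             # nothing can even be decrypted
            confirmed = human_review(matched_vouchers)  # step 3: a human looks at the matches
            if not confirmed:
                return None                             # reviewer disagrees: no report
            return {"action": "report", "items": confirmed}  # then, and only then, LE is informed

    Whether step 3 is a meaningful check when the reviewer cannot compare the
    flagged photo against the matched database image is the point Bruce raises
    upthread.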
