It makes me wonder how many of these images and videos were actually
watched by the people who developed these so-called "accurate hashes".
It also seems they could easily be fooled by putting on a few articles
of clothing.
If you'd read up on the system Apple is planning to use, you'd learn
that false positives really aren't a problem...
<https://www.apple.com/child-safety/pdf/Expanded_Protections_for_Children_Technology_Summary.pdf>
...and that's the opinion of people far more knowledgeable than you or I
on the subject.
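The headline claim in that summary (roughly one in a trillion accounts
flagged per year) rests on a match threshold: an account is only flagged
once enough individual images match, not on any single match. Here's a
minimal sketch of that binomial arithmetic, using hypothetical numbers
rather than Apple's actual parameters:

```python
# A minimal sketch (hypothetical numbers, not Apple's actual
# parameters) of why a match threshold crushes the account-level
# false positive rate. Assumes independent per-image errors.
from math import lgamma, log, exp

def log_binom_pmf(k, n, p):
    # log of C(n, k) * p^k * (1-p)^(n-k), computed with lgamma
    # to avoid overflowing ordinary floats.
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def account_false_positive(p, n_photos, threshold):
    """P(an all-innocent library of n_photos images accumulates
    at least `threshold` false matches), binomial model."""
    top = min(threshold + 500, n_photos)  # later terms are negligible
    return sum(exp(log_binom_pmf(k, n_photos, p))
               for k in range(threshold, top + 1))

# Hypothetical: 1-in-a-million per-image error rate, 10,000 photos,
# 30 matches required before the account is flagged for review.
print(account_false_positive(1e-6, 10_000, 30))  # vanishingly small
```

The catch is that this arithmetic assumes per-image errors are
independent, which is exactly what breaks down when someone has a batch
of near-identical photos of the same scene.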
False positives will be a big problem for those who fall foul of them.
Suppose, hypothetically, that the photos in the Julia Somerville case
had been taken on an iPhone with this new scanning system in place.
<https://en.wikipedia.org/wiki/Julia_Somerville>
From the diagram on page 5 of the technology summary PDF linked above,
her photos would be hashed and matched against the CSAM hashes. The
hash mechanism is described as recognising scenes but makes no mention
of recognising faces. So the pictures of her child in the bath could
potentially match CSAM pictures of different children in baths.
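To make that concrete, here is a toy perceptual hash (a simple
difference hash, nowhere near as sophisticated as Apple's NeuralHash)
showing the same principle: two different "photos" sharing a broad
composition, bright subject on one side and dark background on the
other, can hash to identical bits.

```python
# A toy difference hash ("dHash"), *not* Apple's NeuralHash, showing
# how perceptual hashes match on overall scene structure rather than
# on who is actually in the picture.

def dhash(pixels):
    """Hash a 9-wide grayscale grid: one bit per horizontal
    brightness gradient (left pixel brighter than its neighbour)."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two synthetic 8x9 "scenes": bright subject on the left, dark
# background on the right -- same composition, different pixel values.
scene_a = [[250, 240, 230, 200, 120, 80, 60, 40, 30]] * 8
scene_b = [[255, 235, 225, 190, 110, 90, 50, 45, 25]] * 8

print(hamming(dhash(scene_a), dhash(scene_b)))  # 0: a "match"
```

Real perceptual hashes are far more robust than this toy, but the
failure mode at issue is the same: they match on scene layout, not on
the identity of the people in the photo.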
On Mon, 9 Aug 2021 00:57:07 +0100, Bruce Horrocks wrote:
> False positives will be a big problem for those who fall foul of them.

You can ask them nicely to retract their mistakes after they ruin your
life.