Foodstuffs is trialling facial-recognition systems in some of its New
World and PakNSave supermarkets.
One thing to keep in mind about the reliability of this technology is the
base-rate effect.
Let's say the system is 99% accurate at identifying faces of
undesirables -- that is, if it says somebody is on their match list, there
is only a 1% chance it's a false positive. (I suspect that's an optimistic
figure.)
Now suppose that, out of every 1000 people who visit a supermarket, one is
on the undesirables list.
Out of those 999 innocent people, 1 in 100, or about 10, will likely be
identified as undesirables. Plus we assume that the actual undesirable
will also be picked out.
In other words, of those identified as undesirables who should be kept out
of the supermarket, about 90% will be innocent.
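The counting argument above can be sketched in a few lines of Python, using the figures given in the post (1 undesirable per 1000 visitors, a 1% false-positive rate, and the assumption that the real undesirable is always caught):

```python
# Hypothetical figures from the post above.
visitors = 1000
undesirables = 1                      # actually on the match list
innocents = visitors - undesirables   # 999
false_positive_rate = 0.01

false_alarms = innocents * false_positive_rate   # innocents wrongly flagged
flagged_total = false_alarms + undesirables      # assume the real one is flagged too

innocent_fraction = false_alarms / flagged_total
print(round(false_alarms, 2))        # ~9.99 innocents flagged per 1000 visitors
print(round(innocent_fraction, 3))   # ~0.909, i.e. about 90% of flags are innocent
```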
Using the accuracy
figures that you provide, "99% accurate at identifying faces of
undesirables" means ...
Your statement that the same ratio, 1/100, also applies to "innocent"
people being tagged as "undesirable" is a total misunderstanding of how
such ratios work ...
On Thu, 08 Feb 2024 08:18:50 GMT, Willy Nilly wrote:
Using the accuracy
figures that you provide, "99% accurate at identifying faces of
undesirables" means ...
I even clarified what it means: if it says somebody is on their match
list, there is only a 1% chance it's a false positive.
Your statement that the same ratio, 1/100, also applies to "innocent"
people being tagged as "undesirable" is a total misunderstanding of how
such ratios work ...
Let's define
condition U -- person is an undesirable
condition I -- person is identified as an undesirable
We can also have the opposite conditions
condition ¬U -- person is not an undesirable
condition ¬I -- person is not identified as an undesirable
In usual probability notation, P[U] means probability that the person
who just walked through the door is an undesirable, and like any
probability, it must have a (real) value between 0 and 1 inclusive.
We can also have conditional probabilities, where P[I|U] means
probability that a person is identified as an undesirable, given that
they are an undesirable, and P[I|¬U] means probability that a person
is (incorrectly) identified as an undesirable, given that they are
*not* an undesirable.
So my statement about the reliability of the system can be expressed as
P[I|¬U] = 0.01
Note that I didn't say anything about P[I|U]. That will likely be less
than 1, but its exact value is unimportant for this analysis. Let's just
say it's 1. If the actual value is less than 1, then this term makes
even less of a contribution to the total result below, which, we will
soon see, is dominated by the other term.
Note that, by definition, since any condition is either in effect or
is not,
P[I|U] + P[¬I|U] = 1
P[U] + P[¬U] = 1
We also have the probability that any person walking through the door
is actually an undesirable, which I gave as
P[U] = 0.001
or conversely,
P[¬U] = 0.999
So now, by Bayes' theorem, we can compute P[I], the probability that
the system will register a match, as
P[I] = P[I|U]P[U] + P[I|¬U]P[¬U]
     = 1 × 0.001 + 0.01 × 0.999
     = 0.01099
This is about 11 times the value of P[U]! Which means our system is
identifying about 11 times as many undesirables as are actually
present. So we have to wade through 10 false positives for every
undesirable we actually find.
That is the base-rate effect.
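The derivation above can be checked directly in Python, using the same assumed rates (P[I|U] = 1, P[I|¬U] = 0.01, P[U] = 0.001):

```python
# Rates as assumed in the derivation above.
p_I_given_U = 1.0        # true-positive rate (assumed perfect, as in the post)
p_I_given_notU = 0.01    # false-positive rate
p_U = 0.001              # base rate of undesirables walking through the door
p_notU = 1 - p_U

# Total probability of a match, by the law of total probability.
p_I = p_I_given_U * p_U + p_I_given_notU * p_notU
print(round(p_I, 5))             # 0.01099

# Posterior: probability a flagged person really is on the list (Bayes' theorem).
p_U_given_I = p_I_given_U * p_U / p_I
print(round(p_U_given_I, 3))     # 0.091 -- only about 9% of matches are genuine
```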
So it is a good thing that the trial is being overseen by the Privacy Commissioner.
On Fri, 09 Feb 2024 14:04:10 +1300, Rich80105 wrote:
So it is a good thing that the trial is being overseen by the Privacy
Commissioner.
You could have said all that without quoting (and mangling) my derivation.
Here’s one implication of the numbers that some may have picked up on: if
the proportion of undesirables entering the store is higher, then the
ratio of false positives decreases accordingly.
In short, this sort of surveillance works better if it is targeted towards
neighbourhoods where the undesirables are known to be more prevalent.
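The claimed effect of a higher base rate can be sketched by sweeping P[U] in the same Bayes computation (rates as assumed earlier in the thread):

```python
def genuine_fraction(p_U, p_I_given_U=1.0, p_I_given_notU=0.01):
    """Fraction of matches that are genuine, via Bayes' theorem."""
    p_I = p_I_given_U * p_U + p_I_given_notU * (1 - p_U)
    return p_I_given_U * p_U / p_I

# As the base rate rises, the share of false positives among matches falls.
for p_U in (0.001, 0.01, 0.1):
    print(p_U, round(genuine_fraction(p_U), 3))
# At 0.1% prevalence only ~9% of matches are genuine; at 10% it is ~92%.
```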
He loves to hear his own words however pointless - nobody else hears
them you see.
So he steals other people's ideas and misrepresents them - stupidity or
by design, who knows?
On Fri, 09 Feb 2024 05:54:33 GMT, Tony wrote:
He loves to hear his own words however pointless - nobody else hears
them you see.
So he steals other people's ideas and misrepresents them - stupidity or
by design, who knows?
Is he referring to himself in the third person again?
On Fri, 09 Feb 2024 05:54:33 GMT, Tony wrote:
He loves to hear his own words however pointless - nobody else hears
them you see.
So he steals other people's ideas and misrepresents them - stupidity or
by design, who knows?
Is he referring to himself in the third person again?
Ask him, not me.
Let’s define
condition U -- person is an undesirable
condition I -- person is identified as an undesirable
We can also have the opposite conditions
condition ¬U -- person is not an undesirable
condition ¬I -- person is not identified as an undesirable
On Fri, 9 Feb 2024 06:14:22 -0000 (UTC), Lawrence D'Oliveiro
<ldo@nz.invalid> wrote:
On Fri, 09 Feb 2024 05:54:33 GMT, Tony wrote:
He loves to hear his own words however pointless - nobody else hears
them you see.
So he steals other people's ideas and misrepresents them - stupidity or
by design, who knows?
Is he referring to himself in the third person again?
The thread has been going for some time; you gave a good explanation
of the mathematics relating to the probability of various things; I
should have thanked you for that before addressing a different issue
relating to the use that we expect to be made of identified people.
Missed by some is that many retail outlets have cameras for security
purposes - they are not always manned, but can be useful when theft or
other events have happened. We have not yet got to the extent of
camera surveillance seen in the UK, but the use of such cameras is getting
cheaper. There are some legal issues related to such surveillance,
and in particular to actions taken on the basis of technology alone;
some of the stories were not clear about what staff would actually do.
If all the technology is doing is identifying someone worth keeping an
eye on, that is probably sufficient in most cases. If someone has been
trespassed from entering, then other actions may be appropriate, but
from what has been said they will use information from the system to
help, not make decisions for them. As such there should be few
concerns. I am happy to leave the cost justification to the companies
- they make enough profit to be able to afford it, but the whole
system may turn out to not be worth their while.
On Thu, 8 Feb 2024, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Let’s define
condition U -- person is an undesirable
condition I -- person is identified as an undesirable
We can also have the opposite conditions
condition ¬U -- person is not an undesirable
condition ¬I -- person is not identified as an undesirable
Ixnay, your condition not-I, to be consistent with the top lines, would
be: "person is identified as a non-undesirable".
At a guess, the device matches 40/100 aspects to identify a person,
similar to fingerprint analysis. So if an "undesirable" is missed,
it'll be because he is not recognised, as opposed to being recognised
as someone else. For one person to be mistaken as another, 40 aspects
would need to match, at a likelihood of 1 in 2^40 = 1 in a trillion.
It's more complicated than that, and the details and priors will
differ, but I have other things to do than to research this topic.
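The back-of-envelope guess above (40 independent binary aspects all matching by chance) can be checked in Python; note the 40/100 figure and the independence of aspects are the poster's assumptions, not known properties of any real system:

```python
# If each of 40 independent binary aspects must match by chance,
# the probability of a coincidental full match is 2^-40.
aspects = 40
p_chance_match = 0.5 ** aspects
print(p_chance_match)   # about 9.1e-13
print(2 ** aspects)     # 1099511627776 -- roughly "1 in a trillion", as claimed
```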
Let’s see your working through of the consequences of this. What numbers
do you come up with?
On Thu, 8 Feb 2024 05:32:17 -0000 (UTC), I wrote:
Foodstuffs is trialling facial-recognition systems in some of its New
World and PakNSave supermarkets.
And here is at least one case
<https://www.nzherald.co.nz/nz/supermarket-facial-recognition-trial-rotorua-mothers-discrimination-ordeal/IK4ZEJHLQVFRLMDE6LX4AR57PE/>
of mistaken identity.
How many of these cases will it take before the technology is
abandoned? I would say, not many. This isn't China.
On Fri, 12 Apr 2024 20:30:38 -0000 (UTC), Lawrence D'Oliveiro
<ldo@nz.invalid> wrote:
On Thu, 8 Feb 2024 05:32:17 -0000 (UTC), I wrote:
Foodstuffs is trialling facial-recognition systems in some of its New
World and PakNSave supermarkets.
And here is at least one case
<https://www.nzherald.co.nz/nz/supermarket-facial-recognition-trial-rotorua-mothers-discrimination-ordeal/IK4ZEJHLQVFRLMDE6LX4AR57PE/>
of mistaken identity.
How many of these cases will it take before the technology is
abandoned? I would say, not many. This isn't China.
Defending profits is God's Own Work as far as this Government is
concerned - companies that can't make profits don't make (carefully
non-political) donations to so-called "Think Tanks" that run "idea
campaigns" - just look at the Facebook Group for Groundswell as one
current example . . .
Idiot. Profitable companies pay people to work for them and those
people get to buy food for their families.