• Facial Recognition Technology In Supermarkets

    From Lawrence D'Oliveiro@21:1/5 to All on Thu Feb 8 05:32:17 2024
    Foodstuffs is trialling facial-recognition systems in some of its New
    World and Pak’N’Save supermarkets.

    One thing to keep in mind about the reliability of this technology is the “base-rate effect”.

    Let’s say the system is 99% accurate at identifying faces of undesirables--that is, if it says somebody is on their match list, there
    is only 1% chance it’s a false positive. (I suspect that’s an optimistic figure.)

    Now suppose that, out of every 1000 people who visit a supermarket, one is
    on the undesirables list.

    Out of those 999 innocent people, 1 in 100, or about 10, will likely be identified as undesirables. Plus we assume that the actual undesirable
    will also be picked out.

    In other words, of those identified as undesirables who should be kept out
    of the supermarket, about 90% will be innocent.
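
    For anyone who wants to check that arithmetic, here is a minimal Python
    sketch (the 99% accuracy figure and the 1-in-1000 base rate are the
    illustrative assumptions above, not measured numbers):

```python
# Back-of-the-envelope check of the figures above.
patrons = 1000
undesirables = 1                 # assumed base rate: 1 in 1000
false_positive_rate = 0.01       # assumed: 1% of innocents wrongly flagged

innocents = patrons - undesirables
false_alarms = innocents * false_positive_rate   # ~10 innocents flagged
true_hits = undesirables                         # assume the real one is caught

innocent_share = false_alarms / (false_alarms + true_hits)
print(round(innocent_share, 2))  # ~0.91: about 90% of those flagged are innocent
```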

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich80105@21:1/5 to ldo@nz.invalid on Thu Feb 8 21:38:12 2024
    On Thu, 8 Feb 2024 05:32:17 -0000 (UTC), Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    Foodstuffs is trialling facial-recognition systems in some of its New
    World and Pak’N’Save supermarkets.

    One thing to keep in mind about the reliability of this technology is
    the “base-rate effect”.

    Let’s say the system is 99% accurate at identifying faces of
    undesirables--that is, if it says somebody is on their match list, there
    is only 1% chance it’s a false positive. (I suspect that’s an optimistic
    figure.)

    Now suppose that, out of every 1000 people who visit a supermarket, one is
    on the undesirables list.

    Out of those 999 innocent people, 1 in 100, or about 10, will likely be
    identified as undesirables. Plus we assume that the actual undesirable
    will also be picked out.

    In other words, of those identified as undesirables who should be kept out
    of the supermarket, about 90% will be innocent.

    I don't think they were intending to eject them from the supermarket,
    just to follow them and observe so they can be ready to apprehend if
    necessary (e.g. for theft or violence).

  • From Willy Nilly@21:1/5 to Lawrence D'Oliveiro on Thu Feb 8 08:18:50 2024
    On Thu, 8 Feb 2024, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Let’s say the system is 99% accurate at identifying faces of
    undesirables--that is, if it says somebody is on their match list, there
    is only 1% chance it’s a false positive. (I suspect that’s an optimistic
    figure.)

    Now suppose that, out of every 1000 people who visit a supermarket, one is
    on the undesirables list.

    Out of those 999 innocent people, 1 in 100, or about 10, will likely be
    identified as undesirables. Plus we assume that the actual undesirable
    will also be picked out.

    In other words, of those identified as undesirables who should be kept out
    of the supermarket, about 90% will be innocent.

    OK, you are now officially an innumerate moron. Using the accuracy
    figures that you provide, "99% accurate at identifying faces of
    undesirables" means that it will nail 99 out of 100 "undesirables" (to
    use your word) and miss 1. That means that if, as you say, 1 out of
    every 1000 supermarket patrons is "undesirable", then 100 out of every
    100,000 are, of which the gadget nails 99 and misses 1. Therefore it
    misses one "undesirable" per every 100,000 patrons.

    Your statement that the same ratio, 1/100, also applies to "innocent"
    people being tagged as "undesirable" is a total misunderstanding of
    how such ratios work -- there's a thing called a "prior" which gives
    the likelihoods in either direction, and those priors are unrelated.
    You need to recognise that you are an idiot -- maybe you can progress
    from there.

    My statement does not mean I support facial recognition technology,
    just that I understand basic math.

  • From Lawrence D'Oliveiro@21:1/5 to Willy Nilly on Thu Feb 8 23:07:37 2024
    On Thu, 08 Feb 2024 08:18:50 GMT, Willy Nilly wrote:

    Using the accuracy
    figures that you provide, "99% accurate at identifying faces of
    undesirables" means ...

    I even clarified what it means: “if it says somebody is on their match
    list, there is only 1% chance it’s a false positive”.

    Your statement that the same ratio, 1/100, also applies to "innocent"
    people being tagged as "undesirable" is a total misunderstanding of how
    such ratios work ...

    Let’s define

    condition U -- person is an undesirable
    condition I -- person is identified as an undesirable

    We can also have the opposite conditions

    condition ¬U -- person is not an undesirable
    condition ¬I -- person is not identified as an undesirable

    In usual probability notation, P[U] means “probability that the person
    who just walked through the door is an undesirable”, and like any probability, it must have a (real) value between 0 and 1 inclusive.

    We can also have conditional probabilities, where P[I|U] means
    “probability that a person is identified as an undesirable, given that
    they are an undesirable”, and P[I|¬U] means “probability that a person
    is (incorrectly) identified as an undesirable, given that they are
    *not* an undesirable”.

    So my statement about the reliability of the system can be expressed as

    P[I|¬U] = 0.01

    Note that I didn’t say anything about P[I|U]. That will likely be less
    than 1, but its exact value is unimportant for this analysis. Let’s just
    say it’s 1. If the actual value is less than 1, then this term makes
    even less of a contribution to the total result below, which, we will
    soon see, is dominated by the other term.

    Note that, by definition, since any condition is either in effect or
    is not,

    P[I|U] + P[¬I|U] = 1
    P[U] + P[¬U] = 1

    We also have the probability that any person walking through the door
    is actually an undesirable, which I gave as

    P[U] = 0.001

    or conversely,

    P[¬U] = 0.999

    So now, by Bayes’ theorem, we can compute P[I], the probability that
    the system will register a match, as

    P[I] = P[I|U]P[U] + P[I|¬U]P[¬U]
    = 1 × 0.001 + 0.01 × 0.999
    = 0.01099

    This is about 11 times the value of P[U]! Which means our system is
    identifying about 11 times as many “undesirables” as are actually
    present. So we have to wade through 10 false positives for every “undesirable” we actually find.

    That is the “base-rate effect”.
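
    The whole derivation can be checked numerically. A minimal Python
    sketch, using the same assumed figures (hit rate P[I|U] taken as 1,
    P[I|¬U] = 0.01, P[U] = 0.001), also computes the posterior P[U|I]
    directly:

```python
# Numerical check of the derivation above, using the assumed figures.
p_u = 0.001             # P[U]: base rate of undesirables
p_i_given_u = 1.0       # P[I|U]: hit rate, taken as 1 for simplicity
p_i_given_not_u = 0.01  # P[I|not-U]: false-positive rate

# Total probability of the system registering a match:
p_i = p_i_given_u * p_u + p_i_given_not_u * (1 - p_u)

# Posterior probability that a flagged person really is on the list:
p_u_given_i = p_i_given_u * p_u / p_i

print(p_i)          # ~0.01099
print(p_u_given_i)  # ~0.091: roughly 10 false positives per real match
```

    P[U|I] ≈ 0.091 is just another way of saying that roughly 10 out of
    every 11 matches are false positives.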

  • From Rich80105@21:1/5 to ldo@nz.invalid on Fri Feb 9 14:04:10 2024
    On Thu, 8 Feb 2024 23:07:37 -0000 (UTC), Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    On Thu, 08 Feb 2024 08:18:50 GMT, Willy Nilly wrote:

    Using the accuracy
    figures that you provide, "99% accurate at identifying faces of
    undesirables" means ...

    I even clarified what it means: “if it says somebody is on their match
    list, there is only 1% chance it’s a false positive”.

    Your statement that the same ratio, 1/100, also applies to "innocent"
    people being tagged as "undesirable" is a total misunderstanding of how
    such ratios work ...

    Let’s define

    condition U -- person is an undesirable
    condition I -- person is identified as an undesirable

    We can also have the opposite conditions

    condition ¬U -- person is not an undesirable
    condition ¬I -- person is not identified as an undesirable

    In usual probability notation, P[U] means “probability that the person
    who just walked through the door is an undesirable”, and like any
    probability, it must have a (real) value between 0 and 1 inclusive.

    We can also have conditional probabilities, where P[I|U] means
    “probability that a person is identified as an undesirable, given that
    they are an undesirable”, and P[I|¬U] means “probability that a person
    is (incorrectly) identified as an undesirable, given that they are
    *not* an undesirable”.

    So my statement about the reliability of the system can be expressed as

    P[I|¬U] = 0.01

    Note that I didn’t say anything about P[I|U]. That will likely be less
    than 1, but its exact value is unimportant for this analysis. Let’s just
    say it’s 1. If the actual value is less than 1, then this term makes
    even less of a contribution to the total result below, which, we will
    soon see, is dominated by the other term.

    Note that, by definition, since any condition is either in effect or
    is not,

    P[I|U] + P[¬I|U] = 1
    P[U] + P[¬U] = 1

    We also have the probability that any person walking through the door
    is actually an undesirable, which I gave as

    P[U] = 0.001

    or conversely,

    P[¬U] = 0.999

    So now, by Bayes’ theorem, we can compute P[I], the probability that
    the system will register a match, as

    P[I] = P[I|U]P[U] + P[I|¬U]P[¬U]
         = 1 × 0.001 + 0.01 × 0.999
         = 0.01099

    This is about 11 times the value of P[U]! Which means our system is
    identifying about 11 times as many “undesirables” as are actually
    present. So we have to wade through 10 false positives for every
    “undesirable” we actually find.

    That is the “base-rate effect”.

    So it is a good thing that the trial is being overseen by the Privacy Commissioner. They will be concerned to see that there is not an
    unacceptable bias based on skin colour, and the results may then guide
    what the store staff / store security people actually do. If all they
    do is observe, and the "suspect" does nothing, they may miss other
    thieves by being distracted - but those wrongly identified should not
    have their passage impeded. Certainly the stores are making enough
    profit to pay for it, and I am all for thieves being caught. If it
    does not produce results then they will not waste money. In the
    meantime stores that put a lot of dummy cameras all over the place may
    find a short term drop in shoplifting . . .

  • From Lawrence D'Oliveiro@21:1/5 to All on Fri Feb 9 02:22:26 2024
    On Fri, 09 Feb 2024 14:04:10 +1300, Rich80105 wrote:

    So it is a good thing that the trial is being overseen by the Privacy Commissioner.

    You could have said all that without quoting (and mangling) my derivation.

    Here’s one implication of the numbers that some may have picked up on: if
    the proportion of undesirables entering the store is higher, then the
    ratio of false positives decreases accordingly.

    In short, this sort of surveillance works better if it is targeted towards neighbourhoods where the undesirables are known to be more prevalent.
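
    To make that implication concrete, here is a small sketch using the
    same assumed 1% false-positive rate (and, for simplicity, a 100% hit
    rate); the base rates are arbitrary illustrative values:

```python
# How the share of false positives falls as the base rate of
# "undesirables" among patrons rises (assumed figures as above).
fp_rate = 0.01   # P[I|not-U]
for base_rate in (0.001, 0.01, 0.05, 0.10):
    p_match = base_rate + fp_rate * (1 - base_rate)   # P[I], with hit rate 1
    false_share = fp_rate * (1 - base_rate) / p_match
    print(f"base rate {base_rate:.1%}: {false_share:.0%} of matches are false")
```

    The false share drops from roughly 91% at a 0.1% base rate to under
    10% at a 10% base rate.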

  • From Tony@21:1/5 to Lawrence D'Oliveiro on Fri Feb 9 05:54:33 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 09 Feb 2024 14:04:10 +1300, Rich80105 wrote:

    So it is a good thing that the trial is being overseen by the Privacy
    Commissioner.

    You could have said all that without quoting (and mangling) my derivation.

    Here’s one implication of the numbers that some may have picked up on: if
    the proportion of undesirables entering the store is higher, then the
    ratio of false positives decreases accordingly.

    In short, this sort of surveillance works better if it is targeted towards
    neighbourhoods where the undesirables are known to be more prevalent.
    He loves to hear his own words however pointless - nobody else hears them
    you see. So he steals other people's ideas and misrepresents them -
    stupidity or by design who knows?

  • From Lawrence D'Oliveiro@21:1/5 to Tony on Fri Feb 9 06:14:22 2024
    On Fri, 09 Feb 2024 05:54:33 GMT, Tony wrote:

    He loves to hear his own words however pointless - nobody else hears
    them you see.
    So he steals other people's ideas and misrepresents them - stupidity or
    by design who knows?

    Is he referring to himself in the third person again?

  • From Tony@21:1/5 to Lawrence D'Oliveiro on Fri Feb 9 07:10:39 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 09 Feb 2024 05:54:33 GMT, Tony wrote:

    He loves to hear his own words however pointless - nobody else hears
    them you see.
    So he steals other people's ideas and misrepresents them - stupidity or
    by design who knows?

    Is he referring to himself in the third person again?
    Ask him, not me.

  • From Rich80105@21:1/5 to ldo@nz.invalid on Fri Feb 9 20:44:31 2024
    On Fri, 9 Feb 2024 06:14:22 -0000 (UTC), Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    On Fri, 09 Feb 2024 05:54:33 GMT, Tony wrote:

    He loves to hear his own words however pointless - nobody else hears
    them you see.
    So he steals other people's ideas and misrepresents them - stupidity or
    by design who knows?

    Is he referring to himself in the third person again?

    It is difficult to tell with Tony - in this case he was probably
    reacting to a post from me that you had replied to; Tony is however
    not always clear in his posts, and inclined to take things personally
    when they are not intended that way. He seems to like pretending that
    he is in control and that his posts should be immune from any
    disagreement - his accusations above are silly and childish;
    discussions are about sharing knowledge and opinions, and listening to
    others.

    The thread has been going for some time; you gave a good explanation
    of the mathematics relating to the probability of various things; I
    should have thanked you for that before addressing a different issue
    relating to the use that we expect to be made of identified people.

    Missed by some is that many retail outlets have cameras for security
    purposes - they are not always manned, but can be useful when theft or
    other events have happened. We have not yet got to the extent of
    camera surveillance in the UK, but the use of such cameras is getting
    cheaper. There are some legal issues related to such surveillance,
    and in particular to actions taken on the basis of technology only;
    some of the stories were not clear about what staff would actually do.
    If all the technology is doing is identifying someone worth keeping an
    eye on, that is probably sufficient in most cases. If someone has been
    trespassed from entering, then other actions may be appropriate, but
    from what has been said they will use information from the system to
    help, not make decisions for them. As such there should be few
    concerns. I am happy to leave the cost justification to the companies
    - they make enough profit to be able to afford it, but the whole
    system may turn out to not be worth their while.

  • From Willy Nilly@21:1/5 to Lawrence D'Oliveiro on Fri Feb 9 08:19:00 2024
    On Thu, 8 Feb 2024, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Let’s define
    condition U -- person is an undesirable
    condition I -- person is identified as an undesirable

    We can also have the opposite conditions
    condition ¬U -- person is not an undesirable
    condition ¬I -- person is not identified as an undesirable

    Ixnay, your condition not-I, to be consistent with the top lines,
    would be: "person is identified as a non-undesirable".

    The difference is the "excluded middle", those people who are not
    recognised at all. Your (wrong) definition includes them, my
    (correct) definition excludes them.

    Those who are not recognised at all are a very large group consisting
    of visitors/patrons of all kinds. You can't pretend they are part of
    the "not undesirable" group -- you just don't know.

    This "excluded middle" arises is if a statement is claimed to be true
    or false, with no other possibility. The "middle" says that there is
    a third solution, neither true nor false -- indeterminate -- such as
    the solution to the sentence "This statement is false".

    You must isolate and count the "unknowns", otherwise your conclusions
    are wrong -- as they were here.

  • From Tony@21:1/5 to Rich80105@hotmail.com on Fri Feb 9 19:00:15 2024
    Rich80105 <Rich80105@hotmail.com> wrote:
    On Fri, 9 Feb 2024 06:14:22 -0000 (UTC), Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    On Fri, 09 Feb 2024 05:54:33 GMT, Tony wrote:

    He loves to hear his own words however pointless - nobody else hears
    them you see.
    So he steals other people's ideas and misrepresents them - stupidity or
    by design wjo knows?

    Is he referring to himself in the third person again?


    The thread has been going for some time; you gave a good explanation
    of the mathematics relating to the probability of various things; I
    should have thanked you for that before addressing a different issue
    relating to the use that we expect to be made of identified people.

    Missed by some is that many retail outlets have cameras for security
    purposes - they are not always manned, but can be useful when theft or
    other events have happened. We have not yet got to the extent of
    camera surveillance in the UK, but the use of such cameras is getting
    cheaper. There are some legal issues related to such surveillance,
    and in particular to actions taken on the basis of technology only;
    some of the stories were not clear about what staff would actually do.
    If all the technology is doing is identifying someone worth keeping an
    eye on, that is probably sufficient in most cases. If someone has been
    trespassed from entering, then other actions may be appropriate, but
    from what has been said they will use information from the system to
    help, not make decisions for them. As such there should be few
    concerns. I am happy to leave the cost justification to the companies
    - they make enough profit to be able to afford it, but the whole
    system may turn out to not be worth their while.

  • From Lawrence D'Oliveiro@21:1/5 to Willy Nilly on Fri Feb 9 20:57:28 2024
    On Fri, 09 Feb 2024 08:19:00 GMT, Willy Nilly wrote:

    On Thu, 8 Feb 2024, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Let’s define
    condition U -- person is an undesirable
    condition I -- person is identified as an undesirable

    We can also have the opposite conditions
    condition ¬U -- person is not an undesirable
    condition ¬I -- person is not identified as an undesirable

    Ixnay, your condition not-I, to be consistent with the top lines, would
    be: "person is identified as a non-undesirable".

    Let’s see your working through of the consequences of this. What numbers
    do you come up with?

  • From Lawrence D'Oliveiro@21:1/5 to Willy Nilly on Sat Feb 10 04:53:40 2024
    On Sat, 10 Feb 2024 04:19:39 GMT, Willy Nilly wrote:

    At a guess, the device matches 40/100 aspects to identify a person,
    similar to fingerprint analysis. So if an "undesirable" is missed,
    it'll be because he is not recognised, as opposed to being recognised
    as someone else. For one person to be mistaken as another, 40 aspects
    would need to match, at a likelihood of 1 in 2^40 = 1 in a trillion.

    It's more complicated than that, and the details and priors will
    differ, but I have other things to do than to research this topic.

    Maybe you should have done that first.

  • From Willy Nilly@21:1/5 to Lawrence D'Oliveiro on Sat Feb 10 04:19:39 2024
    On Fri, 9 Feb 2024, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Let’s see your working through of the consequences of this. What numbers
    do you come up with?

    At a guess, the device matches 40/100 aspects to identify a person,
    similar to fingerprint analysis. So if an "undesirable" is missed,
    it'll be because he is not recognised, as opposed to being recognised
    as someone else. For one person to be mistaken as another, 40 aspects
    would need to match, at a likelihood of 1 in 2^40 = 1 in a trillion.

    It's more complicated than that, and the details and priors will
    differ, but I have other things to do than to research this topic.
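
    For what it's worth, the arithmetic of that guessed model is easy to
    check; note that the 40-aspect figure and the assumption of
    independent binary aspects are the guess above, not how any real
    face-recognition system is specified:

```python
# Sketch of the guessed independent-aspects model: a false match would
# require 40 independent binary aspects to agree by chance.
aspects = 40
p_false_match = 0.5 ** aspects
print(p_false_match)   # ~9.1e-13, since 2^40 is about 1.1 trillion
```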

  • From Lawrence D'Oliveiro@21:1/5 to All on Fri Apr 12 20:30:38 2024
    On Thu, 8 Feb 2024 05:32:17 -0000 (UTC), I wrote:

    Foodstuffs is trialling facial-recognition systems in some of its New
    World and Pak’N’Save supermarkets.

    And here is at least one case <https://www.nzherald.co.nz/nz/supermarket-facial-recognition-trial-rotorua-mothers-discrimination-ordeal/IK4ZEJHLQVFRLMDE6LX4AR57PE/>
    of mistaken identity.

    How many of these cases will it take before the technology is
    abandoned? I would say, not many. This isn’t China.

  • From Rich80105@21:1/5 to ldo@nz.invalid on Sat Apr 13 16:55:56 2024
    On Fri, 12 Apr 2024 20:30:38 -0000 (UTC), Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    On Thu, 8 Feb 2024 05:32:17 -0000 (UTC), I wrote:

    Foodstuffs is trialling facial-recognition systems in some of its New
    World and Pak’N’Save supermarkets.

    And here is at least one case <https://www.nzherald.co.nz/nz/supermarket-facial-recognition-trial-rotorua-mothers-discrimination-ordeal/IK4ZEJHLQVFRLMDE6LX4AR57PE/>
    of mistaken identity.

    How many of these cases will it take before the technology is
    abandoned? I would say, not many. This isn’t China.

    Defending profits is God's Own Work as far as this Government is
    concerned - companies that can't make profits don't make (carefully
    non-political) donations to so-called "Think Tanks" that run "idea
    campaigns" - just look at the Facebook Group for Groundswell as one
    current example . . .

  • From Tony@21:1/5 to Rich80105@hotmail.com on Sat Apr 13 22:01:44 2024
    Rich80105 <Rich80105@hotmail.com> wrote:
    On Fri, 12 Apr 2024 20:30:38 -0000 (UTC), Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Thu, 8 Feb 2024 05:32:17 -0000 (UTC), I wrote:

    Foodstuffs is trialling facial-recognition systems in some of its New
    World and Pak’N’Save supermarkets.

    And here is at least one case <https://www.nzherald.co.nz/nz/supermarket-facial-recognition-trial-rotorua-mothers-discrimination-ordeal/IK4ZEJHLQVFRLMDE6LX4AR57PE/>
    of mistaken identity.

    How many of these cases will it take before the technology is
    abandoned? I would say, not many. This isn’t China.

    Defending profits is God's Own Work as far as this Government is
    concerned - companies that can't make profits don't make (carefully
    non-political) donations to so-called "Think Tanks" that run "idea
    campaigns" - just look at the Facebook Group for Groundswell as one
    current example . . .
    Idiot. Profitable companies pay people to work for them and those people get to buy food for their families.
    Profit is not dirty. Profit enables families and social services. It pays for the police and hospitals.
    Sheesh - why do you not understand simple facts?
