• Risks Digest 31.55 (2/2)

    [continued from previous message]

    "Truth Default" concept. By default, humans believe their peers. He explores and discusses conditions that contribute to trust determination. He explains the elusive nature of human deception, and the challenges that burden experienced interrogators (judges, detectives, counter-intelligence agents, etc.) attempting to identify it.

    AI algorithm decisions might one day be judged automatically for bias if an
    international reference standard existed for that purpose. This "bias
    reference standard" would be analogous to the kilogram, meter, or second,
    but it would apply to the detection of bias in AI algorithms and the
    contexts in which they operate.

    It is doubtful that a software stack, especially one built on conditional
    Boolean logic, can serve in this reference capacity, and it is unlikely
    that a human can engineer one directly. Perhaps an artificial general
    intelligence could evolve to serve humans in that capacity. Until a
    universal bias reference standard emerges, a bias-free AI algorithm -- or an
    equivalent computational structure hosted on quantum, neuromorphic, and/or
    analog computers -- appears unlikely to materialize.

    Unless governments tighten regulations and toughen enforcement, criminals
    and scurrilous interests will exploit AI at the public's expense.

    Scam-surveillance programs and enhanced malware-detection platforms may be
    the next technological disruption that entrepreneurs and startups pursue.
    How will they earn unbiased trust and show that they serve the public
    interest? Will they yield explainable, transparent, and fair outcomes that
    can withstand legal scrutiny?

    ------------------------------

    Date: Mon, 20 Jan 2020 10:51:17 -1000
    From: the keyboard of geoff goodfellow <geoff@iconia.com>
    Subject: Clearview app lets strangers find your name, info with snap of a
    photo, report says (CNET)

    EXCERPT:

    What if a stranger could snap your picture on the sidewalk then use an app
    to quickly discover your name, address and other details? A startup called Clearview AI <https://clearview.ai/> has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a Saturday report in The New York Times. <https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html>

    The app, says *The Times*, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it's scraped off Facebook,
    Venmo, YouTube and other sites. It then serves up matches, along with links
    to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.
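
    The Times's description is high-level. A typical implementation of this
    kind of matching (a sketch only; Clearview's actual system is proprietary
    and not public) computes a face-embedding vector for the query photo and
    looks up its nearest neighbors in a database of embeddings computed from
    the scraped photos. A minimal Python sketch, with the embedding model left
    as a placeholder and all names hypothetical:

        # Hypothetical sketch of embedding-based face matching; NOT Clearview's code.
        import numpy as np

        def embed(photo) -> np.ndarray:
            """Placeholder: run a face-recognition model, return a unit-length vector."""
            raise NotImplementedError

        def top_matches(query_vec, db_vecs, db_urls, k=5):
            """Return the k database photos whose embeddings best match the query."""
            # For unit-length vectors, cosine similarity is just a dot product.
            sims = db_vecs @ query_vec
            best = np.argsort(-sims)[:k]
            return [(db_urls[i], float(sims[i])) for i in best]

        # Illustrative use: db_vecs is an (N, d) array of embeddings of scraped
        # photos, db_urls the pages they came from.
        # matches = top_matches(embed(street_photo), db_vecs, db_urls)

    The essential point is the lookup: a match returns not just a similar face
    but the URL where that photo was posted, which is how a name follows from it.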

    The size of the Clearview database dwarfs others in use by law enforcement.
    The FBI's own database, which taps passport and driver's license photos, is
    one of the largest, with over 641 million images of US citizens.

    The Clearview app isn't currently available to the public, but the Times
    says police officers and Clearview investors think it will be in the
    future. [...]

    https://www.cnet.com/news/clearview-app-lets-strangers-find-your-name-info-with-snap-of-a-photo-report-says/

    ------------------------------

    Date: Sat, 18 Jan 2020 10:58:10 +0200
    From: Amos Shapir <amos083@gmail.com>
    Subject: College career centers teach job applicants how to impress AI
    systems (CNN)

    It seems that hiring companies use AI systems to analyze not just CVs, but
    also video job interviews.

    Full story:

    https://edition.cnn.com/2020/01/15/tech/ai-job-interview/?utm_source=join1440&utm_medium=email&utm_placement=etcetera

    ------------------------------

    Date: January 20, 2020 22:49:51 JST
    From: Dewayne Hendricks <dewayne@warpspeed.com>
    Subject: Banning Facial Recognition Isn't Enough (Bruce Schneier, NYTimes)

    [via Dave Farber]

    Bruce Schneier, 20 Jan 2020
    The whole point of modern surveillance is to treat people differently, and facial recognition technologies are only a small part of that.

    https://www.nytimes.com/2020/01/20/opinion/facial-recognition-ban-privacy.html

    Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition;
    the neighboring city of Oakland soon followed, as did Somerville and
    Brookline in Massachusetts (a statewide ban may follow). In December, San
    Diego suspended a facial recognition program in advance of a new statewide
    law, which declared it illegal, coming into effect. Forty major music
    festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

    These efforts are well intentioned, but facial recognition bans are the
    wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society
    we're in the process of building. Ubiquitous mass surveillance is
    increasingly the norm. In countries like China, a surveillance
    infrastructure is being built by the government for social control. In countries like the United States, it's being built by corporations in order
    to influence our buying behavior, and is incidentally used by the
    government.

    In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let's take them in turn.

    Facial recognition is a technology that can be used to identify people
    without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

    But that's just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies,
    we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers,
    our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

    Once we are identified, the data about who we are and what we are doing can
    be correlated with other data collected at other times. This might be
    movement data, which can be used to *follow* us as we move throughout our
    day. It can be purchasing data, internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are -- using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.
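
    As a toy illustration of the correlation step (invented data, not any
    broker's actual pipeline), records gathered by different companies can be
    joined on any shared identifier -- an advertising ID, a hashed email, a
    license plate:

        # Toy illustration of cross-dataset correlation; all data invented.
        location_pings = {"ad-123": ["08:05 home", "08:40 office", "18:10 gym"]}
        purchases      = {"ad-123": ["pharmacy $23.10", "wine shop $41.85"]}
        demographics   = {"ad-123": {"income_band": "75-100k", "zip": "02144"}}

        def build_profile(key):
            """Merge whatever each dataset knows about the same identifier."""
            return {
                "movements":    location_pings.get(key, []),
                "purchases":    purchases.get(key, []),
                "demographics": demographics.get(key, {}),
            }

        print(build_profile("ad-123"))

    The join key need not be a name; any identifier that stays stable over
    time does the same work.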

    There is a huge -- and almost entirely unregulated -- data broker industry
    in the United States that trades on our information. This is how large
    internet companies like Google and Facebook make their money. It's not just that they know who we are, it's that they correlate what they know about us
    to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It's also why companies
    like Google are buying health records, and part of the reason Google bought
    the company Fitbit, along with all of its data.

    The whole purpose of this process is for companies -- and governments -- to treat individuals differently. We are shown different ads on the internet
    and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when
    we visit websites.

    The point is that it doesn't matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn't make the technologies that gather them any less effective. And most of the time, it doesn't matter if identification isn't tied to a real name. What's important is that we can be consistently identified over
    time. We might be completely anonymous in a system that uses unique cookies
    to track us as we browse the internet, but the same process of correlation
    and discrimination still occurs. It's the same with faces; we can be tracked
    as we move around a store or shopping mall, even if that tracking isn't tied
    to a specific name. And that anonymity is fragile: If we ever order
    something online with a credit card, or purchase something with a credit
    card in a store, then suddenly our real names are attached to what was anonymous tracking information.

    ------------------------------

    Date: Sun, 26 Jan 2020 12:31:45 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: It May Be the Biggest Tax Heist Ever. And Europe Wants Justice
    (The New York Times)

    Stock traders are accused of siphoning $60 billion from state coffers, in a scheme that one called `the devil's machine'. Germany is the first country
    to try to get its money back.

    https://www.nytimes.com/2020/01/23/business/cum-ex.html

    ------------------------------

    Date: Sun, 26 Jan 2020 16:15:47 -0500
    From: Monty Solomon <monty@roscom.com>
    Subject: India Restores Some Internet Access in Kashmir After Long Shutdown
    (NYTimes)

    https://www.nytimes.com/2020/01/26/world/asia/kashmir-internet-shutdown-india.html

    ------------------------------

    Date: Tue, 21 Jan 2020 20:35:47 -0500
    From: Steve Golson <sgolson@trilobyte.com>
    Subject: Y2038 is here (Twitter)

    Wonderful and scary story about Y2038. It's here, now. https://twitter.com/jxxf/status/1219009308438024200

    Summary: a batch script that does financial projections 20 years out died
    on January 19, 2018 -- 20 years to the day before the signed 32-bit Unix
    time_t counter overflows on January 19, 2038.

    No one knew what was wrong at first. This batch job had never, ever
    crashed before, as far as anyone remembered or had logs for. The person
    who originally wrote it had been dead for at least 15 years, and in any
    case hadn't been employed by the firm for decades.
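
    The failure mode is easy to reproduce: a date 20 years after 19 January
    2018 no longer fits in a signed 32-bit count of seconds since the Unix
    epoch, which tops out at 03:14:07 UTC on 19 January 2038. A minimal Python
    sketch of the arithmetic (the crashed job itself is not public; the run
    time below is assumed for illustration):

        # Illustration of the Y2038 limit, not the actual batch job from the story.
        from datetime import datetime, timezone

        EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
        INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

        def to_time_t32(dt):
            """Seconds since the epoch, as a signed 32-bit time_t would store them."""
            seconds = int((dt - EPOCH).total_seconds())
            if seconds > INT32_MAX:
                raise OverflowError(f"{dt:%Y-%m-%d %H:%M} does not fit in 32-bit time_t")
            return seconds

        run_time   = datetime(2018, 1, 19, 4, 0, tzinfo=timezone.utc)  # assumed nightly run
        projection = run_time.replace(year=run_time.year + 20)         # 2038-01-19 04:00 UTC

        to_time_t32(run_time)    # fine: well below 2**31 - 1
        to_time_t32(projection)  # raises: past 03:14:07 UTC on 19 Jan 2038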

    [Unix Redux. 2038 seemed fairly far ahead when Ken Thompson chose that end
    date. Unix systems will still be around, and we will hear more
    beforehand, and then again after the fixes don't last, just like Y2K. PLAN
    AHEAD means different things to different folks. PGN]

    ------------------------------

    Date: Mon, 27 Jan 2020 12:21:54 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Yikes, friend's LinkedIn account hacked and spamming (Google)

    ... sending messages within LinkedIn with dodgy links. No reason LinkedIn accounts would be immune, so be alert.

    Plenty of previous reports:

    https://www.google.com/search?client=firefox-b-1-d&q=linkedin+account+hacked

    ------------------------------

    Date: Mon, 27 Jan 2020 15:49:04 PST
    From: "Peter G. Neumann" <neumann@csl.sri.com>
    Subject: From a car dealer

    Your Recent Service Experience

    TMNA_GEO_NAME_ENUM and BP_EXTERNAL_NAME_TXT would like to thank you for choosing a new TMNA_MODEL_NAME_AUTO. We appreciate your business and value
    you as a customer.

    About two weeks ago, we sent an email requesting your feedback. The
    information you provide will help TMNA_GEO_NAME_ENUM, its distributors, its affiliates, and BP_EXTERNAL_NAME_TXT continuously improve customer
    experiences.

    If you have already shared your feedback, please disregard this email.

    This survey will be active through TMNA_SURVEY_EXPIRATION_DATE_TEXT_EMAILS= Please begin by responding to the question below. [...]

    Please do not reply to this e-mail as we are not able to respond to messages sent to this address.

    ------------------------------

    Date: Tue, 21 Jan 2020 22:17:25 +0000
    From: Chris Drewe <e767pmk@yahoo.co.uk>
    Subject: Re: "Don't expect a return to the browser wars".

    I spotted this in a newspaper -- summary follows: https://www.telegraph.co.uk/technology/2020/01/20/dont-expect-return-browser-wars/

    *The Telegraph*, 20 January 2020

    Don't expect a return to the browser wars. It has been two decades since
    Microsoft and the US government went to war over the former's efforts to
    crush challengers to its Internet Explorer web browser. Explorer's market
    share peaked at around 95pc in 2004 before heading rapidly down with the
    rise of superior rivals such as Mozilla's Firefox, Opera and then Google's
    Chrome. Whether Microsoft lost because of intervention or because free
    market innovation did its job is still a matter of debate. But the firm
    was relegated to an afterthought in the browser wars. Explorer remains the
    butt of many jokes. [Edge] runs on Chromium, the engine built by Google
    for the search company's own Chrome browser. Most net users are
    unconcerned about which web engines they use but they have been a key part
    of the battle between major software companies. Microsoft's [IE] browser
    -- once so dominant it triggered monopoly investigations on two continents
    -- managed to become so irrelevant it was not worth working to
    support. Quite a fall.

    I had to feel a twinge of sympathy for Microsoft as the EU court case
    dragged on for years; by the time they paid the fine, hardly anybody was
    still using Internet Explorer anyway...

    ------------------------------

    Date: Mon, 14 Jan 2019 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) is online.
    <http://www.CSL.sri.com/risksinfo.html>
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-31.00
    Lindsay has also added to the Newcastle catless site a palmtop version
    of the most recent RISKS issue and a WAP version that works for many but
    not all telephones: http://catless.ncl.ac.uk/w/r
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 31.55
    ************************
