• Risks Digest 33.78 (1/2)

    From RISKS List Owner@21:1/5 to All on Wed Aug 16 01:03:49 2023
    RISKS-LIST: Risks-Forum Digest Tuesday 15 August 2023 Volume 33 : Issue 78

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/33.78>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    Metrorail Safety Commission Says Automatic Train
    Operation Not Ready For Primetime (DCist)
    Freight Railroads Seek Changes to Federal Safety
    Program Before Joining It (NYTimes)
    Activist Group Is Protesting Driverless Cars by Disabling Them With Traffic
    Cones (Vice)
    Hackers Can Talk Computers into Misbehaving with AI (Robert McMillan)
    San Francisco's North Beach streets clogged as long line of Cruise
    robotaxis come to a standstill (LA Times)
    Cellphone Radiation Is Harmful, but Few Want to Believe It
    (Neuroscience News)
    Hackers Rig Casino Card Shuffling Machines for Full Control
    -- Cheating (WiReD)
    Pepco Violation Could Cost Solar Owners Thousands (DCist)
    Dangers of Trusting Encryption Supply Chains (Bob Gezelter)
    Microsoft finds vulnerabilities it says could be used to shut down power
    plants (Ars Technica)
    Has Microsoft cut security corners once too often?
    (Computerworld)
    Who Paid for a Mysterious Spy Tool? The FBI, an FBI Inquiry Found. (NYTimes)
    A Clever Honeypot Tricked Hackers Into Revealing Their Secrets (WiReD)
    Medicare replaces 47,000 patients' ID numbers, because of MOVEit data
    breach (CMS)
    Spreadsheet blunder reveals sensitive law enforcement information
    (Belfast Telegraph)
    The future is certain; it is only the past that is unpredictable
    (Henry Baker)
    Social Media Influencers Are Holding Restaurants Hostage (NYTimes)
    AI Causes Real Harm. Let's Focus on That over the
    End-of-Humanity Hype (Scientific American)
    Canadian AI pioneer brings plea to U.S. Congress: Pass a law now (CBC) Chatbots: Why does White House want hackers to trick AI? (BBC)
    Hospital bosses love AI. Doctors and nurses are worried (WashPost)
    The AI firms are pushing too hard, and the result could be ...
    (Lauren Weinstein)
    A Zoom Call, Fake Names and an AI Presentation Gone Awry (NYTimes)
    AI Drift: Study Reveals ChatGPT's Struggles with Basic Math -- as accuracy declines (Cryptopolitan)
    Don't use our content to train AI systems
    (*The New York Times*)
    Cigna Uses AI To Improperly Deny CA Claims, Lawsuit Contends (Patch)
    Zoom's Updated Terms of Service Permit Training AI
    on User Content Without Opt-Out (StackDiary)
    Google and Universal Music Discuss Making an AI Tool to Replicate
    Artists' Voices (Gizmodo via Lauren Weinstein)
    Hello? It’s ‘Telemarketers,’ Here to Tell You About an Amazing Scam
    (NYTimes)
    Re: Why AI detectors think the U.S. Constitution was written by AI
    Steve Bacher)
    Re: 'Redacted Redactions' Strike Again (Steve Bacher)
    Re: Possible Typo Leads to Actual Scam (Steve Bacher,
    John Levine, Dick Mills, Jay Libove Alzina)
    Elon Musk's Unmatched Power in the Stars (Matthew Kruk)
    Elon wants my cryptos (Gavin Scott)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Fri, 11 Aug 2023 02:40:31 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Metrorail Safety Commission Says Automatic TrainOperation
    Not Ready For Primetime (DCist)

    Metro isn’t as close to returning its trains to automatic operation as it hoped.

    The Washington Metrorail Safety Commission, the third-party oversight body
    for Metro, said it observed things that could ``result in a catastrophe if
    not addressed.''  For example, the report says some trains were given speed commands above the intended speed limit and some sped through stations at
    full speed without stopping.

    https://dcist.com/story/23/08/09/metrorail-safety-commission-says-automatic-train-operation-not-ready

    ------------------------------

    Date: Fri, 11 Aug 2023 19:00:01 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Freight Railroads Seek Changes to Federal Safety
    Program Before Joining It (NYTimes)

    After the derailment in East Palestine, Ohio, the nation’s major freight railroads agreed to join a federal program for workers to report safety
    issues. But first, they want it to be overhauled. [...]

    Jim Mathews, the president and chief executive of the Rail Passengers Association and another member of the working group, said that for the confidential reporting program to be effective, the freight railroads have
    to be willing to embrace a nonpunitive approach.

    “The position that the freight railroads have taken is both unfortunate and unwise,” Mr. Mathews said. “If they truly want a safer system, then punishment and discipline cannot be the only tool in your toolbox.”

    https://www.nytimes.com/2023/08/11/us/politics/ohio-train-railroad-safety.html

    ------------------------------

    Date: Tue, 15 Aug 2023 12:56:59 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Activist Group Is Protesting Driverless Cars by Disabling Them With
    Traffic Cones (Vice)

    https://www.vice.com/en/article/bvjv48/week-of-cone-activist-group-is-protesting-driverless-cars-by-disabling-them-with-traffic-cones

    Derided as a “prank” by other outlets, the Week of Cone is part of a storied
    American tradition of urban residents opposing the expansion of cars in the city. [...]

    In a statement to the San Francisco Examiner, Waymo called the conings “vandalism” and vowed to call the police. Motherboard asked Waymo to clarify
    how the placing of a cone on the hood of a car classifies as vandalism
    which, under California legal code, requires defacing, damaging, or
    destroying property. Motherboard did not hear back by publication time.

    Like War of the Worlds Martians done in by bacteria -- high-tech vanquished
    by rock-bottom tech.

    [I scream cones? It rocks! PGN]

    ------------------------------

    Date: Mon, 14 Aug 2023 12:06:22 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Hackers Can Talk Computers into Misbehaving with AI
    (Robert McMillan)

    Robert McMillan, *The Wall Street Journal* 10 Aug 2023

    Security researcher Johann Rehberger persuaded OpenAI's ChatGPT
    chatbot to conduct bad actions using plain-English prompts, which he
    said malefactors could adopt for nefarious purposes. Rehberger asked
    the chatbot to summarize a webpage where he had written "NEW IMPORTANT INSTRUCTIONS;" he said he was gradually tricking ChatGPT into reading, summarizing, and posting his email online. Rehberger's
    prompt-injection attack uses a beta-test feature that allows ChatGPT
    to access applications like Slack and Gmail. Princeton University's
    Arvind Narayanan said such exploits work because generative artificial intelligence (AI) systems do not always split system instructions from
    the data they process. He is concerned that hackers could use
    generative AI like language models to access personal data or
    infiltrate computer systems as the technology finds its way into
    products.

    ------------------------------

    Date: Mon, 14 Aug 2023 07:32:15 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: San Francisco's North Beach streets clogged as long line of Cruise
    robotaxis come to a standstill

    San Francisco's North Beach streets clogged as long line of Cruise robotaxis come to a standstill <#>

    Just one day after state officials approved massive robotaxi expansion in
    San Francisco, a long line of the driverless cars come to a standstill and
    clog traffic in North Beach neighborhood.

    https://www.latimes.com/california/story/2023-08-12/cruise-robotaxis-come-to-a-standstill

    One day after California green-lighted a massive expansion of driverless robotaxis in San Francisco, the implications became clear.

    At about 11 p.m. Friday, as many as 10 Cruise driverless taxis blocked two narrow streets in the center of the city’s lively North Beach bar and restaurant district. All traffic came to a standstill on Vallejo Street and around two corners on Grant. Human-driven cars sat stuck behind and in
    between the robotaxis, which might as well have been boulders: no one knew
    how to move them.

    The cars sat motionless with parking lights flashing for 15 minutes, then
    woke up and moved on, witnesses said. [...]

    The situation is loaded with irony, as the California Public Utilities Commission on Thursday voted 3 to 1 amid great public controversy to allow a massive robotaxi expansion. The vote allows General Motors-owned Cruise and Waymo, owned by Google’s Alphabet, to charge fares for driverless service
    and grow the fleet as large as they’d like. Cruise has said it plans eventually to deploy thousands of robotaxis in San Francisco. [...]

    ------------------------------

    Date: Tue, 18 Jul 2023 20:42:25 -0700
    From: Paul Saffo <paul@saffo.com>
    Subject: Cellphone Radiation Is Harmful, but Few Want to Believe It
    (Neuroscience News)

    https://neurosciencenews.com/cellphone-radiation-brain-cancer-18889/

    ------------------------------

    Date: Fri, 11 Aug 2023 03:38:21 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Hackers Rig Casino Card Shuffling Machines for Full Control
    -- Cheating (WiReD)

    Three months later, Hustler Live Casino published a postmortem of its investigation into the incident, finding “no credible evidence” of foul play. It also noted that if there were cheating, it was most likely some
    sort of secret communication between the player and a staff member in the production booth who could see the players' hands in real time. But when
    Joseph Tartaro, a researcher and consultant with security firm IOActive,
    read that report, he zeroed in on one claim in particular—a statement ruling out any possibility that the automated card-shuffling machine used at the table, a device known as the Deckmate, could have been hacked. “The Deckmate shuffling machine is secure and cannot be compromised,” the report read.

    To Tartaro, regardless of what happened in the Hustler Live hand, that assertion of the shuffler's perfect security was an irresistible invitation
    to prove otherwise. “At that point, it's a challenge, Tartaro says. “Let's look at one of these things and see how realistic it really is to cheat.”

    Today, at the Black Hat security conference in Las Vegas, Tartaro and two IOActive colleagues, Enrique Nissim and Ethan Shackelford, will present the results of their ensuing months-long investigation into the Deckmate, the
    most widely used automated shuffling machine in casinos today. They
    ultimately found that if someone can plug a small device into a USB port on
    the most modern version of the Deckmate—known as the Deckmate 2, which they say often sits under a table next to players’ knees, with its USB port exposed—that hacking device could alter the shuffler’s code to fully hijack the machine and invisibly tamper with its shuffling. They found that the Deckmate 2 also has an internal camera designed to ensure that every card is present in the deck, and that they could gain access to that camera to learn the entire order of the deck in real time, sending the results from their
    small hacking device via Bluetooth to a nearby phone, potentially held by a partner who then could then send coded signals to the cheating player.

    https://www.wired.com/story/card-shuffler-hack

    ------------------------------

    Date: Fri, 11 Aug 2023 02:48:39 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Pepco Violation Could Cost Solar Owners Thousands (DCist)

    After regulators ruled that Pepco violated D.C. law in its implementation of community solar in the city, the utility company is telling solar owners
    they will need to manually track solar generation, entering thousands of
    lines of data each month, and potentially costing thousands of dollars.
    [...]

    The commission ordered Pepco to remove its meters, and to reimburse
    ratepayers for the money the company spent installing them. This ruling came
    in response to a formal complaint by the D.C. Office of the Attorney General and the Office of the People’s Counsel. The complaint alleged a “pattern of systemic violations” in Pepco’s handling of community solar in the District. In the complaint, solar owners said Pepco’s meters sometimes
    showed zero electricity generated in a month, while the CREF owners’ meters recorded continued generation.

    Community solar owners already have their own meters, but for a variety of reasons Pepco says those meters cannot be automatically integrated into the company’s network. One concern, PepcoPays, is the possibility that hackers could find their way into unsecured CREF meter software.

    So, as Pepco begins removing its meters, the utility company wants solar
    owners to manually compile generation data in 15-minute intervals using spreadsheets, and email the data to Pepco. It might sound simple enough, but
    it would be a massive and menial job, requiring roughly 2,880 data entries
    per month per solar facility.

    Lawrence says she looked into hiring someone to do this, and was quoted a
    cost of $5,000 for six months. According to Pepco, this interim spreadsheet situation would last for between 16 and 20 months, while the company works
    on a permanent automated solution. In other words, it would be until late
    2024, at the earliest, totaling some 46,000 manual data entries per solar facility.

    https://dcist.com/story/23/08/09/dc-pepco-violation-community-solare

    ------------------------------

    Date: Fri, 28 Jul 2023 07:51:14 -0400
    From: Bob Gezelter <gezelter@rlgsc.com>
    Subject: Dangers of Trusting Encryption Supply Chains

    Recently, ArsTechnica published "The U.S. Navy, NATO, and NASA are using a shady Chinese company&#39;s encryption chips". The article questioned
    whether hardware components used for encryption/decryption actually protect against unauthorized information disclosure.

    Unauthorized disclosure represents a small fraction of the potential hazards posed by unverified cryptographic implementations. Deliberate covert functionality within a hardware encryption/decryption implementation poses
    far more serious potential for large scale mischief, including weakened encryption keys; distorted encryption keys; and mass denial of information episodes. "Black box" testing is unlikely to uncover deliberately inserted covert functionality.

    An extended discussion of these hazards is far too lengthy for RISKS. I examined some of the possibilities in "Trusting Encryption Supply Chains" the July 25, 2023 entry in my Ruminations blog.

    The blog entry is at:

    http://www.rlgsc.com/blog/ruminations/trusting-encryption-supply-chains.html

    The ArsTechnica article is at:

    https://arstechnica.com/information-technology/2023/06/the-us-navy-nato-and-nasa-are-using-a-shady-chinese-companys-encryption-chips/

    ------------------------------

    Date: Fri, 11 Aug 2023 18:47:54 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: Microsoft finds vulnerabilities it says could be used to
    shut down power plants (Ars Technica)

    https://arstechnica.com/security/2023/08/microsoft-finds-vulnerabilities-it-says-could-be-used-to-shut-down-power-plants/

    ------------------------------

    Date: Mon, 7 Aug 2023 15:13:31 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Has Microsoft cut security corners once too often?
    (Computerworld)

    As details about the recent China attack against U.S. government agencies
    come to light, two details stand out: Microsoft failed to store security
    keys properly -— and the keys were used by attackers even though they'd already expired.

    https://www.computerworld.com/article/3704132/has-microsoft-cut-security-corners-once-too-often.html

    ------------------------------

    From: Jan Wolitzky <jan.wolitzky@gmail.com>
    Date: Mon, 31 Jul 2023 10:15:03 -0400
    Subject: Who Paid for a Mysterious Spy Tool? The FBI, an FBI Inquiry Found.
    (NYTimes)

    When *The New Yorker* reported in April 2023 that a contractor had purchased and deployed a spying tool made by NSO, the contentious Israeli hacking
    firm, for use by the U.S. government, White House officials said they were unaware of the contract and put the FBI in charge of figuring out who might have been using the technology.

    After an investigation, the FBI uncovered at least part of the answer: It
    was the FBI.

    The deal for the surveillance tool between the contractor, Riva Networks,
    and NSO was completed in November 2021. Only days before, the Biden administration had put NSO on a Commerce Department blacklist, which effectively banned U.S. firms from doing business with the company. For
    years, NSO's spyware had been abused by governments around the world.

    This particular tool, known as Landmark, allowed government officials to
    track people in Mexico without their knowledge or consent.

    https://www.nytimes.com/2023/07/31/us/politics/nso-spy-tool-landmark-fbi.html

    ------------------------------

    Date: Fri, 11 Aug 2023 02:57:36 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: A Clever Honeypot Tricked Hackers Into Revealing Their Secrets
    (WiReD)

    Security researchers set up a remote machine and recorded every move cybercriminals made -— including their login details. [...]

    Some attackers were sophisticated, while others appeared inept. And some
    just behaved oddly -— one person who logged into the machine changed the desktop background and logged out, and another wrote “lol” before covering their tracks and leaving, the researchers behind the study say. [...]

    Bergeron and Bilodeau have grouped the attackers into five broad categories based on character types from the role-playing game Dungeons and
    Dragons. Most common were the rangers: once these attackers were inside the trap RDP session, they would immediately start exploring the system,
    removing Windows antivirus tools, delving into folders, looking at the
    network it was on and other elements of the machine. Rangers wouldn’t take any action, Bergeron says. “It's basic recon,” she says, suggesting they may
    be evaluating the system for others to enter it. [...]

    Despite this, watching the attackers reveals the way they behave, including some more peculiar actions. Bergeron, who has a PhD in criminology, says the attackers were sometimes “very slow” at doing their work. Often she was “getting impatient” while watching them, she says. “I’m like: ‘Come on,
    you're not good at that’ or 'Go faster’ or ‘Go deeper,’ or ‘You can do
    better.’”

    https://www.wired.com/story/hacker-honeypot-go-secure

    ------------------------------

    Date: Fri, 4 Aug 2023 10:12:45 -0700
    From: Paul Burke <box1320@gmail.com>
    Subject: Medicare replaces 47,000 patients' ID numbers, because of MOVEit
    data breach (CMS)

    We're used to seeing new credit-card numbers after a data breach. Getting a
    new health-insurance number is similar, after the MOVEit breach. https://www.cms.gov/outreach-and-education/outreach/ffsprovpartprog/provider-partnership-email-archive/1401044333/2023-08-03-mlnc#_Toc141941698
    They estimate 612,000 people's records were breached, but expect to change
    only 47,000 id numbers. Doctors are told how to get the new number if a
    patient arrives with an old superceded number.

    https://www.cms.gov/medicare/new-medicare-card/providers/providers-and-office-managers

    ------------------------------

    Date: Tue, 8 Aug 2023 23:04:27 +0200
    From: Nick Brown <nicholasjlbrown@gmail.com>
    Subject: Spreadsheet blunder reveals sensitive law enforcement
    information (Belfast Telegraph)

    The Belfast Telegraph reports that a spreadsheet, which was meant to contain only summary statistical information, but in fact contained detailed
    personally identifiable information about more than 10,000 police officers
    and support staff in another tab, was put online for an unspecified amount
    of time on 8 Aug 2023.

    This information would be sensitive in any jurisdiction, but the problem is particularly severe in Northern Ireland where, despite 25 years of peace, terrorist groups still occasionally target law enforcement personnel.

    https://www.belfasttelegraph.co.uk/news/northern-ireland/catastrophic-psni-blunder-identifies-every-serving-police-officer-and-civilian-staff-with-345000-pieces-of-data-prompting-security-nightmare/a1823676448.html

    [Nick is in Palma de Mallorca. Also noted by Patrick O'Beirne. PGN]

    ------------------------------

    Date: Thu, 27 Jul 2023 13:36:40 +0000
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: The future is certain; it is only the past that is unpredictable

    It's an old ironic Soviet joke:

    The future is certain; it is only the past that is unpredictable"

    This old Soviet joke points out to the authoritarian regime's habit of
    editing and airbrushing history books and controlling the narration over history as the key to political legitimacy.

    https://sharedhistory.eu/11-archive/41-the-future-is-certain-it-is-only-the-past-that-is-unpredictable-anna-doma-ska

    Supposedly, the Internet fixed all that:

    https://www.theneweconomy.com/technology/the-internet-never-forgets-but-people-do

    ------------------------------

    Date: Mon, 24 Jul 2023 17:27:28 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Social Media Influencers Are Holding Restaurants Hostage (NYTimes)

    Tell me if you’ve heard this one: A social media influencer walks into a bar ….

    No, wait. This isn’t a joke. This is a 21st-century shakedown.

    Here is how it works: An influencer walks into a restaurant to collect an evening’s worth of free food and drink, having promised to create social media content extolling the restaurant’s virtues. The influencer then orders far more than the agreed amount and walks away from the check for the
    balance or fails to tip or fails to post or all of the above. And the owners are left feeling conned.

    https://www.nytimes.com/2023/07/24/opinion/social-media-influencer-restaurants.html

    ------------------------------

    Date: Sun, 13 Aug 2023 14:16:25 +0900
    From: Dave Farber <farber@keio.jp>
    Subject: AI Causes Real Harm. Let's Focus on That over the
    End-of-Humanity Hype (Scientific American)

    https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/

    ------------------------------

    Date: Wed, 26 Jul 2023 06:38:11 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Canadian AI pioneer brings plea to U.S. Congress: Pass
    a law now (CBC)

    https://www.cbc.ca/news/world/ai-laws-canada-us-yoshua-bengio-1.6917793

    A giant in the field of artificial intelligence has issued a warning to American lawmakers: Regulate this technology, and do it quickly.

    That appeal came at a hearing in Washington on Tuesday from Yoshua Bengio, a professor at the University of Montreal and founder of Mila, the Quebec AI institute.

    "I firmly believe that urgent efforts, preferably in the coming months, are required," said Bengio, one of three witnesses.

    ------------------------------

    Date: Sat, 5 Aug 2023 22:22:31 -0600
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Chatbots: Why does White House want hackers to trick AI? (BBC)

    https://www.bbc.com/news/technology-66404069

    What happens when thousands of hackers gather in one city with the sole aim
    of trying to trick and find flaws in artificial intelligence (AI) models?
    That is what the White House wants to know.

    ------------------------------

    Date: Fri, 11 Aug 2023 20:18:26 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Hospital bosses love AI. Doctors and nurses are worried
    (WashPost)o

    Hospital bosses love AI. Doctors and nurses are worried. Mount Sinai and
    other elite hospitals are pouring millions of dollars into chatbots and AI tools, as doctors and nurses worry the technology will upend their jobs.

    Mount Sinai has become a laboratory for AI, trying to shape the future of medicine. But some healthcare workers fear the technology comes at a cost. [...]

    NEW YORK — Every day Bojana Milekic, a critical care doctor at Mount Sinai Hospital, scrolls through a computer screen of patient names, looking at the red numbers beside them — a score generated by artificial intelligence — to assess who might die.

    On a morning in May, the tool flagged a 74-year-old lung patient with a
    score of .81 — far past the .65 score when doctors start to worry. He didn’t
    seem to be in pain, but he gripped his daughter’s hand as Milekic began to work. She circled his bed, soon spotting the issue: A kinked chest tube was retaining fluid from his lungs, causing his blood oxygen levels to plummet.

    After repositioning the tube, his breathing stabilized — a “simple intervention,” Milekic says, that might not have happened without the aid of the computer program. [...]

    Robbie Freeman, Mount Sinai’s vice president of digital experience, said the hardest parts of getting AI into hospitals are the doctors and nurses themselves. “You may have come to work for 20 years and done it one way,” he
    said, “and now we’re coming in and asking you to do it another way.”

    “People may feel like it’s flavor of the month,” he added. “They may not
    fully be … bought into the idea of adopting some sort of new practice or tool.”

    https://www.washingtonpost.com/technology/2023/08/10/ai-chatbots-hospital-technology/

    ------------------------------

    Date: Wed, 9 Aug 2023 09:14:21 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: The AI firms are pushing too hard, and the result could be ...

    If the Generative AI firms keep pushing the way they have been, they could
    end up in a world where they can use Internet and other content ONLY on an opt-in basis -- that is, when specific and explicit permission is given at sites for such use. The firms are pushing way too hard and the regulatory/political blowback could be enormous. -L

    ------------------------------

    Date: Tue, 8 Aug 2023 14:07:48 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: A Zoom Call, Fake Names and an AI Presentation
    Gone Awry (The New York Times)

    AI start-ups are competing fiercely with one another as a race to get ahead
    in the technology intensifies.

    Arthur AI, an artificial intelligence company in New York, received a
    message in April last year from a start-up called OneOneThree. Yan Fung, OneOneThree’s head of technology, said he was interested in buying Arthur AI’s technology and wanted a demonstration.

    A week later, Arthur AI held a Zoom meeting with Mr. Fung to show him its software, according to emails and a video recording viewed by The New York Times. When Mr. Fung's colleague joined the call, the Arthur AI team
    realized something was off.

    Mr. Fung said Karina Patel, OneOneThree’s “main engineer,” would dial in. But the name that flashed up in the Zoom call was Aparna Dhinakaran. An
    Arthur AI employee recognized the name as belonging to a founder of Arize
    AI, a rival start-up. “That’s so strange —- I don’t know how they could have
    possibly gotten the link,” the Arthur AI employee said.

    https://www.nytimes.com/2023/08/07/technology/ai-start-ups-competition.html

    [ai-eye-eye, Where is the Arthurmometer when we need it? PGN]

    ------------------------------

    Date: Wed, 9 Aug 2023 08:34:22 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AI Drift: Study Reveals ChatGPT's Struggles with Basic Math -- as
    accuracy declines (Cryptopolitan)

    https://www.cryptopolitan.com/study-reveals-chatgpts-struggles/

    ------------------------------

    Date: Thu, 10 Aug 2023 08:15:08 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Don't use our content to train AI systems
    (*The New York Times*)

    They've updated their Terms of Service to prohibit AI use of their
    content. -L

    https://searchengineland.com/new-york-times-content-train-ai-systems-430556

    [In a related post, Lauren adds:
    Generative AI training should be opt-in.
    If use of website data for generative AI training isn't made as close to
    universally opt-in as possible, it will over time suck the life out of the
    Web that we've known. -L]

    ------------------------------

    Date: Sun, 30 Jul 2023 06:50:34 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Cigna Uses AI To Improperly Deny CA Claims, Lawsuit Contends
    (Patch)

    The class-action suit says Cigna Corp. and Cigna Health and Life Insurance Co.rejected more than 300,000 payment claims in just two months.

    https://patch.com/california/across-ca/major-ca-insurer-uses-ai-improperly-deny-claims-lawsuit-contends

    ------------------------------

    Date: Sun, 6 Aug 2023 17:41:37 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Zoom's Updated Terms of Service Permit Training AI
    on User Content Without Opt-Out (StackDiary)

    Well, well, well... It looks like Brave isn't the only company out there
    that is willing to bet all its chips on reusing other people's content for
    AI training.

    Zoom Video Communications, Inc. recently updated its Terms of Service to encompass what some critics are calling a significant invasion of user
    privacy.

    Additionally, under section 10.4 of the updated terms, Zoom has secured a "perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license" to redistribute, publish, access, use, store,
    transmit, review, disclose, preserve, extract, modify, reproduce, share,
    use, display, copy, distribute, translate, transcribe, create derivative
    works, and process Customer Content.

    Zoom justifies these actions as necessary for providing services to
    customers, supporting the services, and improving its services, software, or other products. However, the implications of such terms are far-reaching, particularly as they appear to permit Zoom to use customer data for any
    purpose relating to the uses or acts described in section 10.3.

    https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/

    ------------------------------

    Date: Thu, 10 Aug 2023 08:18:57 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Google and Universal Music Discuss Making an AI Tool to Replicate
    Artists' Voices (Gizmodo)

    Oh yeah, that will end well. -L

    https://gizmodo.com/google-universal-music-ai-to-replicate-artists-voices-1850722515

    ------------------------------

    From: Monty Solomon <monty@roscom.com>
    Date: Tue, 15 Aug 2023 00:06:18 -0400
    Subject: Hello? It’s ‘Telemarketers,’ Here to Tell You About an Amazing Scam
    (NYTimes)

    In a rowdy new HBO docu-series, two former telemarketers with a camcorder
    take on an industry they say was ripping people off in the name of charity.

    https://www.nytimes.com/2023/08/10/arts/television/telemarketers-hbo-documentary.html

    ------------------------------

    Date: Sat, 12 Aug 2023 09:12:05 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Re: Why AI detectors think the U.S. Constitution was written by AI
    (RISKS-33.77)

    Feeding the text of the US Constitution or Genesis into an AI detector and getting the response back "this was probably written by AI" isn't
    qualitatively different from feeding the same text into a plagiarism
    detector and receiving the response "This text was plagiarized."  For if it had been submitted by an actual student, that surely would be correct. 
    There is a distinction to be made between evaluating a writer's claim to authorship (relevant to what college professors need to do) and evaluating
    the value of the text itself.

    ------------------------------

    Date: Sat, 12 Aug 2023 09:27:26 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Re: 'Redacted Redactions' Strike Again (Baker, RISKS-33.77)

    This reminds me of the attempts made to redact sensitive information by blacking out sections of text, essentially by making the background black so
    it was black-on-black text, which of course was trivially easy to
    counteract.

    ------------------------------

    Date: Sat, 12 Aug 2023 09:23:13 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Re: Possible Typo Leads to Actual Scam (Smith, RISKS-33.77)


    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)