RISKS-LIST: Risks-Forum Digest Sunday 4 June 2023 Volume 33 : Issue 72
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/33.72>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents:
How A Dark Fleet Moves Russian Oil (The New York Times)
Metro Breach Linked To Computer In Russia, Report Finds (DCIST)
Kaspersky Says New Zero-Day Malware Hit iPhones, Including Its Own (WiReD)
$528 Billion Nuclear Cleanup Plan at Hanford Site in Jeopardy (NYTimes)
Secret industry documents reveal that makers of PFAS 'forever chemicals'
covered up their health dangers (phys.org)
Japanese Moon Lander Crashed Because of a Software Glitch (NYTimes)
Millions of Gigabyte Motherboards Were Sold With a Firmware Backdoor (WiReD)
Fake students stealing aid from colleges (Nanette Asimov)
Tesla leak reportedly shows thousands of Full Self-Driving safety complaints
(The Verge)
Tesla data leak reportedly details Autopilot complaints (LATimes)
Social Media and Youth Mental Health (U.S. Surgeon General)
Meta slapped with record $1.3 billion EU fine over data privacy (CNN)
Flaws Found in Using Source Reputation for Training Automatic Misinformation
Detection Algorithms (Carol Peters)
Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution
(Geoff Huston)
AI Poses 'Risk of Extinction,' Industry Leaders Warn (Kevin Roose)
What we *should* be worrying about with AI (Lauren Weinstein)
Artificial intelligence system predicts consequences of gene modifications
(medicalxpress.com)
How to fund and launch your AI startup (Meetup)
Rise of the Newsbots: AI-Generated News Websites Proliferating Online
(NewsGuard)
Some thoughts on the current AI Sturm und Drang (Gene Spafford)
Massachusetts hospitals, doctors, medical groups pilot ChatGPT technology
(The Boston Globe)
The benefits and perils of using artificial intelligence to trade stocks and
other financial instruments (TheConversation.com)
Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote
Their Papers (Rolling Stone)
Top French court backs AI-powered surveillance cameras for Paris Olympics
(Politico)
Meta's Big AI Giveaway (Metz/Isaac)
Meta hit with record fine by Irish regulator over U.S. data transfers (CBC)
AI scanner used in hundreds of US schools misses knives (BBC)
Milton resident's lawsuit against CVS raises questions about the use of AI
lie detectors in hiring (The Boston Globe)
EPIC on Generative AI (Prashanth Mundkur)
Reality check: What will generative AI really do for cybersecurity?
(Cyberscoop)
Moody's cites credit risk from state-backed cyber intrusions into
U.S. critical infrastructure (cybersecuritydive.com)
What Happens When Your Lawyer Uses ChatGPT (NYTimes)
Anger over airports' passport e-gates not working (BBC News)
Longer and longer trains are blocking emergency services and killing people
(WashPost)
Denials of health-insurance claims are rising, and getting weirder
(WashPost)
Small plane crashes after jet fighter chase in WashDC area (WashPost)
Response from American Airlines for delay (Steven J. Greenwald)
Microsoft Finds macOS Bug That Lets Hackers Bypass SIP Root Restrictions
(Sergiu Gatlan)
Apps for Older Adults Contain Security Vulnerabilities (Patrick Lejtenyi)
India official drains entire dam to retrieve phone (BBC)
Google's Privacy Sandbox (Lauren Weinstein)
WebKit Under Attack: Apple Issues Emergency Patches for 3 New Zero-Day
Vulnerabilities (Apple)
Q&A: Why is there so much hype about the quantum computer? (phys.org)
Report Estimates Trillions in Indirect Losses Would Follow Quantum Computer
Hack (nextgov.com)
Don't Store Your Money on Venmo, U.S. Govt Agency Warns (Gizmodo)
Re: An EFF Investigation: Mystery GPS Tracker (Steve Lamont)
Re: Three Companies Supplied Fake Comments to FCC (NY AG), but John Oliver
didn't (John Levine)
Re: Near collision embarrasses Navy, so they order public San Diego
(Michael Kohne)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Sat, 3 Jun 2023 13:15:46 PDT
From: Peter G Neumann <neumann@csl.sri.com>
Subject: How A Dark Fleet Moves Russian Oil (The New York Times)
This article is by Christiaan Triebert, Blacki Migliozzi, Alexander Cardia,
Muyi Xiao, and David Botti. It covers pages 6-7 in today's National
Edition, and has a front-page satellite image above the fold showing the
Cathay Phoenix tanker docked at the Russian oil terminal in Kozmino,
although its GPS showed it many miles southeast, near the coast of Japan.
Actually, the ship had left China for a scheduled stop in South Korea, and
then switched its GPS location to a fixed, spoofed (fake) position near
Niigata, Japan, while returning to Kozmino. According to the article, three
tankers tracked by *The New York Times* from Kozmino had made 13 trips
loading Russian oil and delivering it to China, each using GPS spoofing to
mask their whereabouts.
[Just another instance of spoofed GPS locations, which have been
discussed in earlier RISKS issues, such as these:
Russia Regularly Spoofs Regional GPS (RISKS-31.15)
Ghost ships, crop circles, and soft gold: A GPS mystery in Shanghai
(RISKS-31.48)
Mysterious GPS outages are wracking the shipping industry (RISKS-31.59)
High Seas Deception: How Shady Ships Use GPS to Evade International
Law (RISKS-33.43)
PGN]
------------------------------
Date: Wed, 17 May 2023 17:04:22 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Metro Breach Linked To Computer In Russia, Report Finds (DCIST)
A former WMATA contractor using a personal computer in Russia breached
Metro's computer system earlier this year, according to a report from
WMATA's Office of the Inspector General, revealing *grave concerns* about
the system's cyber-vulnerabilities.
The investigation by Metro OIG Rene Febles into the hacking revealed several
weaknesses in WMATA operations regarding data protection and cybersecurity,
and a failure by the agency to address its vulnerabilities.
``Evidence has surfaced that WMATA, at all levels, has failed to follow its
own data handling policies and procedures as well as other policies and
procedures establishing minimum levels of protection for handling and
transmitting various types of data collected by WMATA,'' according to the
OIG report, made public Wednesday.
https://dcist.com/story/23/05/17/metro-breach-linked-russian-computer
------------------------------
Date: Fri, 2 Jun 2023 18:19:28 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Kaspersky Says New Zero-Day Malware Hit iPhones, Including Its Own
(WiReD)
On the same day, Russia's FSB intelligence service launched wild claims of
NSA and Apple hacking thousands of Russians.
https://www.wired.com/story/kaspersky-apple-ios-zero-day-intrusion
------------------------------
Date: Thu, 1 Jun 2023 11:08:36 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: $528 Billion Nuclear Cleanup Plan at Hanford Site in Jeopardy
(The New York Times)
A $528 billion plan to clean up 54 million gallons of radioactive
bomb-making waste may never be achieved. Government negotiators are looking
for a compromise.
https://www.nytimes.com/2023/05/31/us/nuclear-waste-cleanup.html
[WOPR in *War Games* strikes again?
``The only winning move is not to play.''
A compromise here seems like a lose-lose strategy.
PGN]
------------------------------
Date: Fri, 02 Jun 2023 02:21:34 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: Secret industry documents reveal that makers of PFAS 'forever
chemicals' covered up their health dangers (phys.org)
https://phys.org/news/2023-05-secret-industry-documents-reveal-makers.html
... From the department of environment pollution risks.
Is another master settlement agreement, similar to that imposed on tobacco companies, for cancer-causing PFAS -- forever chemical pollution -- in the works?
------------------------------
From: Jan Wolitzky <jan.wolitzky@gmail.com>
Date: Sat, 27 May 2023 08:36:07 -0400
Subject: Japanese Moon Lander Crashed Because of a Software Glitch
(NYTimes)
A software glitch caused a Japanese robotic spacecraft to misjudge its
altitude as it attempted to land on the moon last month, leading to its
crash, an investigation has revealed.
Ispace of Japan said in a news conference on Friday that it had finished
its analysis of what went wrong during the landing attempt on April 25. The Hakuto-R Mission 1 lander completed its planned landing sequence, slowing
to a speed of about 2 miles per hour. But it was still about three miles
above the surface. After exhausting its fuel, the spacecraft plunged to its destruction, hitting the Atlas crater at more than 200 miles per hour.
<https://www.nytimes.com/2023/05/26/science/moon-crash-japan-ispace.html>
------------------------------
Date: Wed, 31 May 2023 13:50:00 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Millions of Gigabyte Motherboards Were Sold With a Firmware
Backdoor (WiReD)
Hidden code in hundreds of models of Gigabyte motherboards invisibly and insecurely downloads programs -- a feature ripe for abuse, researchers say.
Hiding malicious programs in a computer's UEFI firmware, the deep-seated
code that tells a PC how to load its operating system, has become an
insidious trick in the toolkit of stealthy hackers. But when a motherboard manufacturer installs its own hidden backdoor in the firmware of millions of computers -- and doesn't even put a proper lock on that hidden back entrance
-- they're practically doing hackers' work for them.
Researchers at firmware-focused cybersecurity company Eclypsium revealed
today that they've discovered a hidden mechanism in the firmware of motherboards sold by the Taiwanese manufacturer Gigabyte, whose components
are commonly used in gaming PCs and other high-performance computers.
Whenever a computer with the affected Gigabyte motherboard restarts,
Eclypsium found, code within the motherboard's firmware invisibly initiates
an updater program that runs on the computer and in turn downloads and
executes another piece of software.
While Eclypsium says the hidden code is meant to be an innocuous tool to
keep the motherboard's firmware updated, researchers found that it's implemented insecurely, potentially allowing the mechanism to be hijacked
and used to install malware instead of Gigabyte's intended program. And
because the updater program is triggered from the computer's firmware,
outside its operating system, it's tough for users to remove or even
discover.
https://www.wired.com/story/gigabyte-motherboard-firmware-backdoor/
------------------------------
Date: Sun, 4 Jun 2023 11:49:18 PDT
From: Peter G Neumann <neumann@csl.sri.com>
Subject: Fake students stealing aid from colleges (Nanette Asimov)
Nanette Asimov, *The San Francisco Chronicle* print edition,
4 Jun 2023 front page
[Based on an earlier online version:]
Thousands of `ghost students' are applying to California colleges to steal
financial aid. Here's how. SFChronicle, 2 Jun 2023:
Nobody knows how much money the fraudsters have managed to grab by
impersonating enrollees.
Months after a mysterious check for $1,400 landed in Richard Valicenti's mailbox last summer, the U.S. Department of Education notified him that the money was a mistake -- an overpayment of the $3,000 Pell grant he had used to attend Saddleback College in Orange County.
"I told them I never applied for a Pell," said Valicenti, a 64-year-old radiation oncologist at UC Davis who had never even heard of Saddleback.
[...]
[Just the tip of the iceberg, evidently. No surprise... PGN]
------------------------------
Date: Fri, 26 May 2023 20:01:37 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Tesla leak reportedly shows thousands of Full Self-Driving
safety complaints (The Verge)
The data contains reports about over 2,400 self-acceleration issues and more than 1,500 braking problems.
https://www.theverge.com/2023/5/25/23737972/tesla-whistleblower-leak-fsd-complaints-self-driving
------------------------------
Date: Sun, 28 May 2023 07:05:03 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Tesla data leak reportedly details Autopilot complaints (LATimes)
https://www.latimes.com/business/story/2023-05-26/tesla-autopilot-alleged-data-breach-leak
How bad is Tesla Autopilot's safety problem? According to thousands of
complaints allegedly from Tesla customers in the U.S. and around the world,
pretty bad.
<https://www.latimes.com/business/story/2022-12-08/tesla-lawsuit-full-self-driving-technology-failure-not-fraud>
------------------------------
Date: Wed, 24 May 2023 08:12:24 -0600
From: Jim Reisert AD1C <jjreisert@alum.mit.edu>
Subject: Social Media and Youth Mental Health (U.S. Surgeon General)
This Advisory describes the current evidence on the impacts of social media
on the mental health of children and adolescents. It states that we cannot conclude social media is sufficiently safe for children and adolescents and outlines immediate steps we can take to mitigate the risk of harm to
children and adolescents.
https://www.hhs.gov/surgeongeneral/priorities/youth-mental-health/social-media/
https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-summary.pdf
https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf
------------------------------
Date: Tue, 30 May 2023 16:21:16 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Meta slapped with record $1.3 billion EU fine over data privacy
(CNN)
Meta has been fined a record-breaking 1.2 billion euros ($1.3 billion) by European Union regulators for violating EU privacy laws by transferring the personal data of Facebook users to servers in the United States.
https://edition.cnn.com/2022/11/28/tech/meta-irish-fine-privacy-law/index.html
https://edition.cnn.com/2022/04/23/business/eu-tech-regulation/index.html
The European Data Protection Board announced the fine in a statement
Monday, saying it followed an inquiry into Facebook (FB) by the Irish Data
Protection Commission, the chief regulator overseeing Meta's operations in
Europe.
<https://edpb.europa.eu/news/news/2023/12-billion-euro-fine-facebook-result-edpb-binding-decision_en>
<https://money.cnn.com/quote/quote.html?symb=FB&source=story_quote_link>
The move highlights ongoing uncertainty about how global businesses may
legally transfer EU users' data to servers overseas. [...]
https://www.cnn.com/2023/05/22/tech/meta-facebook-data-privacy-eu-fine
[Matthew Kruk found
https://www.cbc.ca/news/business/meta-europe-fine-data-transfers-1.6851243
PGN]
------------------------------
Date: Wed, 17 May 2023 12:15:57 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Flaws Found in Using Source Reputation for Training Automatic
Misinformation Detection Algorithms (Carol Peters)
Carol Peters, Rutgers Today, 16 May 2023 via ACM TechNews
Rutgers University scientists found algorithms trained to detect `fake news' may have a flawed approach for assessing the credibility of online news stories. The researchers said most of these programs do not evaluate an article's credibility, but instead rely on a credibility score for the article's sources. They rated the credibility and political leaning of 1,000 news articles and incorporated the assessment into misinformation-detection algorithms, then evaluated the labeling methodology's impact on the
algorithms' performance. Article-level source labels matched just 51% of
the time, illustrating the source-reputation method's lack of reliability.
In response, the researchers created a new dataset of journalistic-quality,
individually labeled articles and a process for misinformation detection
and fairness audits.
------------------------------
Date: Tue, 30 May 2023 16:18:19 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Failed Expectations: A Deep Dive Into the Internet's 40 Years of
Evolution (Geoff Huston)
In a recent workshop I attended, reflecting on the evolution of the
Internet over the past 40 years, one of the takeaways for me is how we've
managed to surprise ourselves, both in the unanticipated successes we've
encountered and in the instances of failure when technology has stubbornly
resisted deployment despite our confident expectations to the contrary!
What have we learned from these lessons about our inability to predict
technology outcomes? Are the issues related to aspects of the technology?
Are they embedded in the considerations behind the expectations about how a
technology will be adopted? Or do the primary issues reside at a deeper
level, relating to economic and even political contexts? Let's look at this
question of failed expectations using several specific examples drawn
from the last 40 years of the Internet's evolution.
*The Public Debut of the Internet (and the demise of O.S.I.)*. [...]
https://circleid.com/posts/20230524-failed-expectations-a-deep-dive-into-the-internets-40-years-of-evolution
------------------------------
Date: Wed, 31 May 2023 14:56:28 PDT
From: Peter G Neumann <neumann@csl.sri.com>
Subject: AI Poses 'Risk of Extinction,' Industry Leaders Warn (Kevin Roose)
Kevin Roose, *The New York Times*, 30 May 2023, via ACM TechNews
Subtitle: Putting the Necessity of Controls on Par with Nuclear Weapons
Industry leaders warned in an open letter from the nonprofit Center for AI Safety that artificial intelligence (AI) technology might threaten
humanity's existence. Signatories included more than 350 executives, scientists, and engineers working on AI, with the CEOs of OpenAI, Google DeepMind, and Anthropic among them. ACM Turing Award recipients and AI
pioneers Geoffrey Hinton and Yoshua Bengio also signed the letter, which
comes amid growing concern about the potential hazards of AI partly fueled
by innovations in large language models. Such advancements have provoked
fears of AI facilitating mass job takeovers and the spread of
misinformation, while earlier this month OpenAI's Sam Altman said the risks were sufficiently dire to warrant government intervention and regulation.
https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html
[Jan Wolitzky noted a comment on this in The Onion:
``It's sad how desperately these nerds want to make their coding jobs sound
cool.''
https://www.theonion.com/industry-leaders-warn-that-ai-poses-risk-of-extinction-1850497166
PGN]
------------------------------
Date: Wed, 31 May 2023 19:11:27 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: What we *should* be worrying about with AI
We shouldn't be worrying about AI wiping out humanity. That's a smokescreen. That's sci-fi. We need to worry about the *individuals* now and in the near future who can be hurt by the premature deployment of generative AI systems that spew wrong answers and lies, and then when asked for confirmation, lie about their own lies! And just popping up warnings to users is useless,
because you know and I know that hardly anyone will read those warnings or
pay any attention to them whatsoever.
[Remember the boy who cried wolf too often -- when there was one. PGN]
------------------------------
Date: Thu, 01 Jun 2023 04:00:24 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: Artificial intelligence system predicts consequences of gene
modifications (medicalxpress.com)
https://medicalxpress.com/news/2023-05-artificial-intelligence-consequences-gene-modifications.html
"The new model, dubbed Geneformer, learns from massive amounts of data on
gene interactions from a broad range of human tissues and transfers this knowledge to make predictions about how things might go wrong in disease."
Would commercial or academic life science organizations apply this
capability to reduce human trial expenses for certain genetically engineered medicines or treatments, like CAR T-cells used to treat leukemia? Would prescription drug prices decline as a result?
------------------------------
Date: Fri, 2 Jun 2023 02:21:02 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: How to fund and launch your AI startup (Meetup)
You've got a great idea, but how do you build it into a successful company? Come learn how to assemble the right team, develop your pitch, and raise venture capital for your new company.
https://www.meetup.com/acm-chicago/events/293851188
What could go wrong?
------------------------------
Date: Fri, 2 Jun 2023 07:00:29 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Rise of the Newsbots: AI-Generated News Websites Proliferating
Online (NewsGuard)
NewsGuard has identified 49 news and information sites that appear to be
almost entirely written by artificial intelligence software. A new
generation of content farms is on the way.
https://www.newsguardtech.com/special-reports/newsbots-ai-generated-news-websites-proliferating/
------------------------------
Date: Fri, 19 May 2023 17:53:41 -0400
From: Gene Spafford <spaf@purdue.edu>
Subject: Some thoughts on the current AI Sturm und Drang
There is a massive miasma of hype and misinformation around topics related
to AI, ML, and chat programs and how they might be used -- or misused. I remember previous hype cycles around 5th-generation systems, robotics, and automatic language translation (as examples). The enthusiasm each time
resulted in some advancements that weren't as profound as predicted. That enthusiasm faded as limitations became apparent and new bright, shiny technologies appeared to be chased.
The current hype seems even more frantic for several reasons, not least of which is that there are many more potential market opportunities for the current developments. Perhaps the entities that see new AI systems as a way
to reduce expenses by cutting headcount and replacing people with AI are one
of the biggest drivers causing both enthusiasm and concern (see, for
example,
https://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02?op=1#teachers-5). That
was a driver of the robotics craze some years back, too. The current cycle
has already had an impact on some creative media, including being an issue
of contention in the media writers' strike in the US. It also is raising serious questions in academia, politics, and the military.
There's also the usual hype cycle FOMO (fear of missing out) and the urge to
be among the early adopters, as well as those speculating about the most
severe forms of misuse. That has led to all sorts of predictions of
outlandish capabilities and dire doom scenarios -- neither of which is
likely wholly accurate. AI, generally, is still a developing field and will produce some real benefits over time. The limitations of today's systems may
or may not be present in future systems. However, there are many caveats
about the systems we have now and those that may be available soon that
justify genuine concern.
First, LLMs such as ChatGPT, Bard, et al. are NOT really "intelligent." [...]
Second, these systems are not accountable in current practice and law. [...]
Third, the inability of much of the general public to understand the
limitations of current systems means that any use may introduce a bias
into how people make their own decisions and choices. [...]
[Long item PGN-ed for RISKS. Check in with Spaf if you want the entire
piece.]
------------------------------
Date: Wed, 31 May 2023 07:27:32 -0400
From: Jan Wolitzky <jan.wolitzky@gmail.com>
Subject: Massachusetts hospitals, doctors, medical groups pilot ChatGPT
technology (The Boston Globe)
Artificial intelligence is already in wide use in health care: medical
workers use it to record patient interactions and add notes to medical
records; some hospitals use it to read radiology images, or to predict how
long a patient may need to be in intensive care.
But some hospitals have begun to contemplate using a new phase of AI that
is much more advanced and could have a profound effect on their operations,
and possibly even clinical care.
Indeed, never one for modesty, ChatGPT, one form of the new AI technology
that can render answers to queries in astonishing depth (if dubious
accuracy), called its own role in the future of medicine a *groundbreaking development poised to reshape the medical landscape.*
https://www.bostonglobe.com/2023/05/30/metro/massachusetts-hospitals-doctors-medical-groups-pilot-chatgpt-technology/
------------------------------
Date: Sun, 21 May 2023 03:26:49 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: The benefits and perils of using artificial intelligence to trade
stocks and other financial instruments (TheConversation.com)
https://theconversation.com/chatgpt-powered-wall-street-the-benefits-and-perils-of-using-artificial-intelligence-to-trade-stocks-and-other-financial-instrument-201436
High Frequency Trading (HFT) platforms elevate financial market
volatility. Coupling HFT with ChatGPT will likely exponentiate volatility.
[Is e^sh*tload >= Spinal Tap's "11"?]
------------------------------
Date: Thu, 18 May 2023 05:29:01 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Professor Flunks All His Students After ChatGPT Falsely Claims It
Wrote Their Papers (Rolling Stone)
Texas A&M University-Commerce seniors who have already graduated were
denied their diplomas because of an instructor who incorrectly used AI
software to detect cheating.
https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
https://news.slashdot.org/story/23/05/17/2023212/professor-failed-more-than-half-his-class-after-chatgpt-falsely-claimed-it-wrote-their-final-papers
------------------------------
Date: Thu, 18 May 2023 07:42:33 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Top French court backs AI-powered surveillance cameras for Paris
Olympics (Politico)
https://www.politico.eu/article/french-top-court-backs-olympics-ai-powered-surveillance-cameras/
------------------------------
Date: Sat, 20 May 2023 15:36:15 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: Meta's Big AI Giveaway (Metz/Isaac)
Cade Metz and Mike Isaac, *The New York Times* business section, front
page continued inside, 20 May 2023
``Do you want every AI system to be under the control of a couple of
powerful American companies?'' Yann LeCun, Meta chief scientist.
As tech giant makes its latest innovation open-source, rivals view it
as a dangerous move.
------------------------------
Date: Mon, 22 May 2023 14:30:41 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Meta hit with record fine by Irish regulator over U.S. data
transfers (CBC)
https://www.cbc.ca/news/business/meta-europe-fine-data-transfers-1.6851243
Facebook parent company Meta was hit with a record 1.2 billion euro ($1.75 billion Cdn) fine by its lead European Union privacy regulator over its handling of user information and given five months to stop transferring
users' data to the United States.
The fine, imposed by Ireland's Data Protection Commissioner (DPC), came
after Meta continued to transfer data beyond a 2020 EU court ruling that invalidated an EU-U.S. data transfer pact. It tops the previous record EU privacy fine of 746 million euros ($1.09 billion Cdn) handed by Luxembourg
to Amazon.com Inc in 2021. [...]
------------------------------
Date: Tue, 23 May 2023 06:33:54 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: AI scanner used in hundreds of US schools misses knives (BBC)
https://www.bbc.com/news/technology-65342798
A security firm that sells AI weapons scanners to schools is facing fresh questions about its technology after a student was attacked with a knife
that the $3.7m system failed to detect.
On Halloween last year, student Ehni Ler Htoo was walking in the corridor
of his school in Utica, New York, when another student walked up behind him
and stabbed him with a knife.
Speaking exclusively to the BBC, the victim's lawyer said the 18-year-old suffered multiple stab wounds to his head, neck, face, shoulder, back and
hand.
The knife used in the attack was brought into Proctor High School despite a
multimillion-dollar weapons-detection system installed by a company called Evolv
detectors with AI weapons scanners.
------------------------------
Date: Tue, 23 May 2023 13:45:48 +0000 (UTC)
From: Steve Bacher <sebmb1@verizon.net>
Subject: Milton resident's lawsuit against CVS raises questions about the
use of AI lie detectors in hiring (The Boston Globe)
https://www.boston.com/news/the-boston-globe/2023/05/22/milton-residents-lawsuit-cvs-ai-lie-detectors
It's illegal for employers in Mass. to use a lie detector to screen job applicants, but what if they use AI to assess a candidate's honesty?
------------------------------
Date: Thu, 25 May 2023 02:39:50 +0000
From: Prashanth Mundkur <prashanth.mundkur@sri.com>
Subject: EPIC on Generative AI
In case you haven't seen this great report:
https://epic.org/new-epic-report-sheds-light-on-generative-a-i-harms/
------------------------------
Date: Wed, 24 May 2023 12:20:15 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: Reality check: What will generative AI really do for cybersecurity?
(Cyberscoop)
https://cyberscoop.com/generative-ai-chatbots-cybersecurity/ via
https://www.washingtonpost.com/politics/2023/05/24/food-agriculture-industry-gets-new-center-share-cybersecurity-information/.
"Cleaning data to get it usable for machine learning required time and
resources, and once the agency rolled out the models for analysts to use,
some were resistant and were concerned that they could be displaced. 'It
took a while until it was accepted that such models could triage and to give them a more effective role,' Neuberger said."
Data cleansing requires skilled eyes and hands, not something an LLM
possesses out-of-the-box. Inculcating these skills into the LLM is
equivalent to outsourcing and off-shoring.
If the data cleansers and infosec engineers were given certain copyright or patent royalties over the knowledge they transferred into the LLM, cybersecurity engineering organizational effectiveness would likely
experience less turnover.
------------------------------
Date: Thu, 01 Jun 2023 12:00:26 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: Moody's cites credit risk from state-backed cyber intrusions into
U.S. critical infrastructure (cybersecuritydive.com)
https://www.cybersecuritydive.com/news/moodys-credit-risk-cyber-critical-infrastructure/651656/
Corporate credit ratings are affected by cybersecurity risk assessments.
This expense should motivate rapid adoption of infrastructure-hardening
measures to elevate ratings based on infosec audits and preparedness,
thereby reducing cyber-insurance costs.
Where are the skilled hands to competently roll out these core capabilities
and to sustain vigilant operations? Cyber-infrastructure engineers and
watchdogs are hard to train, recruit, and retain. Theirs is not a role
typically out-sourced or off-shored, unlike the domestic U.S. computer
manufacturing operations migrated to cut costs.
It remains to be seen whether AI mitigates hockey-stick cybersecurity
expenditures while improving protection benchmarks established by CISA.
Business expenses attributed to cyber-insurance cannot be absorbed by
consumers indefinitely.
------------------------------
Date: Sat, 27 May 2023 13:45:02 -0400
From: Jan Wolitzky <jan.wolitzky@gmail.com>
Subject: What Happens When Your Lawyer Uses ChatGPT (NYTimes)
[continued in next message]