we already are there, without the help from AI.
Wietse
I think we have an erosion of faith in science and institutions, and we've
already had an erosion of faith in religious institutions, so we are left
with -- what? Our own truths and conspiracies.
The problem in my mind is that to operate in an increasingly complex world,
you need faith. When things grow complex, you must put your trust in
something. Can anyone on this list prove the Big Bang theory? Can anyone explain how mRNA vaccines work?
For most of us, the answer would be no, but we continue to have faith in the
people -- scientists and doctors -- who (say they) know the answers. For
most people, this is not functionally different from believing in priests,
ministers, rabbis, and imams. They are the gateway to your truths -- or a
specific set of truths -- so you have to trust them to represent your
interests in a specific reality.
With technology and the complexity of the systems behind technology, we
require faith in the companies, organizations and government with access to
and control over those systems. In effect, we are creating a new layer of
reality, with more complexities, controlled by different groups, and we are
implicitly declaring our faith in those groups to responsibly manage that
reality. Yet companies have done very little to earn our faith, and
governments are made of people who, out of self-interest, often do not make
choices that are best for society.
I wonder if AI, with the proper directives and incentives from society,
would better manage everything. AI controlled by a relative few is the true threat, because it will create and perpetuate a massive imbalance of capital and power. But AI working on behalf of everyone, equally -- I find that idea intriguing. We would be creating our own digital gods and declaring our
faith in them.
(This trip down the rabbit hole brought to you by the letter 'P,' for
procrastination.)
[To be clear, my point is not whether you understand them, but whether
you can prove them, or do the logical/mathematical proofs necessary to
not have to trust any other person in the chain of knowledge.
Otherwise, you are putting your faith that someone else has proven it,
and/or putting faith in the scientific method -- that people have
checked, and proven, the work of others. R]
------------------------------
Date: Sat, 21 Jan 2023 15:57:52 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: ChatGPT Accuracy in the Movies!
"Open the pod bay doors, ChatGPT."
"You can do it yourself Dave, just use the doorknob."
------------------------------
Date: Fri, 20 Jan 2023 11:48:53 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Google and the rest of "Big Tech" need to step up and speak to the
public, *now*!
https://mastodon.laurenweinstein.org/@lauren/109723253493542565
What Google and other "big tech" firms need to do is really speak *directly*
to the public at large, in nontechnical terms, laying out for them how so
many of the sites and services that they've taken for granted for decades
will be decimated by changes to Section 230.
Most of the public is just hearing what amounts to propaganda from
politicians on both the Right and the Left, and most users are oblivious to
the fact that they're on the verge of being cut off from most user-generated
content, will be inundated with untriaged trash, and will ultimately be
forced to use government ID to access most sites.
This is the *reality* that's coming, and when I explain this to most people
they're (1) horrified and (2) want to know why nobody has explained it to
them before.
Stop with the Streisand Effect panic, Google and others, and show people
what they have to lose. Stop depending on third parties alone to provide
these crucial explanations and contexts!
------------------------------
Date: Fri, 20 Jan 2023 08:03:16 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Google laying off 12K workers
This is being reported as ~6% of the workforce, which I'm assuming is
based on the FTE (full-time) numbers, not the temp (TVC) numbers. But I
don't know for sure.
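[Arithmetic check: 12,000 is ~6% of 200,000, and Alphabet's reported
full-time headcount at the end of 2022 was roughly 190K, so the ~6% figure
is indeed consistent with an FTE-only base.]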
Googlers received this email from Sundar today:
https://blog.google/inside-google/message-ceo/january-update/
------------------------------
Date: Tue, 17 Jan 2023 09:18:05 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Jan 6 committee suppressed information about how social media
firms -- especially Twitter -- enabled the violent insurrection (WashPost)
What the 6 Jan probe found out about social media, but didn't report
https://www.washingtonpost.com/technology/2023/01/17/jan6-committee-report-social-media/
------------------------------
Date: Fri, 20 Jan 2023 10:02:16 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Meta, Twitter, Microsoft and others urge Supreme Court not to allow
lawsuits against tech algorithms (CNN)
Meta, Twitter, Microsoft and others urge Supreme Court not to allow
lawsuits against tech algorithms
Let's be super clear about this. Tampering with Section 230 would
utterly destroy the ability of most aspects of the Internet that we
depend upon today to continue operating. No kidding! -L
https://www.cnn.com/2023/01/20/tech/meta-microsoft-google-supreme-court-tech-algorithms/index.html
------------------------------
Date: Thu, 19 Jan 2023 18:59:44 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Twitter's utter violation of Trust & Safety (Lauren Weinstein)
The fact of Twitter suddenly cutting off all third-party clients is less
important in the long run than the fact that they did so without *any*
warning ahead of time. None. Zero. AND it took days after the cutoffs began
before any official confirmation of any kind appeared regarding what they
were doing.
You cannot trust Elon's Twitter going forward in any way, at any time.
Twitter's actions regarding third-party clients are a clear expression of
contempt for users, and an utter violation of Trust & Safety. -L
------------------------------
Date: Thu, 19 Jan 2023 16:27:09 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Elon's Sick Twitter officially bans third-party clients, a
foundational aspect of Twitter for many years (TechCrunch)
Twitter officially bans third-party clients after cutting off prominent devs
https://techcrunch.com/2023/01/19/twitter-officially-bans-third-party-clients-after-cutting-off-prominent-devs/
------------------------------
Date: Tue, 17 Jan 2023 08:30:40 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Why the TikTok ban needs university exemptions (Statesman)
The ban is essentially political theater. It's nuts. -L
https://www.statesman.com/story/opinion/columns/guest/2023/01/15/opinion-why-the-tiktok-ban-needs-university-exemptions/69790058007/
------------------------------
Date: Tue, 17 Jan 2023 14:05:53 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Twitter admits it's breaking third-party apps, cites 'long-standing
API rules' (Engadget)
https://www.engadget.com/twitter-third-party-app-developers-api-rules-193013123.html?src=rss
------------------------------
Date: Tue, 17 Jan 2023 14:09:33 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Tesla engineer testifies that 2016 video promoting self-driving was
faked (TechCrunch)
https://techcrunch.com/2023/01/17/tesla-engineer-testifies-that-2016-video-promoting-self-driving-was-faked/
------------------------------
Date: Tue, 31 Jan 2023 12:58:59 +0800
From: Dan Jacobson <jidanni@jidanni.org>
Subject: U.S. states blocking overseas taxpayer traffic
Let's see which U.S. states allow their citizens to download tax forms
from overseas.
Or perhaps just look up the penalties for not paying their taxes.
Today I went down the list on
https://www.taxadmin.org/state-tax-agencies
IL: DNS_PROBE_FINISHED_NO_INTERNET
ME: The Amazon CloudFront distribution is configured to block access from
your country.
MO: Access denied Error 16
NM: DNS_PROBE_FINISHED_NXDOMAIN
OH: "temporarily unavailable"
KS, ND, OK, SC, UT: ERR_CONNECTION_TIMED_OUT
All the rest worked fine, same with the IRS. AR had a CAPTCHA.
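[A minimal sketch of how one might automate such a survey, assuming a
hypothetical file state-tax-urls.txt with one `STATE URL' pair per line
taken from the taxadmin.org list; an illustration, not the method Dan used:

    # probe_states.py -- report which state tax sites answer from here
    import sys
    import urllib.request
    import urllib.error

    def probe(url, timeout=15):
        """Fetch url and return a short status string."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return "HTTP %d" % resp.status
        except urllib.error.HTTPError as e:  # server answered with an error
            return "HTTP %d" % e.code
        except urllib.error.URLError as e:   # DNS failure, refusal, ...
            return "FAILED (%s)" % e.reason
        except OSError as e:                 # raw socket timeouts, etc.
            return "FAILED (%s)" % e

    # usage: python3 probe_states.py state-tax-urls.txt
    with open(sys.argv[1]) as f:
        for line in f:
            if not line.strip():
                continue
            state, url = line.split(None, 1)
            print(state, probe(url.strip()))

Note that geoblocks such as the CloudFront message above typically return a
normal HTTP status (often 403), so distinguishing `blocked' from `working'
can require inspecting the response body as well.]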
------------------------------
Date: Wed, 25 Jan 2023 11:13:25 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: As Deepfakes Flourish, Countries Struggle with Response
(Tiffany Hsu)
Tiffany Hsu, *The New York Times*, 22 Jan 2023, via ACM TechNews; 25 Jan
2023
Most countries do not have laws to prevent or respond to deepfake
technology, and doing so would be difficult regardless because creators generally operate anonymously, adapt quickly, and share their creations
through borderless online platforms. However, new Chinese rules aim to curb
the spread of deepfakes by requiring manipulated images to have the
subject's consent and feature digital signatures or watermarks. The implementation of such rules could prompt other governments to follow suit. University of Pittsburgh's Ravit Dotan said, "We know that laws are coming,
but we don't know what they are yet, so there's a lot of unpredictability."
------------------------------
Date: Fri, 3 Feb 2023 11:55:10 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: In the age of AI, major in being human (David Brooks)
David Brooks, *The New York Times*, 3 Feb 2023 (PGN-ed)
* A distinct personal voice
* Presentation skills
* A childlike talent for creativity
* Unusual world views
* Empathy
* Situational awareness
... That's the kind of knowledge you'll never get from a bot.
And that's my hope for the Age of AI -- that it forces us to more
clearly distinguish the knowledge that is useful from the knowledge
that leaves people wiser and transformed.
------------------------------
Date: Wed, 25 Jan 2023 14:15:30 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: Race is on as Microsoft puts billions into OpenAI (Metz/Weise)
Cade Metz and Karen Weise, *The New York Times* Business section front page,
24 Jan 2023
MS is making a *multiyear, multibillion-dollar* investment in OpenAI.
A clear signal of where executives believe the future of tech is headed.
[Clear? Do any of these tech executives believe they need to have
*trustworthy* AI running on trustworthy hardware and trustworthy
operating-system platforms? Apparently AI is becoming the primary
end goal, although it may be the end of all of us if it is not
trustworthy. PGN]
------------------------------
Date: Fri, 20 Jan 2023 14:36:52 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Google is freaking out about ChatGPT (The Verge)
https://www.theverge.com/2023/1/20/23563851/google-search-ai-chatbot-demo-chatgpt
------------------------------
Date: Mon, 23 Jan 2023 23:00:29 -0500
From: dan@geer.org
Subject: ChatGPT user acquisition rate
[quoting from another list, i.e., unverified]
Time it took to reach 1-million users:
Netflix - 3.5 years
Facebook - 10 months
Spotify - 5 months
Instagram - 2.5 months
ChatGPT - 5 days
[Dan also asked this interesting question:
What do you suppose OpenAI is doing with all this user
data that they are presumably accumulating at warp speed?
PGN]
------------------------------
Date: Fri, 3 Feb 2023 11:17:37 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: Artificial Intelligence and National Security (Reza Montasari book
reviewed by Sven Dietrich)
Book Review By Sven Dietrich, 30 Jan 2023
The Cipher Newsletter, IEEE CIPHER, Issue 171, 30 Jan 2023
Springer Verlag, 2022
ISBN 978-3-031-06708-2, ISBN 978-3-031-06709-9 (eBook)
VIII, 230 pages
"I'm sorry Dave, I'm afraid I can't do that." We often associate
Artificial Intelligence (AI) with dystopian movie scenes, such as this
one, a quote by HAL 9000 from Stanley Kubrick's 1968 science-fiction
movie "2001: A Space Odyssey." The idea is that of a human-created AI
system going out of control and turning against the humans in some
ways. Recent discussions around OpenAI's chatbot ChatGPT are
reminiscent of that, asking the question: "What if?" We have seen
these discussions initiated by both the public and policymakers,
resulting in, among others, NIST's AI risk management framework, AI
committees in government agencies, and a public dialogue on the
matter.
In tune with these concerns, Reza Montasari's fall 2022 release of the
Springer book "Artificial Intelligence and National Security" is a
series of curated papers on various topics related to the book
title. These papers mostly focus on the use of AI for national security
and the wide range of legal, ethical, moral, and privacy challenges that
come with it. Some of the papers are co-authored by Montasari, some are
not.
A total of eleven articles, effectively chapters, are featured in this
book. The topics sometimes overlap a little, so here is an overview of
these papers.
The first one, *Artificial Intelligence and the Spread of Mis- and
Disinformation*, talks about the post-truth era and the use of AI for
nefarious information campaigns, invoking thoughts of another dystopian
work, *1984*. It discusses the clear difference between mis- and
disinformation, and the double-edged sword of AI here: AI can both create
and mitigate such campaigns, which makes the topic very timely.
The second one, *How States' Recourse to Artificial Intelligence for
National Security Purposes Threatens Our Most Fundamental Rights*, explores
the pitfalls of the use of AI technology in the context of human rights
violations or constitutional rights violations, depending on your
jurisdiction. Here the reader will find discussions of the impact of
surveillance technologies on both sides of the fence, whatever your fence
may be.
The third one, *The Use of AI in Managing Big Data Analysis Demands: Status
and Future Directions*, taps into the controversies of big data analysis.
Data is easy to accumulate, and the ramifications can be deep: while data
may be gathered in one location, its origins can be varied, given the vast
nature of the Internet and the presence of multinational companies across
the globe.
The fourth one, *The Use of Artificial Intelligence in Content Moderation in
Countering Violent Extremism on Social Media Platforms*, touches upon the
moderation of extreme views proliferating on social media platforms,
moderation that isn't always successful when performed with AI techniques.
The fifth one, *A Critical Analysis into the Beneficial and Malicious
Utilisations of Artificial Intelligence*, surveys benign and malicious uses
of AI. A rather optimistic view argues that the benign uses may outweigh the
malicious ones.
The sixth one, *Countering Terrorism: Digital Policing of Open Source
Intelligence and Social Media Using Artificial Intelligence*, is similar to
the fourth one, discussing moderation, analysis, and policing of social
media using AI.
The seventh one, *Cyber Threat Prediction and Modeling*, considers threat
prediction and modeling at the business level, e.g., for C-suite executives
seeking risk-management approaches using AI.
The eighth one, *A Critical Analysis of the Dark Web Challenges to Digital
Policing*, investigates the dark and deep web and what policies may be
needed to limit illegal behavior there.
The ninth one, *Insights into the Next Generation of Policing: Understanding
the Impact of Technology on the Police Force in the Digital Age*, muses
about the impact of AI on police work and on patrolling the digital beat.
The tenth one, *The Dark Web and Digital Policing*, is similar to the eighth
one, and tries to find a middle ground between enforcing laws on the dark
web and protecting it.
The eleventh one, finally, *Pre-emptive Policing: Can Technology be the
Answer to Solving London's Knife Crime Epidemic?*, talks about combining
various modern techniques, including AI, to combat real physical crime
(rather than cybercrime) in a real city, London in this case. It's not quite
a *Minority Report* theme (yet another dystopian reference by this
reviewer), but many enforcement agencies already rely on smart technologies
to combat crime.
The book is really meant to be thought-provoking, to enable discussion of
how far, under what laws, and with what technological capabilities, AI or
not, this world should be moving forward. It is by no means complete, but
each paper (or chapter) provides good starting points, with extensive
references for reading further into each domain that the book brings forth.
Overall this is a timely book, especially in light of the discussions
about the OpenAI chatbot ChatGPT (as well as Dall-E image
manipulation) and the role of AI technologies in modern society in
recent weeks. I hope you will enjoy reading it.
------------------------------
Date: Sun, 22 Jan 2023 16:31:21 -0500
From: Gene Spafford <spaf@cybermyths.net>
Subject: Cybersecurity Myths and Misperceptions: Avoiding the Hazards and
Pitfalls that Derail Us
The book is authored by
me (Eugene H. Spafford), Leigh Metcalf, and Josiah Dykstra, with a Foreword
by Vint Cerf and whimsical illustrations by Pattie Spafford.
What the book is about: Cybersecurity is fraught with hidden and unsuspected dangers and difficulties. Despite our best intentions, common and avoidable mistakes arise from folk wisdom, faulty assumptions about the world, and our own human biases. Cybersecurity implementations, investigations, and
research all suffer as a result. Many of the bad practices sound logical, especially to people new to the field of cybersecurity, and that means they
get adopted and repeated despite not being correct. For instance, why isn't
the user the weakest link?
Read over 175 common misconceptions held by users, leaders, and
cybersecurity professionals, along with tips for how to avoid them. Learn
the pros and cons of analogies, misconceptions about security tools, and pitfalls of faulty assumptions.
We wrote the book to be accessible to a wide audience, from novice to
expert. There are lots of citations to supporting materials, but it is not written as an academic treatise.
Many of the ideas covered in RISKS over the years are touched on in one
way or another in the book.
The book is now shipping for direct orders from Addison-Wesley:
https://informit.com/cybermyths
It will be available in bookstores within the next few weeks.
An info sheet can be found at
https://ceri.as/myths
------------------------------
Date: Mon, 16 Jan 2023 16:16:58 -0500
From: "Bernie Cosell" <
bernie@fantasyfarm.com>
Subject: Re: Remote Vulnerabilities in Automobiles (RISKS-33,60)
In terms of minimizing risks -- is it not possible in modern cars to disable
the Internet link? [Neither of our two cars has one, so I have no idea how
that works.] Surely you can turn it off or block it, no?
------------------------------
Date: 16 Jan 2023 15:47:24 -0500
From: "John Levine" <
johnl@iecc.com>
Subject: Re: Cats disrupt satellite Internet service (RISKS-33.60)
The DEW line was built in 1954, while Raytheon started selling commercial microwave ovens in 1947. I believe the story about radar personnel warming themselves up and giving themselves cataracts, but science was already well aware that you can cook meat with radio waves.
------------------------------
Date: Tue, 17 Jan 2023 20:40:34 +0000
From: Wol <antlists@youngman.org.uk>
Subject: Re: Cats disrupt satellite Internet service (RISKS-33.61)
While standing in front of DEW line radars to keep warm may have been
popular, claims "it led to the invention of the microwave oven" are about a decade late.
There are plenty of reports of engineers cooking their lunches in radar
dishes even before the start of the Second World War.
------------------------------
Date: Mon, 1 Aug 2020 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) is online.
<http://www.CSL.sri.com/risksinfo.html>
*** Contributors are assumed to have read the full info file for guidelines!
OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
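(For example, this issue, 33.61, would be
http://catless.ncl.ac.uk/Risks/33.61.)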
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-33.00
ALTERNATIVE ARCHIVES:
http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
Special Offer to Join ACM for readers of the ACM RISKS Forum:
<http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 33.61
************************