    RISKS-LIST: Risks-Forum Digest Thursday 25 Jul 2024 Volume 34 : Issue 37

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
    Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.37>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    When it comes to math, AI is dumb (Steve Lohr)
    Microsoft's Global Sprawl Under Fire After Historic Outage
    (WashPost)
    Why no public outrage over CrowdStrike/Microsoft and AT&T failures?
    (John Rushby, Andy Poggio)
    Worldwide BSOD outage (via Rebecca Mercuri)
    Crowdstrike references (Cliff Kilby)
    Secure Boot is Completely Compromised (ArsTechnica via Wendy Grossman)
    Hackers could create traffic jams thanks to flaw in
    traffic-light controller, researcher says (TechCrunch)
    Encultured: an AI doomer’s video game startup pivots to
    medicine. It’ll be fine. (Pivot to AI)
    New findings shed light on risks and benefits of integrating AI
    into medical decision-making (medicalxpress.com)
    Steven Wilson Struggles To Hear That It's Not Him Singing
    AI-Created Songs (Blabbermouth)
    Limitless AI (Gabe Goldberg)
    AI captions (Jim Geissman)
    Switzerland now requires all government software to be
    open source (ZDNET)
    Bipartisan legislation that would require all users to use government IDs to
    access major websites advances in Senate (NBC News)
    LLM AI Bios (Rob Slade)
    Re: U.S. Gender Care Is Ignoring Science (Martin Ward)
    Re: In Ukraine War, AI Begins Ushering In an Age of Killer Robots
    (Amos Shapir)
    Re: Fwd: Ozone Hole Mk. II (Cliff Kilby)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Tue, 23 Jul 2024 10:23:22 PDT
    From: Peter G Neumann <neumann@csl.sri.com>
    Subject: When it comes to math, AI is dumb (Steve Lohr)

    Steve Lohr, *The New York Times* Business Section front
    page, 23 Jul 2024

    Early computers followed rules. AI follows probabilities. But in
    mathematics, there is no probable answer, only the right one.

    ``This technology does brilliant things, but it doesn't do everything.''
    Kristian Hammond

    ------------------------------

    Date: Wed, 24 Jul 2024 10:44:35 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Microsoft's Global Sprawl Under Fire After Historic Outage
    (WashPost)

    Cristiano Lima-Strong, Cat Zakrzewski, and Jeff Stein,
    *The Washington Post*, 20 Jul 2024

    The July 19 computer outage resulting from a defective CrowdStrike update to Windows systems worldwide shines a spotlight on the global economy's
    dependence on Microsoft. Although Microsoft said only an estimated 8.5
    million devices were impacted, accounting for less than 1% of computers
    running the Windows operating system, U.S. Federal Trade Commission Chair
    Lina Khan said it underscores "how concentration can create fragile
    systems."

    ------------------------------

    Date: Tue, 23 Jul 2024 12:30:06 -0700
    From: John Rushby <rushby@csl.sri.com>
    Subject: Why no public outrage over CrowdStrike/Microsoft and AT&T
    failures?

    There's been plenty of anger at the consequences of these failures,
    but I'm surprised to see no public outrage over their causes.

    The CrowdStrike bug was apparently a null dereference in C++ code
    operating in kernel mode.

    First of all, this is an inexcusable failure of quality assurance
    and raises all manner of questions about CrowdStrike's competence.
    Why C++, why no static analysis and other checks?
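
    A minimal C++ sketch of that bug class may be useful here -- purely
    illustrative, with invented names; nothing below is CrowdStrike's actual
    code. The point is that the unguarded version is exactly what routine
    static analysis flags, and in kernel mode the fault crashes the whole
    machine rather than one process:

        // Hypothetical sketch of the bug class, not CrowdStrike's code.
        #include <cstddef>

        struct ContentEntry {
            const char* pattern;   // detection pattern from an update file
        };

        // Unchecked: if a malformed update leaves table[i] null, this
        // dereference faults -- in kernel mode, that is a BSOD.
        const char* lookupPattern(ContentEntry** table, std::size_t i) {
            return table[i]->pattern;
        }

        // The guarded version costs one comparison; any static analyzer
        // asking "can this pointer be null?" flags the version above.
        const char* lookupPatternSafe(ContentEntry** table, std::size_t i) {
            return table[i] ? table[i]->pattern : nullptr;
        }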

    Then there's the whole rationale for the existence of this stuff. Why do
    organizations need a third-party virus detector? Because Microsoft's OS
    is a pile of vulnerabilities. And why does it need to operate in kernel
    mode? And yet they tout Windows 365 as a security "solution". And we
    use it.

    Not to mention that their laggard design combined with ruthless
    business practices retarded human progress by a decade
    (compared with what might have been from Engelbart/PARC/Apple).

    It's obvious that any adversary will have cyberattacks ready to go that will inflict greater and more lasting outages during any period of threat or conflict.

    Then there's the AT&T debacle. The public suffers ever more intrusive
    and ponderous "security" procedures: cryptographically strong and
    frequently changed passwords, dual-factor authentication, you name it.
    These impose economic as well as personal costs. Yet in the entire
    history of computing I doubt there's been any widespread loss due to penetration of individual accounts: why bother when it's easier to get
    the whole lot from the corporate source?

    It seems to me that these failures of corporate competence and
    diligence are of the same order (and have the same source) as Boeing's
    safety failures, yet the public and government and legislature are not
    showing comparable outrage and investigative zeal.

    Why not? Is there something we should be doing?

    ------------------------------

    Date: Tue, 23 Jul 2024 20:37:30 -0700
    From: Andy Poggio <poggio@csl.sri.com>
    Subject: Why no public outrage over CrowdStrike/Microsoft and AT&T
    failures?

    I am very sympathetic to John Rushby’s concerns, and concur with his analysis.

    With respect to Internet security, the guilty parties are the Internet
    creators, of which SRI was one of three -- the other two were BBN in
    Cambridge, MA, and ISI in Santa Monica, CA. Vint Cerf at DARPA was a
    leader and driving force. We just didn’t consider security -- it was
    challenging enough to get things to work at all.

    To give you an example, in the 70s and 80s there were a few Packet Radio
    networks around (the precursor to cellular networks). One was in the
    San Francisco Bay Area, with its central router located at SRI. There
    were a number of radio nodes scattered around local mountain tops as
    well as in a van (a model of which is in SRI’s Menlo Park lobby) and
    other locations. Anybody could send a certain type of packet to a radio
    node. When the radio node received a packet and determined that it was
    this certain type, it executed a JUMP to the first word of the packet
    payload and ran whatever code the packet happened to contain. These
    packets were called “flying subroutines” (sketched in code after the
    list below). Why would anyone implement such a dangerous capability?

    1. The mountaintop nodes had no connection to anything besides the radio
    connection. And, the nodes were time-consuming to reach physically. We
    needed to have the nodes make changes that we couldn’t predict in
    advance, and the nodes were very resource limited, especially RAM. The
    flying subroutines solved the problem in an effective, if insecure, way.

    2. There were no bad guys. Essentially no one outside of the research
    community had ever heard of the Internet, let alone had any access to it.
    So security wasn’t yet an issue. To my knowledge, no malicious use of
    flying subroutines ever occurred.
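
    A minimal C++ sketch of the mechanism described above -- my
    reconstruction with invented names and types, not the actual Packet
    Radio node code:

        // Hypothetical sketch of a "flying subroutine" handler: the
        // payload of one special packet type is treated as machine code
        // and jumped to directly. (Modern systems forbid executing data
        // pages; the era's nodes had no such protection, and no
        // authentication of the sender.)
        #include <cstdint>

        enum PacketType : std::uint8_t { STATUS, DATA, FLYING_SUB };

        struct Packet {
            PacketType   type;
            std::uint8_t payload[256];   // code or data, sender's choice
        };

        void handlePacket(Packet& p) {
            if (p.type == FLYING_SUB) {
                // JUMP to the first word of the payload: the node runs
                // whatever the sender shipped, on pure trust.
                auto code = reinterpret_cast<void (*)()>(p.payload);
                code();
                return;
            }
            // ... STATUS and DATA handled normally ...
        }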

    So, the takeaway is: let’s not do this again. And my question is: are we
    doing it again with AI and its current promising technologies, LLMs and
    friends? What security issues are not being addressed?

    [Some of you will remember the four-hour ARPAnet collapse on 27 Oct 1980,
    when multiple bit-corrupted once-a-minute status messages accidentally
    propagated, overflowing the buffers when the six-bit indices could not be
    deleted: A > B > C > A with unchecked wrap-around, and the first-ever
    failure of the deletion algorithm for the previous status messages.
    There's an article in my ACM SIGSOFT Software Engineering Notes, vol. 6,
    no. 1, by Eric Rosen, Vulnerabilities of network control protocols,
    January 1981, pp. 6-8. It's online, along with all other SEN issues
    (thanks to Will Tracz). BBN learned that the status messages circulating
    on the net needed error-detection or even error-correction. PGN]
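
    A minimal C++ sketch of that anomaly -- an illustrative reconstruction
    with invented sequence numbers, not BBN's code or the actual 1980
    values. Under a circular "newer than" test over six-bit indices, three
    corrupted copies can each be newer than the next, so no copy is ever
    the oldest and none gets deleted:

        // Illustrative reconstruction only.
        #include <iostream>

        // "a is newer than b": a lies in the half of the 64-value circle
        // just ahead of b (wrap-around comparison, no sanity check).
        bool newer(unsigned a, unsigned b) {
            unsigned d = (a - b) & 0x3F;   // difference mod 64
            return d != 0 && d < 32;
        }

        int main() {
            unsigned A = 8, B = 44, C = 24;  // hypothetical corrupted copies
            std::cout << "A>B " << newer(A, B)
                      << "  B>C " << newer(B, C)
                      << "  C>A " << newer(C, A) << '\n';
            // Prints "A>B 1  B>C 1  C>A 1": a cycle with no oldest member,
            // so the deletion algorithm removes none of the three, and the
            // copies multiply until the buffers overflow.
        }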

    ------------------------------

    Date: Tue, 23 Jul 2024 16:31:21 -0400
    From: DrM <notable@mindspring.com>
    Subject: Worldwide BSOD outage

    Pete Buttigieg announced today that the Department of Transportation has
    opened an investigation into Delta over flight disruptions. (Search --
    Pete Buttigieg Delta -- for a whole bunch of other links to recent news
    coverage regarding that carrier.)

    Here's his recent posting on X:

    .@USDOT has opened an investigation into Delta Air Lines to ensure the
    airline is following the law and taking care of its passengers during
    continued widespread disruptions. All airline passengers have the right

    [PGN noted Delta's Delays Signal Slow Recovery from Tech Outage,
    Christine Chung and Yan Zhuang, *The New York Times* Business Section
    front page, 23 Jul 2024. Delta was scalded by Buttigieg for
    *unacceptable customer service*.]

    ------------------------------

    Date: Wed, 24 Jul 2024 08:43:27 -0400
    From: Cliff Kilby <cliffjkilby@gmail.com>
    Subject: CrowdStrike references

    If your org had to do a manual recovery from CrowdStrike, you should
    probably rotate your BitLocker recovery keys, as this appears to have
    happened.

    https://old.reddit.com/r/sysadmin/comments/1ea5x7t/who_converted_all_of_their_bitlocker_keys_to_qr/

    https://www.itsupportguides.com/knowledge-base/windows-10/windows-10-how-to-reset-bitlocker-recovery-key/

    You should probably review your company's secure token/authentication
    policies as well. I'm seeing a lot of admins needing to be written up,
    or ...

    An Excel sheet with every BitLocker recovery key? Shared? Google Sheets to generate the QR?

    I can sympathize with the problem of needing to manually key in a long
    password on lots of end-user devices or kiosk devices, but creating
    massive security vulnerabilities to expedite management... wait, no,
    that's Windows.

    Nevermind, as you were.

    ------------------------------

    Date: Thu, 25 Jul 2024 19:09:48 +0100
    From: "Wendy M. Grossman" <wendyg@pelicancrossing.net>
    Subject: Secure Boot is Completely Compromised (ArsTechnica)

    https://arstechnica.com/security/2024/07/secure-boot-is-completely-compromised-on-200-models-from-5-big-device-makers/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

    The worst is the device you think you can trust.

    ------------------------------

    Date: Thu, 18 Jul 2024 22:53:34 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: Hackers could create traffic jams thanks to flaw in
    traffic-light controller, researcher says (TechCrunch)

    https://techcrunch.com/2024/07/18/hackers-could-create-traffic-jams-thanks-to-flaw-in-traffic-light-controller-researcher-says/

    ------------------------------

    Date: Sun, 21 Jul 2024 18:24:15 -0400
    From: "Gabe Goldberg" <gabe@gabegold.com>
    Subject: Encultured: an AI doomer’s video game startup pivots to
    medicine. It’ll be fine. (Pivot to AI)

    Encultured AI was started in 2022 by regulars of AI doomsday site
    LessWrong to work out new safety benchmarks for existential risk —
    whether the AI would turn us all into paperclips. [LessWrong, 2022]

    They needed some way to get humans to engage with the testing. So they
    thought: Let’s write a video game! [LessWrong, 2022]

    The comments rapidly filled with worries about a game AI somehow
    transmuting into a human-level AI, realizing it was being tested in a
    video game, and deciding to lie so it could escape the box and go full
    Skynet. Because these are normal worries about a video game. [LessWrong,
    2022]

    We’re a bit safer now from the prospect of end-of-level bosses escaping
    into reality — because Encultured has pivoted to launching, not one, but
    two medical startups: HealthcareAgents and BayesMed! [HealthcareAgents; BayesMed]

    BayesMed will use AI for “Summarizing patient records; Linking records
    to medical best-practices; Organizing statistical reasoning about
    diagnoses and treatments.” Now, you might think this sounds rather more
    like an incredibly tempting way for a cheap organization to abuse an LLM.

    HealthcareAgents will “advocate for patients to get them early access to AI-enhanced healthcare services.” The problem patients actually have is
    being fobbed off onto a bot rather than a human, but anyway.

    Jaan Tallinn of Encultured has a track record of achievements, having co-founded Skype and Kazaa. But then he discovered LessWrong. Nowadays,
    his companies are writing paragraphs like this: [blog post]

    Our vision for 2027 and beyond remains similar, namely, the development
    of artificial general healthcare: Technological processes capable of
    repairing damage to a diverse range of complex systems, including human
    cells, organs, individuals, and perhaps even groups of people. Why so
    general? The multi-agent dynamical systems theory needed to heal
    internal conflicts such as auto-immune disorders may not be so different
    from those needed to heal external conflicts as well, including
    breakdowns in social and political systems. We don’t expect to be able
    to control such large-scale systems, but we think *heal* is the best
    word to describe our desired relationship with them: As a contributing
    member of a well-functioning whole.

    Remember that this is a pivot from writing video games. That’s one
    pretty versatile game AI.

    https://pivot-to-ai.com/2024/07/21/encultured-an-ai-doomers-video-game-startup-pivots-to-medicine-itll-be-fine/

    ------------------------------

    Date: Tue, 23 Jul 2024 10:11:34 +0000
    From: Richard Marlon Stein <rmstein@protonmail.com>
    Subject: New findings shed light on risks and benefits of integrating AI
    into medical decision-making (medicalxpress.com)

    https://medicalxpress.com/news/2024-07-benefits-ai-medical-decision.html

    "Integration of AI into health care holds great promise as a tool to help medical professionals diagnose patients faster, allowing them to start treatment sooner," said NLM Acting Director, Stephen Sherry, Ph.D. "However,
    as this study shows, AI is not advanced enough yet to replace human
    experience, which is crucial for accurate diagnosis."

    IBM Watson déjà vu all over again? Human experience is like institutional
    or corporate memory: if you'd worked 'here' long enough, you would know
    what to do or where to go without asking a colleague for help.

    ------------------------------

    Date: Wed, 24 Jul 2024 10:40:25 -0700
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Steven Wilson Struggles To Hear That It's Not Him Singing
    AI-Created Songs (Blabbermouth)

    Acclaimed British musician, singer-songwriter and record producer Steven
    Wilson has expressed his concern for the rise of artificial intelligence in
    the music industry. His comments come after several songs used AI technology
    to "clone" his vocals and create new tracks.  (Other artists comment as
    well.)

    https://blabbermouth.net/news/steven-wilson-struggles-to-hear-that-its-not-him-singing-a-i-created-songs-this-is-uncanny-almost-surreal

    ------------------------------

    Date: Tue, 23 Jul 2024 19:52:56 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Limitless AI

    Go beyond your mind’s limitations
    Personalized AI powered by what you’ve seen, said, and heard.

    https://www.limitless.ai/

    All seeing/hearing/knowing/telling -- what could go wrong?

    ------------------------------

    Date: Thu, 25 Jul 2024 08:47:43 -0700
    From: Jim Geissman <jgeissman@socal.rr.com>
    Subject: AI captions

    If you turn on closed captioning for BBC News on cable, you will see
    that the text is first put out on the screen and then corrected. It
    looks like there is AI involved, because often the initial text is
    nothing like the sounds being spoken, and the corrections totally change
    it. It seems to me the initial text is a chatbot-type guess at what will
    be said next, not an interpretation of the sounds, at least not
    initially.

    ------------------------------

    Date: Wed, 24 Jul 2024 13:59:42 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Switzerland now requires all government software to be
    open source (ZDNET)

    *The United States remains reluctant to work with open source, but European countries are bolder.*

    Several European countries are betting on open-source software. In the
    United States, eh, not so much. In the latest news from across the Atlantic, Switzerland has taken a major step forward with its "Federal Law on the Use
    of Electronic Means for the Fulfillment of Government Tasks" (EMBAG). <https://datenrecht.ch/en/bundesgesetz-ueber-den-einsatz-elektronischer-mittel-zur-erfuellung-von-behoerdenaufgaben-embag-in-schlussabstimmung-angenommen/>

    This groundbreaking legislation mandates using open-source software (OSS) in the public sector.

    This new law requires all public bodies to disclose the source code of
    software developed by or for them unless third-party rights or security concerns prevent it. This "public money, public code" approach aims to
    enhance government operations' transparency, security, and efficiency.

    Also: German state ditches Microsoft for Linux and LibreOffice <https://www.zdnet.com/article/german-state-ditches-microsoft-for-linux-and-libreoffice/>

    Making this move wasn't easy. It began in 2011 when the Swiss Federal
    Supreme Court published its court application, Open Justitia, under an OSS license <https://www.openjustitia.ch/DE/interne_Open_Justitia.html>. The proprietary legal software company Weblaw <https://www.weblaw.ch/> wasn't
    happy about this. There were heated political and legal fights for more
    than a decade. Finally, the EMBAG was passed in 2023. Now, the law not only allows the release of OSS by the Swiss government or its contractors, but
    also requires the code to be released under an open-source license "unless
    the rights of third parties or security-related reasons would exclude or restrict this." [...]

    https://www.zdnet.com/article/switzerland-now-requires-all-government-software-to-be-open-source/

    ------------------------------

    Date: Thu, 25 Jul 2024 14:03:27 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Bipartisan legislation that would require all users to use
    government IDs to access major websites advances in Senate (NBC News)

    Under the guise of "protect the children" (it would only actually hurt
    them in the long run), both parties continue to push legislation that
    would require all users of social media (and eventually, most sites)
    to link government IDs to their accounts, enabling new levels of
    privacy invasions, cybercrime when data leaks, massive government
    tracking, and worse. -L

    https://www.nbcnews.com/politics/congress/senate-push-forward-children-online-safety-bills-week-rcna163335

    ------------------------------

    Date: Thu, 25 Jul 2024 08:31:44 -0700
    From: Rob Slade <rslade@gmail.com>
    Subject: LLM AI Bios

    Did you know that large language models are described in the Bible? And not just in the New Testament. Isaiah 28:10 literally translates as, "For it is precept upon precept, precept upon precept; line upon line, line upon line;
    a little here, a little there." But, in the original Hebrew, it reads
    more like, "Tzav la-tzav, tzav la-tzav, kav la-kav, kav la-kav, z‘eir
    sham, z‘eir sham." In other words, what it is really saying is, "He
    keeps telling us blah blah blah blah blah."

    Both literally and idiomatically this is a *really* great description
    of large language models.

    The bloom seems to be coming off the AI rose. Yes, corporations are still investing millions, and even billions, of dollars in artificial
    intelligence, mostly in companies that are pursuing the "Large Language
    Model" chimera. Initially, I thought that the large language models were a cute trick, and might possibly have some uses. As time went on, we found
    out about "hallucinations," disinformation, "jailbreaking," and a whole
    bunch of other problems with large language models. What we *didn't* find
    was any activity or process where the large language models were really
    useful.

    Businesses said we can use these large language models to automate mundane tasks. But, really, how mundane does the task have to get before we
    entrust it to a large language model? And if the task gets that mundane,
    is it really a task that we need to do? I am reminded of Peter Drucker's famous quote that there is nothing so useless as doing, efficiently, that
    which should not be done at all.

    So, for quite a while, my take on the large language models has been that
    they are a solution in search of a problem.

    Given the money being thrown at them, that search seems to have become more desperate.

    I have been doing presentations, to various groups, on artificial
    intelligence and the different types of approaches to the field that
    preceded the large language models and seem considerably different in
    approach, as well as the various risks both of not pursuing artificial
    intelligence and of pursuing it too avidly. Recently I was given a date
    for a presentation to a group that I knew would want a bio.

    I hate writing bios.

    I hate *listening* to bios, for that matter. They always seem to be full
    of stock phrases and puff pieces, and rather short on actual facts or
    reasons that I should listen to this particular person, who is doing this presentation not out of any particular interest in or insight into the
    topic, but as a means of reaching for the next rung on the ladder of fame
    and success.

    I hate doing them on myself. So, I thought it would be amusing to have
    ChatGPT write a bio of me. ChatGPT is, after all, the tool that pretty
    much everybody thinks about when they think about the large language
    models. In order to see how much the technology has improved over the past
    few months, I decided to also submit the same request to Claude, the tool
    from Anthropic. (Claude is supposed to have better "guard rails" against
    jailbreaking than ChatGPT does.) And, today, Meta announced Llama 3.1, so
    I included Meta AI.

    Well, it was somewhat amusing.

    But it has also become part of my presentation. The bios that all three systems produced point out, in large measure, the problems associated with
    the large language models of artificial intelligence.

    Rob Slade was born, variously, in 1948, 1952, and "the 1950's." That last
    is accurate, but rather imprecise. I had not known that there was so much controversy over the date of my birth (although I *was* very young, at the time), especially since it is not exactly a secret. So, some of the
    material is purely in error. I have absolutely no idea where they got 1952
    and 1948 from. I also wonder why all three systems decided that it is important to give the year of my birth, but none mentions where I was born,
    or lived most of my life, or, indeed, where I live now. (Claude *did*
    manage to figure out that I am Canadian.) Again, there is no particular
    secret about this.

    I gave them a three hundred word limit, and, somewhat to my surprise, given
    the weird and wonderful errors that LLMs seem to be capable of making, all three did come in "under budget," at 246, 268, and 279 words. All three systems wasted an awful lot of their word count on what could primarily be called promotional or sales material. I had noted that this is a tendency
    in the large language models. This isn't terribly surprising, given that
    most of the material that they would have been able to abstract from the Internet, would have been primarily sales, marketing, or other promotional material. I don't know whether this speaks to the tendency, on the part of
    the large language models, to hallucinate.

    It is nice to know that I am renowned, with a career spanning several
    decades, have made significant contributions to the field of cybersecurity, authoring numerous books and papers, with a solid foundation for my
    expertise, I'm influential and my publications have served as essential resources for both novices and seasoned professionals, I give engaging presentations, and my ability to demystify complex security concepts make
    me a sought-after speaker and educator, with a career marked by significant achievements and a commitment to advancing the field of information
    security, my work has been instrumental in shaping the understanding of
    digital threats and has left an indelible mark on the information security landscape. My legacy serves as a testament to the importance of
    dedication, expertise, and innovation in the ever-evolving landscape of information security. You will note that none of these claims are really verifiable, and so they are also basically unchallengeable. On the other
    hand, my contributions have been recognized with several awards. (Well, I *did* get a mug from IBM, at an event ...)

    I am also known as "The Father of Viruses." Oh, gee, thanks, Meta.

    ChatGPT found three of my books, Claude two, and Meta one. Nobody found
    all five. There are other Robert Slades on the Internet. Over thirty
    years ago we had the "Robert Slade Internet Club", with a membership of
    about a dozen. There is a Robert Slade who publishes on Greek pottery and inscriptions, another who publishes on fishing lures, another who teaches mathematics, and another who is a DJ for events and parties. In order to
    give AI the best chance, I specified that I wanted a biography of the
    Robert Slade who was an information security expert. To their credit, none
    of the models came up with specific references to the publications of these other Robert Slades.

    ------------------------------

    Date: Mon, 22 Jul 2024 10:27:35 +0100
    From: Martin Ward <mwardgkc@gmail.com>
    Subject: Re: U.S. Gender Care Is Ignoring Science
    (Pamela Paul, RISKS-34.36)

    [Thanks to Martin for debunking this item. I ran it because I generally
    trust *The New York Times* fact checkers -- although sometimes only a few
    days later. PGN]

    Imagine a comprehensive review of research on a treatment for
    children found ``remarkably weak evidence'' that it was effective.

    The so-called "comprehensive review" is the UK Cass Report, which has been
    widely criticised for ignoring 98% of the published science because those
    studies did not use double-blind testing. But in a medical environment
    where a treatment is already known to be effective, double-blind testing is unethical and evil.

    Even so, the following quotes are instructive: "After 4 years of research
    the Cass Report found that only 10 youths out of 3,488 had stopped
    transition-related treatments." "The report also recommends to spend
    millions and create regional clinics to better serve trans populations."
    "In a new interview, Cass contradicts many parts of her own report. She
    said that hormone blockers are prescribed too late and hormone treatment
    should be available based on individual needs. Either she has not read
    her report or has already changed her mind."

    This page contains links to 51 published studies that found that gender transition improves the well-being of transgender people. There are only 4 studies that contain mixed or null findings on the effect of gender
    transition on transgender well-being.

    https://whatweknow.inequality.cornell.edu/topics/lgbt-equality/what-does-the-scholarly-research-say-about-the-well-being-of-transgender-people/

    We conducted a systematic literature review of all peer-reviewed articles published in English between 1991 and June 2017 that assess the effect of gender transition on transgender well-being. We identified 55 studies that consist of primary research on this topic, of which 51 (93%) found that
    gender transition improves the overall well-being of transgender people,
    while 4 (7%) report mixed or null findings. We found no studies concluding
    that gender transition causes overall harm. As an added resource, we
    separately include 17 additional studies that consist of literature reviews
    and practitioner guidelines.

    ------------------------------

    Date: Mon, 22 Jul 2024 17:13:32 +0300
    From: Amos Shapir <amos083@gmail.com>
    Subject: Re: In Ukraine War, AI Begins Ushering In an Age of
    Killer Robots (The New York Times, RISKS-34.36)

    Meanwhile, in Lebanon: https://www.timesofisrael.com/top-hezbollah-field-commander-killed-in-idf-drone-strike-in-south-lebanon/

    Makes one wonder...

    ------------------------------

    Date: Sat, 13 Jul 2024 14:56:15 -0400
    From: Cliff Kilby <cliffjkilby@gmail.com>
    Subject: Re: Fwd: Ozone Hole Mk. II (Ward)

    https://phys.org/news/2024-06-satellite-megaconstellations-jeopardize-recovery-ozone.html

    Dr. Ward: As to the amount, and its relation to the background amounts, I
    have to admit my ignorance.

    My point was that humanity has already once started a mass consumer
    epidemic that severely impacted the ozone layer, from which we still have
    not recovered. This article points to a source of long-tail reactions
    which will tend to produce the same effects that the Montreal Protocol
    was drafted to fix. This is even more of a concern with each launch
    burning vast quantities of carbon. SpaceX's Merlin is RP-1 fueled
    (https://en.m.wikipedia.org/wiki/SpaceX_Merlin). Other rockets use other
    fuels, like the hydrogen-fueled Vulcain on the Ariane 5
    (https://en.m.wikipedia.org/wiki/Ariane_5).
    But all indications seem to be that bulk hydrogen is not carbon neutral,
    as the feedstock is still mostly methane and the carbon isn't being
    sequestered during reforming. This uncaptured carbon makes "grey"
    hydrogen, with the same net results as burning the feedstock directly.

    As of 2023, less than 1% of dedicated hydrogen production is low-carbon,
    i.e. blue hydrogen, green hydrogen, and hydrogen produced from biomass ( https://www.iea.org/energy-system/low-emission-fuels/hydrogen).

    I find it hard to believe that the cost of installing a network of
    microwave backhauls or 4G+ cell towers is more expensive than having to
    renew an entire constellation every decade.

    This is especially difficult to believe considering cell coverage maps
    (https://www.cellularmaps.com/5g-coverage/). I know those are US-biased,
    but most of the US is not densely populated, yet we've managed to almost
    fill the map.

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.37
    ************************
