Risks Digest 34.06

    From RISKS List Owner@21:1/5 to All on Tue Feb 13 02:43:20 2024
    RISKS-LIST: Risks-Forum Digest Monday 12 February 2024 Volume 34 : Issue 06

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.06>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents: Backlogged; at least half included
    Most Distant Space Probe Jeopardized by Glitch (Stephen Clark)
    Chinese malware removed from SOHO routers after FBI issues covert
    commands (Ars Technica)
    Deep fakes (CNN)
    Have we lost faith in technology? (BBC)
    AIs sometimes consider nuclear war the best way to achieve peace
    (Lauren Weinstein)
    Police Turn to AI to Review Bodycam Footage (ProPublica)
    The real wolf menacing the news business? AI (Jim Albrecht)
    Google CEO suggests that "*hallucinating* AI misinformation is a *feature*"
    (WiReD)
    Diving deep into OpenAI's new study on LLMs and bioweapons
    (Gary Marcus via Gabe Goldberg)
    How AI is quietly changing everyday life (Politico)
    FCC votes to ban AI-generated misleading robocalls, which ...
    (Lauren Weinstein)
    Google changes Bard to Gemini -- and links it to Google Assistant
    -- but it's still a misleading idiot LLM AI (Lauren Weinstein)
    The Internet of Toothbrushes (Tom Van Vleck)
    No, 3 million electric toothbrushes were not used in a DDoS attack
    (Bleeping Computer via Steve Bacher)
    AI deepfakes get very real as 2024 election season begins
    (Fast Company)
    Hurd in reflection (Jon Callas)
    VR fail safe vs. driving (Lauren Weinstein)
    Manipulated Biden Video Can Remain Online (CNN)
    Re: AI maxim (Ian)
    Re: ChatGPT can answer yes or no at the same time (DJC)
    Re: Even after a recall, Tesla's Autopilot does dumb dangerous things
    (John Levine)
    A Whistleblower's tale about the Boeing 737 MAX 9 door plug
    (LeeHamNews via Thomas Koenig)
    Re: Why the 737 MAX 9 door plug blew out (Dick Mills)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Mon, 12 Feb 2024 11:06:32 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Most Distant Space Probe Jeopardized by Glitch
    (Stephen Clark)

    Stephen Clark, Ars Technica, 6 Feb 2024

    Researchers at NASA's Jet Propulsion Laboratory have not received telemetry data from the Voyager 1 space probe since a 14 Nov computer glitch in its Flight Data Subsystem (FDS). They believe the problem involves corrupted
    memory in the FDS, but without the telemetry data, they cannot identify the root cause. Said Voyager project manager Suzanne Dodd, "It would be the
    biggest miracle if we get it back. We certainly haven't given up."

    ------------------------------

    Date: Mon, 12 Feb 2024 11:06:32 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Attacks in the Metaverse Are Booming. Police Start to Pay Attention
    (Naomi Nix)

    Naomi Nix, *The Washington Post*. 4 Feb 2024

    Law enforcement is paying closer attention to reports of attacks,
    harassment, and sexual assault in virtual environments. The Zero Abuse
    Project received a grant from the U.S. Department of Justice to educate
    state and local police on crimes committed in VR. There are concerns about
    the psychological impact of harassment in VR, but legal precedent would need
    a significant overhaul for virtual crimes to be prosecuted.

    ------------------------------

    Date: Thu, 1 Feb 2024 15:19:06 +0000
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: Chinese malware removed from SOHO routers after FBI issues covert
    commands (Ars Technica)

    https://arstechnica.com/security/2024/01/chinese-malware-removed-from-soho-routers-after-fbi-issues-covert-commands/

    ------------------------------

    Date: Sun, 4 Feb 2024 22:20:30 +0000
    From: Victor Miller <victorsmiller@gmail.com>
    Subject: Deep fakes (CNN)

    https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk?cid=ios_app

    ------------------------------

    Date: Fri, 9 Feb 2024 22:02:32 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Have we lost faith in technology? (BBC)

    https://www.bbc.com/news/business-68057193

    Relationship status: it's complicated.

    When it comes to technology, never before have we been both more reliant,
    and more wary.

    Society is more connected, but also more lonely; more productive, but also
    more burnt-out; we have more privacy tools, but arguably less privacy.

    There's no doubt that some tech innovation has been universally great. The formula for a new antibiotic that killed a previously lethal hospital
    superbug was invented by an AI tool.


    Machines that can suck carbon dioxide out of the air could be a huge help
    in the fight against climate change. Video games and movies are more
    immersive and entertaining because of better screens and better effects.

    But on the other hand, tech-related scandals dominate headlines. Stories
    about data breaches, cyber attacks and horrific online abuse are regularly
    on the news.

    ------------------------------

    Date: Wed, 7 Feb 2024 10:01:59 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: AIs sometimes consider nuclear war the best way to achieve peace

    The comparisons to the 1970 film "Colossus: The Forbin Project" are
    painfully obvious. -L

    Escalation Risks from Language Models in Military and Diplomatic Decision-Making

    https://arxiv.org/pdf/2401.03408.pdf

    ------------------------------

    Date: Sun, 11 Feb 2024 22:05:20 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Police Turn to AI to Review Bodycam Footage
    (ProPublica)

    Body camera video equivalent to 25 million copies of “Barbie” is collected but rarely reviewed. Some cities are looking to new technology to examine
    this stockpile of footage to identify problematic officers and patterns of behavior.

    ...

    Christopher J. Schneider, a professor at Canada’s Brandon University who studies the impact of emerging technology on social perceptions of police,
    said the lack of disclosure makes him skeptical that AI tools will fix the problems in modern policing.

    Even if police departments buy the software and find problematic officers or patterns of behavior, those findings might be kept from the public just as
    many internal investigations are.

    “Because it’s confidential,” he said, “the public are not going to know which
    officers are bad or have been disciplined or not been disciplined.”

    https://www.propublica.org/article/police-body-cameras-video-ai-law-enforcement

    ------------------------------

    Date: Mon, 12 Feb 2024 08:22:55 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: The real wolf menacing the news business? AI.
    (Jim Albrecht)

    [By a laid-off senior ex-Googler]

    The author, Jim Albrecht, was senior director of news ecosystem products at Google until he was laid off last year as part of a purge of the team. -L

    https://www.washingtonpost.com/opinions/2024/02/06/ai-news-business-links-google-chatgpt/

    ------------------------------

    Date: Thu, 8 Feb 2024 11:51:07 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Google CEO suggests that "*hallucinating* AI misinformation is a
    *feature*" (WiReD)

    https://www.wired.com/story/google-prepares-for-a-future-where-search-isnt-king/

    [Tongue-twister? Google Gargoyle Gargles Goggles. Giggle? PGN]

    ------------------------------

    Date: Sun, 4 Feb 2024 15:10:06 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Diving deep into OpenAI's new study on LLMs
    and bioweapons

    When looked at carefully, OpenAI's new study on GPT-4 and bioweapons is
    deeply worrisome: what they didn't quite tell you, and why it might matter,
    a lot.

    https://garymarcus.substack.com/p/when-looked-at-carefully-openais

    ------------------------------

    Date: Sun, 4 Feb 2024 06:57:20 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: How AI is quietly changing everyday life (Politico)

    A growing share of businesses, schools, and medical professionals have
    quietly embraced generative AI, and there’s really no going back. It is
    being used to screen job candidates, tutor kids, buy a home and dole out medical advice.

    The Biden administration is trying to marshal federal agencies https://www.politico.com/news/2023/10/27/white-house-ai-executive-order-00124067
    to assess what kind of rules make sense for the technology. But lawmakers in Washington, state capitals and city halls have been slow to figure out how
    to protect people’s privacy and guard against echoing the human biases baked into much of the data AIs are trained on.

    “There are things that we can use AI for that will really benefit people,
    but there are lots of ways that AI can harm people and perpetuate
    inequalities and discrimination that we’ve seen for our entire history,” said Lisa Rice, president and CEO of the National Fair Housing Alliance.

    While key federal regulators have said decades-old anti-discrimination laws
    and other protections can be used to police some aspects of artificial intelligence, Congress has struggled to advance proposals <https://www.politico.com/news/2023/09/13/schumer-senate-ai-policy-00115794> for new licensing and liability systems for AI models and requirements
    focused on transparency and kids’ safety.

    “The average layperson out there doesn’t know what are the boundaries of this technology?” said Apostol Vassilev, a research team supervisor focusing on AI at the National Institute of Standards and Technology. “What are the possible avenues for failure and how these failures may actually affect your life?”

    Here’s how AI is already affecting [...]

    https://www.politico.com/news/2024/02/04/how-ai-is-quietly-changing-everyday-life-00138341

    ------------------------------

    Date: Thu, 8 Feb 2024 09:20:28 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: FCC votes to ban AI-generated misleading robocalls, which ...

    FCC has voted to ban AI-generated misleading robocalls. Which will
    have essentially no effect on actually reducing the number of such
    calls. -L

    -----------------------------

    Date: Thu, 8 Feb 2024 08:53:00 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: Google changes Bard to Gemini -- and links it to Google Assistant
    -- but it's still a misleading idiot LLM AI

    But if you want advanced "Ultra idiot" -- oops, I mean "Gemini Ultra",
    you can pay Google $20/mo for a subscription! Whoopee. Meanwhile
    Search continues to get flushed down the toilet. -L

    ------------------------------

    Date: Wed, 7 Feb 2024 08:13:52 -0500
    From: Tom Van Vleck <thvv@multicians.org>
    Subject: The Internet of Toothbrushes

    https://it.slashdot.org/story/24/02/06/2219207/3-million-malware-infected-smart-toothbrushes-used-in-swiss-ddos-attacks

    LATER: Aw, shucks, the Internet of Toothbrushes did not happen https://it.slashdot.org/story/24/02/08/2115202/the-viral-smart-toothbrush-botnet-story-is-not-real

    ------------------------------

    Date: Thu, 8 Feb 2024 10:35:59 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: No, 3 million electric toothbrushes were not used in a DDoS attack

    www.bleepingcomputer.com

    A widely reported story that 3 million electric toothbrushes were hacked
    with malware to conduct distributed denial of service (DDoS) attacks is
    likely a hypothetical scenario instead of an actual attack.

    [It will most likely happen, sooner or later. My new electric toothbrush
    has Bluetooth ability to connect to the Internet of Things, but it won't
    happen in my house. I practice what I preach. PGN]

    ------------------------------

    Date: Thu, 1 Feb 2024 9:52:48 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: AI deepfakes get very real as 2024 election season begins

    Recent incidents with fake Biden robocall and explicit Taylor Swift
    deepfakes could further ratchet up disinformation fears.

    https://www.fastcompany.com/91020077/ai-deepfakes-taylor-swift-joe-biden-2024-election

    [My Truthache needs Deep Cleaning of the (IMM)Oral Decay. PGN]

    ------------------------------

    Date: Thu, 1 Feb 2024 14:00:45 -0800
    From: Jon Callas <jon@callas.org>
    Subject: Hurd in reflection

    Re: Will Hurd, Should 4 People Be Able to Control the Equivalent of
    a Nuke?

    https://www.politico.com/news/magazine/2024/01/30/will-hurd-ai-regulation-00136941

    Part of the AI discussion going on now has in it the proposition that
    unchecked AI is far more dangerous than nuclear fission/fusion,
    genetic tinkering, chemical research, and so on. They are saying with
    an apparently straight face that AI represents an actual threat to all human
    life, if not all life on this planet, if not life in the universe
    itself. The folks in that camp are asserting as their starting
    position that the latent dangers of AI are *worse* than nuclear
    weapons.

    I find this maddening in part because there is also an argument they
    are making that I caricature as saying that AI is so dangerous that
    they should be given a legal monopoly to it. Only by giving OpenAI,
    Microsoft, Anthropic, and others total control of AI, can we avert
    extinction as a species. (Well, there's a mere 5-25% chance of
    extinction by their calculations; let me be fair to them for some
    suitable definition of fair.)

    It smells to me like an appeal to legislators for a business grab
    backed with some sort of legal and governmental apparatus that
    resembles the DOE more than the SEC or FTC. It's asking for a business
    moat enforced by draconian crackdowns on competition.

    It's really hard to construct a good argument against this. Many of the
    things I would say to shoot it down appear to actually strengthen the
    power grab. If you forgive the Dune allusions, they're giving us the
    dilemma of choosing between human extinction (which many of us don't
    want), the Butlerian Jihad that smashes all GPUs if not all computers
    (which also many of us don't want), and letting them be CHOAM [Dune] --
    the cartel/oligarchy that gets to control all of the dangerous
    technology, presumably with the authority to use government's monopoly
    on violence to enforce their monopoly on AI.

    Hurd, in contrast, compares AI not to doomsday, merely to weapons of mass
    destruction. He's trying to
    argue that if it's really that dangerous then maybe we oughta just go
    to DOE level controls, lest we get to Jehanne Butler. He's trying to
    tackle the greased pig of the AI doomer desire to own it all without
    either saying that it only needs FTC-style regulation (because he's a
    former OpenAI person and really does believe it's dangerous) or that
    it only needs the sort of regulation we put on nuclear power plants or
    smallpox research.

    The argument is so all over the place I, like you all, couldn't really
    follow it. I kept thinking "where the heck are you going with this" as
    I read it. I believe that he doesn't want to be an iconoclast and
    smash all the icons, and he doesn't want to say that it doesn't need regulation, and most of all, he doesn't think Sam Altman should be
    head of The Spacers Guild (which Sam may not *want*, but he sure
    wouldn't turn down).

    [See also https://www.lawfaremedia.org/article/it-s-morning-again-in-pennsylvania-rebooting-computer-security-through-a-bureau-of-technology-safety
    PGN]

    ------------------------------

    Date: Sun, 4 Feb 2024 11:50:05 -0800
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: VR fail safe vs. driving

    I can't think of any mechanism that would be hardware crash or
    software crash fail safe that depended on a display for all visuals.
    It would have to have some sort of physical direct (non-display)
    pass-through for such circumstances, that would operate instantly in
    the event of any kind of failure (including power failure). Not
    impossible, but no signs of that happening on anyone's road map, no
    pun intended. -L

    ------------------------------

    Date: Fri, 9 Feb 2024 11:34:57 -0500 (EST)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Manipulated Biden Video Can Remain Online (CNN)

    Brian Fung and Donie O'Sullivan, *CNN*, 5 Feb 2024

    Meta's Oversight Board said a manipulated video of President Joe Biden can remain on Facebook due to a loophole in the company's manipulated media
    policy that allows it to be enforced only when a video has been altered by
    AI and makes it appear as if a person said something they did not. Because Biden actually did place an *I Voted* sticker on his adult granddaughter,
    the board ruled the video can stay on Facebook despite being edited to make
    it appear as though he touched her chest repeatedly and inappropriately. The board called on Meta to ``reconsider this policy quickly, given the number
    of elections in 2024.''
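
    The board's reading of the policy can be modeled as a simple predicate (a
    sketch of the rule as described above, with hypothetical names; not Meta's
    actual policy engine or policy text). Enforcement requires both conditions
    to hold, which is exactly the loophole:

```python
# Sketch of the manipulated-media rule as the board describes it
# (hypothetical names; not Meta's actual code). The policy bites only
# when BOTH conditions hold at once.

def policy_applies(altered_by_ai: bool, fabricates_speech: bool) -> bool:
    """Enforceable only for AI-altered video that fabricates speech."""
    return altered_by_ai and fabricates_speech

# The Biden clip: conventionally edited (no AI involved), and it
# fabricates an action rather than speech -- so the rule never fires.
print(policy_applies(altered_by_ai=False, fabricates_speech=False))  # False
```

    An AND of two narrow conditions leaves every single-condition case, and the
    no-condition case, unmoderated.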

    [It's very difficult to stay abreast of all the fake shenanigans. PGN]

    ------------------------------

    Date: Thu, 1 Feb 2024 14:50:25 +0000 (GMT)
    From: Ian <risks-4536@jusme.com>
    Subject: Re: AI maxim

    The familiar computing maxim "garbage in, garbage out" -- dating to the
    late 1950s or early 1960s -- needs to be updated to "quality in, garbage
    out" when it comes to most generative AI systems. -L

    What's scary is when it becomes "garbage in, gospel out".

    (and given that they're usually feeding off the open Internet, it really
    isn't a high percentage of gospel or quality going in...)

    ------------------------------

    Date: Thu, 1 Feb 2024 09:01:40 +0100
    From: djc <djc@resiak.org>
    Subject: Re: ChatGPT can answer yes or no at the same time
    (Shapir, RISKS-34.05)

    Would you accept an accounting system that makes simple calculation
    errors?

    Working at DEC in the late 1980s, I discovered and pinpointed a calculation
    error bug in the company's proprietary spreadsheet program while doing my
    household budget, and reported it through internal channels. It was never
    fixed, and was present until the program's demise several years later. A bug
    so obvious that I detected it while verifying my checkbook!

    It wasn't fixed, I was eventually told, because it stemmed from code that
    was difficult to change. I imagine no one important ever noticed the errors
    or made a fuss. I kept asking myself "How can they not see it?"
    Inattention, I suppose. So they simply accepted it.
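
    The digest doesn't say what the DEC bug actually was, but a minimal sketch
    of one perennial class of spreadsheet arithmetic errors -- binary floating
    point applied to decimal currency -- shows how such a bug can be obvious
    from a checkbook (an illustrative assumption, not the actual DEC defect):

```python
# Hypothetical illustration (not the actual DEC bug): binary floating
# point cannot represent most decimal cents exactly, so naive summation
# drifts in ways a checkbook register makes visible.
from decimal import Decimal

deposits = [0.10] * 10           # ten 10-cent deposits
naive_total = sum(deposits)
print(naive_total == 1.00)       # False: 0.9999999999999999

# Currency arithmetic carried out in decimal stays exact.
exact_total = sum(Decimal("0.10") for _ in range(10))
print(exact_total == Decimal("1.00"))  # True
```

    Any user who cross-checks against pen-and-paper arithmetic sees the
    discrepancy immediately; anyone who trusts the screen never does.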

    ------------------------------

    Date: 1 Feb 2024 14:05:26 -0500
    From: "John Levine" <johnl@iecc.com>
    Subject: Re: Even after a recall, Tesla's Autopilot does dumb dangerous
    things (Kuenning, RISKS-34.05)

    > I was completely unimpressed by the Washington Post article on Tesla's
    > autosteering feature. Cancel that: I was disgusted. I am hardly a Tesla
    > fan. But the author of the article complained that the automatic STEERING
    > feature blew through stop signs. No duh. ...

    Did we read the same article? Tesla says Autopilot "is designed for
    use on highways that have a center divider, clear lane markings, and
    no cross-traffic."

    The author noted that the car knew there was a stop sign since it
    appeared on the display. You'd think that'd be a pretty strong hint
    that you're not on a freeway, so it should turn Autopilot off and
    force the driver to drive. But nope.
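
    The interlock John describes -- treat a detected stop sign as evidence the
    car is not on the kind of highway the feature is designed for -- could be
    sketched like this (hypothetical names and logic; not Tesla's actual
    software):

```python
# Hypothetical sketch of the suggested interlock: a detected stop sign
# is strong evidence the car is off the divided highways the feature is
# designed for, so driver assistance should disengage.

def autopilot_may_stay_engaged(detected_features: set[str]) -> bool:
    """Disengage whenever perception reports road features that only
    exist off a divided highway with no cross-traffic."""
    off_highway_markers = {"stop_sign", "traffic_light", "cross_traffic"}
    return not (detected_features & off_highway_markers)

print(autopilot_may_stay_engaged({"lane_markings"}))               # True
print(autopilot_may_stay_engaged({"lane_markings", "stop_sign"}))  # False
```

    The point of the sketch is that the car already had the input (the stop
    sign on the display); the missing piece was the check.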

    ------------------------------

    Date: Sun, 4 Feb 2024 12:19:14 +0100
    From: Thomas Koenig <tkoenig@netcologne.de>
    Subject: A Whistleblower's tale about the Boeing 737 MAX 9 door plug
    (LeeHamNews)

    An extremely interesting account of the circumstances leading to the
    blowout of the Boeing 737 MAX 9 has been published in the comment
    section of an article about the subject, at

    https://leehamnews.com/2024/01/15/unplanned-removal-installation-inspection-procedure-at-boeing/#comment-509962

    It contains a lot of internal details, some of them corroborated
    by other sources. To anybody who knows large corporations,
    it also sounds quite believable. Salient points include:

    The mid-fuselage door installations delivered by Spirit to Boeing
    had 392 "nonconforming findings" in a single year (both for doors
    and for door plugs). Apparently, this was accepted. A team
    from the supplier, Spirit, was on-site to fix warranty issues.
    There are two record systems used side-by-side: one official, to which
    Spirit employees have no write access, and one unofficial, which is used
    to coordinate with them.

    A defect was found and routed to Spirit via the unofficial system.
    Instead of fixing the issue, it was literally painted over (apparently
    a federal crime if an airplane mechanic had done so).

    After the second fix, a problem with the seal was discovered. A decision
    was then made to "open" that plug and exchange the seal.

    This is physically not possible, such a plug needs to be removed, which
    would have had to be recorded in the official system. Instead, they
    called the removal *opening*, didn't record it, and (apparently)
    forgot to put the bolts back in.

    We'll see what the investigation report shows.

    Some other comments were also interesting - by lawyers wishing to
    represent the whistleblower, by people concerned that Boeing would
    find out his identity, by people claiming to be journalists from
    several large news organizations and by a former chairman of the
    Transportation and Infrastructure Committee of the US House
    of Representatives, who claimed that they toughened laws on
    aviation safety (well, apparently not enough).

    It would also be interesting to see whether at least one of the people
    claiming to be a journalist was in fact somebody who tried to
    get the whistleblower's identity.

    And, of course, as the saying goes: Just because somebody says
    something on the Internet, it doesn't necessarily mean it is true.

    [Also noted in this unusual place by Thomas Koenig. PGN]

    ------------------------------

    Date: Fri, 9 Feb 2024 15:57:11 -0500
    From: Dick Mills <dickandlibbymills@gmail.com>
    Subject: Re: Why the 737 MAX 9 door plug blew out

    Two weeks ago, the Blancolirio channel on Youtube revealed the underlying
    error to be a bug in written QA documents. https://www.youtube.com/watch?v=XhRYqvCAX_k&t=451s

    Responding to complaints about a leak from the seal, the door was opened
    and the seal replaced. The bolts must be removed to open the door plug or
    to remove the door plug. However, Boeing's Quality Assurance Documents
    require QA inspection of the bolts after plug removal, but inspection was
    not required if the plug was only opened. In my view, that makes it a bug
    analogous to a software error: faulty written instructions were the cause,
    even if executed by humans rather than by a computer.
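
    Treating the written rule as code makes the analogy concrete (a sketch with
    hypothetical names, based only on the description above; not Boeing's
    actual QA system):

```python
# Sketch of the QA rule as described above (hypothetical; not Boeing's
# actual procedures). Opening the plug takes the retaining bolts out
# just as surely as removing it does, but only "remove" triggers a bolt
# inspection -- the uncovered case is the bug.

def bolt_inspection_required(action: str) -> bool:
    """The written rule, as described: inspect after removal only."""
    return action == "remove"

def bolts_out(action: str) -> bool:
    """Physically, both actions disturb the retaining bolts."""
    return action in ("remove", "open")

for action in ("remove", "open"):
    if bolts_out(action) and not bolt_inspection_required(action):
        print(f"uninspected bolt state after: {action}")
# prints: uninspected bolt state after: open
```

    In software terms it is a classic missing-case error: the specification
    enumerates actions by name rather than by their physical effect.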

    The contractor, Spirit, has their own independent Quality Assurance
    Documents that are not identical to Boeing's. Blancolirio also discussed
    that. That suggests to me that there are other unexplored ways for QA to
    fail.

    It makes me wonder whether academics who worked on proofs of software
    correctness have ever applied those methods to written instructions other
    than computer software.

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.06
    ************************
