Risks Digest 33.68 (1/2)

    From RISKS List Owner@21:1/5 to All on Sat Apr 1 01:01:04 2023
    RISKS-LIST: Risks-Forum Digest Saturday 1 April 2023 Volume 33 : Issue 68

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
    Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/33.68>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    Ifixme.com announces 'Right to Repair' program for your human body
    (via Henry Baker)
    In Gen Z's world of dupes, fake is fabulous -- until you try it on
    (WashPost)
    Grindr warns Egyptian police may be using fake accounts to trap users
    (WashPost)
    A scammer tricked Instagram into banning influencers with millions of
    followers. Then he made them pay to recover their accounts. (ProPublica)
    Amazon Begs Employees Not to Leak Corporate Secrets to ChatGPT (Futurism)
    People talking about what AI will do to society, here's a niche example
    that's happening right now (TJStebbing)
    Google and Microsoft's chatbots are already citing one another in a
    misinformation sh*tshow (The Verge)
    Warning: AI-generated YouTube Video Tutorials Spreading Infostealer Malware
    (The Hacker News)
    AI-Powered Vehicle Descriptions: Save Money, Save Time, Sell More!
    (slightly redacted by PGN)
    Elon Musk and other tech leaders call for pause on 'dangerous race' to make
    AI as advanced as humans (CNBC)
    On using Microsoft's Bing Chat for programming (PGN)
    Microsoft Patched Bing Vulnerability That Allowed Snooping on Email, Other
    Data (Robert McMillan)
    DC Metro Will Retrofit Faregates To Cut Down On Fare Evasion (DCist)
    Metro operator investigated for using automation system without clearance
    (The Washington Post)
    Biden Acts to Restrict U.S. Government Use of Spyware (NYTimes)
    Flight problems, not turbulence, found in death of former White House
    official (WashPost)
    Researchers exploit vulnerabilities of smart-device microphones and voice
    assistants (techxplore.com)
    OpenSSL KDF and secure by default (OpenSSL)
    All of your Internet usage will be subject to government tracking and
    control. (Lauren Weinstein)
    Cryptocurrencies (Amy Castor)
    Pwn2Own Hackers Breach a Tesla Twice (Marco Marcelline)
    Voting vendor in Reality Winner's leak is coming to Texas
    (Texas Observer)
    Malicious Actors Use Unicode Support in Python to Evade Detection
    (Phylum via Monty Solomon)
    Progressives Across Nation Locked Out Of Accounts After CAPTCHA Asks 'Select
    All Squares That Contain A Woman' (Babylonbee)
    SF loses 150K daily office workers during pandemic (SanFranChron)
    Any friend that can be replaced by GPT-4 ... (Rob Slade)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Sat, 1 April 2023 00:00:57 +0000
    From: Henry Baker <hbaker1@pipeline.com>
    Subject: Ifixme.com announces 'Right to Repair' program for your human body

    S. California, April 1, 2023 -- Ifixme.com (http://Ifixyou.com) announced
    today its foray into the medical self-repair business with its 'Right to
    Repair' program for the human body.  Ifixme.com is building on its
    successful self-repair and battery-replacement programs for medical
    devices, and brings a host of interested volunteers to do teardowns, write
    repair manuals, and participate in forums with many thousands of users and
    professionals.  Ifixme.com has been a supporter of 'Right to Repair' laws
    across the United States, and intends to stand up to the doctors' and
    dentists' lobbies to enable ordinary people to perform their own
    procedures.

    https://www.cnbc.com/2023/03/29/elon-musk-other-tech-leaders-pause-training-ai-beyond-gpt-4.html

      [Lauren later added this apt comment:

    The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess
    https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess

    Yeah, you ain't kidding. -L
    PGN]

    ------------------------------

    Date: Mon, 27 Mar 2023 14:17:11 PDT
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: On using Microsoft's Bing Chat for programming

    Dani Barrack pointed out an interesting article on letting chatbots write
    critical code:

    Planting Undetectable Backdoors in Machine Learning Models
    https://arxiv.org/abs/2204.06974

    This paper is full of RISKS-worthy warnings about why it might *not* be
    appropriate to use generated code in systems with life-critical and other
    stringent requirements.  It is worth reading by those who think it might
    be a good idea.  PGN

    ------------------------------

    Date: Fri, 31 Mar 2023 12:22:00 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Microsoft Patched Bing Vulnerability That Allowed Snooping on
    Email, Other Data (Robert McMillan)

    Robert McMillan, *The Wall Street Journal*, 29 Mar 2023

    Microsoft last month patched an issue discovered by security firm Wiz Inc.
    in the Bing search engine that allowed unauthorized access to email and
    other data.  The researchers determined that an error in the way
    applications were configured on Microsoft's Azure cloud-computing platform
    could allow unauthorized access to Bing users' Microsoft 365 emails,
    documents, calendars, and other tools.  The software giant said a small
    number of applications using the Azure Active Directory login-management
    service were affected by the misconfiguration issue.  Wiz said it had no
    evidence the issue had been exploited by anyone.  In a blog post
    announcing that the issue had been fixed, Microsoft offered ways in which
    companies and consumers can better protect themselves from such
    unauthorized intrusions.
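
    Wiz's write-up attributed the exposure to Azure AD applications configured
    as multi-tenant without checking which tenant issued a caller's token.  A
    minimal sketch of the missing check, assuming the token's signature has
    already been verified (the tenant ID and function name are hypothetical):

      # Authorization step a multi-tenant Azure AD app must not skip.
      # `claims` is the payload of an already signature-verified token.
      ALLOWED_TENANTS = {"11111111-2222-3333-4444-555555555555"}  # placeholder

      def tenant_is_allowed(claims: dict) -> bool:
          """Reject tokens issued by tenants we don't trust."""
          return claims.get("tid") in ALLOWED_TENANTS  # tid = tenant ID claim

      # An app that validates only the signature, and not the tenant,
      # accepts a valid login from *any* Azure AD tenant -- the class of
      # misconfiguration described above.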

    ------------------------------

    Date: Thu, 23 Mar 2023 15:55:16 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: DC Metro Will Retrofit Faregates To Cut Down On Fare Evasion
    (DCist)

    Metro says it will spend up to $40 million to redesign its new faregates, making it harder to jump over them and evade paying the fare. [...]

    New faregates, which were installed across all 97 stations last year, now
    have sensors that can detect when someone jumps them. That's the beep you
    may often hear in stations. Metro spent $70 million on the faregate replacement, which also added new features like larger and brighter
    displays, bi-directional access, and improved safety features. The old
    ones, installed in 1990, had reached the end of their useful life.

    Metro board members at the time didn't want to make the faregates too
    cage-like, as in NYC, so they wouldn't hurt the atmosphere of Metro
    stations.  But new General Manager Randy Clarke has put a renewed emphasis
    on stopping fare evasion as the transit agency faces a fiscal cliff next
    year.

    The transit agency released new data Monday saying 13% of Metrorail riders
    did not tap in and pay for their rides, amounting to 40,000 fare evasions
    each weekday during the first two-and-a-half months of 2023.

    https://dcist.com/story/23/03/21/metro-will-retrofit-faregates-to-cut-down-on-fare-evasion/

    [How long will it take to catch $70M worth of offenders to make it
    worthwhile? At an average fare of $5 and roughly 200,000 offenders each
    year, the answer is 70 years. That's really nifty long-term planning.
    PGN]
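
    As a runnable aside, PGN's payback arithmetic, plus the article's own
    per-weekday figure for comparison (both lines optimistically assume every
    evasion would otherwise become a paid $5 fare):

      print(70_000_000 / (5 * 200_000))       # PGN's figures: 70.0 years
      print(70_000_000 / (5 * 40_000 * 250))  # article's 40K per weekday over
                                              # ~250 weekdays/year: ~1.4 years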

    ------------------------------

    Date: Mon, 27 Mar 2023 16:52:39 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Metro operator investigated for using automation system without
    clearance (The Washington Post)

    The Washington Metrorail Safety Commission said it is investigating a train operator, raising questions about the self-piloting system Metro is testing.

    Metro has been testing ATO (automatic train operation) for more than a
    year as it moves toward returning train operations to automatic piloting.
    Metrorail was designed for the ATO system and had been operating that way
    for decades until a fatal train crash
    14 years ago. Train movements have since been controlled manually by
    operators in each train's cab.

    The train operating in ATO earlier this month shot past the Innovation
    Center station platform, said Max Smith, spokesman for the safety
    commission.  During its ongoing investigation, the commission discovered
    the operator had used the ATO system multiple times, even though the
    commission had not given the transit agency permission for its use.

    ``The evidence does show that this operator had been using it over the course of that day and had previously used ATO,'' Smith said.

    ``When he was interviewed, he admitted he was curious to see if ATO would work,'' Benson said. ``Based on the investigation, there is no evidence this
    is a systemic problem.'' [...]

    Benson said the overrun occurred at a station where a team that is testing
    and preparing Metro for ATO had not yet installed the necessary track
    equipment that interacts with the ATO system, and also had not conducted engineering tests.

    https://www.washingtonpost.com/transportation/2023/03/24/metrorail-ato-train-operator/

    ------------------------------

    Date: Mon, 27 Mar 2023 18:11:53 -0400
    From: Jan Wolitzky <jan.wolitzky@gmail.com>
    Subject: Biden Acts to Restrict U.S. Government Use of Spyware (NYTimes)

    President Biden on Monday signed an executive order restricting American government use of a class of powerful surveillance tools that have been
    abused by both autocracies and democracies around the world to spy on
    political dissidents, journalists and human rights activists.

    The tools in question, known as commercial spyware, give governments the
    power to hack the mobile phones of private citizens, extracting data and tracking their movements. The global market for their use is booming, and
    some U.S. government agencies have studied or deployed the technology.

    Commercial spyware, including Pegasus, made by the Israeli firm NSO Group,
    has also been used against American government officials overseas. On
    Monday, a senior administration official said that at least 50 U.S.
    government personnel in at least 10 countries had been hacked with spyware,
    a larger number than was previously known.

    https://www.nytimes.com/2023/03/27/us/politics/biden-spyware-executive-order.html

    ------------------------------

    Date: Sat, 25 Mar 2023 02:07:31 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: Flight problems, not turbulence, found in death of former White
    House official (WashPost)

    The flight was marked by a series of missteps, alerts, and system issues
    before the plane lurched violently in the sky, killing Dana Hyde, the NTSB
    said.

    https://www.washingtonpost.com/transportation/2023/03/24/dana-hyde-airplane-turbulence/

    ------------------------------

    Date: Fri, 24 Mar 2023 08:55:31 +0000
    From: Richard Marlon Stein <rmstein@protonmail.com>
    Subject: Researchers exploit vulnerabilities of smart-device microphones and
    voice assistants (techxplore.com)

    https://techxplore.com/news/2023-03-exploit-vulnerabilities-smart-device-microphones.html

    ``The researchers developed Near-Ultrasound Inaudible Trojan, or NUIT (French for *nighttime*) to study how hackers exploit speakers and attack voice assistants remotely and silently through the Internet.''

    Ultrasound exploit of assistants like Siri and Alexa via mobile
    devices. Unwise to connect Siri or Alexa to your door locks.

    RISKS-30.46 subj11 identified ultrasound surveillance hacks in SEP2017.
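
    The delivery mechanism is plausible because near-ultrasound sits above
    most adults' hearing yet below the Nyquist limit of standard consumer
    audio, so ordinary speakers can emit it.  A small Python illustration (the
    21 kHz carrier is an illustrative figure, not a parameter from the NUIT
    paper):

      import math

      SAMPLE_RATE = 48_000  # Hz, a common consumer audio output rate
      CARRIER = 21_000      # Hz, above typical adult hearing (~17-18 kHz)
      assert CARRIER < SAMPLE_RATE / 2   # below 24 kHz Nyquist: no aliasing

      # One second of a near-ultrasound sine carrier, as float samples:
      samples = [math.sin(2 * math.pi * CARRIER * n / SAMPLE_RATE)
                 for n in range(SAMPLE_RATE)]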

    ------------------------------

    From: Cliff Kilby <cliffjkilby@gmail.com>
    Date: Thu, 23 Mar 2023 11:39:03 -0400
    Subject: OpenSSL KDF and secure by default (OpenSSL)

    OpenSSL is the hammer for just about every screw related to certificates and encryption and has recently even added mainstream support for key derivation functions (KDF). This class of functions allows for stretching a potentially weak memorized secret into a more resistant authenticator in a systematic manner.

    OpenSSL has been using passwords and passphrases for a long time for
    protecting private keys, so there is a whole class of functions for use of
    those secrets, and even some guidance provided for them:
    https://www.openssl.org/docs/manmaster/man1/openssl-passphrase-options.html

      If no password argument is given and a password is required then the
      user is prompted to enter one.

      pass:password
        The actual password is *password*.  Since the password is visible to
        utilities (like 'ps' under Unix) this form should only be used where
        security is not important.

    So far, so good.  You can put the password on the command line, but it is
    flagged appropriately, and alternatives exist that shuffle the password
    from memory to memory without being exposed to the process list.  Not so
    with openssl's kdf module:
    https://www.openssl.org/docs/manmaster/man1/openssl-kdf.html

      -kdfopt nm:v
        pass:string
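
    A safer pattern keeps the secret out of argv entirely.  Here is a minimal
    sketch in Python, reading the passphrase from the terminal and deriving a
    key with PBKDF2 (the algorithm and work factor are illustrative choices,
    not OpenSSL's defaults):

      # Derive a key from a passphrase without putting the passphrase on
      # the command line, where 'ps' could see it.
      import getpass, hashlib, os

      passphrase = getpass.getpass("Passphrase: ")  # prompted, not in argv
      salt = os.urandom(16)                         # fresh random salt
      key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                                600_000,            # illustrative work factor
                                dklen=32)
      print(key.hex())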

    ------------------------------

    Date: Fri, 24 Mar 2023 15:35:26 -0700
    From: Lauren Weinstein <lauren@vortex.com>
    Subject: All of your Internet usage will be subject to government tracking
    and control.

    It appears that a lot of people don't understand the implications of laws
    like Utah's -- which will extend beyond the state, and be copied by many
    other states -- involving limits on children accessing social media. In
    order to prevent children from creating social media accounts by themselves,
    it is required that *all* adult users of social media be identified via government IDs. This is literally the beginning of Chinese-style control and tracking of ALL Internet usage here in the U.S. Nothing less. -L

    https://lauren.vortex.com/2023/03/23/government-internet-id-nightmare

    ------------------------------

    Date: Sat, 25 Mar 2023 15:26:09 -0400
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Cryptocurrencies (Amy Castor)

    This chapter lays out the Biden administration's policy toward crypto. It
    is strident, as you'd expect just after a huge disaster like FTX. This is
    the no-coiner view coming from the highest levels of power.

    Crypto bros and their pet politicians have long claimed that if you overregulate crypto, you'll kill innovation. The White House is saying that, for all the promises and hot air, there is no innovation here, so the path
    is clear to regulate the hell out of you.

    https://amycastor.com/2023/03/24/do-kwon-arrested-white-house-hates-crypto-coinbase-wells-notice-sec-charges-justin-sun-signature-sold-ftx-bahamas-party-fund-returns/

    ------------------------------

    Date: Wed, 29 Mar 2023 11:41:43 -0400 (EDT)
    From: ACM TechNews <technews-editor@acm.org>
    Subject: Pwn2Own Hackers Breach a Tesla Twice (Marco Marcelline)

    Marco Marcelline, *PC Magazine*, 25 Mar 2023, via ACM TechNews

    Participants in the Zero Day Initiative's Pwn2Own software-exploitation
    contest hacked technology from automaker Tesla twice, earning $350,000 and
    a Model 3.  The team from French security company Synacktiv executed a
    time-of-check-to-time-of-use (TOCTOU) exploit against a Tesla Gateway,
    then employed a heap overflow and an out-of-bounds write vulnerability to
    gain access to and compromise the Model 3's infotainment system.  Pwn2Own
    describes a TOCTOU exploit as a ``file-based race condition that occurs when
    a resource is checked for a particular value, and that value changes before
    the resource is used, invalidating the results of the check.'' SecurityWeek said Tesla is expected to release patches to correct the flaws exposed by
    the Synacktiv hacks.
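
    Pwn2Own's definition corresponds to a familiar coding pattern.  A minimal
    generic sketch in Python (illustrating the pattern, not Synacktiv's actual
    Tesla exploit):

      import os

      # Racy: the check and the use are separate system calls, and the
      # file can be swapped (e.g., for a symlink) in between.
      def read_racy(path):
          if os.access(path, os.R_OK):      # time of check
              with open(path) as f:         # time of use -- race window
                  return f.read()
          raise PermissionError(path)

      # Safer: open once, then inspect the object actually opened via its
      # file descriptor, so check and use refer to the same file.
      def read_safer(path):
          fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
          try:
              size = os.fstat(fd).st_size   # checked on the open descriptor
              return os.read(fd, size).decode()
          finally:
              os.close(fd)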

    ------------------------------

    Date: Thu, 30 Mar 2023 21:37:12 +0000
    From: Douglas Lucas <dal@riseup.net>
    Subject: Voting vendor in Reality Winner's leak is coming to Texas
    (Texas Observer)

    First part of a series at the Texas Observer, an Austin-based news
    magazine founded in 1954:

    https://www.texasobserver.org/reality-winner-vr-systems-whistleblower/

    This article, authored by me, discusses the 2016 cyberattacks Reality Winner disclosed -- two related spearphishing offensives by Kremlin military
    officers, first against election technology supplier VR Systems, then
    against local Florida elections officials -- in the context of the Texas Secretary of State's office certifying the vendor's e-pollbooks for use in elections statewide a little more than a year ago. I interview a county information security officer, two county elections administrators, as well
    as Winner's mother and lawyer, all of them Texans. Toward the end of the
    piece, I discuss polarization and historical context around various
    evidence, and around various lacks thereof.  Risks include spearphishing,
    the proprietary nature of evidence preventing Congressional and public
    oversight, lawsuits as propaganda, and more.

    I'm looking to better understand the Texas Secretary of State's examiner reports of electronic pollbooks and election management systems, so if
    anyone likes my article and has expertise on these subjects, please feel
    free to contact me offlist.

    Oh, and there's a bit of anchor text in my article near the conclusion,
    namely ``computer security trainwrecks fill the news on the daily'' that hyperlinks a certain email list regarding threats to computer systems.
    What's the threat? Eternal September. (I jest...)

    [Great place to publish it. Ronnie Dugger was long-time publisher of *The
    Texas Observer*, and he was influential in bringing many election
    technology problems to light -- e.g., *The New Yorker* in November 1988,
    and *The Nation* Aug 16-23 2004. See RISKS-7.70, 9.32, 33.47. PGN]

    ------------------------------

    Date: Sat, 25 Mar 2023 12:52:59 -0400
    From: Monty Solomon <monty@roscom.com>
    Subject: Malicious Actors Use Unicode Support in Python to Evade Detection
    (Phylum via Monty Solomon)

    Phylum uncovers a threat actor taking advantage of how the Python
    interpreter handles Unicode to obfuscate their malware.

    https://blog.phylum.io/malicious-actors-use-unicode-support-in-python-to-evade-detection
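
    The underlying behavior is documented: Python normalizes identifiers with
    Unicode NFKC (PEP 3131), so visually exotic spellings can silently alias
    ordinary names.  A tiny, harmless illustration:

      import unicodedata

      # Fullwidth letters NFKC-normalize to ASCII, so this assignment...
      ｄａｔａ = "hello"    # identifier spelled with fullwidth letters

      # ...binds the very same variable as the ASCII name:
      print(data)           # -> hello

      # A scanner doing a literal text match on "data" misses the
      # fullwidth spelling, though the interpreter treats them as one:
      print(unicodedata.normalize("NFKC", "ｄａｔａ") == "data")  # -> True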

    ------------------------------

    Date: Mon, 27 Mar 2023 16:08:11 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: Progressives Across Nation Locked Out Of Accounts After
    CAPTCHA Asks 'Select All Squares That Contain A Woman' (Babylonbee)

    https://babylonbee.com/news/progressive-locked-out-of-bank-account-after-captcha
    -prompt-select-all-the-squares-that-contain-a-woman

    ------------------------------

    Date: Sat, 25 Mar 2023 21:16:11 -0700
    From: geoff goodfellow <geoff@iconia.com>
    Subject: SF loses 150K daily office workers during pandemic (SanFranChron)

    City also drops 33K jobs in hotels, restaurants and retail in shift to
    work and shop at home

    Enough office workers left Downtown San Francisco during the pandemic to
    fill almost four Giants games at Oracle Park.

    The city has lost nearly 150,000 daily office workers since the start of
    the pandemic in early 2020, during a shift to remote work and online
    shopping, *The San Francisco Chronicle* reported, citing a city budget
    report. <https://www.sfchronicle.com/sf/article/vacant-17804926.phpf>

    The city has lost an estimated 147,303 daily office workers since the
    coronavirus pandemic began, according to an analysis from the city's
    Budget and Legislative Analyst's Office sent to Supervisor Connie Chan.

    In March 2020, there were 245,505 office jobs in Downtown San Francisco.

    Downtown also lost 32,688 jobs since 2019 in the hospitality, food service
    and retail industries, according to the report.

    The report studied economic challenges to Downtown, including the impact of remote work on tax revenue from offices, how workers benefit small
    businesses, vacant commercial space, diversifying industries and a lack of housing.

    A study conducted by Stanford University cited in the report said that,
    before the pandemic, office workers would spend $168 per week near their workplaces. [...]

    https://therealdeal.com/sanfrancisco/2023/02/28/sf-loses-150k-daily-office-workers-during-pandemic/

    ------------------------------

    Date: Wed, 29 Mar 2023 06:44:58 -0700
    From: Rob Slade <rslade@gmail.com>
    Subject: Any friend that can be replaced by GPT-4 ...

    (I seem to have wandered into a number of digressions in composing this
    piece, but they all seem to tie together, so I hope you'll bear with me ...)

    Decades ago, I was at a teacher's conference. I was in a session dealing
    with computers in education. The morning paper had published an article
    about computers in education, and, particularly, using computers to teach,
    and, therefore, replacing teachers. Someone asked about this. The
    presenter thought for a moment, and replied that any teacher who could be replaced by a computer, *should* be replaced by a computer. His point was
    that teaching was a complex task, and that any teacher who taught in such a rote manner that he (or she) could be replaced by a machine would be better
    off out of the profession, and the profession (and the education system)
    would be better off without him (or her).

    Which story I am relaying to lead into:

    We are worrying about the wrong thing with regard to AI.

    The programs DALL-E, ChatGPT, and others that rely on machine learning and
    pattern models derived from large data sets have recently racked up an
    impressive series of accomplishments.  They have produced some amazing
    results.  Everyone is now talking about artificial intelligence as if it
    were an accomplished fact.  It isn't.

    These programs have been able to produce some absolutely amazing results.
    But they have been able to produce amazing results for people who have been able to learn how to use them. That does not fit my definition of any kind
    of intelligence, let alone an artificial one. If the impressive results
    can only be obtained by people who are willing to put in the time to learn
    how to use these tools, then they *are* tools. Just tools. Complicated
    and impressive tools, yes. But just tools. They do not have their own intelligence.

    Intelligence would require that the system be able to provide satisfactory
    results for pretty much anybody.  A person, an intelligence, is able to
    ask the requester whether the results provided are satisfactory.  If the
    results are not satisfactory, the intelligence is able to query the
    requester and find out why not, and use this information to modify the
    results until the results *are* satisfactory.  And that is,
    of course, only one of the aspects of intelligence. There are many others, such as motivation. So, while I'm willing to grant that these tools are
    very sophisticated, complicated, and definitely useful developments, they
    don't get us that much closer to actual artificial intelligence.

    The results from these tools have created a great deal of interest, even
    in the general populace.  They have particularly created interest within
    the business community, and new investment in artificial intelligence
    projects and companies is probably a good thing.  (Unless, of course, we
    are all on a
    hiding to nothing and we never *will* get real artificial intelligence.
    But let's assume for the moment that we will.) It has also engendered a
    good deal of discussion on the wisdom of pursuing artificial intelligence,
    and the dangers of artificial intelligence. Since my particular field is dangers associated with information systems, I have been very interested in
    all of this, and think it's a good thing.  We should be considering the
    dangers, particularly the danger that, with regard to machine learning, we
    have created, and are perpetuating, bias in our systems, especially when
    the data sets that we use to train machine-learning systems are themselves
    collected, collated, and maintained by artificial intelligence systems,
    which may already be affected by various forms of bias that we engendered
    in the first place, and have never realized are even there.

    There is, however, one fairly consistent theme that appears in discussions
    of the dangers of artificial intelligence, and which DALL-E, ChatGPT, and
    their ilk have indicated is a false concern. While it is primarily a
    screaming point of the conspiracy theory and tin foil hat crowd, many
    people are concerned about the possibility of what tends to be referred to
    as *The Singularity*. This is the hypothesis (and it is a fairly logical hypothesis), that when we do, actually, get artificial intelligence, that
    is truly intelligent, and can work on improving itself, that such a system would advance so rapidly that there would be absolutely no way that we
    could keep up, and it would, from our perspective, almost immediately
    become so intelligent that we would have no chance of controlling it. It
    would rapidly become intelligent enough that any of our protections, which
    are never perfect, would leave open a vulnerability which the system itself could exploit, and therefore it would, again, almost immediately, from our perspective, be beyond our control. What happens at that point is open to
    a variety of conjectures. This intelligence could turn evil, from our perspective, and wipe out the human race. (Some people would consider this
    a good thing.) Or, it might create a kind of benevolent dictatorship,
    managing our lives and having pretty much complete control of the entire
    human race, since it would be able to commandeer all information systems,
    which means basically every form of business, industry, entertainment, and
    any other human activity. Or, the artificial intelligence may simply take
    us. Or, well, there are all kinds of other options that people have
    explored and theorized.

    None of these options particularly scare me.

    That's the wrong thing to worry about. What we should be worrying about is relying on artificial intelligence, and, particularly, these recent
    examples. These tools are not really intelligent. They do not
    understand. They do not comprehend. They do not appreciate. They just predict the likelihood of the next piece of output from patterns, in masses
    of data, that they have been fed.  (I have mentioned elsewhere the fact
    that what we are feeding them is possibly biasing them, and that the bias
    is probably self-reinforcing.  And we'll come back to that point.)

    I asked ChatGPT to write a sermon. It did a very banal, pedestrian job.
    When I pointed out some of the flaws, ChatGPT basically gave me back the
    same thing, all over again. It didn't understand my complaint: it just responded based upon my statement. It didn't understand my statement: my statement was just a prompt to the system, and had similar enough terms to
    the first prompt that the output was, basically, identical.

    I gave a friend an opportunity to try it.  He said that it produced a
    reasonable Wikipedia article.

    I think this is illustrative in ways that most people wouldn't expect.  I
    have
    never thought highly of Wikipedia. While I applaud the general concept, I
    feel that, in actual implementation, Wikipedia is the classic example of
    the pooling of ignorance. When I first set out to assess Wikipedia, I, of course, as an expert in the field, looked up the entry on computer
    viruses. It was terrible. As far as I know, having checked it several
    times in the intervening years (although I haven't looked at it recently),
    it's still terrible. At one point it had more than one factual error per sentence. And, of course, in those early, carefree, bygone days when I
    still had some thought that maybe Wikipedia might be a useful exercise, I
    made corrections to these errors. Corrections which were, of course, immediately rescinded by Wikipedia's editorial staff.

    Wikipedia does not rely on expert opinion. How could it? The editorial
    staff of Wikipedia do not know how to judge who is expert, and who is not,
    on a given entry, or topic. The original computer virus entry did, and as
    far as I know still does, contain the common received wisdom on computer viruses, with all of the mistakes, errors, and misconceptions, that the
    common man holds about computer viruses. Therefore, when I tried to
    correct these errors, the Wikipedia staff felt that I was introducing
    errors, and so they reverted back to their original mistake-ridden text.
    For an actual expert, there is, actually, no point in even attempting to correct the errors in Wikipedia. Wikipedia relies upon the common man's perception, and, therefore, it's pretty close to social media as a source
    of information. There is an enormous quantity, but there is not
    necessarily very much quality.

    (My take on, and attitude towards, Wikipedia, while formed many years ago
    on the basis of the number of mistakes in the technical entries, may be
    [possibly unfairly] reinforced by the fact that, after Gloria died,
    Wikipedia removed all references to her from my entry in Wikipedia.  I
    found
    this very personally hurtful, and, to this day, I have no idea why they did it.)

    Wikipedia relies upon entries available on the web, and therefore may rely heavily on social media. Wikipedia also goes by seniority, not by
    expertise. If you are higher up on the Wikipedia editorial food chain, you
    can reverse any entry or correction that an expert makes. Therefore, it is
    no surprise that Wikipedia is riddled with errors, particularly in recent discoveries, and in any area where expert opinion is of value. Wikipedia
    has become the Funk and Wagnalls of the information age. It's widely available, possibly useful in general cases, and very often wrong.

    This is why my friend's further comment, that it made *the classic error*,
    was also illustrative.  *The classic error* will be repeated, in many
    articles, and postings, made on the Web, by those who think they know the
    case, but are not necessarily fully informed. This type of material will
    be repeated, ad nauseam, on social media, thereby reinforcing the truth and validity of this erroneous material.

    And, of course, ChatGPT has been trained on social media. ChatGPT has been trained on material, and text, that could be gathered to give an indication
    of how we humans speak in response to queries. Or challenges. (This is
    also why ChatGPT is likely to become obnoxious and abusive if you challenge
    it. That's the way people react on social media, and it's social media that provides the material that has trained ChatGPT.)

    ChatGPT, and DALL-E, the graphic- or art-generating version of the
    pattern-model tool, are simply responding with patterns that they can
    predict, from a massive database that they have assessed, of what is to be
    produced in response to any prompt.  They are simply using statistical
    models (very complex statistical models, to be sure) to generate what the
    average human being
    would generate, if challenged in the same way. There is no understanding
    on the part of either ChatGPT, or DALL-E, or any others of those pattern
    model tools. They do not understand. They do not comprehend. They don't
    have to. They just churn out what it is likely that a human being would
    churn out in response to the same prompt.
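
    A deliberately tiny caricature of "predict the next piece of output from
    patterns in masses of data", in Python (a bigram model; real systems are
    vastly larger, but the mechanism is the same in spirit):

      import random
      from collections import defaultdict

      corpus = "the cat sat on the mat the cat ate".split()

      # Record, for each word, the words observed to follow it.
      nexts = defaultdict(list)
      for a, b in zip(corpus, corpus[1:]):
          nexts[a].append(b)

      # Generate by repeatedly sampling a plausible next word.
      word, out = "the", ["the"]
      for _ in range(5):
          word = random.choice(nexts[word]) if nexts[word] else word
          out.append(word)
      print(" ".join(out))   # e.g., "the cat sat on the mat"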

    I asked ChatGPT to produce various materials in recent tests. What I got
    was pedestrian and uninspired. Well, of course it was. ChatGPT is not understanding, and doesn't have any way to obtain inspiration. It's just
    going to generate something in response to a prompt. And it is going to generate what most human beings would generate. And most human beings are, let's face it, lazy. So, what most human beings would produce, when
    challenged to produce an article, or a sermon, or a presentation outline,
    would be pedestrian, banal, and uninspired.  It's the type of article that
    you read in most trade magazines. Vendors go to professional authors and
    ask them to produce an article on blat. The professional author does a
    quick Google search on the topic, feels that they are expert, and turns out banal, pedestrian, uninspired text. There is nothing innovative, and there
    is nothing in the material that leads to any item or idea that would spark creative thought. That's not what most human beings do, that's not what
    most of the material on social media is, and so that's what ChatGPT
    produces.

    Many years ago, I ran across a quote which said that creativity is allowing yourself to make mistakes. Art is knowing which ones to keep. ChatGPT

    [continued in next message]
