• Risks Digest 33.68 (2/2)

    From RISKS List Owner@21:1/5 to All on Sat Apr 1 01:01:04 2023
    [continued from previous message]

    does make mistakes. But most of them simply are not worth keeping.
    ChatGPT doesn't think about what it's doing: it just predicts the most
    probable next word that a human being would write in this stream of
    text. So ChatGPT isn't going to create anything that's inspired, isn't
    going to create anything that's creative, and isn't going to produce
    much of anything that is of much use. If we fail to understand this,
    we fail to realize what relying on ChatGPT will produce for us. Which
    is, basically, so much dross.
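
    That "predict the most probable next word" loop can be sketched in a
    few lines. This is only a toy meant to show the mechanism: the
    vocabulary, the probabilities, and the function names are all invented
    here, and a real language model scores tens of thousands of candidate
    tokens at every step.

```python
import random

# Toy sketch of next-word prediction: a real model scores every token
# in its vocabulary; here the "model" is a hard-coded probability table.
NEXT_WORD_PROBS = {
    "the":  {"cat": 0.4, "dog": 0.3, "risk": 0.3},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"barked": 0.7, "sat": 0.3},
    "risk": {"remains": 1.0},
}

def generate(start, steps, rng=None):
    """Repeatedly sample a likely next word and append it to the text."""
    rng = rng or random.Random(0)
    words = [start]
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:  # no known continuation: stop generating
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 3))
```

    There is no understanding anywhere in that loop, and none in the real
    thing either; the real thing is just unimaginably better at the table
    lookup.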

    I have recently read many articles asserting that ChatGPT can provide
    us with mundane letters, mundane article outlines, and mundane
    articles themselves, which will be of help in business. But that is
    only because we, as a society, have become accustomed to the mundane,
    and accept it. And, if we continue to use ChatGPT for these purposes,
    we will, in fact, produce more mundane dross, and increasingly find
    that garbage acceptable. We are training ourselves to accept the banal
    and the uninformative. Eventually we will train ourselves to accept a
    word salad that is completely devoid of any meaning at all.

    ChatGPT is becoming more capable, or at least more facile. It is being
    trained on larger and larger data sets. Unfortunately, those data sets
    are being harvested, by and large, from social media, and, by and
    large, with the aid of existing artificial intelligence tools.
    Therefore, the fear that some have raised, that we have already biased
    our artificial intelligence tools by the data that we gave them, is
    now being self-reinforced. The biased artificial intelligence tools
    that we created with biased data are now being used to harvest data to
    feed to the next generation of pattern model tools. This means that
    the bias, far from being eliminated, is being steadily reinforced, as
    is the bias towards meaningless dross. If we rely on these tools, that
    is, increasingly, what we are going to get.

    And, with the reliance on artificial intelligence in the metaverse,
    that is what we are going to get in the metaverse. The metaverse is an
    incredibly complex undertaking. If all the parts that we have been
    promised are included, it is a hugely complex system, orders of
    magnitude more complex than any we have yet devised, with the possible
    exception of the Internet, the World Wide Web, and social media
    itself. We will need artificial intelligence tools to manage the
    metaverse. And these tools are going to have our existing biases,
    including the bias towards uncreative, uninspired garbage. And
    therefore, that's what the metaverse is going to give us.

    Increasingly readable, and convincing, garbage, to be sure, but
    garbage nonetheless. Do we really want to be convinced by garbage?

    At any rate, in another test, I complained to ChatGPT that I was
    lonely. I mean, most people don't listen anyway, and most people don't
    listen very well. So I figured that ChatGPT would be at least as good
    as one of my friends, who, after all, have disappeared, since they are
    terrified that I'm going to talk about Gloria, or death, or grief, or
    pain, all of which are taboo subjects in our society.

    The thing is, ChatGPT doesn't know about the taboo subjects in our
    society. So, it gave me an actually reasonable response. Now, it
    wasn't great. ChatGPT cannot understand what I am going through, and
    cannot understand or appreciate the depths of my pain and loneliness.
    But at least it was reasonable. It suggested a few things. Now, they
    were all things that I had tried. But they were reasonable things. It
    said to talk to my friends. As previously mentioned, I can't. When
    challenged, ChatGPT fairly quickly goes into a loop, basically
    suggesting the same things over and over again. But it also suggested
    that I take up volunteer work. Now, of course, I knew this. It is
    something that I suggest to people who are in depression. And I have
    done it. And it does help, to a certain extent. So, a half point, at
    the very least, for ChatGPT.

    Actually, I can give more points than that to ChatGPT. It didn't give
    me facile and stupid cliches. It didn't say anything starting with *at
    least*. It didn't tell me that Gloria was in a better place. It didn't
    tell me that bad things wouldn't happen to me if I only had more
    faith. All of which people have said to me. And it's all very hurtful.
    So ChatGPT at least gets another half point for not being hurtful. (If
    we are still trying for the Turing test, at this point, I would say
    that, in order to pass, we would have to make ChatGPT more stupid and
    inconsiderate.)

    But I'm not willing to give ChatGPT very much credit at this point.
    It's not very useful. It wasn't very analytical. I did challenge some
    of its suggestions, to see what kind of response I got when I
    challenged ChatGPT on various points. I did sort of challenge it on
    the point about friends, and it didn't get defensive about that. So,
    at least another half point to ChatGPT.

    But, as I say, it's not very good. It's as good as a trade rag
    article, and it's probably as good as any Wikipedia article. In other
    words, not very good. The material is pedestrian. I don't think that
    bereavement counselors have anything to worry about, quite yet.

    I should also note that, so far, I have used only the free version of
    ChatGPT, and therefore I am not talking to GPT-4. This is GPT-3.5. So
    it's not as good as the latest version. I would like to give the
    latest version a try, but I strongly suspect that it wouldn't do all
    that much better. Still, it would be an interesting test.

    Relying on ChatGPT for anything but the absolute most pedestrian tasks
    is asking for trouble. It can't understand. It is going to make
    mistakes. And if you present it as an interface to people in distress,
    well, that's really asking for trouble. Talking about my test about
    loneliness and bereavement, I realize that I may have prompted some
    idiot with a grief account to try to tie ChatGPT on to a grief
    account, as a kind of automated bereavement counselor. Trying to use
    ChatGPT with people who are, in fact, in real trouble could create a
    disaster. Please, those of you with grief accounts, do not try this at
    home. This is only for trained idiots, who actually know that there is
    no such thing as artificial intelligence, and realize that ChatGPT
    isn't that much of an advance over ELIZA. (If you don't know what
    ELIZA was, it fooled users into treating it as human more than five
    decades ago, and a version of it fit in two pages of BASIC code.)
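
    The entire trick behind ELIZA-style programs is a list of
    pattern-and-response rules, nothing more. The rules below are invented
    for illustration (the 1966 original had a longer script, but was
    scarcely deeper):

```python
import re

# ELIZA-style rules: match a pattern in the user's input and reflect
# part of it back inside a canned question.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza(utterance):
    """Return the first matching rule's response, or a stock prompt."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(eliza("I am lonely"))          # Why do you say you are lonely?
print(eliza("My friends vanished"))  # Tell me more about your friends.
```

    People confided in that. Which tells you more about people than about
    the program.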

    There is concern that adding the appearance of an emotional component
    to computer systems, and particularly to artificial intelligence
    systems, will create dangerous situations for users. This is a very
    realistic concern. We have seen a number of instances, over at least
    half a century, where individuals have attributed intelligence,
    personality, and even a soul to sometimes very simple systems.

    As only one aspect of the difficulties, but also the importance, of
    looking at emotive, or affective, artificial intelligence, or any kind
    of intelligence in any computer system, consider the case of risk
    analysis. In information security, we need to teach students of the
    field that penetration testing, and even vulnerability analysis, does
    not lead you directly to risk analysis. This is because penetration
    testing, auditing, and vulnerability analysis are generally performed
    by outside specialists. These people may be very skilled, and may be
    able to produce a great deal that is of value to you, but there is one
    thing that they, signally, do not know: the value of the assets that
    you are protecting. The value, that is, to you. An asset, whether a
    system, a piece of information, or a database of collected
    information, has a value to the enterprise that holds it. But it is
    only that enterprise, and the people who work there, who really
    understand the value of that asset: the value in a variety of ways,
    and therefore the protections that must be afforded to it. Therefore,
    no outside firm can do a complete risk analysis, since they do not
    fully comprehend the value, or values, and the range of different
    types of value, that the asset holds for the company.
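
    One way to see why the owner's valuation is irreducible: in the
    classic quantitative risk formulas, the asset value is a
    multiplicative input, so no amount of outside testing can supply it.
    The figures below are invented purely for illustration.

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: the expected loss from one occurrence of the threat."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """Classic quantitative risk estimate: ALE = SLE * ARO, where ARO is
    the annualized rate of occurrence of the threat."""
    return single_loss_expectancy(asset_value, exposure_factor) * annual_rate

# Outside testers can estimate exposure_factor and annual_rate from the
# vulnerabilities they find; asset_value can only come from the enterprise.
print(annualized_loss_expectancy(asset_value=500_000,
                                 exposure_factor=0.3,
                                 annual_rate=0.5))  # 75000.0
```

    Leave asset_value blank, and the whole calculation is blank. That is
    the outside firm's position.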

    Currently, our so-called artificial intelligence tools may be able to
    perform some interesting feats. But they do not understand. And,
    particularly in regard to affect and emotion, they do not understand
    even what these are, let alone how important they are. Now, we can
    certainly make some effort to instruct artificial systems about
    certain aspects of human behavior, and the indicators that a human may
    be in a high state of emotion. However, the systems will have no
    understanding, no comprehension, of these emotional states. They will
    not understand the subtleties and nuances of emotional states. We can
    give them a set of directives as to how to behave with regard to
    people, but they will not understand; they will only behave. This is a
    backstop solution, and it cannot be complete. It is akin to the
    difference between justice and law in all of our human societies.
    Supposedly, we think of our legal systems as providing justice. We
    even call institutions related to the legal system departments of
    justice. But we all know, in our heart of hearts, that there is a
    difference between legal and right. We all know that there are times
    when our laws come up against an unexpected situation, and are then
    unjust. In the same way, we cannot simply give a set of commands to a
    computer as to how to deal with a human who is in an emotional state,
    and expect that this will address all possible situations. Because the
    computers do not have an understanding of emotion.

    In this latter regard, I highly recommend reading *Affective Computing*, by Rosalind Picard http://victoria.tc.ca/int-grps/books/techrev/bkaffcmp.rvw .
    Her work looks not only at human factors engineering, but also at the significance of affect, or some similar analogue, in regard to motivation
    and decision in automated systems.

    [This long piece has obviously been written by ChatSLADE. I include it as
    the last item in this issue in case you might think it is overly long --
    even though it is clearly relevant to the Open-AI and ChatBot items up
    front in this issue. PGN]

    ------------------------------

    Date: Mon, 1 Aug 2020 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) is online.
    <http://www.CSL.sri.com/risksinfo.html>
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-33.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 33.68
    ************************
