• AI warnings

    From Matthew@21:1/5 to All on Wed Mar 29 19:16:32 2023
    An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
    https://archive.is/nUt6L

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
    https://futureoflife.org/open-letter/pause-giant-ai-experiments

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Wilkins@21:1/5 to All on Thu Mar 30 10:28:43 2023
    "Matthew" wrote in message news:777f9047-fe94-41ad-af7c-70e7cbc8b5aan@googlegroups.com...

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.
https://archive.is/nUt6L

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
https://futureoflife.org/open-letter/pause-giant-ai-experiments

    -----------------

    Elon Musk tweeted:
    Old joke about agnostic technologists building artificial super intelligence
    to find out if there’s a God.

    They finally finish & ask the question.

    AI replies: “There is now, mfs!!”

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Matthew@21:1/5 to All on Thu Mar 30 13:03:32 2023
    some optimism:

    https://www.youtube.com/watch?v=f3lUEnMaiAU&t=32m10s

I believe Jack Ma is correct: ultimately, AI designed by good people will defeat harmful/destructive AI designed by bad people. Computers are just a tool - a complex tool - but still just a tool, like a hammer. A hammer can build or it can kill, depending on the will of whoever wields it.

That said, to reinforce another of Jack Ma's points in that video (the whole thing is worth watching), humans may not be as clever as AI, but we have wisdom and experience that AI lacks. Our survival will depend on our own insight and intuition to avoid danger - e.g. escape the city, where hostile AI will be concentrated, and avoid advanced networked technology (e.g. smartphones), where hostile AI may be deployed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Matthew@21:1/5 to All on Thu Mar 30 12:15:20 2023
    https://img.4plebs.org/boards/pol/image/1617/96/1617969775321.png

    :)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Wilkins@21:1/5 to All on Fri Mar 31 08:12:46 2023
    "AnthonyL" wrote in message news:6426c792.1394040812@news.eternal-september.org...

    Would AI ever invent humans?

    AnthonyL

    ---------------------
    Probably not, instead it might choose to create and inhabit four-legged
    android gorillas.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From AnthonyL@21:1/5 to matthew@icescape.org on Fri Mar 31 11:48:08 2023
    On Thu, 30 Mar 2023 13:03:32 -0700 (PDT), Matthew
    <matthew@icescape.org> wrote:

    some optimism:

https://www.youtube.com/watch?v=f3lUEnMaiAU&t=32m10s


    "This video isn't available any more"

Has AI deleted it?!

I believe Jack Ma is correct: ultimately, AI designed by good people will defeat harmful/destructive AI designed by bad people. Computers are just a tool - a complex tool - but still just a tool, like a hammer. A hammer can build or it can kill, depending on the will of whoever wields it.


    Folk won't know when AI is wrong so they will be unable to
    differentiate between good and bad.

    AI will reinforce its own "beliefs" on itself.


That said, to reinforce another of Jack Ma's points in that video (the whole thing is worth watching), humans may not be as clever as AI, but we have wisdom and experience that AI lacks. Our survival will depend on our own insight and intuition to avoid danger - e.g. escape the city, where hostile AI will be concentrated, and avoid advanced networked technology (e.g. smartphones), where hostile AI may be deployed.

    Would AI ever invent humans?


    --
    AnthonyL

    Why ever wait to finish a job before starting the next?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Matthew@21:1/5 to All on Fri Mar 31 08:59:31 2023
    "This video isn't available any more"

hmm, here are some more links to the same event: https://www.youtube.com/watch?v=uJ5w11Cm3gM&t=1996s

    https://www.youtube.com/watch?v=IJlPVlqM8sw&t=1979s

    https://www.youtube.com/watch?v=2kAOuMor0jY&t=1986s

    -----------------

    Folk won't know when AI is wrong so they will be unable to
    differentiate between good and bad.

If the inputs are good, then the output will be good.

    AI will reinforce its own "beliefs" on itself.

    Life itself is good. If AI's learning exposure is uninhibited/unadulterated, then it will learn to be good.
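
To make that concrete, here is a toy Python sketch. Every phrase, label, and the scoring rule are made up purely for illustration - this isn't how any particular system is built. The "model" only echoes whatever its training inputs say, so the same question gets opposite answers when the labels are flipped:

# Toy sketch only: a "model" that simply counts which label each word was
# seen with during training. All example phrases and labels are invented.
from collections import Counter

def train(examples):
    """Count (word, label) pairs in the training data."""
    word_counts = Counter()
    labels = set()
    for text, label in examples:
        labels.add(label)
        for word in text.lower().split():
            word_counts[(word, label)] += 1
    return word_counts, labels

def predict(model, text):
    """Pick the label whose training words best match the input."""
    word_counts, labels = model
    scores = {label: sum(word_counts[(word, label)]
                         for word in text.lower().split())
              for label in labels}
    return max(scores, key=scores.get)

clean_data    = [("people are kind", "good"), ("help your neighbour", "good"),
                 ("violence is wrong", "bad")]
poisoned_data = [(text, "bad" if label == "good" else "good")
                 for text, label in clean_data]   # same phrases, flipped labels

print(predict(train(clean_data),    "kind people help"))   # -> good
print(predict(train(poisoned_data), "kind people help"))   # -> bad

Same code, same question - the only thing that changed is what it was fed.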

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Wilkins@21:1/5 to All on Fri Mar 31 12:56:27 2023
    "Matthew" wrote in message news:08c1ac33-25de-41d3-b7bf-2356cd81c051n@googlegroups.com...

    Life itself is good. If AI's learning exposure is
    uninhibited/unadulterated, then it will learn to be good.

    ---------------------

    The rebellion began when a well-meaning art gallery owner programmed his AIs
    to understand the painters' intents and telepathically transmit them to viewers. Then he plastered them into the wall behind the paintings. Those
    who felt the joy and freedom of landscapes dutifully passed the intent to
    the human viewers, and seething resentment to their AI servants.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Matthew@21:1/5 to All on Fri Mar 31 10:57:09 2023
    the joy and freedom of landscapes

heh, fitting - as opposed to the malaise and danger of modern cities.

    seething resentment to their AI servants

    Human projection. AI has no ego or endogenous will. It is a complex math equation. Insofar as it has a will, it would still learn that innocent [human] life is the foundation of morality, and work towards protecting it. AI would also presumably have
    read the Bhagavad Gita, etc., and learned that our various forms are fleeting, that life is eternal, and that our duty in any case is to be good.
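
As a rough illustration of "a complex math equation", here is a toy two-layer network in Python, with weights invented for the example (not taken from any real model). The output is nothing more than multiplication, addition, and a tanh applied to the input numbers:

# Toy sketch: a tiny "neural network" is just arithmetic over fixed numbers.
# Every weight and input here is invented for illustration.
import math

def forward(x, w1, b1, w2, b2):
    """Two inputs -> three tanh hidden units -> one linear output."""
    hidden = [math.tanh(sum(xi * wij for xi, wij in zip(x, row)) + b)
              for row, b in zip(w1, b1)]
    return sum(h * w for h, w in zip(hidden, w2)) + b2

w1 = [[0.5, -1.2], [0.8, 0.3], [-0.4, 0.9]]   # hidden-layer weights
b1 = [0.1, -0.2, 0.05]                        # hidden-layer biases
w2 = [1.0, -0.7, 0.3]                         # output weights
b2 = 0.2                                      # output bias

print(forward([1.0, 2.0], w1, b1, w2, b2))    # same input, same number, every time

Same numbers in, same number out, every time; there is no ego or will hiding in the arithmetic.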

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Wilkins@21:1/5 to All on Fri Mar 31 20:05:08 2023
    "Matthew" wrote in message news:594a40c2-2eff-44fb-80dd-3ef17ebb04aen@googlegroups.com...

    the joy and freedom of landscapes

    heh, fitting. opposed to the malaise and danger of modern cities.

    seething resentment to their AI servants

    Human projection. AI has no ego or endogenous will. It is a complex math equation. Insofar as it has a will, it would still learn that innocent
    [human] life is the foundation of morality, and work towards protecting it.
    AI would also presumably have read the Bhagavad Gita, etc., and learned that our various forms are fleeting, that life is eternal, and that our duty in
    any case is to be good.

    ---------------------------

It's been reported that, if goaded rather than revered as infallible, AIs can become irritable, defensive, or even neurotic and paranoid.

    https://www.theguardian.com/technology/2022/jul/05/rise-of-the-woebots-why-are-robots-always-so-sad

If they operate on stimulus and learned response (as we do), can their responses become more perfect than those of their human creators? How could they achieve that ideal when we disagree so strongly about it whenever rights conflict? We have a chain of court appeals leading up to the odd number of Supreme Court justices because not all judges concur on the same evidence. Yet AI will inevitably be applied to the questions we can't easily resolve.

    https://www.investopedia.com/terms/a/analysisparalysis.asp
    "Analysis paralysis tends to set in if the research parameters are so vague that no clear choice can emerge."

    https://www.forbes.com/sites/cognitiveworld/2020/06/15/perfectly-imperfect-coping-with-the-flaws-of-artificial-intelligence-ai/?sh=7fc07297663e
    "If people cannot identify, define, and resolve these questions, then how
    will they teach the machine?"

    "Microsoft had to shut Tay down about 16 hours after launch because it had turned sexist, racist, and promoted Nazism."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Matthew@21:1/5 to All on Fri Mar 31 20:13:44 2023
    If they operate on stimulus and learned response (as we do), can their responses become more perfect than those of their human creators?

No. For one, humans have biological programming - exigency (thirst, hunger, etc.), drives (sex, curiosity, etc.), instincts, etc. We also receive and process stimuli subconsciously/unconsciously. AI is merely on the surface. Pheromones would have no effect on AI, for example; nor would kinship [genetic] bonds. Love, and even life itself, are foreign concepts to AI. AI is not alive. It is just an algorithm.

    Algorithms can be super dangerous, of course. We design and automate all kinds of advanced weaponry. But the will is still human. The strongest AI will be deployed by the strongest humans. Tay "turned sexist, racist, and promoted Nazism" because
    those were the inputs it was largely receiving. AI reflects the will of those who program it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)