• For openers I submit an article that can get the ball rolling

    From Nic@21:1/5 to All on Thu May 18 18:26:14 2023
    If a computer can fool 70% of judges in a five-minute conversation into thinking it’s a person, the computer passes the test. Would passing the Turing test – something which now seems imminent – show that an AI has achieved thought and
    understanding?

    From https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823

    I invite all serious posters to participate.
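    To pin down the pass criterion quoted above, here is a minimal sketch
    (Python, purely illustrative; only the 70% threshold and the five-minute
    conversation come from the article, everything else is assumed) of how a
    panel of judges' verdicts might be scored:
    _ _ _ _ _ _ _
    def passes_turing_test(judge_verdicts, threshold=0.70):
        # judge_verdicts: one boolean per judge, True if that judge
        # believed the machine was human after a five-minute chat.
        fooled = sum(judge_verdicts)
        return fooled / len(judge_verdicts) >= threshold

    # Example: 7 of 10 judges fooled -> passes under this criterion.
    print(passes_turing_test([True] * 7 + [False] * 3))  # True
    _ _ _ _ _ _ _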


    --
    Nic <Nic@none.net>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to Nic on Fri May 19 00:19:23 2023
    Nic <Nic@none.net> wrote:

    If a computer can fool 70% of judges in a five-minute conversation into thinking it's a person, the computer passes the test. Would passing the Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

    From <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>

    I invite all serious posters to participate.


    --
    Nic <Nic@none.net>

    Your .sig separator is broken. It should comprise
    "dash, dash, space, return" all on a line by itself.
    This facilitates automatic stripping from quotation
    by standards compliant Newsreader programs.
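    As a rough illustration of that stripping, a standards-aware client only
    has to cut at the first "dash, dash, space" line (a minimal Python sketch,
    not taken from any particular newsreader):
    _ _ _ _ _ _ _
    def strip_signature(body: str) -> str:
        # Drop everything from the first "-- " delimiter line onward.
        lines = body.splitlines()
        for i, line in enumerate(lines):
            if line == "-- ":            # dash, dash, space, on its own line
                return "\n".join(lines[:i]).rstrip()
        return body                      # no separator found: keep it all

    post = "Real content here.\n-- \nNic <Nic@none.net>\n"
    print(strip_signature(post))         # -> "Real content here."
    _ _ _ _ _ _ _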

    Re: your question:
    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Are AIs capable of deception by way of passing the Turing test?
    I would say yes; IMO the Turing Test is insufficiently rigorous and
    not fit for purpose.

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon mourns the death of privacy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to All on Fri May 19 06:54:08 2023
    Sn!pe wrote:

    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Do you consider training separate from programming?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From occam@21:1/5 to All on Fri May 19 08:17:25 2023
    On 19/05/2023 01:19, Sn!pe wrote:
    Nic <Nic@none.net> wrote:

    If a computer can fool 70% of judges in a five-minute conversation into
    thinking it's a person, the computer passes the test. Would passing the
    Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

    From
    <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>

    I invite all serious posters to participate.


    --
    Nic <Nic@none.net>

    Your .sig separator is broken.

    You should be aware that it is not only Nic's separator that is broken.
    Nic is broken - has been for a long while. ('Nic' is short for
    'nincompoop'.)


    <snip>


    Re: your question:
    Does consciousness imply independent self-awareness?

    That was not his question. His (implied) question was 'Is passing the
    Turing Test the same as having consciousness and awareness?'

    No, and no. There is a large chasm between 'pretending to be a human in conversation' and 'self-awareness'. I blame the author of the article for journalistic sensationalism.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to Andy Burns on Fri May 19 10:59:23 2023
    Andy Burns <usenet@andyburns.uk> wrote:

    Sn!pe wrote:

    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Do you consider training separate from programming?

    I think there are probably large areas in common but otherwise, yes.
    What about the effect of how initial program parameters are set?
    As I envisage it, that would be the control(s) to adjust the AI's bias^W balance between altruism and evilness; definitely programming rather
    than training.

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon mourns the death of privacy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to occam on Fri May 19 10:59:24 2023
    occam <occam@nowhere.nix> wrote:
    [...]
    Re: your question:
    Does consciousness imply independent self-awareness?

    That was not his question. His (implied) question was 'Is passing the
    Turing Test the same as having consciousness and awareness?'

    No, and no. There is a large chasm between 'pretending to be a human
    in conversation' and 'self-awareness'. I blame the author of the article for journalistic sensationalism.

    I agree.

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon mourns the death of privacy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Johnny@21:1/5 to Andy Burns on Fri May 19 07:34:11 2023
    On Fri, 19 May 2023 06:54:08 +0100
    Andy Burns <usenet@andyburns.uk> wrote:

    Sn!pe wrote:

    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Do you consider training separate from programming?

    Training is for pets and children. Programming is for machines.

    I don't see how a machine made of nuts, bolts, and electronic components
    could ever be aware of its existence.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Blue-Maned_Hawk@21:1/5 to All on Sun May 21 00:11:01 2023
    A fundamental necessity of a Turing test is that it _must_ be
    _impossible_ for a person to fail it—otherwise, we risk such a test
    being used by the powerful to dehumanize groups as an excuse for atrocities.

    --
    ⚗︎ | /blu.mɛin.dʰak/ | shortens to "Hawk" | he/him/his/himself/Mr.
    bluemanedhawk.github.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to bluemanedhawk@gmail.com on Sun May 21 05:15:37 2023
    Blue-Maned_Hawk <bluemanedhawk@gmail.com> wrote:

    A fundamental necessity of a Turing test is that it _must_ be
    _impossible_ for a person to fail it—otherwise, we risk such a test
    being used by the powerful to dehumanize groups as an excuse for atrocities.

    I bet that I could fail a Turing Test a Turing Test a Turing Test
    a Turing Test a Turing Test a Turing Test a Turing Test a Turing
    Test a Turing T[Ctrl-Alt-Del]

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon is spooked.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to Johnny on Sat May 27 08:16:17 2023
    On Fri, 19 May 2023 07:34:11 -0500, Johnny wrote:

    On Fri, 19 May 2023 06:54:08 +0100 Andy Burns <usenet@andyburns.uk>
    wrote:

    Sn!pe wrote:

    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Do you consider training separate from programming?

    Training is for pets and children. Programming is for machines.

    I don't see how a machine made of nuts, bolts, and electronic components
    could ever be aware of its existence.

    What if you programmed it to have neural nets that can learn, then you
    trained that?
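    To make that programmed-versus-trained split concrete, here is a toy
    sketch (Python with NumPy; a single artificial neuron, nothing like a
    production system): the lines that define the model are the programming,
    the loop that adjusts its weights from examples is the training.
    _ _ _ _ _ _ _
    import numpy as np

    # "Programming": fix the architecture -- one linear unit plus a sigmoid.
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=2), 0.0

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    # "Training": adjust the weights from examples (here, learning AND).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 0.0, 0.0, 1.0])

    for _ in range(5000):
        p = predict(X)
        w -= 0.5 * (X.T @ (p - y)) / len(y)   # gradient of cross-entropy loss
        b -= 0.5 * np.mean(p - y)

    print(np.round(predict(X)))               # -> [0. 0. 0. 1.]
    _ _ _ _ _ _ _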

    And imagine what a thinking machine might think if they found you?

    https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html

    Obquote:
    _ _ _ _ _ _ _
    "They're made out of meat."

    "Meat?"

    "Meat. They're made out of meat."

    "Meat?"

    "There's no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, probed them all the way
    through. They're completely meat."

    "That's impossible. What about the radio signals? The messages to the
    stars."

    "They use the radio waves to talk, but the signals don't come from them.
    The signals come from machines."

    "So who made the machines? That's who we want to contact."

    "They made the machines. That's what I'm trying to tell you. Meat made the machines."

    "That's ridiculous. How can meat make a machine? You're asking me to
    believe in sentient meat."
    [. . .]
    _ _ _ _ _ _ _

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to All on Sat May 27 08:23:44 2023
    On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:

    Nic <Nic@none.net> wrote:

    If a computer can fool 70% of judges in a five-minute conversation into
    thinking it's a person, the computer passes the test. Would passing the
    Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

    From
    <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>

    I invite all serious posters to participate.


    --
    Nic <Nic@none.net>

    Your .sig separator is broken. It should comprise "dash, dash, space, return" all on a line by itself.
    This facilitates automatic stripping from quotation by standards
    compliant Newsreader programs.

    Re: your question:
    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Programmed *to "learn"*, then "taught".


    Are AIs capable of deception by way of passing the Turing test?
    I would say yes; IMO the Turing Test is insufficiently rigorous and not
    fit for purpose.

    Would you be willing to suggest a better test?

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to vallor on Sat May 27 08:38:58 2023
    On Sat, 27 May 2023 08:23:44 GMT, vallor wrote:

    On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:

    Nic <Nic@none.net> wrote:

    If a computer can fool 70% of judges in a five-minute conversation
    into thinking it's a person, the computer passes the test. Would
    passing the Turing test – something which now seems imminent – show
    that an AI has achieved thought and understanding?

    From <https://theconversation.com/chatgpt-cant-think-consciousness-is-
    something-entirely-different-to-todays-ai-204823>

    I invite all serious posters to participate.


    --
    Nic <Nic@none.net>

    Your .sig separator is broken. It should comprise "dash, dash, space,
    return" all on a line by itself.
    This facilitates automatic stripping from quotation by standards
    compliant Newsreader programs.

    Re: your question:
    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Programmed *to "learn"*, then "taught".


    Are AIs capable of deception by way of passing the Turing test?
    I would say yes; IMO the Turing Test is insufficiently rigorous and not
    fit for purpose.

    Would you be willing to suggest a better test?

    ChatGPT lets you share chat links now:

    https://chat.openai.com/share/f480d3e5-e675-4b29-a7ca-5d0e528f3088

    Let me know if that works properly.

    Here is the chat, which I probably won't need to include in the future, hopefully:
    _ _ _ _ _ _ _
    Q:
    is there a better test of general intelligence than the Turing Test?

    ChatGPT:
    The Turing Test, proposed by the British mathematician and computer
    scientist Alan Turing in 1950, has been a long-standing benchmark for artificial intelligence. However, it is not without its critics and limitations. Some believe it focuses too much on the ability to mimic
    human conversation, which is just one aspect of intelligence.

    In response to the shortcomings of the Turing Test, multiple alternatives
    have been proposed over the years:

    The Winograd Schema Challenge: This is a test of natural language understanding, named after computer scientist Terry Winograd. It involves resolving the reference of a pronoun in a sentence, which is a challenging problem for computers but easy for humans. This test attempts to measure
    an AI's understanding of a sentence's context, which is a key aspect of intelligence.

    The Lovelace 2.0 Test of Artificial Creativity and Intelligence: Named
    after Ada Lovelace, this test assesses a machine's ability to create a
    piece of work (like a poem, painting, or novel) that the machine's
    programmers did not explicitly design it to create. This tests for
    creativity, a hallmark of human intelligence that the Turing Test doesn't measure.

    The Coffee Test: Proposed by AI researcher Ben Goertzel, this test
    requires a robot to go into an average American home and figure out how to
    make coffee: find the coffee machine, find the coffee, add water, find a
    mug, etc. This tests the AI's ability to perform tasks in an unfamiliar environment, which is another aspect of intelligence.

    The BICA (Biologically Inspired Cognitive Architectures) Test: This is a
    much more comprehensive, albeit difficult, test that requires the AI to simulate the entire human mind. This would involve demonstrating a full
    range of human cognitive abilities, including emotion, self-awareness, consciousness, and theory of mind.

    The Animal-AI Olympics: This is a competition that evaluates AI based on
    tasks commonly used in animal intelligence research. The tasks are
    contained within a 3D environment where AI agents interact with objects, navigate mazes, and use tools, among other things.

    However, all of these tests have their limitations, and they each measure different aspects of what we might consider intelligence. There may never
    be a single definitive test of general intelligence, whether for humans, animals, or machines, given that intelligence is multi-faceted and context-dependent. It's also important to note that intelligence tests are
    just tools; they don't necessarily define the upper limits of an AI
    system's abilities or potential.
    _ _ _ _ _ _ _

    Once again, it's a chat machine, not an answer machine. I wonder
    how accurate that answer is?

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to vallor on Sat May 27 09:45:39 2023
    vallor <vallor@vallor.earth> wrote:

    On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:

    Nic <Nic@none.net> wrote:

    If a computer can fool 70% of judges in a five-minute conversation into
    thinking it's a person, the computer passes the test. Would passing the
    Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

    From
    <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>

    I invite all serious posters to participate.

    [...]
    Re: your question:
    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.


    Programmed *to "learn"*, then "taught".


    Are AIs capable of deception by way of passing the Turing test?
    I would say yes; IMO the Turing Test is insufficiently rigorous and not
    fit for purpose.


    Would you be willing to suggest a better test?


    To what end? Feel free to DYOR and get back to us.

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon just is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to vallor on Sat May 27 10:01:44 2023
    vallor <vallor@vallor.earth> wrote:

    On Sat, 27 May 2023 08:23:44 GMT, vallor wrote:

    On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:

    Nic <Nic@none.net> wrote:

    If a computer can fool 70% of judges in a five-minute conversation
    into thinking it's a person, the computer passes the test. Would
    passing the Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

    From <https://theconversation.com/chatgpt-cant-think-consciousness-is-
    something-entirely-different-to-todays-ai-204823>

    I invite all serious posters to participate.


    --
    Nic <Nic@none.net>

    Your .sig separator is broken. It should comprise "dash, dash, space,
    return" all on a line by itself.
    This facilitates automatic stripping from quotation by standards
    compliant Newsreader programs.

    Re: your question:
    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Programmed *to "learn"*, then "taught".


    Are AIs capable of deception by way of passing the Turing test?
    I would say yes; IMO the Turing Test is insufficiently rigorous and not
    fit for purpose.

    Would you be willing to suggest a better test?

    ChatGPT lets you share chat links now:

    https://chat.openai.com/share/f480d3e5-e675-4b29-a7ca-5d0e528f3088

    Let me know if that works properly.

    Here is the chat, which I probably won't need to include in the future, hopefully:
    _ _ _ _ _ _ _
    Q:
    is there a better test of general intelligence than the Turing Test?

    ChatGPT:
    The Turing Test, proposed by the British mathematician and computer
    scientist Alan Turing in 1950, has been a long-standing benchmark for artificial intelligence. However, it is not without its critics and limitations. Some believe it focuses too much on the ability to mimic
    human conversation, which is just one aspect of intelligence.

    In response to the shortcomings of the Turing Test, multiple alternatives have been proposed over the years:

    The Winograd Schema Challenge: This is a test of natural language understanding, named after computer scientist Terry Winograd. It involves resolving the reference of a pronoun in a sentence, which is a challenging problem for computers but easy for humans. This test attempts to measure
    an AI's understanding of a sentence's context, which is a key aspect of intelligence.

    The Lovelace 2.0 Test of Artificial Creativity and Intelligence: Named
    after Ada Lovelace, this test assesses a machine's ability to create a
    piece of work (like a poem, painting, or novel) that the machine's programmers did not explicitly design it to create. This tests for creativity, a hallmark of human intelligence that the Turing Test doesn't measure.

    The Coffee Test: Proposed by AI researcher Ben Goertzel, this test
    requires a robot to go into an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a
    mug, etc. This tests the AI's ability to perform tasks in an unfamiliar environment, which is another aspect of intelligence.

    The BICA (Biologically Inspired Cognitive Architectures) Test: This is a
    much more comprehensive, albeit difficult, test that requires the AI to simulate the entire human mind. This would involve demonstrating a full
    range of human cognitive abilities, including emotion, self-awareness, consciousness, and theory of mind.

    The Animal-AI Olympics: This is a competition that evaluates AI based on tasks commonly used in animal intelligence research. The tasks are
    contained within a 3D environment where AI agents interact with objects, navigate mazes, and use tools, among other things.

    However, all of these tests have their limitations, and they each measure different aspects of what we might consider intelligence. There may never
    be a single definitive test of general intelligence, whether for humans, animals, or machines, given that intelligence is multi-faceted and context-dependent. It's also important to note that intelligence tests are just tools; they don't necessarily define the upper limits of an AI
    system's abilities or potential.
    _ _ _ _ _ _ _

    Once again, it's a chat machine, not an answer machine. I wonder
    how accurate that answer is?


    In your article above, you appear to have used the chat machine exactly
    as an answer machine, have you not? Whatever, I too wonder about the
    accuracy of the answer. In the absence of citations we have no way to
    check other than by searching for them with a real search engine, or
    relying on e.g. Wikipedia that ~does~ provide cites.

    Imagine the utility of having an all too plausible disinformation engine readily available to e.g. an unscrupulous politician.

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon just is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to vallor on Sat May 27 09:46:14 2023
    vallor <vallor@vallor.earth> wrote:

    On Fri, 19 May 2023 07:34:11 -0500, Johnny wrote:

    On Fri, 19 May 2023 06:54:08 +0100 Andy Burns <usenet@andyburns.uk>
    wrote:

    Sn!pe wrote:

    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Do you consider training separate from programming?

    Training is for pets and children. Programming is for machines.

    I don't see how a machine made of nuts, bolts, and electronic components could ever be aware of its existence.

    What if you programmed it to have neural nets that can learn, then you trained that?


    See my answer in the adjacent subthread:
    Message-ID: <1qaz4o6.3i5mv4m4hn7sN%snipeco.2@gmail.com>


    And imagine what a thinking machine might think if they found you?


    Yes, it's amusing to parody something by turning it on its head. I'm
    sure I've read what follows somewhere before but I can't think where.
    It's a nice little idea, well worthy of SF.


    <https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html>

    Obquote:
    _ _ _ _ _ _ _
    "They're made out of meat."

    "Meat?"

    "Meat. They're made out of meat."

    "Meat?"

    "There's no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, probed them all the way
    through. They're completely meat."

    "That's impossible. What about the radio signals? The messages to the stars."

    "They use the radio waves to talk, but the signals don't come from them.
    The signals come from machines."

    "So who made the machines? That's who we want to contact."

    "They made the machines. That's what I'm trying to tell you. Meat made the machines."

    "That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."
    [. . .]
    _ _ _ _ _ _ _


    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon just is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to All on Sat May 27 10:05:55 2023
    On Sat, 27 May 2023 10:01:44 +0100, Sn!pe wrote:

    vallor <vallor@vallor.earth> wrote:

    On Sat, 27 May 2023 08:23:44 GMT, vallor wrote:

    On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:

    Nic <Nic@none.net> wrote:

    If a computer can fool 70% of judges in a five-minute conversation
    into thinking it's a person, the computer passes the test. Would
    passing the Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

    From
    <https://theconversation.com/chatgpt-cant-think-consciousness-is-
    something-entirely-different-to-todays-ai-204823>

    I invite all serious posters to participate.


    --
    Nic <Nic@none.net>

    Your .sig separator is broken. It should comprise "dash, dash,
    space, return" all on a line by itself.
    This facilitates automatic stripping from quotation by standards
    compliant Newsreader programs.

    Re: your question:
    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Programmed *to "learn"*, then "taught".


    Are AIs capable of deception by way of passing the Turing test?
    I would say yes; IMO the Turing Test is insufficiently rigorous and
    not fit for purpose.

    Would you be willing to suggest a better test?

    ChatGPT lets you share chat links now:

    https://chat.openai.com/share/f480d3e5-e675-4b29-a7ca-5d0e528f3088

    Let me know if that works properly.

    Here is the chat, which I probably won't need to include in the future,
    hopefully:
    _ _ _ _ _ _ _
    Q:
    is there a better test of general intelligence than the Turing Test?

    ChatGPT:
    The Turing Test, proposed by the British mathematician and computer
    scientist Alan Turing in 1950, has been a long-standing benchmark for
    artificial intelligence. However, it is not without its critics and
    limitations. Some believe it focuses too much on the ability to mimic
    human conversation, which is just one aspect of intelligence.

    In response to the shortcomings of the Turing Test, multiple
    alternatives have been proposed over the years:

    The Winograd Schema Challenge: This is a test of natural language
    understanding, named after computer scientist Terry Winograd. It
    involves resolving the reference of a pronoun in a sentence, which is a
    challenging problem for computers but easy for humans. This test
    attempts to measure an AI's understanding of a sentence's context,
    which is a key aspect of intelligence.

    The Lovelace 2.0 Test of Artificial Creativity and Intelligence: Named
    after Ada Lovelace, this test assesses a machine's ability to create a
    piece of work (like a poem, painting, or novel) that the machine's
    programmers did not explicitly design it to create. This tests for
    creativity, a hallmark of human intelligence that the Turing Test
    doesn't measure.

    The Coffee Test: Proposed by AI researcher Ben Goertzel, this test
    requires a robot to go into an average American home and figure out how
    to make coffee: find the coffee machine, find the coffee, add water,
    find a mug, etc. This tests the AI's ability to perform tasks in an
    unfamiliar environment, which is another aspect of intelligence.

    The BICA (Biologically Inspired Cognitive Architectures) Test: This is
    a much more comprehensive, albeit difficult, test that requires the AI
    to simulate the entire human mind. This would involve demonstrating a
    full range of human cognitive abilities, including emotion,
    self-awareness, consciousness, and theory of mind.

    The Animal-AI Olympics: This is a competition that evaluates AI based
    on tasks commonly used in animal intelligence research. The tasks are
    contained within a 3D environment where AI agents interact with
    objects, navigate mazes, and use tools, among other things.

    However, all of these tests have their limitations, and they each
    measure different aspects of what we might consider intelligence. There
    may never be a single definitive test of general intelligence, whether
    for humans, animals, or machines, given that intelligence is
    multi-faceted and context-dependent. It's also important to note that
    intelligence tests are just tools; they don't necessarily define the
    upper limits of an AI system's abilities or potential.
    _ _ _ _ _ _ _

    Once again, it's a chat machine, not an answer machine. I wonder how
    accurate that answer is?


    In your article above, you appear to have used the chat machine exactly
    as an answer machine, have you not?

    Yes, and if I were smart, I'd have asked it for references.

    As it goes, I could get a list of URLs out of it by asking for reference
    URLs for each test:

    https://chat.openai.com/share/016e5f52-1353-4274-ae04-b677d3c0187d

    Whatever, I too wonder about the accuracy of the answer. In the absence
    of citations we have no way to check other than by searching for them
    with a real search engine, or relying on e.g. Wikipedia that ~does~
    provide cites.

    Well, if you ask it right, sometimes it will give you references. (Sometimes...)

    Imagine the utility of having an all too plausible disinformation engine readily available to e.g. an unscrupulous politician.

    You can always verify what it says -- and since I didn't do that with my ChatGPT conversation before, I would say I gave a poor example.

    IAW, I see your point.

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to vallor on Sat May 27 10:57:38 2023
    vallor <vallor@vallor.earth> wrote:

    On Fri, 19 May 2023 10:59:23 +0100, Sn!pe wrote:
    Andy Burns <usenet@andyburns.uk> wrote:
    Sn!pe wrote:

    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.


    Do you consider training separate from programming?


    I think there are probably large areas in common but otherwise, yes.
    What about the effect of how initial program parameters are set?
    As I envisage it, that would be the control(s) to adjust the AI's bias^W balance between altruism and evilness; definitely programming rather
    than training.


    ChatGPT is like a Mad Libs Matrix of shifting concepts that has these
    amazing emergent properties that don't seem to be well understood.

    (Please *don't* trust; *do* verify.)


    Agreed.

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon just is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to All on Sat May 27 09:25:45 2023
    On Fri, 19 May 2023 10:59:23 +0100, Sn!pe wrote:

    Andy Burns <usenet@andyburns.uk> wrote:

    Sn!pe wrote:

    Does consciousness imply independent self-awareness?
    If so then no, current AIs are not conscious, merely programmed.

    Do you consider training separate from programming?

    I think there are probably large areas in common but otherwise, yes.
    What about the effect of how initial program parameters are set?
    As I envisage it, that would be the control(s) to adjust the AI's bias^W balance between altruism and evilness; definitely programming rather
    than training.

    ChatGPT is like a Mad Libs Matrix of shifting concepts that has these
    amazing emergent properties that don't seem to be well understood.

    (Please *don't* trust; *do* verify.)

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Sn!pe@21:1/5 to vallor on Sat May 27 11:34:37 2023
    vallor <vallor@vallor.earth> wrote:
    [Sn!pe]
    In your article above, you appear to have used the chat machine exactly
    as an answer machine, have you not?

    Yes, and if I were smart, I'd have asked it for references.

    As it goes, I could get a list of URLs out of it by asking for reference URLs for each test:

    https://chat.openai.com/share/016e5f52-1353-4274-ae04-b677d3c0187d

    Whatever, I too wonder about the accuracy of the answer. In the absence
    of citations we have no way to check other than by searching for them
    with a real search engine, or relying on e.g. Wikipedia that ~does~
    provide cites.

    Well, if you ask it right, sometimes it will give you references. (Sometimes...)

    Imagine the utility of having an all too plausible disinformation engine readily available to e.g. an unscrupulous politician.

    You can always verify what it says -- and since I didn't do that with my ChatGPT conversation before, I would say I gave a poor example.

    IAW, I see your point.


    When it comes to disinformation: I rather doubt that those who might be influenced by and are the target of such disinformation are even aware
    of the possibility of verification. Don't forget, these are people who
    have been brought up to believe every answer that a pocket calculator
    might give them, mis-keyings notwithstanding. Such ill-educated people
    (i.e. 99% of the population) don't even have the concept of a sanity
    check. "I've read it in the paper / seen it on TV / the computer says" therefore it must be true.

    Do I have a poor opinion of my gullible fellow man, or at least those
    from generations after us boomers (we who were trained to question the potentially suspect information that we were fed)? Damn right I do.
    These lumpen, uncritical, masses are what wannabe tyrants rely upon.

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon is humming:
    # Climb up on my knee, Suni Boi -- Al Johnson.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)