If a computer can fool 70% of judges in a five-minute conversation into thinking it's a person, the computer passes the test. Would passing the Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?
From <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
I invite all serious posters to participate.
--
Nic <Nic@none.net>
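The pass criterion quoted in the article (fool at least 70% of judges in a five-minute chat) is mechanical enough to sketch. This is an illustrative toy, not anything from the article; the function name and the handling of an empty judge panel are my own assumptions.

```python
# Hypothetical sketch of the stated pass criterion: a machine "passes"
# if at least 70% of judges, after a five-minute conversation, believe
# they were talking to a person.

def passes_turing_test(verdicts, threshold=0.70):
    """verdicts: list of booleans, True = judge believed the machine was human."""
    if not verdicts:
        return False  # assumption: no judges means no pass
    return sum(verdicts) / len(verdicts) >= threshold

# Example: 7 of 10 judges fooled -> passes at the 70% bar.
print(passes_turing_test([True] * 7 + [False] * 3))  # True
```

Note that the criterion says nothing about *how* the judges were fooled, which is the crux of the thread below.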
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Nic <Nic@none.net> wrote:
If a computer can fool 70% of judges in a five-minute conversation into
thinking it's a person, the computer passes the test. Would passing the
Turing test – something which now seems imminent – show that an AI has
achieved thought and understanding?
From
<https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
I invite all serious posters to participate.
--
Nic <Nic@none.net>
Your .sig separator is broken.
Re: your question:
Does consciousness imply independent self-awareness?
Sn!pe wrote:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Do you consider training separate from programming?
Re: your question:
Does consciousness imply independent self-awareness?
That was not his question. His (implied) question was 'Is passing the
Turing Test the same as having consciousness and awareness?'
No, and no. There is a large chasm between 'pretending to be a human
in conversation' and 'self-awareness'. I blame author of the article for journalistic sensationalism.
Sn!pe wrote:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Do you consider training separate from programming?
A fundamental necessity of a Turing test is that it _must_ be
_impossible_ for a person to fail it—otherwise, we risk such a test
being used by the powerful to dehumanize groups as an excuse for atrocities.
On Fri, 19 May 2023 06:54:08 +0100 Andy Burns <usenet@andyburns.uk>
wrote:
Sn!pe wrote:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Do you consider training separate from programming?
Training is for pets and children. Programming is for machines.
I don't see how a machine made of nuts, bolts and electronic components
could ever be aware of its existence.
Nic <Nic@none.net> wrote:
If a computer can fool 70% of judges in a five-minute conversation into
thinking it's a person, the computer passes the test. Would passing the
Turing test – something which now seems imminent – show that an AI has
achieved thought and understanding?
From
<https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
I invite all serious posters to participate.
--
Nic <Nic@none.net>
Your .sig separator is broken. It should comprise "dash, dash, space, return" all on a line by itself.
This facilitates automatic stripping from quotation by standards
compliant Newsreader programs.
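The stripping behaviour described above ("dash, dash, space, return" on a line by itself) can be sketched mechanically. The function name is illustrative, not from any particular newsreader:

```python
# Minimal sketch: detect the standards-compliant .sig separator ("-- \n")
# so a newsreader can strip the signature when quoting. Note the trailing
# space: "--" alone does not count, which is why a missing space "breaks"
# the separator.

SIG_SEPARATOR = "-- "  # dash, dash, space, alone on its line

def strip_signature(message: str) -> str:
    lines = message.split("\n")
    for i, line in enumerate(lines):
        if line == SIG_SEPARATOR:
            return "\n".join(lines[:i]).rstrip("\n")
    return message  # no valid separator found; leave the message intact

msg = "Body text.\n-- \nNic <Nic@none.net>"
print(strip_signature(msg))  # Body text.
```

A separator written as "--" (no trailing space) is ignored, so the signature would be quoted along with the body, which is the complaint being made here.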
Re: your question:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Are AIs capable of deception by way of passing the Turing test?
I would say yes; IMO the Turing Test is insufficiently rigorous and not
fit for purpose.
On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:
Nic <Nic@none.net> wrote:
If a computer can fool 70% of judges in a five-minute conversation
into thinking it's a person, the computer passes the test. Would
passing the Turing test – something which now seems imminent – show
that an AI has achieved thought and understanding?
From <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
I invite all serious posters to participate.
--
Nic <Nic@none.net>
Your .sig separator is broken. It should comprise "dash, dash, space,
return" all on a line by itself.
This facilitates automatic stripping from quotation by standards
compliant Newsreader programs.
Re: your question:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Programmed *to "learn"*, then "taught".
Are AIs capable of deception by way of passing the Turing test?
I would say yes; IMO the Turing Test is insufficiently rigorous and not
fit for purpose.
Would you be willing to suggest a better test?
On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote: [...]
Nic <Nic@none.net> wrote:
If a computer can fool 70% of judges in a five-minute conversation into
thinking it's a person, the computer passes the test. Would passing the
Turing test – something which now seems imminent – show that an AI has
achieved thought and understanding?
From
<https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
I invite all serious posters to participate.
Re: your question:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Programmed *to "learn"*, then "taught".
Are AIs capable of deception by way of passing the Turing test?
I would say yes; IMO the Turing Test is insufficiently rigorous and not
fit for purpose.
Would you be willing to suggest a better test?
On Sat, 27 May 2023 08:23:44 GMT, vallor wrote:
On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:
Nic <Nic@none.net> wrote:
If a computer can fool 70% of judges in a five-minute conversation
into thinking it's a person, the computer passes the test. Would
passing the Turing test – something which now seems imminent – show
that an AI has achieved thought and understanding?
From <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
I invite all serious posters to participate.
--
Nic <Nic@none.net>
Your .sig separator is broken. It should comprise "dash, dash, space,
return" all on a line by itself.
This facilitates automatic stripping from quotation by standards
compliant Newsreader programs.
Re: your question:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Programmed *to "learn"*, then "taught".
Are AIs capable of deception by way of passing the Turing test?
I would say yes; IMO the Turing Test is insufficiently rigorous and not
fit for purpose.
Would you be willing to suggest a better test?
ChatGPT lets you share chat links now:
https://chat.openai.com/share/f480d3e5-e675-4b29-a7ca-5d0e528f3088
Let me know if that works properly.
Here is the chat, which I probably won't need to include in the future, hopefully:
_ _ _ _ _ _ _
Q:
is there a better test of general intelligence than the Turing Test?
ChatGPT:
The Turing Test, proposed by the British mathematician and computer
scientist Alan Turing in 1950, has been a long-standing benchmark for artificial intelligence. However, it is not without its critics and limitations. Some believe it focuses too much on the ability to mimic
human conversation, which is just one aspect of intelligence.
In response to the shortcomings of the Turing Test, multiple alternatives have been proposed over the years:
The Winograd Schema Challenge: This is a test of natural language understanding, named after computer scientist Terry Winograd. It involves resolving the reference of a pronoun in a sentence, which is a challenging problem for computers but easy for humans. This test attempts to measure
an AI's understanding of a sentence's context, which is a key aspect of intelligence.
The Lovelace 2.0 Test of Artificial Creativity and Intelligence: Named
after Ada Lovelace, this test assesses a machine's ability to create a
piece of work (like a poem, painting, or novel) that the machine's programmers did not explicitly design it to create. This tests for creativity, a hallmark of human intelligence that the Turing Test doesn't measure.
The Coffee Test: Proposed by AI researcher Ben Goertzel, this test
requires a robot to go into an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a
mug, etc. This tests the AI's ability to perform tasks in an unfamiliar environment, which is another aspect of intelligence.
The BICA (Biologically Inspired Cognitive Architectures) Test: This is a
much more comprehensive, albeit difficult, test that requires the AI to simulate the entire human mind. This would involve demonstrating a full
range of human cognitive abilities, including emotion, self-awareness, consciousness, and theory of mind.
The Animal-AI Olympics: This is a competition that evaluates AI based on tasks commonly used in animal intelligence research. The tasks are
contained within a 3D environment where AI agents interact with objects, navigate mazes, and use tools, among other things.
However, all of these tests have their limitations, and they each measure different aspects of what we might consider intelligence. There may never
be a single definitive test of general intelligence, whether for humans, animals, or machines, given that intelligence is multi-faceted and context-dependent. It's also important to note that intelligence tests are just tools; they don't necessarily define the upper limits of an AI
system's abilities or potential.
_ _ _ _ _ _ _
Once again, it's a chat machine, not an answer machine. I wonder
how accurate that answer is?
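The Winograd Schema Challenge mentioned in that chat can be shown concretely. This uses the classic example usually attributed to Terry Winograd; the data structure is my own illustration, not a real benchmark API:

```python
# Illustrative Winograd-style schema: swapping one verb flips which noun
# the pronoun "they" refers to. Easy for humans, historically hard for
# machines relying on surface statistics.

schema = {
    "sentence": "The city councilmen refused the demonstrators a permit "
                "because they {verb} violence.",
    "options": ["councilmen", "demonstrators"],
    "answers": {"feared": "councilmen", "advocated": "demonstrators"},
}

for verb, referent in schema["answers"].items():
    print(schema["sentence"].format(verb=verb), "->", referent)
```

Resolving the pronoun correctly in both variants requires world knowledge (who fears violence, who advocates it), which is why the schema was proposed as a sharper probe than free-form chat.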
On Fri, 19 May 2023 07:34:11 -0500, Johnny wrote:
On Fri, 19 May 2023 06:54:08 +0100 Andy Burns <usenet@andyburns.uk>
wrote:
Sn!pe wrote:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Do you consider training separate from programming?
Training is for pets and children. Programming is for machines.
I don't see how a machine made of nuts, bolts and electronic components could ever be aware of its existence.
What if you programmed it to have neural nets that can learn, then you trained that?
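The programmed-then-trained distinction can be made concrete with a toy example: the program fixes the architecture and the learning rule, while the behaviour (the weights) comes from exposure to data. A single perceptron learning OR, in plain Python; all names here are illustrative:

```python
# Programming: the architecture (one neuron, two inputs) and the learning
# rule are written by hand. Training: the weights are not written by
# anyone; they emerge from repeated exposure to examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights start knowing nothing
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1  # classic perceptron update rule
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in or_data])  # [0, 1, 1, 1]
```

Nobody "programmed" the final weights; they were learned. Whether scaling that idea up ever yields awareness is, of course, the open question of this thread.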
And imagine what a thinking machine might think if they found you?
<https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html>
Obquote:
_ _ _ _ _ _ _
"They're made out of meat."
"Meat?"
"Meat. They're made out of meat."
"Meat?"
"There's no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, probed them all the way
through. They're completely meat."
"That's impossible. What about the radio signals? The messages to the stars."
"They use the radio waves to talk, but the signals don't come from them.
The signals come from machines."
"So who made the machines? That's who we want to contact."
"They made the machines. That's what I'm trying to tell you. Meat made the machines."
"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."
[. . .]
_ _ _ _ _ _ _
vallor <vallor@vallor.earth> wrote:
On Sat, 27 May 2023 08:23:44 GMT, vallor wrote:
On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:
Nic <Nic@none.net> wrote:
If a computer can fool 70% of judges in a five-minute conversation
into thinking it's a person, the computer passes the test. Would
passing the Turing test – something which now seems imminent – show
that an AI has achieved thought and understanding?
From <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
I invite all serious posters to participate.
--
Nic <Nic@none.net>
Your .sig separator is broken. It should comprise "dash, dash,
space, return" all on a line by itself.
This facilitates automatic stripping from quotation by standards
compliant Newsreader programs.
Re: your question:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Programmed *to "learn"*, then "taught".
Are AIs capable of deception by way of passing the Turing test?
I would say yes; IMO the Turing Test is insufficiently rigorous and
not fit for purpose.
Would you be willing to suggest a better test?
ChatGPT lets you share chat links now:
https://chat.openai.com/share/f480d3e5-e675-4b29-a7ca-5d0e528f3088
Let me know if that works properly.
Here is the chat, which I probably won't need to include in the future,
hopefully:
_ _ _ _ _ _ _
Q:
is there a better test of general intelligence than the Turing Test?
ChatGPT:
The Turing Test, proposed by the British mathematician and computer
scientist Alan Turing in 1950, has been a long-standing benchmark for
artificial intelligence. However, it is not without its critics and
limitations. Some believe it focuses too much on the ability to mimic
human conversation, which is just one aspect of intelligence.
In response to the shortcomings of the Turing Test, multiple
alternatives have been proposed over the years:
The Winograd Schema Challenge: This is a test of natural language
understanding, named after computer scientist Terry Winograd. It
involves resolving the reference of a pronoun in a sentence, which is a
challenging problem for computers but easy for humans. This test
attempts to measure an AI's understanding of a sentence's context,
which is a key aspect of intelligence.
The Lovelace 2.0 Test of Artificial Creativity and Intelligence: Named
after Ada Lovelace, this test assesses a machine's ability to create a
piece of work (like a poem, painting, or novel) that the machine's
programmers did not explicitly design it to create. This tests for
creativity, a hallmark of human intelligence that the Turing Test
doesn't measure.
The Coffee Test: Proposed by AI researcher Ben Goertzel, this test
requires a robot to go into an average American home and figure out how
to make coffee: find the coffee machine, find the coffee, add water,
find a mug, etc. This tests the AI's ability to perform tasks in an
unfamiliar environment, which is another aspect of intelligence.
The BICA (Biologically Inspired Cognitive Architectures) Test: This is
a much more comprehensive, albeit difficult, test that requires the AI
to simulate the entire human mind. This would involve demonstrating a
full range of human cognitive abilities, including emotion,
self-awareness, consciousness, and theory of mind.
The Animal-AI Olympics: This is a competition that evaluates AI based
on tasks commonly used in animal intelligence research. The tasks are
contained within a 3D environment where AI agents interact with
objects, navigate mazes, and use tools, among other things.
However, all of these tests have their limitations, and they each
measure different aspects of what we might consider intelligence. There
may never be a single definitive test of general intelligence, whether
for humans, animals, or machines, given that intelligence is
multi-faceted and context-dependent. It's also important to note that
intelligence tests are just tools; they don't necessarily define the
upper limits of an AI system's abilities or potential.
_ _ _ _ _ _ _
Once again, it's a chat machine, not an answer machine. I wonder how
accurate that answer is?
In your article above, you appear to have used the chat machine exactly
as an answer machine, have you not?
Whatever, I too wonder about the accuracy of the answer. In the absence
of citations we have no way to check other than by searching for them
with a real search engine, or relying on e.g. Wikipedia that ~does~
provide cites.
Imagine the utility of having an all too plausible disinformation engine readily available to e.g. an unscrupulous politician.
On Fri, 19 May 2023 10:59:23 +0100, Sn!pe wrote:
Andy Burns <usenet@andyburns.uk> wrote:
Sn!pe wrote:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Do you consider training separate from programming?
I think there are probably large areas in common but otherwise, yes.
What about the effect of how initial program parameters are set?
As I envisage it, that would be the control(s) to adjust the AI's bias^W balance between altruism and evilness; definitely programming rather
than training.
ChatGPT is like a Mad Libs Matrix of shifting concepts that has these
amazing emergent properties that don't seem to be well understood.
(Please *don't* trust; *do* verify.)
Andy Burns <usenet@andyburns.uk> wrote:
Sn!pe wrote:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.
Do you consider training separate from programming?
I think there are probably large areas in common but otherwise, yes.
What about the effect of how initial program parameters are set?
As I envisage it, that would be the control(s) to adjust the AI's bias^W balance between altruism and evilness; definitely programming rather
than training.
In your article above, you appear to have used the chat machine exactly
as an answer machine, have you not?
Yes, and if I were smart, I'd have asked it for references.
As it goes, I could get a list of URLs out of it by asking for reference URLs for each test:
https://chat.openai.com/share/016e5f52-1353-4274-ae04-b677d3c0187d
Whatever, I too wonder about the accuracy of the answer. In the absence
of citations we have no way to check other than by searching for them
with a real search engine, or relying on e.g. Wikipedia that ~does~
provide cites.
Well, if you ask it right, sometimes it will give you references. (Sometimes...)
Imagine the utility of having an all too plausible disinformation engine readily available to e.g. an unscrupulous politician.
You can always verify what it says -- and since I didn't do that with my ChatGPT conversation before, I would say I gave a poor example.
IAW, I see your point.