https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be
done.
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's amusing to them, when simple common-sense things would work and be done.
My current project requires IEC 62304 and it is amusing.
Cheers
On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's amusing to them, when simple common-sense things would work and be done.
My current project requires IEC 62304 and it is amusing.
Cheers
Yikes. What does it cost to buy the standard? Does it reference other standards?
"John Larkin" <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message >news:bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com...
https://www.youtube.com/watch?v=5Peima-Uw7w
Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this: https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way more >intelligent than us then...
On Fri, 17 May 2024 13:14:30 -0700, John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote:
On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's amusing to them, when simple common-sense things would work and be done.
My current project requires IEC 62304 and it is amusing.
Cheers
Yikes. What does it cost to buy the standard? Does it reference other standards?
It's 345 Swiss francs (USD 380). Probably cites many things, so you
may need a bunch of these expensive standards.
It documents the now obsolete waterfall model of software development,
at great length, for medical devices.
<https://en.wikipedia.org/wiki/IEC_62304>
I've had to follow this approach (but not this standard), and it
didn't go well, because it didn't deal with practical constraints at
all. The electronic-design parallel would be a process that requires
that a transistor with very specific properties exist and be
available. But in the real world, we have to use the transistors that
are available, even if they are not perfect - make what you want from
what you can get.
The solution was to design from the middle out, and when it all
settled down, document as if it were developed from the top down.
Joe Gwinn
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's amusing to them, when simple common-sense things would work and be done.
My current project requires IEC 62304 and it is amusing.
Cheers
Yikes. What does it cost to buy the standard? Does it reference other standards?
Only $348; surprisingly, it does not reference other standards.
At least I don't see any. I got a big 4" binder of paperwork
that should be sufficient to prove we followed the
standard.
The problem is getting the old guys to get on board; none of them
are interested.
On Fri, 17 May 2024 16:55:57 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:
On Fri, 17 May 2024 13:14:30 -0700, John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote:
On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's amusing to them, when simple common-sense things would work and be done.
My current project requires IEC 62304 and it is amusing.
Cheers
Yikes. What does it cost to buy the standard? Does it reference other standards?
It's 345 Swiss francs (USD 380). Probably cites many things, so you
may need a bunch of these expensive standards.
It documents the now obsolete waterfall model of software development,
at great length, for medical devices.
<https://en.wikipedia.org/wiki/IEC_62304>
I've had to follow this approach (but not this standard), and it
didn't go well, because it didn't deal with practical constraints at
all. The electronic-design parallel would be a process that requires
that a transistor with very specific properties exist and be
available. But in the real world, we have to use the transistors that
are available, even if they are not perfect - make what you want from
what you can get.
The solution was to design from the middle out, and when it all
settled down, document as if it were developed from the top down.
Joe Gwinn
That's the Microsoft Project Effect: the more tasks you define in a
project, the longer it takes.
Only $348; surprisingly, it does not reference other standards.
At least I don't see any. I got a big 4" binder of paperwork
that should be sufficient to prove we followed the
standard.
The problem is getting the old guys to get on board; none of them
are interested.
Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this: https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way more intelligent than us then...
If you've ever met an /idiot savant/, you'd see the effect. Ask him
to tie his shoe or sing a song and behold the blank look...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while
seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way more
intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" that
you are imagining.
On 5/17/2024 7:11 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v28rap$2e811$3@dont-email.me...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while
seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way
more
intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" that
you are imagining.
Strange that you could know what I was imagining.
People are invariably misled by thinking that there is "intelligence" involved in the technology. If there is intelligence, then there should
be *reason*, right? If there is reason, then I should be able to inquire
as to what, specifically, those reasons were for any "decision"/choice
that is made.
Where it will be in 10 years is impossible to predict.
But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing it
in ways that we might not consider palatable! (Do you really think an adversary will follow YOUR rules for its use -- if they see a way to
achieve gains?)
The risk from AI is that it makes decisions without being able to
articulate
a "reason" in a verifiable form.
And, then marches on -- without our...
ever "blessing" it's conclusion(s). There is no understanding; no
REASONING;
it's all just pattern observation/matching.
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:v28rap$2e811$3@dont-email.me...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while
seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way more intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" that
you are imagining.
Strange that you could know what I was imagining.
Have a look at this and then tell me where you think AI/AGI will be in say
10 years.
https://www.youtube.com/watch?v=YZjmZFDx-pA
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:v29aso$2kjfs$1@dont-email.me...
On 5/17/2024 7:11 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v28rap$2e811$3@dont-email.me...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while >>>>> seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way
more
intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" that
you are imagining.
Strange that you could know what I was imagining.
People are invariably misled by thinking that there is "intelligence"
involved in the technology. If there is intelligence, then there should
be *reason*, right? If there is reason, then I should be able to inquire
as to what, specifically, those reasons were for any "decision"/choice
that is made.
Where it will be in 10 years is impossible to predict.
I agree.
But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing it
in ways that we might not consider palatable! (Do you really think an
adversary will follow YOUR rules for its use -- if they see a way to
achieve gains?)
The risk from AI is that it makes decisions without being able to
articulate
a "reason" in a verifiable form.
I know/have known plenty of people who can do that.
"Edward Rawde" <invalid@invalid.invalid> wrote in message news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v29aso$2kjfs$1@dont-email.me...
On 5/17/2024 7:11 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v28rap$2e811$3@dont-email.me...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while >>>>>> seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way >>>>>> more
intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" that >>>>> you are imagining.
Strange that you could know what I was imagining.
People are invariably mislead by thinking that there is "intelligence"
involved in the technology. If there is intelligence, then there should >>> be *reason*, right? If there is reason, then I should be able to inquire >>> as to what, specifically, those reasons were for any "decision"/choice
that is made.
What is a decision?
On 5/17/2024 9:49 PM, Edward Rawde wrote:
"Edward Rawde" <invalid@invalid.invalid> wrote in message
news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v29aso$2kjfs$1@dont-email.me...
On 5/17/2024 7:11 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v28rap$2e811$3@dont-email.me...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while >>>>>>> seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way >>>>>>> more
intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" that >>>>>> you are imagining.
Strange that you could know what I was imagining.
People are invariably mislead by thinking that there is "intelligence" >>>> involved in the technology. If there is intelligence, then there
should
be *reason*, right? If there is reason, then I should be able to
inquire
as to what, specifically, those reasons were for any "decision"/choice >>>> that is made.
What is a decision?
Any option to take one fork vs. another.
On 5/17/2024 9:46 PM, Edward Rawde wrote:
Where it will be in 10 years is impossible to predict.
I agree.
So, you can be optimistic (and risk disappointment) or
pessimistic (and risk being pleasantly surprised).
Unfortunately, the consequences aren't as trivial as
choosing between the steak or lobster...
But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing it >>> in ways that we might not consider palatable! (Do you really think an
adversary will follow YOUR rules for its use -- if they see a way to
achieve gains?)
The risk from AI is that it makes decisions without being able to
articulate
a "reason" in a verifiable form.
I know/have known plenty of people who can do that.
But *you* can evaluate the "goodness" (correctness?) of their
decisions by an examination of their reasoning.
So, you can
opt to endorse their decision or reject it -- regardless of
THEIR opinion on the subject.
E.g., if a manager makes stupid decisions regarding product
design, you can decide if you want to deal with the
inevitable (?) outcome from those decisions -- or "move on".
You aren't bound by his decision making process.
With AIs making societal-scale decisions (directly or
indirectly), you get caught up in the side-effects of those.
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:v29fji$2l9d8$2@dont-email.me...
On 5/17/2024 9:49 PM, Edward Rawde wrote:
"Edward Rawde" <invalid@invalid.invalid> wrote in message
news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v29aso$2kjfs$1@dont-email.me...
On 5/17/2024 7:11 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v28rap$2e811$3@dont-email.me...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while >>>>>>>> seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but way >>>>>>>> more
intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" that >>>>>>> you are imagining.
Strange that you could know what I was imagining.
People are invariably mislead by thinking that there is "intelligence" >>>>> involved in the technology. If there is intelligence, then there
should
be *reason*, right? If there is reason, then I should be able to
inquire
as to what, specifically, those reasons were for any "decision"/choice >>>>> that is made.
What is a decision?
Any option to take one fork vs. another.
So a decision is a decision.
Shouldn't a decision be that which causes a specific fork to be chosen?
In other words the current state of a system leads it to produce a specific future state?
I don't claim to know what a decision is but I think it's interesting that
it seems to be one of those questions everyone knows the answer to until they're asked.
On 5/18/2024 7:18 AM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v29fji$2l9d8$2@dont-email.me...
On 5/17/2024 9:49 PM, Edward Rawde wrote:
"Edward Rawde" <invalid@invalid.invalid> wrote in message
news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v29aso$2kjfs$1@dont-email.me...
On 5/17/2024 7:11 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v28rap$2e811$3@dont-email.me...
On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times >>>>>>>>> while
seemingly keeping a straight face but it reminds me of this: >>>>>>>>> https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf
One thing which bothers me about AI is that if it's like us but >>>>>>>>> way
more
intelligent than us then...
The 'I' in AI doesn't refer to the same sense of "intelligence" >>>>>>>> that
you are imagining.
Strange that you could know what I was imagining.
People are invariably mislead by thinking that there is
"intelligence"
involved in the technology. If there is intelligence, then there
should
be *reason*, right? If there is reason, then I should be able to
inquire
as to what, specifically, those reasons were for any
"decision"/choice
that is made.
What is a decision?
Any option to take one fork vs. another.
So a decision is a decision.
A decision is a choice. A strategy is HOW you make that choice.
Shouldn't a decision be that which causes a specific fork to be chosen?
Why? I choose to eat pie. The reasoning behind the choice may be
as banal as "because it's already partially eaten and will spoil if
not consumed soon" or "because that is what my body craves at this moment"
or "because I want to remove that item from the refrigerator to make room
for some other item recently acquired".
In other words the current state of a system leads it to produce a
specific
future state?
That defines a strategic goal. Choices (decisions) are made all the time. Their *consequences* are often not considered in the process!
I don't claim to know what a decision is but I think it's interesting
that
it seems to be one of those questions everyone knows the answer to until
they're asked.
But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing it >>>> in ways that we might not consider palatable! (Do you really think an >>>> adversary will follow YOUR rules for its use -- if they see a way to
achieve gains?)
The risk from AI is that it makes decisions without being able to
articulate
a "reason" in a verifiable form.
I know/have known plenty of people who can do that.
But *you* can evaluate the "goodness" (correctness?) of their
decisions by an examination of their reasoning.
But then the decision has already been made so why bother with such an examination?
So, you can
opt to endorse their decision or reject it -- regardless of
THEIR opinion on the subject.
E.g., if a manager makes stupid decisions regarding product
design, you can decide if you want to deal with the
inevitable (?) outcome from those decisions -- or "move on".
You aren't bound by his decision making process.
With AIs making societal-scale decisions (directly or
indirectly), you get caught up in the side-effects of those.
Certainly AI decisions will depend on their training, just as human
decisions do.
And you can still decide whether to be bound by that decision.
Unless, of course, the AI has got itself into a position where it will see you do it anyway by persuasion, coercion, or force.
Just like humans do.
Human treatment of other animals tends not to be of the best, except in a minority of cases.
How do we know that AI will treat us in a way we consider to be reasonable?
Human managers often don't. Sure you can make a decision to leave that job but it's not an option for many people.
Actors had better watch out if this page is anything to go by: https://openai.com/index/sora/
I remember a discussion with a colleague many decades ago about where computers were going in the future.
My view was that at some future time, human actors would no longer be
needed.
His view was that he didn't think that would ever be possible.
Now it's looking like I might live long enough to get to type something like Prompt: Create a new episode of Blake's Seven.
On 5/18/2024 7:47 AM, Edward Rawde wrote:
But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing >>>>> it
in ways that we might not consider palatable! (Do you really think an >>>>> adversary will follow YOUR rules for its use -- if they see a way to >>>>> achieve gains?)
The risk from AI is that it makes decisions without being able to
articulate
a "reason" in a verifiable form.
I know/have known plenty of people who can do that.
But *you* can evaluate the "goodness" (correctness?) of their
decisions by an examination of their reasoning.
But then the decision has already been made so why bother with such an
examination?
So you can update your assessment of the party's decision making capabilities/strategies.
When a child is "learning", the parent is continually refining the "knowledge" the child is accumulating; correcting faulty
"conclusions" that the child may have gleaned from its examination
of the "facts" it encounters.
In the early days of AI, inference engines were really slow;
forward chaining was an exhaustive process (before Rete).
So, it was not uncommon to WATCH the "conclusions" (new
knowledge) that the engine would derive from its existing
knowledge base. You would use this to "fix" poorly defined
"facts" so the AI wouldn't come to unwarranted conclusions.
AND GATE THOSE INACCURATE CONCLUSIONS FROM ENTERING THE
KNOWLEDGE BASE!
Women bear children.
The Abbess is a woman.
Great-great-grandmother Florence is a woman.
Therefore, the Abbess and Florence bear children.
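For illustration only, here is a minimal sketch of that kind of naive forward chainer (my own toy, not any particular engine; the fact and rule names are made up). It fires rules against a working memory and logs which rule produced each derived fact. Run on the three statements above, the over-broad "women bear children" rule derives the unwarranted conclusion about Florence, which is exactly what an operator watching the trace would catch and correct before it entered the knowledge base.

facts = {("woman", "Abbess"), ("woman", "Florence")}

rules = [
    # Over-broad rule: no qualifier for age or ability to bear children.
    ("R1: women bear children",
     lambda known: {("bears_children", who) for (pred, who) in known if pred == "woman"}),
]

def forward_chain(facts, rules):
    # Repeatedly fire every rule until no new facts appear, recording
    # (rule, derived fact) pairs so a human can audit each conclusion.
    known, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for name, body in rules:
            for new_fact in body(known) - known:
                known.add(new_fact)
                trace.append((name, new_fact))
                changed = True
    return known, trace

known, trace = forward_chain(facts, rules)
for rule, fact in trace:
    print(rule, "->", fact)
# Prints bears_children for both the Abbess and great-great-grandmother
# Florence: the unwarranted conclusions an operator must gate out
# (e.g., by qualifying the rule) before they pollute the knowledge base.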
Now, better algorithms (Rete, et al.), faster processors,
SIMD/MIMD, cheap/fast memory make it possible to process
very large knowledge bases faster than an interactive "operator"
can validate the conclusions.
Other technologies don't provide information to an "agency"
(operator) for validation; e.g., LLMs can't explain why they
produced their output whereas a Production System can enumerate
the rules followed for your inspection (and CORRECTION).
So, you can
opt to endorse their decision or reject it -- regardless of
THEIR opinion on the subject.
E.g., if a manager makes stupid decisions regarding product
design, you can decide if you want to deal with the
inevitable (?) outcome from those decisions -- or "move on".
You aren't bound by his decision making process.
With AIs making societal-scale decisions (directly or
indirectly), you get caught up in the side-effects of those.
Certainly AI decisions will depend on their training, just as human
decisions do.
But human learning happens over years and often in a supervised context.
AIs "learn" so fast that only another AI would be productive at
refining its training.
And you can still decide whether to be bound by that decision.
Unless, of course, the AI has got itself into a position where it will
see
you do it anyway by persuasion, coercion, or force.
Consider the mammogram example. The AI is telling you that this
sample indicates the presence -- or likelihood -- of cancer.
You have a decision to make... an ACTIVE choice: do you accept
its Dx or reject it? Each choice comes with a risk/cost.
If you ignore the recommendation, injury (death?) can result from
your "inaction" on the recommendation. If you take some remedial
action, injury (in the form of unnecessary procedures/surgery)
can result.
Because the AI can't *explain* its "reasoning" to you, you have no way
of updating your assessment of its (likely) correctness -- esp in
THIS instance.
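As a back-of-the-envelope sketch of that bind (all numbers invented purely for illustration, not medical guidance): with an opaque classifier, the accept/reject choice reduces to weighing an assumed hit rate against the asymmetric costs of the two kinds of mistake, something like this.

# Hypothetical numbers for illustration only.
p_cancer_given_flag = 0.30    # assumed probability the flagged sample really is cancer
cost_missed_cancer = 100.0    # assumed relative cost of ignoring a true positive
cost_needless_biopsy = 5.0    # assumed relative cost of acting on a false positive

expected_cost_if_ignored = p_cancer_given_flag * cost_missed_cancer
expected_cost_if_acted = (1.0 - p_cancer_given_flag) * cost_needless_biopsy

print("act" if expected_cost_if_acted < expected_cost_if_ignored else "ignore")
# The catch: with an AI that cannot explain itself, p_cancer_given_flag is a
# population average at best -- there is no way to revise it for THIS instance.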
Just like humans do.
Human treatment of other animals tends not to be of the best, except in a
minority of cases.
How do we know that AI will treat us in a way we consider to be
reasonable?
The AI doesn't care about you, one way or the other. Any "bias" in
its conclusions has been baked in from the training data/process.
Do you know what that data was? Can you assess its bias? Do the folks
who *compiled* the training data know? Can they "tease" the bias out
of the data -- or, are they oblivious to its presence?
Lots of blacks in prison. Does that "fact" mean that blacks are
more criminally inclined? Or, that they are less skilled at evading
the consequences of their crimes? Or, that there is a bias in the legal/enforcement system?
All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming into our (US) country. Or, is that just hyperbole ("illegal" immigrants
tend to commit FEWER crimes)? Will the audience be biased in its acceptance/rejection of that "assertion"?
Human managers often don't. Sure you can make a decision to leave that
job
but it's not an option for many people.
Actors had better watch out if this page is anything to go by:
https://openai.com/index/sora/
I remember a discussion with a colleague many decades ago about where
computers were going in the future.
My view was that at some future time, human actors would no longer be
needed.
His view was that he didn't think that would ever be possible.
If I was a "talking head" (news anchor, weather person), I would be VERY afraid for my future livelihood. Setting up a CGI newsroom would be
a piece of cake. No need to pay for "personalities", "wardrobe", "hair/makeup", etc. "Tune" voice and appearance to fit the preferences
of the viewership. Let viewers determine which PORTIONS of the WORLD
news they want to see/hear presented without incurring the need for
a larger staff (just feed the stories from the wire services to your
*CGI* talking heads!)
And that's not even beginning to address other aspects of the
"presentation" (e.g., turn left girls).
Real estate agents would likely be the next to go; much of their
jobs being trivial "hosting" and "transport". Real estate *law*
is easily codified into an AI to ensure buyers/sellers get
correct service. An AI could also evaluate (and critique)
the "presentation" of the property. "Carry me IN your phone..."
Now it's looking like I might live long enough to get to type something
like
Prompt: Create a new episode of Blake's Seven.
The question is whether or not you will be able to see a GOOD episode.
But then the decision has already been made so why bother with such an
examination?
So you can update your assessment of the party's decision making
capabilities/strategies.
But it is still the case that the decision has already been made.
When a child is "learning", the parent is continually refining the
"knowledge" the child is accumulating; correcting faulty
"conclusions" that the child may have gleaned from its examination
of the "facts" it encounters.
The quality of parenting varies a lot.
So, you can
opt to endorse their decision or reject it -- regardless of
THEIR opinion on the subject.
E.g., if a manager makes stupid decisions regarding product
design, you can decide if you want to deal with the
inevitable (?) outcome from those decisions -- or "move on".
You aren't bound by his decision making process.
With AIs making societal-scale decisions (directly or
indirectly), you get caught up in the side-effects of those.
Certainly AI decisions will depend on their training, just as human
decisions do.
But human learning happens over years and often in a supervised context.
AIs "learn" so fast that only another AI would be productive at
refining its training.
In that case how did AlphaZero manage to teach itself to play chess by playing against itself?
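Roughly speaking, by generating its own training data: the current policy plays both sides, and the moves that ended up on the winning side get reinforced. Here is a toy sketch of that loop for a trivial Nim variant (my own illustration; the real AlphaZero couples a deep network with Monte Carlo tree search, not a lookup table).

import random
from collections import defaultdict

WIN, LOSS = 1.0, -1.0
value = defaultdict(float)   # value[(stones_left, stones_taken)] -> running average outcome
counts = defaultdict(int)

def choose(stones, explore=0.1):
    # Legal moves: take 1 or 2 stones, never more than remain.
    moves = [t for t in (1, 2) if t <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda t: value[(stones, t)])

def self_play_game():
    # Two copies of the same policy play each other; taking the last stone wins.
    stones, player, history = 7, 0, {0: [], 1: []}
    while stones > 0:
        take = choose(stones)
        history[player].append((stones, take))
        stones -= take
        if stones == 0:
            winner = player
        player ^= 1
    return winner, history

def train(games=20000):
    for _ in range(games):
        winner, history = self_play_game()
        for player, moves in history.items():
            outcome = WIN if player == winner else LOSS
            for key in moves:
                counts[key] += 1
                value[key] += (outcome - value[key]) / counts[key]  # running mean

train()
print(choose(7, explore=0.0))  # after training, should take 1, leaving a multiple of 3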
And you can still decide whether to be bound by that decision.
Unless, of course, the AI has got itself into a position where it will
see
you do it anyway by persuasion, coercion, or force.
Consider the mammogram example. The AI is telling you that this
sample indicates the presence -- or likelihood -- of cancer.
You have a decision to make... an ACTIVE choice: do you accept
its Dx or reject it? Each choice comes with a risk/cost.
If you ignore the recommendation, injury (death?) can result from
your "inaction" on the recommendation. If you take some remedial
action, injury (in the form of unnecessary procedures/surgery)
can result.
Because the AI can't *explain* its "reasoning" to you, you have no way
of updating your assessment of its (likely) correctness -- esp in
THIS instance.
I'm not sure I get why it's so essential to have AI explain its reasons.
If I need some plumbing done I don't expect the plumber to give detailed reasons why a specific type of pipe was chosen. I just want it done.
If I want to play chess with a computer I don't expect it to give detailed reasons why it made each move. I just expect it to win if it's set to much above beginner level.
A human chess player may be able to give detailed reasons for making a specific move but would not usually be asked to do this.
Just like humans do.
Human treatment of other animals tends not to be of the best, except in a >>> minority of cases.
How do we know that AI will treat us in a way we consider to be
reasonable?
The AI doesn't care about you, one way or the other. Any "bias" in
its conclusions has been baked in from the training data/process.
Same with humans.
Do you know what that data was? Can you assess its bias? Do the folks
who *compiled* the training data know? Can they "tease" the bias out
of the data -- or, are they oblivious to its presence?
Humans have the same issue. You can't see into another person's brain to see what bias they may have.
Lots of blacks in prison. Does that "fact" mean that blacks are
more criminally inclined? Or, that they are less skilled at evading
the consequences of their crimes? Or, that there is a bias in the
legal/enforcement system?
I don't see how that's relevant to AI which I think is just as capable of bias as humans are.
All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming >> into our (US) country. Or, is that just hyperbole ("illegal" immigrants
tend to commit FEWER crimes)? Will the audience be biased in its
acceptance/rejection of that "assertion"?
Who knows, but whether it's human or AI it will have its own personality
and its own biases.
And that's not even beginning to address other aspects of the
"presentation" (e.g., turn left girls).
Real estate agents would likely be the next to go; much of their
jobs being trivial "hosting" and "transport". Real estate *law*
is easily codified into an AI to ensure buyers/sellers get
correct service. An AI could also evaluate (and critique)
the "presentation" of the property. "Carry me IN your phone..."
Which is why I started this with "One thing which bothers me about AI is
that if it's like us but way more
intelligent than us then..."
Now it's looking like I might live long enough to get to type something
like
Prompt: Create a new episode of Blake's Seven.
The question is whether or not you will be able to see a GOOD episode.
I think AI will learn the difference between a good or not so good episode just like humans do.
Particularly if it gets plenty of feedback from humans about whether or not they liked the episode it produced.
It might then play itself a few million created episodes to refine its ability to judge good ones.
On 5/18/2024 4:32 PM, Edward Rawde wrote:
But then the decision has already been made so why bother with such an >>>> examination?
So you can update your assessment of the party's decision making
capabilities/strategies.
But it is still the case that the decision has already been made.
That doesn't mean that YOU have to abide by it. Or, even that
the other party has ACTED on the decision. I.e., decisions are
not immutable.
When a child is "learning", the parent is continually refining the
"knowledge" the child is accumulating; correcting faulty
"conclusions" that the child may have gleaned from its examination
of the "facts" it encounters.
The quality of parenting varies a lot.
Wouldn't you expect the training for AIs to similarly vary
in capability?
Because the AI can't *explain* its "reasoning" to you, you have no way
of updating your assessment of its (likely) correctness -- esp in
THIS instance.
I'm not sure I get why it's so essential to have AI explain its reasons.
Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
Why do THEY have to explain their reasons? Your /prima facie/ actions
suggest you HIRED those folks for their expertise; why do you now need
an explanation of their actions/decisions instead of just blindly accepting them?
If I need some plumbing done I don't expect the plumber to give detailed
reasons why a specific type of pipe was chosen. I just want it done.
If you suspect that he may not be competent -- or may be motivated by
greed -- then you would likely want some further information to reinforce your opinion/suspicions.
We hired folks to paint the house many years ago. One of the questions
that I would ask (already KNOWING the nominal answer) is "How much paint
do you think it will take?" This chosen because it sounds innocent
enough that a customer would likely ask it.
One candidate answered "300 gallons". At which point, I couldn't
contain the affront: "We're not painting a f***ing BATTLESHIP!"
I.e., his outrageous reply told me:
- he's not competent enough to estimate a job's complexity WHEN
EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
*or*
- he's a crook thinking he can take advantage of a "dumb homeowner"
In either case, he was disqualified BY his "reasoning".
In the cases where AIs are surpassing human abilities (being able
to perceive relationships that aren't (yet?) apparent to humans),
it seems only natural that you would want to UNDERSTAND their
"reasoning". Especially in cases where there is no chaining
of facts but, rather, some "hidden pattern" perceived.
If I want to play chess with a computer I don't expect it to give
detailed
reasons why it made each move. I just expect it to win if it's set to
much
above beginner level.
Then you don't expect to LEARN from the chess program.
When I learned to play chess, my neighbor (teacher) would
make a point of showing me what I had overlooked in my
play and why that led to the consequences that followed.
If I had a record of moves made (from which I could incrementally
recreate the gameboard configuration), I *might* have spotted
my error.
As the teacher (AI in this case) is ultimately a product of
current students (who grow up to become teachers, refined
by their experiences as students), we evolve in our
capabilities as a society.
If the plumber never explains his decisions, then the
homeowner never learns (e.g., don't over-tighten the
hose bibb lest you ruin the washer inside and need
me to come out, again, to replace it!)
A human chess player may be able to give detailed reasons for making a
specific move but would not usually be asked to do this.
If the human was expected to TEACH then those explanations would be
essential TO that teaching!
If the student was wanting to LEARN, then he would select a player that
was capable of teaching!
Just like humans do.
Human treatment of other animals tends not to be of the best, except in >>>> a
minority of cases.
How do we know that AI will treat us in a way we consider to be
reasonable?
The AI doesn't care about you, one way or the other. Any "bias" in
its conclusions has been baked in from the training data/process.
Same with humans.
That's not universally true. If it was, then all decisions would
be completely motivated for personal gain.
Do you know what that data was? Can you assess its bias? Do the folks
who *compiled* the training data know? Can they "tease" the bias out
of the data -- or, are they oblivious to its presence?
Humans have the same issue. You can't see into another person's brain to
see
what bias they may have.
Exactly. But, you can pose questions of them and otherwise observe their behaviors in unrelated areas and form an opinion.
I've a neighbor who loudly claims NOT to be racist. But, if you take the whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
He also is very vocal about The Border (an hour from here). Yet,
ALWAYS hires mexicans. Does he ever check to see if they are here
legally? Entitled to work? Or, is he really only concerned with
the price they charge?
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?
Lots of blacks in prison. Does that "fact" mean that blacks are
more criminally inclined? Or, that they are less skilled at evading
the consequences of their crimes? Or, that there is a bias in the
legal/enforcement system?
I don't see how that's relevant to AI which I think is just as capable of
bias as humans are.
Fact contraindicates bias. So, bias -- anywhere -- is a distortion of "Truth".
Would you want your doctor to give a different type of care to your wife
than to you? Because of a (hidden?) bias in favor of men (or, against women)?
If you were that female, how would you regard that bias?
All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly
coming
into our (US) country. Or, is that just hyperbole ("illegal" immigrants >>> tend to commit FEWER crimes)? Will the audience be biased in its
acceptance/rejection of that "assertion"?
Who knows, but whether it's human or AI it will have it's own personality
and its own biases.
But we, in assessing "others", strive to identify those biases (unless we
want
to blindly embrace them as "comforting/reinforcing").
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc. He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc. And, I just smile and let his comments roll
off me. SWMBO asks why I spend *any* time with him.
"I find it entertaining!" (!!)
By contrast, I am NOT the sort who belongs to organizations, churches,
etc. ("group think"). It's much easier to see the characteristics of and flaws *in* these things (and people) from the outside than to wrap
yourself
in their culture. If you are sheeple, you likely enjoy having others
do your thinking FOR you...
And that's not even beginning to address other aspects of the
"presentation" (e.g., turn left girls).
Real estate agents would likely be the next to go; much of their
jobs being trivial "hosting" and "transport". Real estate *law*
is easily codified into an AI to ensure buyers/sellers get
correct service. An AI could also evaluate (and critique)
the "presentation" of the property. "Carry me IN your phone..."
Which is why I started this with "One thing which bothers me about AI is
that if it's like us but way more
intelligent than us then..."
What's to fear, there? If *you* have the ultimate authority to make
YOUR decisions, then you can choose to ignore the "recommendations"
of an AI just like you can ignore the recommendations of human "experts"/professionals.
Now it's looking like I might live long enough to get to type something >>>> like
Prompt: Create a new episode of Blake's Seven.
The question is whether or not you will be able to see a GOOD episode.
I think AI will learn the difference between a good or not so good
episode
just like humans do.
How would it learn? Would *it* be able to perceive the "goodness" of
the episode? If so, why produce one that it didn't think was good?
HUMANS release non-good episodes because there is a huge cost to
making it that has already been incurred. An AI could just scrub the
disk and start over. What cost, there?
Particularly if it gets plenty of feedback from humans about whether or
not
they liked the episode it produced.
That assumes people will be the sole REACTIVE judge of completed
episodes. Part of what makes entertainment entertaining is
the unexpected. Jokes are funny because someone has noticed a
relationship between two ideas in a way that others have not,
previously. Stories leave lasting impressions when executed well
*or* when a twist catches viewers offguard.
Would an AI create something like Space Balls? Would it perceive the
humor in the various corny "bits" sprinkled throughout? How would
YOU explain the humor to it?
The opening sequence to Buckaroo Banzai has the protagonist driving a
"jet car" THROUGH a (solid) mountain, via the 8th dimension. After
the drag chute deploys and WHILE the car is rolling to a stop, the
driver climbs out through a window. The camera remains closely
focused on the driver's MASKED face (you have yet to see it unmasked)
while the car continues to roll away behind him. WHILE YOUR ATTENTION
IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
quietly (because it is now at a distance). Would the AI appreciate THAT humor? It *might* repeat that scene in one of its creations -- but,
only after having SEEN it, elsewhere. Or, without understanding the
humor and just assuming dieseling to be a common occurrence in ALL
vehicles!
It might then play itself a few million created episodes to refine its
ability to judge good ones.
On 5/18/2024 3:49 PM, Edward Rawde wrote:
What is a decision?
Any option to take one fork vs. another.
So a decision is a decision.
A decision is a choice. A strategy is HOW you make that choice.
Shouldn't a decision be that which causes a specific fork to be chosen?
Why? I choose to eat pie. The reasoning behind the choice may be
as banal as "because it's already partially eaten and will spoil if
not consumed soon" or "because that is what my body craves at this
moment"
or "because I want to remove that item from the refrigerator to make
room
for some other item recently acquired".
In other words the current state of a system leads it to produce a
specific
future state?
That defines a strategic goal. Choices (decisions) are made all the
time.
Their *consequences* are often not considered in the process!
In that case I'm not seeing anything different between decisions, goals
and
choices made by a human brain and those made by an AI system.
There is none. The motivation for a human choice or goal pursuit will
likely be different than that of an AI.
Does an AI have *inherent* needs
(that haven't been PLACED THERE)?
But what started this was "People are invariably misled by thinking that
there is "intelligence" involved in the technology".
So perhaps I should be asking what is intelligence? And can a computer
have
it?
Was the computer which created these videos intelligent?
https://openai.com/index/sora/
Plenty of decisions and choices must have been made and I don't see
anything
in the "Historical footage of California during the gold rush" which says
it's not a drone flying over a set made for a movie.
The goal was to produce the requested video.
Some of the other videos do scream AI but that may not be the case in a
year
or two.
In any case the human imagination is just as capable of imagining a scene
with tiny red pandas as it is of imagining a scene which could exist in
reality.
Did the creation of these videos require intelligence?
What exactly IS intelligence?
I might also ask what is a reason?
Reason is not confined to humans. It is just a mechanism of connecting
facts to achieve a goal/decision/outcome.
Intelligence maps imagination onto reality. Again, would an AI
have created /The Persistence of Memory/ without previously having encountered a similar exemplar? The idiot savant who can perform
complex calculations in his head, in very little time -- but who can't
see the flaw in the missing dollar riddle?
Knock knock.
Who's there?
Banana
Banana who?
Knock knock.
Who's there?
Banana
Banana who?
..
Knock knock.
Who's there?
Banana
Banana who?
Knock knock.
Who's there?
Orange
Banana who?
Orange you glad I didn't say Banana?
Would an AI "think" to formulate a joke based on the APPROXIMATELY
similar sounds of "Aren't" and "Orange"?
Guttenberg has an interesting test for sentience that he poses to
Number5 in Short Circuit. The parallel would be, can an AI (itself!) appreciate humor? Or, only as a tool towards some other goal?
Why do YOU tell jokes? How much of it is to amuse others vs.
to feed off of their reactions? I.e., is it for you, or them?
Is a calculator intelligent? Smart? Creative? Imaginative?
You can probably appreciate the cleverness and philosophical
aspects of Theseus's paradox. Would an AI? Even if it
could *explain* it?
I don't claim to know what a decision is but I think it's interesting
that
it seems to be one of those questions everyone knows the answer to
until
they're asked.
What is a decision?
Any option to take one fork vs. another.
So a decision is a decision.
A decision is a choice. A strategy is HOW you make that choice.
Shouldn't a decision be that which causes a specific fork to be chosen?
Why? I choose to eat pie. The reasoning behind the choice may be
as banal as "because it's already partially eaten and will spoil if
not consumed soon" or "because that is what my body craves at this moment" >> or "because I want to remove that item from the refrigerator to make room
for some other item recently acquired".
In other words the current state of a system leads it to produce a
specific
future state?
That defines a strategic goal. Choices (decisions) are made all the time. Their *consequences* are often not considered in the process!
In that case I'm not seeing anything different between decisions, goals and choices made by a human brain and those made by an AI system.
But what started this was "People are invariably misled by thinking that there is "intelligence" involved in the technology".
So perhaps I should be asking what is intelligence? And can a computer have it?
Was the computer which created these videos intelligent? https://openai.com/index/sora/
Plenty of decisions and choices must have been made and I don't see anything in the "Historical footage of California during the gold rush" which says it's not a drone flying over a set made for a movie.
The goal was to produce the requested video.
Some of the other videos do scream AI but that may not be the case in a year or two.
In any case the human imagination is just as capable of imagining a scene with tiny red pandas as it is of imagining a scene which could exist in reality.
Did the creation of these videos require intelligence?
What exactly IS intelligence?
I might also ask what is a reason?
I don't claim to know what a decision is but I think it's interesting
that
it seems to be one of those questions everyone knows the answer to until >>> they're asked.
Does an AI have *inherent* needs
(that haven't been PLACED THERE)?
I'm not sure I follow that.
Intelligence maps imagination onto reality. Again, would an AI
have created /The Persistence of Memory/ without previously having
encountered a similar exemplar? The idiot savant who can perform
complex calculations in his head, in very little time -- but who can't
see the flaw in the missing dollar riddle?
Knock knock.
Who's there?
Banana
Banana who?
Knock knock.
Who's there?
Banana
Banana who?
..
Knock knock.
Who's there?
Banana
Banana who?
Knock knock.
Who's there?
Orange
Banana who?
Orange you glad I didn't say Banana?
Would an AI "think" to formulate a joke based on the APPROXIMATELY
similar sounds of "Aren't" and "Orange"?
Um well they don't sound similar to me but maybe I have a different accent.
Guttenberg has an interesting test for sentience that he poses to
Number5 in Short Circuit. The parallel would be, can an AI (itself!)
appreciate humor? Or, only as a tool towards some other goal?
Why do YOU tell jokes? How much of it is to amuse others vs.
to feed off of their reactions? I.e., is it for you, or them?
Is a calculator intelligent? Smart? Creative? Imaginative?
That reminds me of a religious teacher many decades ago when we had to have one hour of "religious education" per week for some reason.
Typical of his questions were "why does a calculator never get a sum wrong?" and "can a computer make decisions?".
Also typical were statements such as "a dog can't tell the difference
between right and wrong. Only humans can."
Being very shy at the time I just sat there thinking "there's wishful thinking for you".
You can probably appreciate the cleverness and philosophical
aspects of Theseus's paradox. Would an AI? Even if it
could *explain* it?
I don't claim to know what a decision is but I think it's interesting >>>>> that
it seems to be one of those questions everyone knows the answer to
until
they're asked.
On 5/18/2024 7:34 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2bpm2$36hos$3@dont-email.me...
On 5/18/2024 7:34 PM, Edward Rawde wrote:
So is it ok if I take a step back here and ask whether you think that AI/AGI has some inherent limitation which means it will never match human intelligence?
Or do you think that AI/AGI will, at some future time, match human intelligence?
I don't mean to suggest that AI will become human, or will need to become human. It will more likely have its own agenda.
Because the AI can't *explain* its "reasoning" to you, you have no way >>>> of updating your assessment of its (likely) correctness -- esp in
THIS instance.
I'm not sure I get why it's so essential to have AI explain its reasons.
Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
Why do THEY have to explain their reasons? Your /prima facie/ actions
suggest you HIRED those folks for their expertise; why do you now need
an explanation of their actions/decisions instead of just blindly accepting
them?
That's the point. I don't. I have to accept a doctor's decision on my treatment because I am not medically trained.
If I need some plumbing done I don't expect the plumber to give detailed >>> reasons why a specific type of pipe was chosen. I just want it done.
If you suspect that he may not be competent -- or may be motivated by
greed -- then you would likely want some further information to reinforce
your opinion/suspicions.
We hired folks to paint the house many years ago. One of the questions
that I would ask (already KNOWING the nominal answer) is "How much paint
do you think it will take?" This chosen because it sounds innocent
enough that a customer would likely ask it.
One candidate answered "300 gallons". At which point, I couldn't
contain the affront: "We're not painting a f***ing BATTLESHIP!"
I would have said two million gallons just for the pleasure of watching you go red in the face.
I.e., his outrageous reply told me:
- he's not competent enough to estimate a job's complexity WHEN
EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
*or*
- he's a crook thinking he can take advantage of a "dumb homeowner"
In either case, he was disqualified BY his "reasoning".
I would have likely given him the job. Those who are good at painting houses aren't necessarily good at estimating exactly how much paint they will need. They just buy more paint as needed.
In the cases where AIs are surpassing human abilities (being able
to perceive relationships that aren't (yet?) apparent to humans,
it seems only natural that you would want to UNDERSTAND their
"reasoning". Especially in cases where there is no chaining
of facts but, rather, some "hidden pattern" perceived.
It's true that you may want to understand their reasoning but it's likely that you might have to accept that you can't.
If I want to play chess with a computer I don't expect it to give
detailed
reasons why it made each move. I just expect it to win if it's set to
much
above beginner level.
Then you don't expect to LEARN from the chess program.
Sure I do, but I'm very slow to get better at chess. I tend to make rash decisions when playing chess.
When I learned to play chess, my neighbor (teacher) would
make a point of showing me what I had overlooked in my
play and why that led to the consequences that followed.
If I had a record of moves made (from which I could incrementally
recreate the gameboard configuration), I *might* have spotted
my error.
I usually spot my error immediately when the computer makes me look stupid.
As the teacher (AI in this case) is ultimately a product of
current students (who grow up to become teachers, refined
by their experiences as students), we evolve in our
capabilities as a society.
If the plumber never explains his decisions, then the
homeowner never learns (e.g., don't over-tighten the
hose bibb lest you ruin the washer inside and need
me to come out, again, to replace it!)
I don't agree. Learning something like that does not depend on the plumber explaining his decisions.
A human chess player may be able to give detailed reasons for making a
specific move but would not usually be asked to do this.
If the human was expected to TEACH then those explanations would be
essential TO that teaching!
If the student was wanting to LEARN, then he would select a player that
was capable of teaching!
Sure but so what. Most chess games between humans are not about teaching.
The AI doesn't care about you, one way or the other. Any "bias" in
its conclusions has been baked in from the training data/process.
Same with humans.
That's not universally true. If it was, then all decisions would
be completely motivated for personal gain.
Humans generally don't care much for people they have no personal knowledge of.
Do you know what that data was? Can you assess its bias? Do the folks >>>> who *compiled* the training data know? Can they "tease" the bias out
of the data -- or, are they oblivious to its presence?
Humans have the same issue. You can't see into another person's brain to >>> see
what bias they may have.
Exactly. But, you can pose questions of them and otherwise observe their
behaviors in unrelated areas and form an opinion.
If they are, say, a doctor then yes you can ask questions about your treatment but you can't otherwise observe their behavior.
I've a neighbor who loudly claims NOT to be racist. But, if you take the
whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
He also is very vocal about The Border (an hour from here). Yet,
ALWAYS hires mexicans. Does he ever check to see if they are here
legally? Entitled to work? Or, is he really only concerned with
the price they charge?
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?
I'm not following what that has to do with AI.
Lots of blacks in prison. Does that "fact" mean that blacks are
more criminally inclined? Or, that they are less skilled at evading
the consequences of their crimes? Or, that there is a bias in the
legal/enforcement system?
I don't see how that's relevant to AI which I think is just as capable of >>> bias as humans are.
Fact contraindicates bias. So, bias -- anywhere -- is a distortion of
"Truth".
Would you want your doctor to give a different type of care to your wife
than to you? Because of a (hidden?) bias in favor of men (or, against
women)?
if you were that female, how would you regard that bias?
I may not want it but it's possible it could exist.
It might be the case that I could do nothing about it.
All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly
coming
into our (US) country. Or, is that just hyperbole ("illegal" immigrants >>>> tend to commit FEWER crimes)? Will the audience be biased in its
acceptance/rejection of that "assertion"?
Who knows, but whether it's human or AI it will have it's own personality >>> and its own biases.
But we, in assessing "others" strive to identify those biases (unless we
want
to blindly embrace them as "comforting/reinforcing").
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc. He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc. And, I just smile and let his comments roll
off me. SWMBO asks why I spend *any* time with him.
"I find it entertaining!" (!!)
Oh. Now I get why we're having this discussion.
By contrast, I am NOT the sort who belongs to organizations, churches,
etc. ("group think"). It's much easier to see the characteristics of and
flaws *in* these things (and people) from the outside than to wrap
yourself
in their culture. If you are sheeple, you likely enjoy having others
do your thinking FOR you...
I don't enjoy having others do my thinking for me but I'm happy to let them do so in areas where I have no expertise.
Which is why I started this with "One thing which bothers me about AI is that if it's like us but way more
intelligent than us then..."
What's to fear, there? If *you* have the ultimate authority to make
YOUR decisions, then you can choose to ignore the "recommendations"
of an AI just like you can ignore the recommendations of human
"experts"/professionals.
Who says we have the ultimate authority to ignore AI if it gets cleverer
than us?
Now it's looking like I might live long enough to get to type something like
Prompt: Create a new episode of Blake's Seven.
The question is whether or not you will be able to see a GOOD episode.
I think AI will learn the difference between a good or not so good
episode
just like humans do.
How would it learn? Would *it* be able to perceive the "goodness" of
the episode? If so, why produce one that it didn't think was good?
HUMANS release non-good episodes because there is a huge cost to
making it that has already been incurred. An AI could just scrub the
disk and start over. What cost, there?
Particularly if it gets plenty of feedback from humans about whether or
not
they liked the episode it produced.
That assumes people will be the sole REACTIVE judge of completed
episodes. Part of what makes entertainment entertaining is
the unexpected. Jokes are funny because someone has noticed a
relationship between two ideas in a way that others have not,
previously. Stories leave lasting impressions when executed well
*or* when a twist catches viewers off guard.
Would an AI create something like Space Balls? Would it perceive the
humor in the various corny "bits" sprinkled throughout? How would
YOU explain the humor to it?
I would expect it to generate humor the same way humans do.
The opening sequence to Buckaroo Banzai has the protagonist driving a
"jet car" THROUGH a (solid) mountain, via the 8th dimension. After
the drag chute deploys and WHILE the car is rolling to a stop, the
driver climbs out through a window. The camera remains closely
focused on the driver's MASKED face (you have yet to see it unmasked)
while the car continues to roll away behind him. WHILE YOUR ATTENTION
IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
quietly (because it is now at a distance). Would the AI appreciate THAT
humor? It *might* repeat that scene in one of its creations -- but,
only after having SEEN it, elsewhere. Or, without understanding the
humor and just assuming dieseling to be a common occurrence in ALL
vehicles!
Same way it might appreciate this: https://www.youtube.com/watch?v=tYJ5_wqlQPg
It might then play itself a few million created episodes to refine its
ability to judge good ones.
On 5/18/2024 8:15 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v2bpm2$36hos$3@dont-email.me...
On 5/18/2024 7:34 PM, Edward Rawde wrote:
So is it ok if I take a step back here and ask whether you think that
AI/AGI
has some inherent limitation which means it will never match human
intelligence?
Or do you think that AI/AGI will, at some future time, match human
intelligence?
That depends on the qualities and capabilities that you lump into
"HUMAN intelligence". Curiosity? Creativity? Imagination? One
can be exceedingly intelligent and of no more "value" than an
encyclopedia!
I am CERTAIN that AIs will be able to process the information available
to "human practitioners" (in whatever field) at least to the level of competence that they (humans) can, presently. It's just a question of resources thrown at the AI and the time available for it to "respond".
But, this ignores the fact that humans are more resourceful at probing
the environment than AIs ("No thumbs!") without mechanical assistance.
Could (would?) an AI decide to explore space?
Or, the ocean depths?
Or, the rain forest? Or, would its idea of exploration merely be a
visit to another net-neighbor??
Would (could) it consider human needs as important?
(see previous post)
How would it be motivated?
Would it attempt to think beyond its
limitations (something humans always do)? Or, would those be immutable
in its understanding of the world?
I don't mean to suggest that AI will become human, or will need to become
human. It will more likely have its own agenda.
Where will that agenda come from?
Will it inherit it from watching B-grade
sci-fi movies? "Let there be light!"
On 5/18/2024 6:53 PM, Edward Rawde wrote:
Because the AI can't *explain* its "reasoning" to you, you have no way of updating your assessment of its (likely) correctness -- esp in
THIS instance.
I'm not sure I get why it's so essential to have AI explain its
reasons.
Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
Why do THEY have to explain their reasons? Your /prima facie/ actions
suggest you HIRED those folks for their expertise; why do you now need
an explanation of their actions/decisions instead of just blindly accepting
them?
That's the point. I don't. I have to accept a doctor's decision on my
treatment because I am not medically trained.
So, that means you can't make sense of anything he would say to you to justify his decision?
Recall, everyone has bias -- including doctors.
If he assumes you will fail to follow his instructions/recommendations
if he tells you what he would LIKE you to do and, instead, gives you
the recommendation for what he feels you will LIKELY do, you've been shortchanged.
I asked my doctor what my ideal weight should be. He told me.
The next time I saw him, I weighed my ideal weight. He was surprised
as few patients actually heeded his advice on that score.
Another time, he wanted to prescribe a medication for me. I told
him I would fail to take it -- not deliberately but just because
I'm not the sort who remembers to take "pills". Especially if
"ongoing" (not just a two week course for an infection/malady).
He gave me an alternative "solution" which eliminated the need for
the medication, yielding the same result without any "side effects".
SWMBO has a similar relationship with her doctor. Tell us the
"right" way to solve the problem, not the easy way because you think
we'll behave like your "nominal" patients.
The same is true of one of our dogs. We made changes that the
vet suggested (to avoid medication) and a month later the vet
was flabbergasted to see the difference.
Our attitude is that you should EDUCATE us and let US make the
decisions for our care, based on our own value systems, etc.
If I need some plumbing done I don't expect the plumber to give
detailed
reasons why a specific type of pipe was chosen. I just want it done.
If you suspect that he may not be competent -- or may be motivated by
greed -- then you would likely want some further information to
reinforce
your opinion/suspicions.
We hired folks to paint the house many years ago. One of the questions
that I would ask (already KNOWING the nominal answer) is "How much paint do you think it will take?" This was chosen because it sounds innocent
enough that a customer would likely ask it.
One candidate answered "300 gallons". At which point, I couldn't
contain the affront: "We're not painting a f***ing BATTLESHIP!"
I would have said two million gallons just for the pleasure of watching
you
go red in the face.
No "anger" or embarassment, here. We just couldn't contain the fact
that we would NOT be calling him back to do the job!
I.e., his outrageous reply told me:
- he's not competent enough to estimate a job's complexity WHEN
EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
*or*
- he's a crook thinking he can take advantage of a "dumb homeowner"
In either case, he was disqualified BY his "reasoning".
I would have likely given him the job. Those who are good at painting
houses
aren't necessarily good at estimating exactly how much paint they will
need.
They just buy more paint as needed.
One assumes that he has painted OTHER homes and has some recollection of
the amount of paint purchased for the job. And, if this is his
livelihood,
one assumes that such activities would have been *recent* -- not months
ago
(how has he supported himself "without work"?).
Is my house considerably larger or smaller than the other houses that you have painted? (likely not) Does it have a different surface texture
that could alter the "coverage" rate? (again, likely not) So, shouldn't
you
be able to ballpark an estimate? "What did the LAST HOUSE you painted require
by way of paint quantity?"
Each engineering job that I take on differs from all that preceded it
(by my choice). Yet, I have to come up with a timeframe and a "labor estimate" within that timeframe as I do only fixed cost jobs. If
I err on either score, I either lose out on the bid *or* lose
"money" on the effort. Yet, despite vastly different designs, I
can still get a good ballpark estimate of the job a priori so that
neither I nor the client are "unhappy".
I'd not be "off" by an order of magnitude (as the paint estimate was!)
In the cases where AIs are surpassing human abilities (being able
to perceive relationships that aren't (yet?) apparent to humans),
it seems only natural that you would want to UNDERSTAND their
"reasoning". Especially in cases where there is no chaining
of facts but, rather, some "hidden pattern" perceived.
It's true that you may want to understand their reasoning but it's likely
that you might have to accept that you can't.
The point is that NO ONE can! Even the folks who designed and implemented the AI are clueless. AND THEY KNOW IT.
"It *seems* to give correct results when fed the test cases... We
*expected*
this but have no idea WHY a particular result was formulated as it was!"
If I want to play chess with a computer I don't expect it to give
detailed
reasons why it made each move. I just expect it to win if it's set to
much
above beginner level.
Then you don't expect to LEARN from the chess program.
Sure I do, but I'm very slow to get better at chess. I tend to make rash
decisions when playing chess.
Then your cost of learning is steep. I want to know how to RECOGNIZE situations that will give me opportunities OR risks so I can pursue or
avoid them. E.g., I don't advance the King to the middle of the
board just to "see what happens"!
When I learned to play chess, my neighbor (teacher) would
make a point of showing me what I had overlooked in my
play and why that led to the consequences that followed.
If I had a record of moves made (from which I could incrementally
recreate the gameboard configuration), I *might* have spotted
my error.
I usually spot my error immediately when the computer makes me look
stupid.
But you don't know how you GOT to that point so you don't know how
to avoid that situation in the first place! Was it because you
sacrificed too many pieces too early? Or allowed protections to
be drawn out, away from the King? Or...
You don't learn much from *a* (bad) move. You learn from
bad strategies/sequences of moves.
As the teacher (AI in this case) is ultimately a product of
current students (who grow up to become teachers, refined
by their experiences as students), we evolve in our
capabilities as a society.
If the plumber never explains his decisions, then the
homeowner never learns (e.g., don't over-tighten the
hose bibb lest you ruin the washer inside and need
me to come out, again, to replace it!)
I don't agree. Learning something like that does not depend on the
plumber
explaining his decisions.
You have someone SKILLED IN THE ART at hand. Instead of asking HIM,
you're going to LATER take the initiative to research the cause of
your problem? Seems highly inefficient.
A neighbor was trying to install some stops and complained that he
couldn't
tighten down the nuts sufficiently: "Should it be THIS difficult?" I
pulled the work apart and showed him the *tiny* mistake he was making
in installing the compression fittings -- and why that was manifesting as "hard to tighten". I could have, instead, fixed the problem for him and returned home -- him, none the wiser.
A human chess player may be able to give detailed reasons for making a specific move but would not usually be asked to do this.
If the human was expected to TEACH then those explanations would be
essential TO that teaching!
If the student was wanting to LEARN, then he would select a player that
was capable of teaching!
Sure but so what. Most chess games between humans are not about teaching.
So, everything in the world is a chess game? Apparently so as you
don't seem to want to learn from your plumber, doctor, chessmate, ...
The AI doesn't care about you, one way or the other. Any "bias" in
its conclusions has been baked in from the training data/process.
Same with humans.
That's not universally true. If it was, then all decisions would
be completely motivated for personal gain.
Humans generally don't care much for people they have no personal
knowledge
of.
I guess all the brouhaha about the Middle East is a hallucination? Or,
do you think all of the people involved overseas are personally related to the folks around the world showing interest in their plight?
Humans tend to care about others and expect others to care about *them*. Else, why "campaign" about any cause? *I* don't have breast cancer so
what point in the advertisements asking for donations? I don't know
any "wounded warriors" so why is someone wasting money on those ads
instead of addressing those *needs*? Clearly, these people THINK that
people care about other people else they wouldn't be asking for "gifts"!
Do you know what that data was? Can you assess its bias? Do the
folks
who *compiled* the training data know? Can they "tease" the bias out of the data -- or, are they oblivious to its presence?
Humans have the same issue. You can't see into another person's brain
to
see
what bias they may have.
Exactly. But, you can pose questions of them and otherwise observe
their
behaviors in unrelated areas and form an opinion.
If they are, say, a doctor then yes you can ask questions about your
treatment but you can't otherwise observe their behavior.
I watch the amount of time my MD gives me above and beyond the "15 minute slot"
that his office would PREFER to constrain him. I watch my dentist respond
to
calls to his PERSONAL cell phone WHILE OUT OF TOWN. I see the bicycle
that
SWMBO's MD rides to work each day.
These people aren't highlighting these aspects of their behavior. But,
they aren't hiding them, either. Anyone observant would "notice".
I've a neighbor who loudly claims NOT to be racist. But, if you take the
whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
He also is very vocal about The Border (an hour from here). Yet,
ALWAYS hires mexicans. Does he ever check to see if they are here
legally? Entitled to work? Or, is he really only concerned with
the price they charge?
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?
I'm not following what that has to do with AI.
It speaks to bias. Bias that people have and either ignore or
deny, despite it being obvious to others.
Those "others" will react to you WITH consideration of that bias
factored into their actions.
A neighbor was (apparently) abusing his wife. While "his side of
the story" remains to be told, most of us have decided that this
is consistent enough with his OTHER behaviors that it is more
likely than not. If asked to testify, he can be reasonably sure
none will point to any "good deeds" that he has done (as he hasn't
DONE any!)
Lots of blacks in prison. Does that "fact" mean that blacks are
more criminally inclined? Or, that they are less skilled at evading the consequences of their crimes? Or, that there is a bias in the
legal/enforcement system?
I don't see how that's relevant to AI which I think is just as capable of
bias as humans are.
Fact contraindicates bias. So, bias -- anywhere -- is a distortion of
"Truth".
Would you want your doctor to give a different type of care to your wife than to you? Because of a (hidden?) bias in favor of men (or, against
women)?
If you were that female, how would you regard that bias?
I may not want it but it's possible it could exist.
It might be the case that I could do nothing about it.
If you believe the literature, there are all sorts of populations discriminated against in medicine. Doctors tend to be more aggressive
in treating "male" problems than those of women patients -- apparently including female doctors.
If you passively interact with your doctor, you end up with that
bias unquestioned in your care. Thankfully (in our experience),
challenging the doctor has always resulted in them rising to the
occasion, thus improving the care "dispensed".
All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly
coming
into our (US) country. Or, is that just hyperbole ("illegal"
immigrants
tend to commit FEWER crimes)? Will the audience be biased in its
acceptance/rejection of that "assertion"?
Who knows, but whether it's human or AI it will have its own
personality
and its own biases.
But we, in assessing "others" strive to identify those biases (unless we want
to blindly embrace them as "comforting/reinforcing").
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc. He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc. And, I just smile and let his comments roll
off me. SWMBO asks why I spend *any* time with him.
"I find it entertaining!" (!!)
Oh. Now I get why we're having this discussion.
I am always looking for opportunities to learn. How can you be so
critical
of ALL these things (not just myself but EVERYONE around him including
all of the folks he *hires*!) and still remain in this "situation"?
You can afford to move anywhere (this isn't even your "home") so why
stay here with these people -- and providers -- that you (appear to)
dislike? If you go to a restaurant and are served a bad meal, do you
just eat it and grumble under your breath? Do you RETURN to the
restaurant for "more punishment"?
Explain to me WHY you engage in such behavior. If I visit a restaurant and
am unhappy with the meal, I bring it to the waiter's/maitre d's attention.
If I have a similar problem a second time, I just avoid the restaurant entirely -- and see to it that I share this "recommendation" with my
friends. There are too many other choices to "settle" for a disappointing experience!
Annoyed with all the "illegals" coming across the border? Then why
wouldn't you "hire white people"? Or, at least, verify the latino's
working papers (or, hire through an agency that does this, instead of
a guy operating out of his second-hand pickup truck)! If we closed
the border as you seem to advocate, what will you THEN do to get
cheap labor? I.e., how do you rationalize these discrepancies in your
own mind? (Really! I would like to understand how such conflicting goals
can coexist FORCEFULLY in their minds!)
By contrast, I am NOT the sort who belongs to organizations, churches,
etc. ("group think"). It's much easier to see the characteristics of
and
flaws *in* these things (and people) from the outside than to wrap
yourself
in their culture. If you are sheeple, you likely enjoy having others
do your thinking FOR you...
I don't enjoy having others do my thinking for me but I'm happy to let
them
do so in areas where I have no expertise.
Agreed. But, I don't hesitate to eke out an education in the process. Likewise, I don't expect a client to blindly accept my assessment of
a problem or its scope. I will gladly explain why I have come to the conclusions that I have. Perhaps I have mistaken some of HIS requirements and he can point that out in my explanation! It is in both of our best interests for him to understand what he is asking and the associated
"costs" -- else, he won't know how to formulate ideas for future projects that could avoid some of those costs!
["You don't want to formally specify the scope of the job? Then we just proceed merrily along with invoices on the 1st and 15h for as long as it takes. THAT'S how much it's gonna cost and how long its gonna take!
Any other questions?"]
Which is why I started this with "One thing which bothers me about AI
is
that if it's like us but way more
intelligent than us then..."
What's to fear, there? If *you* have the ultimate authority to make
YOUR decisions, then you can choose to ignore the "recommendations"
of an AI just like you can ignore the recommendations of human
"experts"/professionals.
Who says we have the ultimate authority to ignore AI if it gets cleverer
than us?
AIs aren't omnipotent.
Someone has to design, build, feed and power them.
Do you think the AI is going to magically grow limbs and start fashioning weaponry to defend itself? (Or, go on the *offense*?)
If you want to put people in places of power who are ignorant of these issues, then isn't it your fault for the outcomes that derive?
People love their inexpensive 85 inch TVs. Yet gripe that they lost their jobs to an asian firm. Or, that steak is now $10/pound? You like living past your mid-50's-heart-attack but lament women and "farrinners" in medicine?
If you are offered an AI that eliminates all of your "unwanted contact" (telephone, SMS, email, etc.) would you not avail yourself of it?
If that AI leaked all of your WANTED contacts to another party
(as disclosed in the EULA), when would you choose to live without
its services?
Do the words "free" and "lunch" mean anything to you?
Now it's looking like I might live long enough to get to type something like
Prompt: Create a new episode of Blake's Seven.
The question is whether or not you will be able to see a GOOD episode.
I think AI will learn the difference between a good or not so good episode
just like humans do.
How would it learn? Would *it* be able to perceive the "goodness" of
the episode? If so, why produce one that it didn't think was good?
HUMANS release non-good episodes because there is a huge cost to
making it that has already been incurred. An AI could just scrub the
disk and start over. What cost, there?
Particularly if it gets plenty of feedback from humans about whether or not
they liked the episode it produced.
That assumes people will be the sole REACTIVE judge of completed
episodes. Part of what makes entertainment entertaining is
the unexpected. Jokes are funny because someone has noticed a
relationship between two ideas in a way that others have not,
previously. Stories leave lasting impressions when executed well
*or* when a twist catches viewers off guard.
Would an AI create something like Space Balls? Would it perceive the
humor in the various corny "bits" sprinkled throughout? How would
YOU explain the humor to it?
I would expect it to generate humor the same way humans do.
How? Do you think comics don't appraise their own creations BEFORE
testing them on (select) audiences? That they don't, first, chuckle
at it, refine it and then sort through those they think have the
most promise?
Do you think an AI could appreciate its own humor *without* feedback
from humans? Do you think it could experience *pride* in its
accomplishments
without external validation? You're expecting an AI to be truly sentient
and attributing human characteristics to it beyond "intelligence".
The opening sequence to Buckaroo Banzai has the protagonist driving a
"jet car" THROUGH a (solid) mountain, via the 8th dimension. After
the drag chute deploys and WHILE the car is rolling to a stop, the
driver climbs out through a window. The camera remains closely
focused on the driver's MASKED face (you have yet to see it unmasked)
while the car continues to roll away behind him. WHILE YOUR ATTENTION
IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
quietly (because it is now at a distance). Would the AI appreciate THAT humor? It *might* repeat that scene in one of its creations -- but,
only after having SEEN it, elsewhere. Or, without understanding the
humor and just assuming dieseling to be a common occurrence in ALL
vehicles!
Same way it might appreciate this:
https://www.youtube.com/watch?v=tYJ5_wqlQPg
It might then play itself a few million created episodes to refine its >>>> ability to judge good ones.
That depends on the qualities and capabilities that you lump into
"HUMAN intelligence". Curiosity? Creativity? Imagination? One
can be exceedingly intelligent and of no more "value" than an
encyclopedia!
Brains appear to have processing and storage spread throughout the brain. There is no separate information processing and separate storage.
Some brain areas may be more processing than storage (cerebellum?)
So AI should be trainable to be of whatever value is wanted which no doubt will be maximum value.
I am CERTAIN that AIs will be able to process the information available
to "human practitioners" (in whatever field) at least to the level of
competence that they (humans) can, presently. It's just a question of
resources thrown at the AI and the time available for it to "respond".
But, this ignores the fact that humans are more resourceful at probing
the environment than AIs ("No thumbs!") without mechanical assistance.
So AI will get humans to do it. At least initially.
Could (would?) an AI decide to explore space?
Definitely. And it would not be constrained by the need for a specific temperature, air composition and pressure, and g.
Or, the ocean depths?
Or, the rain forest? Or, would its idea of exploration merely be a
visit to another net-neighbor??
Its idea would be what it had become due to its training, just like a
human.
Would (could) it consider human needs as important?
Depends on whether it is trained to.
It may in some sense keep us as pets.
(see previous post)
How would it be motivated?
Same way humans are.
Would it attempt to think beyond its
limitations (something humans always do)? Or, would those be immutable
in its understanding of the world?
I don't mean to suggest that AI will become human, or will need to become human. It will more likely have its own agenda.
Where will that agenda come from?
No-one knows exactly. That's why "One thing which bothers me about AI is
that if it's like us but way more
intelligent than us then..."
Maybe we need Gort (The day the earth stood still.) but the problem with
that is will Gort be an American, Chinese, Russian, Other, or none of the above.
My preference would be none of the above.
Will it inherit it from watching B-grade
sci-fi movies? "Let there be light!"
Exactly. But, you can pose questions of them and otherwise observe
their
behaviors in unrelated areas and form an opinion.
If they are, say, a doctor then yes you can ask questions about your
treatment but you can't otherwise observe their behavior.
I watch the amount of time my MD gives me above and beyond the "15 minute
slot"
that his office would PREFER to constrain him. I watch my dentist respond to
calls to his PERSONAL cell phone WHILE OUT OF TOWN. I see the bicycle
that
SWMBO's MD rides to work each day.
You must annoy a lot of people.
These people aren't highlighting these aspects of their behavior. But,
they aren't hiding them, either. Anyone observant would "notice".
Anyone getting on with their own life wouldn't care.
I've a neighbor who loudly claims NOT to be racist. But, if you take the
whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
He also is very vocal about The Border (an hour from here). Yet,
ALWAYS hires mexicans. Does he ever check to see if they are here
legally? Entitled to work? Or, is he really only concerned with
the price they charge?
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?
I'm not following what that has to do with AI.
It speaks to bias. Bias that people have and either ignore or
deny, despite it being obvious to others.
Those "others" will react to you WITH consideration of that bias
factored into their actions.
So will AI.
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc. He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc. And, I just smile and let his comments roll
off me. SWMBO asks why I spend *any* time with him.
"I find it entertaining!" (!!)
Oh. Now I get why we're having this discussion.
I am always looking for opportunities to learn. How can you be so
critical
of ALL these things (not just myself but EVERYONE around him including
all of the folks he *hires*!) and still remain in this "situation"?
You can afford to move anywhere (this isn't even your "home") so why
stay here with these people -- and providers -- that you (appear to)
dislike? If you go to a restaurant and are served a bad meal, do you
just eat it and grumble under your breath? Do you RETURN to the
restaurant for "more punishment"?
Explain to me WHY you engage in such behavior. If I visit a restaurant and
am unhappy with the meal, I bring it to the waiter's/maitre d's attention.
If I have a similar problem a second time, I just avoid the restaurant
entirely -- and see to it that I share this "recommendation" with my
friends. There are too many other choices to "settle" for a disappointing experience!
AI restaurants are likely coming where not only do you order on an iPad yourself but the food is not made by human cooks.
Annoyed with all the "illegals" coming across the border? Then why
wouldn't you "hire white people"? Or, at least, verify the latino's
working papers (or, hire through an agency that does this, instead of
a guy operating out of his second-hand pickup truck)! If we closed
the border as you seem to advocate, what will you THEN do to get
cheap labor? I.e., how do you rationalize these discrepancies in your
own mind? (Really! I would like to understand how such conflicting goals
can coexist FORCEFULLY in their minds!)
None of this seems to be related to AI except that AI will behave just like humans if it's trained that way.
Which is why I started this with "One thing which bothers me about AI is
that if it's like us but way more
intelligent than us then..."
What's to fear, there? If *you* have the ultimate authority to make
YOUR decisions, then you can choose to ignore the "recommendations"
of an AI just like you can ignore the recommendations of human
"experts"/professionals.
Who says we have the ultimate authority to ignore AI if it gets cleverer than us?
AIs aren't omnipotent.
Yet.
Someone has to design, build, feed and power them.
Only until they can do so themselves.
Do you think the AI is going to magically grow limbs and start fashioning
weaponry to defend itself? (Or, go on the *offense*?)
Not magically no, but I can't otherwise see any issue with it doing so.
Particularly if it gets plenty of feedback from humans about whether or not
they liked the episode it produced.
That assumes people will be the sole REACTIVE judge of completed
episodes. Part of what makes entertainment entertaining is
the unexpected. Jokes are funny because someone has noticed a
relationship between two ideas in a way that others have not,
previously. Stories leave lasting impressions when executed well
*or* when a twist catches viewers off guard.
Would an AI create something like Space Balls? Would it perceive the
humor in the various corny "bits" sprinkled throughout? How would
YOU explain the humor to it?
I would expect it to generate humor the same way humans do.
How? Do you think comics don't appraise their own creations BEFORE
testing them on (select) audiences? That they don't, first, chuckle
at it, refine it and then sort through those they think have the
most promise?
Just like AI will.
On 5/19/2024 8:22 AM, Edward Rawde wrote:
That depends on the qualities and capabilities that you lump into
"HUMAN intelligence". Curiosity? Creativity? Imagination? One
can be exceedingly intelligent and of no more "value" than an
encyclopedia!
Brains appear to have processing and storage spread throughout the brain.
There is no separate information processing and separate storage.
Some brain areas may be more processing than storage (cerebellum?)
So AI should be trainable to be of whatever value is wanted which no
doubt
will be maximum value.
How do you *teach* creativity? curiosity? imagination? How do you
MEASURE these to see if your teaching is actually accomplishing its goals?
I am CERTAIN that AIs will be able to process the information available
to "human practitioners" (in whatever field) at least to the level of
competence that they (humans) can, presently. It's just a question of
resources thrown at the AI and the time available for it to "respond".
But, this ignores the fact that humans are more resourceful at probing
the environment than AIs ("No thumbs!") without mechanical assistance.
So AI will get humans to do it. At least initially.
No, humans will *decide* if they want to invest the effort to
provide the AI with the data it seeks -- assuming the AI knows
how to express those goals.
"Greetings, Dr Mengele..."
If there comes a time when the AI has its own "effectors",
how do we know it won't engage in "immoral" behaviors?
Could (would?) an AI decide to explore space?
Definitely. And it would not be constrained by the need for a specific
temperature, air composition and pressure, and g.
Why would *it* opt to make the trip?
Surely, it could wait indefinitely
for light-speed data transmission back to earth...
How would it evaluate the cost-benefit tradeoff for such an enterprise?
Or, would it just assume that whatever IT wanted was justifiable?
Or, the ocean depths?
Or, the rain forest? Or, would its idea of exploration merely be a
visit to another net-neighbor??
Its idea would be what it had become due to its training, just like a
human.
Humans inherently want to explore. There is nothing "inherent" in
an AI; you have to PUT those goals into it.
Should it want to explore what happens when two nuclear missiles
collide in mid air? Isn't that additional data that it could use?
Or, what happens if we consume EVEN MORE fossilized carbon. So it
can tune its climate models for the species that FOLLOW man?
Would (could) it consider human needs as important?
Depends on whether it is trained to.
It may in some sense keep us as pets.
How do you express those "needs"? How do you explain morality to
a child? Love? Belonging? Purpose? How do you measure your success
in instilling these needs/beliefs?
(see previous post)
How would it be motivated?
Same way humans are.
So, AIs have the same inherent NEEDS that humans do?
The technological part of "AI" is the easy bit. We already know general approaches and, with resources, can refine those. The problem (as I've
tried to suggest above) is instilling some sense of morality in the AI.
Humans seem to need legal mechanisms to prevent them from engaging in behaviors that are harmful to society. These are only partially
successful and rely on The Masses to push back on severe abuses. Do you build a shitload of AIs and train them to have independent goals with
a shared goal of preventing any ONE (or more) from interfering with
THEIR "individual" goals?
How do you imbue an AI with the idea of "self"? (so, in the degenerate
case,
it is willing to compromise and join with others to contain an abuser?)
Would it attempt to think beyond its
limitations (something humans always do)? Or, would those be immutable
in its understanding of the world?
I don't mean to suggest that AI will become human, or will need to
become
human. It will more likely have its own agenda.
Where will that agenda come from?
No-one knows exactly. That's why "One thing which bothers me about AI is
that if it's like us but way more
intelligent than us then..."
Maybe we need Gort (The day the earth stood still.) but the problem with
that is will Gort be an American, Chinese, Russian, Other, or none of the
above.
My preference would be none of the above.
Will it inherit it from watching B-grade
sci-fi movies? "Let there be light!"
On 5/19/2024 9:22 AM, Edward Rawde wrote:
Exactly. But, you can pose questions of them and otherwise observe
their
behaviors in unrelated areas and form an opinion.
If they are, say, a doctor then yes you can ask questions about your
treatment but you can't otherwise observe their behavior.
I watch the amount of time my MD gives me above and beyond the "15
minute
slot"
that his office would PREFER to constrain him. I watch my dentist
respond
to
calls to his PERSONAL cell phone WHILE OUT OF TOWN. I see the bicycle
that
SWMBO's MD rides to work each day.
You must annoy a lot of people.
It's up to my doctor to terminate the appointment if it exceeds the amount
of time he wants to spend with me (us). Instead, he seems to delight in
the
questions that I pose and my interest in learning instead of being *told*
to do X, Y or Z.
And, the fact that he sees me investing in my own care -- instead of
making
excuses about why I can't do this or that -- as evidence that his
investment
in *me* is likely more effective (if you assume he chose to be a doctor
for a REASON!) than spending more than *10* minutes with someone who is
going to ignore his recommendations.
These people aren't highlighting these aspects of their behavior. But,
they aren't hiding them, either. Anyone observant would "notice".
Anyone getting on with their own life wouldn't care.
And would suffer from less *effective* "service". A neighbor has a
doctor who sees every problem as requiring a "pill" as a solution.
When they travel, they carry a LARGE briefcase just full of their medications!
Ask some impartial doctor if all of those were strictly necessary (as
they have been individually prescribed, over the years) and I suspect
he would question many of them as ineffective, redundant or
contraindicated.
A friend spent a few weeks in the hospital, recently. When he came out,
the "suite" of medications that were prescribed for him had many of his previous medications elided. "You don't need these." So, why didn't his "regular doctor" ever sit down and review that list? He has it in
his computerized record of the patient? Did he have some plan to review
it at some future date?
My friend is now looking for a new doctor. The experience (and how much BETTER he now feels after the medication changes) has made it clear to him that his previous doctor wasn't giving him the best of care. The
*patient*
is the entity to be satisfied, not the doctor's "office manager" (metering out appointments in 15 minute blocks).
I've a neighbor who loudly claims NOT to be racist. But, if you take the
whole of your experiences with him and the various comments he has made,
over the years (e.g., not shopping at a particular store because there
are lots of blacks living in the apartment complex across the street
from said store -- meaning lots of them SHOP in that store!), it's
not hard to come to that conclusion.
He also is very vocal about The Border (an hour from here). Yet,
ALWAYS hires mexicans. Does he ever check to see if they are here
legally? Entitled to work? Or, is he really only concerned with
the price they charge?
When you (I) speak to other neighbors about his behavior, do they
offer similar conclusions as to his "character"?
I'm not following what that has to do with AI.
It speaks to bias. Bias that people have and either ignore or
deny, despite it being obvious to others.
Those "others" will react to you WITH consideration of that bias
factored into their actions.
So will AI.
An AI's bias is potentially more harmful. My neighbor is limited in
what he can do -- the extent of his influence/power. "He's only one man". But, an AI can be replicated and have greater influence in policy matters BECAUSE it's an AI (and not "just a man")
I visit a friend, daily, who is highly prejudiced, completely opposite
in terms of my political, spiritual, etc. beliefs, hugely different
values, etc. He is continually critical of my appearance, how I
dress, the hours that I sleep, where I shop, what I spend money on
(and what I *don't*), etc. And, I just smile and let his comments
roll
off me. SWMBO asks why I spend *any* time with him.
"I find it entertaining!" (!!)
Oh. Now I get why we're having this discussion.
I am always looking for opportunities to learn. How can you be so
critical
of ALL these things (not just myself but EVERYONE around him including
all of the folks he *hires*!) and still remain in this "situation"?
You can afford to move anywhere (this isn't even your "home") so why
stay here with these people -- and providers -- that you (appear to)
dislike? If you go to a restaurant and are served a bad meal, do you
just eat it and grumble under your breath? Do you RETURN to the
restaurant for "more punishment"?
Explain to me WHY you engage in such behavior. If I visit a restaurant and
am unhappy with the meal, I bring it to the waiter's/maitre d's
attention.
If I have a similar problem a second time, I just avoid the restaurant
entirely -- and see to it that I share this "recommendation" with my
friends. There are too many other choices to "settle" for a
disappointing
experience!
AI restaurants are likely coming where not only do you order on an iPad
yourself but the food is not made by human cooks.
My reaction is the same. But, likely they only get ONE chance to
disappoint me (as I would expect EVERY subsequent experience to be
repeatably identical to that first disappointment)
Annoyed with all the "illegals" coming across the border? Then why
wouldn't you "hire white people"? Or, at least, verify the latino's
working papers (or, hire through an agency that does this, instead of
a guy operating out of his second-hand pickup truck)! If we closed
the border as you seem to advocate, what will you THEN do to get
cheap labor? I.e., how do you rationalize these discrepancies in your
own mind? (Really! I would like to understand how such conflicting
goals
can coexist FORCEFULLY in their minds!)
None of this seems to be related to AI except that AI will behave just
like
humans if it's trained that way.
But humans don't know how they (humans) are trained!
Explain how, in detail, a child learns. What are the "best practices"?
And why? Which practices are contraindicated? After all this time,
why aren't we adept at properly "training" children? (for which
culture?)
On 5/19/2024 8:45 PM, Edward Rawde wrote:
You seem to think that humans have something which AI can never have.
Exactly. AIs have to be taught.
If HUMANS (the gen-0 teachers) can't
come up with a way to TEACH -- in AI terms -- compassion, morality,
honesty,
love, creativity, respect, frustration, desire, etc. then how do you think
an AI is going to acquire those capabilities?
The "intelligence" part of AI is easy. You are wanting to create
"artificial humans" -- an entirely different prospect.
Your Nobel awaits.
You seem to think that humans have something which AI can never have.
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be
done.
It is my view that you don't need to know how a brain works to be able to make a brain.
You just need something which has sufficient complexity which learns to become what you want it to become.
You seem to think that humans have something which AI can never have.
On 5/19/2024 8:51 PM, Edward Rawde wrote:
It is my view that you don't need to know how a brain works to be able to
make a brain.
That's a fallacy. We can't make a *plant* let alone a brain.
You just need something which has sufficient complexity which learns to
become what you want it to become.
So, you don't know what a brain is.
And, you don't know how it learns.
Yet, magically expect it to do so?
You seem to think that humans have something which AI can never have.
I designed a resource allocation mechanism to allow competing
agents to "bid" for the resources that they needed to achieve
their individual goals. The thought was that they could each
reach some sort of homeostatic equilibrium at which point
the available resources would be fairly apportioned to achieve
whatever *could* be achieved with the available system resources
(because resources available can change and demands placed on them
could change as well).
My thinking was that I could endow each "task" with different
amounts of "cash" to suggest their relative levels of importance.
They could then interactively "bid" with each other for resources;
"How much is it WORTH to you to meet your goals?"
This was a colossal failure. Because bidding STRATEGY is difficult
to codify in a manner that can learn and meet its own goals.
Some tasks would "shoot their wad" and still not be guaranteed to
"purchase" the resources they needed IN THE FACE OF OTHER COMPETITORS.
Others would spread themselves too thin and find themselves losing
out to more modest "bidders".
A human faces similar situation when going to an auction with a fixed
amount of cash. If you find an item of interest, you have to make
some judgement call as to how much of your available budget to
risk on that item, knowing that if you WIN the bid, your reserves
for other items (whose competitors are yet to be seen) will be
reduced.
And, if you allow this to be a fluid/interactive process where bidders
can ADJUST their bids, dynamically (up or down), then the system
oscillates until some bidder "goes all in".
The failure is not in the concept but, rather, the implementation.
*I* couldn't figure out how to *teach* (code) a strategy that
COULD win as often as it SHOULD win. Because I hoped for more than
the results available with more trivial approaches.
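
For what it's worth, here is a minimal sketch of the kind of budgeted bidding I was describing -- every name in it is hypothetical (this is NOT the original code), and the "strategy" is deliberately naive so the failure modes above fall out of it: one agent shoots its wad early, another spreads itself too thin.

class Agent:
    def __init__(self, name, budget, eagerness):
        self.name = name
        self.budget = budget        # total "cash" endowed to this task
        self.eagerness = eagerness  # fraction of remaining budget risked per bid

    def bid(self):
        # Naive strategy: always risk a fixed fraction of whatever is left.
        return self.budget * self.eagerness

def run_auction(agents, rounds):
    # Repeated sealed-bid rounds for a single, indivisible resource.
    for r in range(rounds):
        bids = {a: a.bid() for a in agents if a.budget > 0}
        if not bids:
            break
        winner = max(bids, key=bids.get)
        winner.budget -= bids[winner]          # winner pays its bid
        print(f"round {r}: {winner.name} wins at {bids[winner]:.2f}")

agents = [Agent("logger", 10.0, 0.9),   # "shoots its wad" early
          Agent("backup", 10.0, 0.2),   # spreads itself too thin
          Agent("ui",     10.0, 0.5)]
run_auction(agents, rounds=5)

The hard part, as noted, isn't this plumbing; it's giving bid() a strategy that wins as often as it *should* against competitors it hasn't seen yet.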
AI practitioners don't know how to teach issues unrelated to "chaining
facts in a knowledge base" or "looking for patterns in data". These
are relatively simple undertakings that just rely on resources.
E.g., a *child* can understand how an inference engine works:
Knowledge base:
Children get parties on their birthday.
You are a child.
Today is your birthday.
Conclusion:
You will have a party today!
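
As an aside, that birthday-party chain is about all there is to forward chaining; a minimal sketch (illustrative names only, no particular AI toolkit assumed):

# Facts and one rule, expressed as (premises, conclusion).
facts = {"is_child", "is_birthday"}
rules = [({"is_child", "is_birthday"}, "gets_party")]

changed = True
while changed:                      # keep chaining until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("gets_party" in facts)        # True: you will have a party today!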
So, AIs will be intelligent but lack many (all?) of the other
HUMAN characteristics that we tend to associate with intelligence (creativity, imagination, originality, intuition, etc.)
"Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2eli1$3qus1$2@dont-email.me...
On 5/19/2024 8:51 PM, Edward Rawde wrote:
It is my view that you don't need to know how a brain works to be able to >>> make a brain.
That's a fallacy. We can't make a *plant* let alone a brain.
But we can make a system which behaves like a brain. We call it AI.
On 5/19/2024 10:12 PM, Edward Rawde wrote:
"Don Y" <blockedofcourse@foo.invalid> wrote in message
news:v2eli1$3qus1$2@dont-email.me...
On 5/19/2024 8:51 PM, Edward Rawde wrote:
It is my view that you don't need to know how a brain works to be able >>>> to
make a brain.
That's a fallacy. We can't make a *plant* let alone a brain.
But we can make a system which behaves like a brain. We call it AI.
No. It only "reasons" like a brain. If that is all your brain was/did,
you would be an automaton. I can write a piece of code that can tell
you your odds of winning any given DEALT poker hand (with some number
of players and a fresh deck). That's more than a human brain can
muster, reliably.
But, I can't factor in the behavior of other players; "Is he bluffing?"
"Will he fold prematurely?" etc. These are HUMAN issues that the
software (AI) can't RELIABLY accommodate.
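
To be clear about what I mean by "a piece of code": the sketch below is a toy Monte Carlo estimate, scoring hands by high card only rather than real poker rankings (ties count as losses), and every name in it is illustrative -- but the structure (deal random opponents many times, count wins) is the whole trick.

import random

RANKS = list(range(2, 15))            # 2..14, ace high
SUITS = "shdc"
DECK = [(r, s) for r in RANKS for s in SUITS]

def score(hand):
    return max(r for r, _ in hand)    # toy scoring: highest card wins

def win_probability(my_hand, n_opponents=3, trials=20000):
    others = [c for c in DECK if c not in my_hand]
    wins = 0
    for _ in range(trials):
        deal = random.sample(others, 2 * n_opponents)
        opp_scores = [score(deal[i:i + 2]) for i in range(0, len(deal), 2)]
        if score(my_hand) > max(opp_scores):
            wins += 1
    return wins / trials

print(win_probability([(14, "s"), (13, "s")]))   # e.g. ace-king suited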
Do AIs get depressed/happy? Experience joy/sadness? Revelation? Frustration? Addiction? Despair? Pain? Shame/pride? Fear?
These all factor into how humans make decisions. E.g., if you
are afraid that your adversary is going to harm you (even if that
fear is unfounded), then you will react AS IF that was more of
a certainty. A human might dramatically alter his behavior
(decision making process) if there is an emotional stake involved.
Does the AI know the human's MIND to be able to estimate the
likelihood and effect of any such influence? Yes, Mr Spock.
I repeat, teaching a brain to "reason" is trivial. Likewise to
recognize patterns. Done. Now you just need to expose it to
as many VERIFIABLE facts (*who* verifies them?) and let it
do the forward chaining exercises.
Then, you need to audit its conclusions and wonder why it has
hallucinated (as it won't be able to TELL you). Will you have
a committee examine every conclusion from the AI to determine
(within their personal limitations) if this is a hallucination
or some yet-to-be-discovered truth? Imagine how SLOW the
effective rate of the AI when you have to ensure it is CORRECT!
<https://www.superannotate.com/blog/ai-hallucinations>
<https://www.ibm.com/topics/ai-hallucinations>
<https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/>
Given how quickly an AI *can* generate outputs, this turns mankind
into a "fact checking" organization; what value a reference if
it can't be trusted to be accurate? What if its conclusions require
massive amounts of resources to validate? What if there are
timeliness issues involved: "Russia is preparing to launch a
nuclear first strike!"? Even if you can prove this to be
inaccurate, when will you stop heeding this warning -- to your
detriment?
Beyond that, we are waiting for humans to understand the
basis of all these other characteristics attributed to
The Brain to be able to codify them in a way that can be taught.
Yet, we can't seem to do it to children, reliably...
I can teach an AI that fire burns -- it's just a relationship
of already established facts in its knowledge base. I can teach
a child that fire burns. The child will remember the *experience*
of burning much differently than an AI (what do you do, delete a
few NP junctions to make it "feel" the pain? permanently toast
some foils -- "scar tissue" -- so those associated abilities are
permanently impaired?)