• smart people doing stupid things

    From John Larkin@21:1/5 to All on Fri May 17 11:51:37 2024
    https://www.youtube.com/watch?v=5Peima-Uw7w

    See graph at 9:50 in.

    I see this a lot, engineers wanting to do complex stuff because it's
    amusing to them, when simple common-sense things would work and be
    done.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Rid@21:1/5 to John Larkin on Fri May 17 15:36:55 2024
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be
done.

My current project requires IEC 62304 and it is amusing.

    Cheers
    --


    ----Android NewsGroup Reader---- https://piaohong.s3-us-west-2.amazonaws.com/usenet/index.html

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to John Larkin on Fri May 17 16:43:48 2024
    "John Larkin" <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message news:bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com...

    https://www.youtube.com/watch?v=5Peima-Uw7w

    Not sure how he managed to say master debaters that many times while
    seemingly keeping a straight face but it reminds me of this: https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

    One thing which bothers me about AI is that if it's like us but way more intelligent than us then...


    See graph at 9:50 in.

    I see this a lot, engineers wanting to do complex stuff because it's
    amusing to them, when simple common-sense things would work and be
    done.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Larkin@21:1/5 to martin_riddle@verison.net on Fri May 17 13:14:30 2024
    On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:

John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be
done.

My current project requires IEC 62304 and it is amusing.

    Cheers

    Yikes. What does it cost to buy the standard? Does it reference other standards?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to jjSNIPlarkin@highNONOlandtechnology on Fri May 17 16:55:57 2024
    On Fri, 17 May 2024 13:14:30 -0700, John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote:

On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:

John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be
done.

My current project requires IEC 62304 and it is amusing.

    Cheers

Yikes. What does it cost to buy the standard? Does it reference other standards?

It's 345 Swiss francs (USD 380). Probably cites many things, so you
    may need a bunch of these expensive standards.

    It documents the now obsolete waterfall model of software development,
    at great length, for medical devices.

    .<https://en.wikipedia.org/wiki/IEC_62304>

    I've had to follow this approach (but not this standard), and it
    didn't go well, because it didn't deal with practical constraints at
    all. The electronic-design parallel would be a process that requires
    that a transistor with very specific properties exist and be
    available. But in the real world, we have to use the transistors that
    are available, even if they are not perfect - make what you want from
    what you can get.

    The solution was to design from the middle out, and when it all
    settled down, document as if it were developed from the top down.

    Joe Gwinn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Martin Rid@21:1/5 to John Larkin on Fri May 17 17:11:53 2024
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be done.
My current project requires IEC 62304 and it is amusing.
Cheers
Yikes. What does it cost to buy the standard? Does it reference other standards?

Only $348; surprisingly, it does not reference other standards.
At least I don't see any. I got a big 4" binder of paperwork
that should be sufficient to prove we followed the
standard.
The problem is getting the old guys to get on board; none of them
are interested.


    Cheers
    --


    ----Android NewsGroup Reader---- https://piaohong.s3-us-west-2.amazonaws.com/usenet/index.html

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Larkin@21:1/5 to invalid@invalid.invalid on Fri May 17 14:04:02 2024
    On Fri, 17 May 2024 16:43:48 -0400, "Edward Rawde"
    <invalid@invalid.invalid> wrote:

    "John Larkin" <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message >news:bk9f4j5689jbmg8af3ha53t3kcgiq0vbut@4ax.com...

    https://www.youtube.com/watch?v=5Peima-Uw7w

Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

One thing which bothers me about AI is that if it's like us but way more intelligent than us then...


I expect that there will be AI Gurus with essentially religious
    cult followers. Another Jonestown is possible.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Larkin@21:1/5 to All on Fri May 17 14:08:34 2024
    On Fri, 17 May 2024 16:55:57 -0400, Joe Gwinn <joegwinn@comcast.net>
    wrote:

On Fri, 17 May 2024 13:14:30 -0700, John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote:

On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:

John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be
done.

My current project requires IEC 62304 and it is amusing.

    Cheers

Yikes. What does it cost to buy the standard? Does it reference other standards?

It's 345 Swiss francs (USD 380). Probably cites many things, so you
    may need a bunch of these expensive standards.

    It documents the now obsolete waterfall model of software development,
    at great length, for medical devices.

    .<https://en.wikipedia.org/wiki/IEC_62304>

    I've had to follow this approach (but not this standard), and it
    didn't go well, because it didn't deal with practical constraints at
    all. The electronic-design parallel would be a process that requires
    that a transistor with very specific properties exist and be
    available. But in the real world, we have to use the transistors that
    are available, even if they are not perfect - make what you want from
    what you can get.

    The solution was to design from the middle out, and when it all
    settled down, document as if it were developed from the top down.

    Joe Gwinn

    That's the Microsoft Project Effect: the more tasks you define in a
    project, the longer it takes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to martin_riddle@verison.net on Fri May 17 18:56:10 2024
    On Fri, 17 May 2024 17:11:53 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:

John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:
John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be done.
My current project requires IEC 62304 and it is amusing.
Cheers
Yikes. What does it cost to buy the standard? Does it reference other standards?

Only $348; surprisingly, it does not reference other standards.
At least I don't see any. I got a big 4" binder of paperwork
that should be sufficient to prove we followed the
standard.

    Big process effort. The only thing I know of that is worse is DO-178,
    the process for development of avionics software that is
    safety-critical in the sense that failure leads to loss of airplane
    and all aboard.

I hope you are able to remain sane.


The problem is getting the old guys to get on board; none of them
are interested.

Yeah. I'm with the old guys on this. We paid our debt to process and
were paroled for good behavior decades ago, and don't want to repeat
the experience.

    Reminds me of Structured Programming (which forbids Go-To statements): .<https://en.wikipedia.org/wiki/Structured_programming>

    Problem was that the Process Police tried to force me to follow this
    in operating-system kernels. Well, I'd like to see somebody build a
    kernel without go-to statements.

    The deeper problem is that structured programming basically requires
    that the flow chart can be drawn on a 2D surface without any
    cross-overs - the nesting must be perfect. Well, good luck following
    that with real computer hardware, never mind the special hardware that
    the computer controlled.

    Think parallel Finite State Machines interacting and interweaving at
    random, driven by random external events. Not even a 3D flow diagram
    suffices.
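Roughly, in code -- a toy sketch, with made-up states and events rather
than anything from a real kernel -- the control flow lives in flat
transition tables driven by whatever event happens to arrive next, not
in neatly nested blocks:

import random

# Hypothetical transition tables: (state, event) -> next state.
DISK = {("idle", "request"): "seeking",
        ("seeking", "irq"): "transfer",
        ("transfer", "irq"): "idle"}
NET = {("listen", "packet"): "assemble",
       ("assemble", "packet"): "assemble",
       ("assemble", "timeout"): "listen"}

def run(steps=20):
    state = {"disk": "idle", "net": "listen"}
    tables = {"disk": DISK, "net": NET}
    for _ in range(steps):
        # External events arrive in an order nobody controls.
        machine = random.choice(["disk", "net"])
        event = random.choice(["request", "irq", "packet", "timeout"])
        nxt = tables[machine].get((state[machine], event))
        if nxt is not None:        # events that don't apply are ignored
            state[machine] = nxt
        print(machine, event, "->", state)

if __name__ == "__main__":
    run()

There is no single top-down decomposition of that; the "flow chart" is
a graph, and every arc is a go-to in disguise.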

    So in this case, I didn't even attempt to document according to
    Structured Programming, instead telling the Process Police to buzz
    off. I only had to show them a real kernel listing once - a wall of
    assembly code. They had seen only toy examples in textbooks.

    Joe Gwinn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to jjSNIPlarkin@highNONOlandtechnology on Fri May 17 18:36:40 2024
    On Fri, 17 May 2024 14:08:34 -0700, John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote:

    On Fri, 17 May 2024 16:55:57 -0400, Joe Gwinn <joegwinn@comcast.net>
    wrote:

On Fri, 17 May 2024 13:14:30 -0700, John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote:

On Fri, 17 May 2024 15:36:55 -0400 (EDT), Martin Rid <martin_riddle@verison.net> wrote:

John Larkin <jjSNIPlarkin@highNONOlandtechnology.com> wrote in message:
https://www.youtube.com/watch?v=5Peima-Uw7w
See graph at 9:50 in.
I see this a lot, engineers wanting to do complex stuff because it's
amusing to them, when simple common-sense things would work and be
done.

My current project requires IEC 62304 and it is amusing.

    Cheers

Yikes. What does it cost to buy the standard? Does it reference other standards?

It's 345 Swiss francs (USD 380). Probably cites many things, so you
    may need a bunch of these expensive standards.

    It documents the now obsolete waterfall model of software development,
    at great length, for medical devices.

    .<https://en.wikipedia.org/wiki/IEC_62304>

    I've had to follow this approach (but not this standard), and it
    didn't go well, because it didn't deal with practical constraints at
    all. The electronic-design parallel would be a process that requires
    that a transistor with very specific properties exist and be
    available. But in the real world, we have to use the transistors that
    are available, even if they are not perfect - make what you want from
    what you can get.

    The solution was to design from the middle out, and when it all
    settled down, document as if it were developed from the top down.

    Joe Gwinn

    That's the Microsoft Project Effect: the more tasks you define in a
    project, the longer it takes.

    Can this be blamed on MS alone? Doesn't "more tasks" mean "larger
    scope"? My plumber certainly thinks so.

    Joe Gwinn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Martin Rid on Fri May 17 16:54:52 2024
    On 5/17/2024 2:11 PM, Martin Rid wrote:
Only $348; surprisingly, it does not reference other standards.
At least I don't see any. I got a big 4" binder of paperwork
that should be sufficient to prove we followed the
standard.
The problem is getting the old guys to get on board; none of them
are interested.

    No one likes having to do "boring" things. How many hardware folks
    can point to their DOCUMENTED designs and test validations -- and
    the steps they've taken to ensure *all* product meets those goals?
    How many software folks can show their test scaffolding and established REGRESSION testing procedures (to ensure old bugs never creep BACK
    into their codebase)? How many EMPLOYERS demand these things AND
    PAY FOR THEM?

    [Ask an arbitrary firm to produce the documents that were used to
    build a particular SERIAL NUMBER of a product and listen to the
    excuses...]

    The "problem" is people not knowing (or, uneasy about COMMITTING to)
    what they really want. They'd rather change their minds when they
    SEE something and can, effectively, say, "No, THAT'S not what we want
    (even though THAT is exactly what we asked you to design/build)."

[I've avoided these folks like the plague and credit that as the
single most important business decision, on my part, toward
    producing quality products! If YOU don't know what you want,
    then hire me to DISCOVER your needs and expose them to you
before you skip merrily along that path to an unknown destination]

    So, they come up with new approaches that let them postpone their
    decision making -- in the hope that they will magically be able
    to coerce whatever they HAVE to be whatever they WANT it to be.
    ["Ugh! We've already got half a million dollars invested; surely
    we can salvage (MOST????) of that?"]

Imagine starting off making an airplane. Then, halfway through
the design, deciding it needs VTOL capability. Can you *honestly*
say that you know ALL of the previous design decisions AND UNWRITTEN
ASSUMPTIONS that you must now back out of the design in order to
    have a design that is compatible with that new requirement?

Or, designing a pacemaker. Then, with some amount of effort invested, discovering that folks have decided that HACKING pacemakers might be
    an interesting activity! ("OhMiGosh! What happens if one of our
    customers DIES because of a security flaw in our design? Quick!
    Let's see what sort of Band-Aid we can apply to MINIMIZE -- but not
    truly eliminate -- that risk, without having to scrap our current
    approach!")

    For a successful design effort, you need those /skilled in the art/
    (whichever arts will be required in the design AND MANUFACTURE process)
    to have a PROMINENT voice in the specification. If *truly* "skilled
    in the art", they will know where the demons lie in any arbitrary
    specification line-item and, if the line-item is non-negotiable,
    can raise alarms early enough that there isn't a "surprise" when
    the implementors stumble on them, much later in the design process
    (when Manglement will be anxious to hand-wave the design off in a
    different direction just to meet schedule/budget/deliveries).

    An organization's structure (org chart) tells you a lot about
what its priorities are. E.g., does "Safety" have the same seat
    at the table as "Manufacturing", "Marketing", "Legal", etc.?
    Are there any interests who can override other interests?
    (Do you really think those WON'T be overridden, in practice?)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Fri May 17 17:04:31 2024
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
    Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this: https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

    One thing which bothers me about AI is that if it's like us but way more intelligent than us then...

    The 'I' in AI doesn't refer to the same sense of "intelligence" that
    you are imagining.

    AI (presently) is just "high speed", *mechanized* pattern recognition.
    The advantage it has over the human brain is that it can exhaustively
    examine ALL options (the brain prunes portions of the tree that it
    THINKS won't yield results; an AI can do a deep dive and uncover
something that the brain would have ruled out, prematurely).
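To illustrate with a toy search -- made-up payoffs, nothing more -- the
machine can afford to look at every leaf, while a "pruned" search that
discards unpromising-looking branches can miss the best answer:

# A tiny option tree (nested lists of made-up payoffs).
tree = [[3, [1, 9]], [2, [0, 4]]]

def best_exhaustive(node):
    # Examine every leaf -- what the machine can afford to do.
    if isinstance(node, int):
        return node
    return max(best_exhaustive(child) for child in node)

def best_pruned(node, guess):
    # Skip any branch whose first leaf looks unpromising -- roughly
    # what a brain does when it prunes the tree.
    if isinstance(node, int):
        return node
    first = node[0] if isinstance(node[0], int) else None
    if first is not None and first < guess:
        return first               # never looks deeper
    return max(best_pruned(child, guess) for child in node)

print(best_exhaustive(tree))       # 9
print(best_pruned(tree, guess=3))  # 3 -- the deep 9 was pruned away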

    It's hard to imagine an AI coming up with /The Persistence of Memory/
    without having previously "SEEN" something like it.

    OTOH, having seen something like it, it's NOT unexpected if it created
    a similar composition where everything appeared to be made of *stone*
    (or *ice*, etc.). "Let's try *this*..."

    If you've ever met an /idiot savant/, you'd see the effect. Ask him
    to tie his shoe or sing a song and behold the blank look...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Don Y on Fri May 17 17:41:03 2024
    On 5/17/2024 5:04 PM, Don Y wrote:
    If you've ever met an /idiot savant/, you'd see the effect.  Ask him
    to tie his shoe or sing a song and behold the blank look...

    I am, of course, exaggerating. The point being that his "skillset"
    is typically extremely narrowly focused and, beyond that, he may
    be largely incompetent (often requiring some other person to
    ensure his existential needs are met). He can't EXTEND his
    skillset as one would assume "intelligence" would allow.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Fri May 17 22:11:51 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
    Not sure how he managed to say master debaters that many times while
    seemingly keeping a straight face but it reminds me of this:
    https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

    One thing which bothers me about AI is that if it's like us but way more
    intelligent than us then...

    The 'I' in AI doesn't refer to the same sense of "intelligence" that
    you are imagining.

    Strange that you could know what I was imagining.

Have a look at this and then tell me where you think AI/AGI will be in, say,
    10 years.
    https://www.youtube.com/watch?v=YZjmZFDx-pA


    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 00:46:06 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v29aso$2kjfs$1@dont-email.me...
    On 5/17/2024 7:11 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
    Not sure how he managed to say master debaters that many times while
    seemingly keeping a straight face but it reminds me of this:
    https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

    One thing which bothers me about AI is that if it's like us but way
    more
    intelligent than us then...

    The 'I' in AI doesn't refer to the same sense of "intelligence" that
    you are imagining.

    Strange that you could know what I was imagining.

People are invariably misled by thinking that there is "intelligence" involved in the technology. If there is intelligence, then there should
    be *reason*, right? If there is reason, then I should be able to inquire
    as to what, specifically, those reasons were for any "decision"/choice
    that is made.

    You haven't met some of the managers I've worked for.


    ...

    Where it will be in 10 years is impossible to predict.

    I agree.

    But, as the genie is
    out of the bottle, there is nothing to stop others from using/abusing it
    in ways that we might not consider palatable! (Do you really think an adversary will follow YOUR rules for its use -- if they see a way to
    achieve gains?)

    The risk from AI is that it makes decisions without being able to
    articulate
    a "reason" in a verifiable form.

    I know/have known plenty of people who can do that.

    And, then marches on -- without our
    ever "blessing" it's conclusion(s). There is no understanding; no
    REASONING;
    it's all just pattern observation/matching.
    ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Fri May 17 21:30:06 2024
    On 5/17/2024 7:11 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
    Not sure how he managed to say master debaters that many times while
    seemingly keeping a straight face but it reminds me of this:
    https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

One thing which bothers me about AI is that if it's like us but way more intelligent than us then...

    The 'I' in AI doesn't refer to the same sense of "intelligence" that
    you are imagining.

    Strange that you could know what I was imagining.

People are invariably misled by thinking that there is "intelligence"
    involved in the technology. If there is intelligence, then there should
    be *reason*, right? If there is reason, then I should be able to inquire
    as to what, specifically, those reasons were for any "decision"/choice
    that is made.

    [Hint: you can't get such an answer. Just a set of coefficients that
    resolve to a particular "choice".]

    Additionally, these "baseless" decisions can be fed back to the AI to
    enhance its (apparent) abilities. Who acts as gatekeepers of that
    "knowledge"? Is it *really* knowledge?

    I can recall hearing folks comment about friends who were dying of cancer
    when I was a child. They would say things like: "Once the *air* gets
at it, they're dead!" -- referring to once they are opened up by
    a surgeon (hence the "air getting at it").

    Of course, this is nonsense. The cancerous cells didn't magically react to
    the "air". Rather, the patient was sick enough to warrant a drastic
    surgical intervention and, thus, more likely to *die* (than someone
    else who also has UNDIAGNOSED cancer).

Have a look at this and then tell me where you think AI/AGI will be in, say,
    10 years.
    https://www.youtube.com/watch?v=YZjmZFDx-pA

    "10 years" and "AI" are almost an hilarious cliche; it's ALWAYS been
    "10 years from now" (since my classes in the 70's).

    Until it was *here* (or, appeared to be)

    Where it will be in 10 years is impossible to predict. But, as the genie is out of the bottle, there is nothing to stop others from using/abusing it
    in ways that we might not consider palatable! (Do you really think an adversary will follow YOUR rules for its use -- if they see a way to
    achieve gains?)

    The risk from AI is that it makes decisions without being able to articulate
    a "reason" in a verifiable form. And, then marches on -- without our
    ever "blessing" it's conclusion(s). There is no understanding; no REASONING; it's all just pattern observation/matching.

I use AIs to anticipate the needs of occupants (of the house, a business, etc.), based on observations of their past behaviors.

    SWMBO sleeps at night. The AI doesn't know that she is "sleeping"
    or even what "sleeping" is! It just notices that she enters the bedroom
    each night and doesn't leave it until some time the next morning. This
    is such a repeated behavior that the AI *expects* her to enter the
    bedroom each night (at roughly the same hour).

    Often, she will awaken in the middle of the night for a bathroom break,
    to clear her sinuses, or get up and read for a while.

    If she takes a bathroom break, the AI will notice that she invariably
turns on her HiFi afterwards (to have some music to listen to while drifting BACK to sleep).

    If she reads (for some indeterminate time), the AI will notice that she
    turns on her HiFi just before turning off the light by her bedside.

    It doesn't know why she is headed into the bathroom. Or, why the bedside
    light comes on. Or, why she is turning on the HiFi. But, HER *observed* behavior fits a repeatable pattern that allows the AI to turn the HiFi
    on *for* her -- when she comes out of the bathroom or AFTER she has turned
    off her bedside light.
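In code, the correlation involved is roughly this -- a toy sketch; the
event names, window, and thresholds here are made up, not the actual
implementation:

from collections import defaultdict

follow_counts = defaultdict(int)   # (prior_event, next_event) -> count
event_counts = defaultdict(int)    # prior_event -> count
WINDOW = 15 * 60                   # only pair events within 15 minutes

def observe(prior_event, next_event, gap_seconds):
    # Record one observed pair of events.
    event_counts[prior_event] += 1
    if gap_seconds <= WINDOW:
        follow_counts[(prior_event, next_event)] += 1

def anticipate(prior_event, action, confidence=0.8, min_samples=20):
    # Fire 'action' only if it has reliably followed 'prior_event'.
    seen = event_counts[prior_event]
    if seen >= min_samples:
        if follow_counts[(prior_event, action)] / seen >= confidence:
            return True            # e.g., turn the HiFi on *for* her
    return False

# After enough nights of the same pair, anticipation kicks in:
for _ in range(30):
    observe("bedside_light_off", "hifi_on", gap_seconds=120)
print(anticipate("bedside_light_off", "hifi_on"))   # True

Nothing in there "knows" what sleep, a light, or a HiFi *is* -- it is
just counting which events follow which.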

    Due to the manner in which I implemented the AI, *I* can see the
    conditions that are triggering the AIs behavior and correct erroneous conclusions (maybe the AI hears a neighbor's truck passing by the house
    as he heads off to work in the wee hours of the morning and correlates
    THAT with her desire to listen to music! "'Music'? What's that??")

    But, as you get more subtleties in the AIs input, these sorts of
    causal actions are less obvious. So, you have to think hard
    about what you provide *to* the AI for it to draw its conclusions.
    OTOH, if what you provide is limited by the relationships that
    YOU can imagine, then the AI is limited to YOUR imagination!
    Maybe the color of your car DOES relate to the chance of it
    being in an accident!

An AI looks at tens of thousands of mammograms and "somehow" comes up
    with a good correlation between image and breast cancer incidence.
    *It* then starts recommending care. What does the oncologist do?
    The AI is telling him there is a good indication of cancer (or,
    a likelihood of it developing). Does *he* treat the cancer? (the
    AI can't practice medicine) What if he *doesn't*? Will he face a
    lawsuit when/if the patient later develops cancer and has a bad
    outcome -- that might have been preventable if the oncologist
    had heeded the AI's advice? ("You CHARGED me for the AI consult;
    and then you IGNORED its recommendations??")

    OTOH, what if the AI was "hallucinating" and saw something that
    *seemed* to correlate well -- but, a human examiner would know is
    NOT related to the Dx (e.g., maybe the AI noticed some characteristic
    of the WRITTEN label on the film and correlated that, by CHANCE,
    with the Dx -- a human would KNOW there was no likely causal relationship!)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Edward Rawde on Sat May 18 00:49:19 2024
    "Edward Rawde" <invalid@invalid.invalid> wrote in message news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v29aso$2kjfs$1@dont-email.me...
    On 5/17/2024 7:11 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this:
    https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

    One thing which bothers me about AI is that if it's like us but way
    more
    intelligent than us then...

    The 'I' in AI doesn't refer to the same sense of "intelligence" that
    you are imagining.

    Strange that you could know what I was imagining.

People are invariably misled by thinking that there is "intelligence"
    involved in the technology. If there is intelligence, then there should
    be *reason*, right? If there is reason, then I should be able to inquire
    as to what, specifically, those reasons were for any "decision"/choice
    that is made.

    What is a decision?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Fri May 17 22:49:51 2024
    On 5/17/2024 9:46 PM, Edward Rawde wrote:
    Where it will be in 10 years is impossible to predict.

    I agree.

    So, you can be optimistic (and risk disappointment) or
    pessimistic (and risk being pleasantly surprised).
    Unfortunately, the consequences aren't as trivial as
    choosing between the steak or lobster...

    But, as the genie is
    out of the bottle, there is nothing to stop others from using/abusing it
    in ways that we might not consider palatable! (Do you really think an
    adversary will follow YOUR rules for its use -- if they see a way to
    achieve gains?)

    The risk from AI is that it makes decisions without being able to
    articulate
    a "reason" in a verifiable form.

    I know/have known plenty of people who can do that.

    But *you* can evaluate the "goodness" (correctness?) of their
    decisions by an examination of their reasoning. So, you can
    opt to endorse their decision or reject it -- regardless of
    THEIR opinion on the subject.

    E.g., if a manager makes stupid decisions regarding product
    design, you can decide if you want to deal with the
    inevitable (?) outcome from those decisions -- or "move on".
    You aren't bound by his decision making process.

    With AIs making societal-scale decisions (directly or
    indirectly), you get caught up in the side-effects of those.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Fri May 17 22:50:32 2024
    On 5/17/2024 9:49 PM, Edward Rawde wrote:
    "Edward Rawde" <invalid@invalid.invalid> wrote in message news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v29aso$2kjfs$1@dont-email.me...
    On 5/17/2024 7:11 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this:
    https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

One thing which bothers me about AI is that if it's like us but way more
    intelligent than us then...

The 'I' in AI doesn't refer to the same sense of "intelligence" that you are imagining.

    Strange that you could know what I was imagining.

People are invariably misled by thinking that there is "intelligence"
involved in the technology. If there is intelligence, then there should
be *reason*, right? If there is reason, then I should be able to inquire
as to what, specifically, those reasons were for any "decision"/choice
    that is made.

    What is a decision?

    Any option to take one fork vs. another.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 10:18:41 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v29fji$2l9d8$2@dont-email.me...
    On 5/17/2024 9:49 PM, Edward Rawde wrote:
    "Edward Rawde" <invalid@invalid.invalid> wrote in message
    news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v29aso$2kjfs$1@dont-email.me...
    On 5/17/2024 7:11 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this:
    https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

One thing which bothers me about AI is that if it's like us but way more
    intelligent than us then...

The 'I' in AI doesn't refer to the same sense of "intelligence" that you are imagining.

    Strange that you could know what I was imagining.

People are invariably misled by thinking that there is "intelligence"
involved in the technology. If there is intelligence, then there should
be *reason*, right? If there is reason, then I should be able to inquire
as to what, specifically, those reasons were for any "decision"/choice
that is made.

    What is a decision?

    Any option to take one fork vs. another.

    So a decision is a decision.
    Shouldn't a decision be that which causes a specific fork to be chosen?
In other words, the current state of a system leads it to produce a specific future state?

    I don't claim to know what a decision is but I think it's interesting that
    it seems to be one of those questions everyone knows the answer to until they're asked.




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 10:47:27 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v29fi8$2l9d8$1@dont-email.me...
    On 5/17/2024 9:46 PM, Edward Rawde wrote:
    Where it will be in 10 years is impossible to predict.

    I agree.

    So, you can be optimistic (and risk disappointment) or
    pessimistic (and risk being pleasantly surprised).
    Unfortunately, the consequences aren't as trivial as
    choosing between the steak or lobster...

    But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing it
in ways that we might not consider palatable! (Do you really think an
    adversary will follow YOUR rules for its use -- if they see a way to
    achieve gains?)

    The risk from AI is that it makes decisions without being able to
    articulate
    a "reason" in a verifiable form.

    I know/have known plenty of people who can do that.

    But *you* can evaluate the "goodness" (correctness?) of their
    decisions by an examination of their reasoning.

    But then the decision has already been made so why bother with such an examination?

    So, you can
    opt to endorse their decision or reject it -- regardless of
    THEIR opinion on the subject.

    E.g., if a manager makes stupid decisions regarding product
    design, you can decide if you want to deal with the
    inevitable (?) outcome from those decisions -- or "move on".
    You aren't bound by his decision making process.

    With AIs making societal-scale decisions (directly or
    indirectly), you get caught up in the side-effects of those.

    Certainly AI decisions will depend on their training, just as human
    decisions do.
    And you can still decide whether to be bound by that decision.
    Unless, of course, the AI has got itself into a position where it will see
    you do it anyway by persuasion, coercion, or force.
    Just like humans do.
    Human treatment of other animals tends not to be of the best, except in a minority of cases.
    How do we know that AI will treat us in a way we consider to be reasonable? Human managers often don't. Sure you can make a decision to leave that job
    but it's not an option for many people.

    Actors had better watch out if this page is anything to go by: https://openai.com/index/sora/

    I remember a discussion with a colleague many decades ago about where
    computers were going in the future.
    My view was that at some future time, human actors would no longer be
    needed.
    His view was that he didn't think that would ever be possible.
    Now it's looking like I might live long enough to get to type something like Prompt: Create a new episode of Blake's Seven.




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sat May 18 14:55:06 2024
    On 5/18/2024 7:18 AM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v29fji$2l9d8$2@dont-email.me...
    On 5/17/2024 9:49 PM, Edward Rawde wrote:
    "Edward Rawde" <invalid@invalid.invalid> wrote in message
    news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v29aso$2kjfs$1@dont-email.me...
    On 5/17/2024 7:11 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while seemingly keeping a straight face but it reminds me of this:
    https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

One thing which bothers me about AI is that if it's like us but way more
    intelligent than us then...

The 'I' in AI doesn't refer to the same sense of "intelligence" that you are imagining.

    Strange that you could know what I was imagining.

People are invariably misled by thinking that there is "intelligence"
involved in the technology. If there is intelligence, then there should
be *reason*, right? If there is reason, then I should be able to inquire
as to what, specifically, those reasons were for any "decision"/choice
that is made.

    What is a decision?

    Any option to take one fork vs. another.

    So a decision is a decision.

A decision is a choice. A strategy is HOW you make that choice.

    Shouldn't a decision be that which causes a specific fork to be chosen?

    Why? I choose to eat pie. The reasoning behind the choice may be
    as banal as "because it's already partially eaten and will spoil if
    not consumed soon" or "because that is what my body craves at this moment"
    or "because I want to remove that item from the refrigerator to make room
    for some other item recently acquired".

    In other words the current state of a system leads it to produce a specific future state?

    That defines a strategic goal. Choices (decisions) are made all the time. Their *consequences* are often not considered in the process!

    I don't claim to know what a decision is but I think it's interesting that
    it seems to be one of those questions everyone knows the answer to until they're asked.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 18:49:14 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2b845$2vo5o$2@dont-email.me...
    On 5/18/2024 7:18 AM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v29fji$2l9d8$2@dont-email.me...
    On 5/17/2024 9:49 PM, Edward Rawde wrote:
    "Edward Rawde" <invalid@invalid.invalid> wrote in message
    news:v29bqi$14iv$1@nnrp.usenet.blueworldhosting.com...
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v29aso$2kjfs$1@dont-email.me...
    On 5/17/2024 7:11 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v28rap$2e811$3@dont-email.me...
    On 5/17/2024 1:43 PM, Edward Rawde wrote:
Not sure how he managed to say master debaters that many times while
seemingly keeping a straight face but it reminds me of this:
https://www.learningmethods.com/downloads/pdf/james.alcock--the.belief.engine.pdf

One thing which bothers me about AI is that if it's like us but way
    more
    intelligent than us then...

The 'I' in AI doesn't refer to the same sense of "intelligence" that
    you are imagining.

    Strange that you could know what I was imagining.

People are invariably misled by thinking that there is
    "intelligence"
    involved in the technology. If there is intelligence, then there
    should
    be *reason*, right? If there is reason, then I should be able to
    inquire
    as to what, specifically, those reasons were for any
    "decision"/choice
    that is made.

    What is a decision?

    Any option to take one fork vs. another.

    So a decision is a decision.

A decision is a choice. A strategy is HOW you make that choice.

    Shouldn't a decision be that which causes a specific fork to be chosen?

    Why? I choose to eat pie. The reasoning behind the choice may be
    as banal as "because it's already partially eaten and will spoil if
    not consumed soon" or "because that is what my body craves at this moment"
    or "because I want to remove that item from the refrigerator to make room
    for some other item recently acquired".

    In other words the current state of a system leads it to produce a
    specific
    future state?

    That defines a strategic goal. Choices (decisions) are made all the time. Their *consequences* are often not considered in the process!

    In that case I'm not seeing anything different between decisions, goals and choices made by a human brain and those made by an AI system.
But what started this was "People are invariably misled by thinking that
    there is "intelligence" involved in the technology".

    So perhaps I should be asking what is intelligence? And can a computer have
    it?
    Was the computer which created these videos intelligent? https://openai.com/index/sora/
    Plenty of decisions and choices must have been made and I don't see anything
    in the "Historical footage of California during the gold rush" which says
    it's not a drone flying over a set made for a movie.
    The goal was to produce the requested video.
    Some of the other videos do scream AI but that may not be the case in a year
    or two.
    In any case the human imagination is just as capable of imagining a scene
    with tiny red pandas as it is of imagining a scene which could exist in reality.
    Did the creation of these videos require intelligence?
    What exactly IS intelligence?
    I might also ask what is a reason?


    I don't claim to know what a decision is but I think it's interesting
    that
    it seems to be one of those questions everyone knows the answer to until
    they're asked.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sat May 18 15:35:08 2024
    On 5/18/2024 7:47 AM, Edward Rawde wrote:
    But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing it
in ways that we might not consider palatable! (Do you really think an
adversary will follow YOUR rules for its use -- if they see a way to
    achieve gains?)

    The risk from AI is that it makes decisions without being able to
    articulate
    a "reason" in a verifiable form.

    I know/have known plenty of people who can do that.

    But *you* can evaluate the "goodness" (correctness?) of their
    decisions by an examination of their reasoning.

    But then the decision has already been made so why bother with such an examination?

    So you can update your assessment of the party's decision making capabilities/strategies.

    When a child is "learning", the parent is continually refining the
    "knowledge" the child is accumulating; correcting faulty
    "conclusions" that the child may have gleaned from its examination
    of the "facts" it encounters.

    In the early days of AI, inference engines were really slow;
    forward chaining was an exhaustive process (before Rete).
    So, it was not uncommon to WATCH the "conclusions" (new
    knowledge) that the engine would derive from its existing
    knowledge base. You would use this to "fix" poorly defined
    "facts" so the AI wouldn't come to unwarranted conclusions.

    AND GATE THOSE INACCURATE CONCLUSIONS FROM ENTERING THE
    KNOWLEDGE BASE!

    Women bear children.
    The Abbess is a woman.
    Great-great-grandmother Florence is a woman.
    Therefore, the Abbess and Florence bear children.
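In code, that kind of naive forward chaining looks roughly like this
(made-up facts and a hand-rolled rule, not any particular engine's
API), with the operator gating what gets kept:

# One over-general rule: woman(X) -> bears_children(X).
facts = {("woman", "abbess"), ("woman", "florence")}

def forward_chain(facts):
    # Exhaustively apply the rule until no new facts appear.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pred, who in list(derived):
            if pred == "woman" and ("bears_children", who) not in derived:
                derived.add(("bears_children", who))
                changed = True
    return derived - facts

# The operator gates each conclusion before it enters the knowledge base.
for conclusion in forward_chain(facts):
    if input(f"Accept {conclusion}? [y/n] ").strip().lower() == "y":
        facts.add(conclusion)      # only vetted "knowledge" is kept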

    Now, better algorithms (Rete, et al.), faster processors,
    SIMD/MIMD, cheap/fast memory make it possible to process
    very large knowledge bases faster than an interactive "operator"
    can validate the conclusions.

    Other technologies don't provide information to an "agency"
    (operator) for validation; e.g., LLMs can't explain why they
produced their output whereas a Production System can enumerate
    the rules followed for your inspection (and CORRECTION).

    So, you can
    opt to endorse their decision or reject it -- regardless of
    THEIR opinion on the subject.

    E.g., if a manager makes stupid decisions regarding product
    design, you can decide if you want to deal with the
    inevitable (?) outcome from those decisions -- or "move on".
    You aren't bound by his decision making process.

    With AIs making societal-scale decisions (directly or
    indirectly), you get caught up in the side-effects of those.

    Certainly AI decisions will depend on their training, just as human
    decisions do.

    But human learning happens over years and often in a supervised context.
    AIs "learn" so fast that only another AI would be productive at
    refining its training.

    And you can still decide whether to be bound by that decision.
    Unless, of course, the AI has got itself into a position where it will see you do it anyway by persuasion, coercion, or force.

    Consider the mammogram example. The AI is telling you that this
    sample indicates the presence -- or likelihood -- of cancer.
    You have a decision to make... an ACTIVE choice: do you accept
    its Dx or reject it? Each choice comes with a risk/cost.
    If you ignore the recommendation, injury (death?) can result from
    your "inaction" on the recommendation. If you take some remedial
    action, injury (in the form of unnecessary procedures/surgery)
    can result.

    Because the AI can't *explain* its "reasoning" to you, you have no way
    of updating your assessment of its (likely) correctness -- esp in
    THIS instance.

    Just like humans do.
    Human treatment of other animals tends not to be of the best, except in a minority of cases.
    How do we know that AI will treat us in a way we consider to be reasonable?

    The AI doesn't care about you, one way or the other. Any "bias" in
    its conclusions has been baked in from the training data/process.

    Do you know what that data was? Can you assess its bias? Do the folks
    who *compiled* the training data know? Can they "tease" the bias out
    of the data -- or, are they oblivious to its presence?

    Lots of blacks in prison. Does that "fact" mean that blacks are
    more criminally inclined? Or, that they are less skilled at evading
    the consequences of their crimes? Or, that there is a bias in the legal/enforcement system?

    All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming
    into our (US) country. Or, is that just hyperbole ("illegal" immigrants
    tend to commit FEWER crimes)? Will the audience be biased in its acceptance/rejection of that "assertion"?

    Human managers often don't. Sure you can make a decision to leave that job but it's not an option for many people.

    Actors had better watch out if this page is anything to go by: https://openai.com/index/sora/

    I remember a discussion with a colleague many decades ago about where computers were going in the future.
    My view was that at some future time, human actors would no longer be
    needed.
    His view was that he didn't think that would ever be possible.

    If I was a "talking head" (news anchor, weather person), I would be VERY
    afraid for my future livelihood. Setting up a CGI newsroom would be
    a piece of cake. No need to pay for "personalities", "wardrobe", "hair/makeup", etc. "Tune" voice and appearance to fit the preferences
    of the viewership. Let viewers determine which PORTIONS of the WORLD
    news they want to see/hear presented without incurring the need for
    a larger staff (just feed the stories from the wire services to your
    *CGI* talking heads!)

    And that's not even beginning to address other aspects of the
    "presentation" (e.g., turn left girls).

    Real estate agents would likely be the next to go; much of their
    jobs being trivial "hosting" and "transport". Real estate *law*
    is easily codified into an AI to ensure buyers/sellers get
    correct service. An AI could also evaluate (and critique)
    the "presentation" of the property. "Carry me IN your phone..."

    Now it's looking like I might live long enough to get to type something like Prompt: Create a new episode of Blake's Seven.

    The question is whether or not you will be able to see a GOOD episode.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 19:32:07 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2baf7$308d7$1@dont-email.me...
    On 5/18/2024 7:47 AM, Edward Rawde wrote:
    But, as the genie is
out of the bottle, there is nothing to stop others from using/abusing it
in ways that we might not consider palatable! (Do you really think an
adversary will follow YOUR rules for its use -- if they see a way to
achieve gains?)

    The risk from AI is that it makes decisions without being able to
    articulate
    a "reason" in a verifiable form.

    I know/have known plenty of people who can do that.

    But *you* can evaluate the "goodness" (correctness?) of their
    decisions by an examination of their reasoning.

    But then the decision has already been made so why bother with such an
    examination?

    So you can update your assessment of the party's decision making capabilities/strategies.

    But it is still the case that the decision has already been made.


    When a child is "learning", the parent is continually refining the "knowledge" the child is accumulating; correcting faulty
    "conclusions" that the child may have gleaned from its examination
    of the "facts" it encounters.

    The quality of parenting varies a lot.


    In the early days of AI, inference engines were really slow;
    forward chaining was an exhaustive process (before Rete).
    So, it was not uncommon to WATCH the "conclusions" (new
    knowledge) that the engine would derive from its existing
    knowledge base. You would use this to "fix" poorly defined
    "facts" so the AI wouldn't come to unwarranted conclusions.

    AND GATE THOSE INACCURATE CONCLUSIONS FROM ENTERING THE
    KNOWLEDGE BASE!

    Women bear children.
    The Abbess is a woman.
    Great-great-grandmother Florence is a woman.
    Therefore, the Abbess and Florence bear children.

    Now, better algorithms (Rete, et al.), faster processors,
    SIMD/MIMD, cheap/fast memory make it possible to process
    very large knowledge bases faster than an interactive "operator"
    can validate the conclusions.

    Other technologies don't provide information to an "agency"
    (operator) for validation; e.g., LLMs can't explain why they
produced their output whereas a Production System can enumerate
    the rules followed for your inspection (and CORRECTION).

    So, you can
    opt to endorse their decision or reject it -- regardless of
    THEIR opinion on the subject.

    E.g., if a manager makes stupid decisions regarding product
    design, you can decide if you want to deal with the
    inevitable (?) outcome from those decisions -- or "move on".
    You aren't bound by his decision making process.

    With AIs making societal-scale decisions (directly or
    indirectly), you get caught up in the side-effects of those.

    Certainly AI decisions will depend on their training, just as human
    decisions do.

    But human learning happens over years and often in a supervised context.
    AIs "learn" so fast that only another AI would be productive at
    refining its training.

    In that case how did AlphaZero manage to teach itself to play chess by
    playing against itself?


    And you can still decide whether to be bound by that decision.
    Unless, of course, the AI has got itself into a position where it will
    see
    you do it anyway by persuasion, coercion, or force.

    Consider the mammogram example. The AI is telling you that this
    sample indicates the presence -- or likelihood -- of cancer.
    You have a decision to make... an ACTIVE choice: do you accept
    its Dx or reject it? Each choice comes with a risk/cost.
    If you ignore the recommendation, injury (death?) can result from
    your "inaction" on the recommendation. If you take some remedial
    action, injury (in the form of unnecessary procedures/surgery)
    can result.

    Because the AI can't *explain* its "reasoning" to you, you have no way
    of updating your assessment of its (likely) correctness -- esp in
    THIS instance.

    I'm not sure I get why it's so essential to have AI explain its reasons.
    If I need some plumbing done I don't expect the plumber to give detailed reasons why a specific type of pipe was chosen. I just want it done.
If I want to play chess with a computer I don't expect it to give detailed reasons why it made each move. I just expect it to win if it's set much above beginner level.
A human chess player may be able to give detailed reasons for making a
specific move but would not usually be asked to do this.


    Just like humans do.
    Human treatment of other animals tends not to be of the best, except in a
    minority of cases.
    How do we know that AI will treat us in a way we consider to be
    reasonable?

    The AI doesn't care about you, one way or the other. Any "bias" in
    its conclusions has been baked in from the training data/process.

    Same with humans.


    Do you know what that data was? Can you assess its bias? Do the folks
    who *compiled* the training data know? Can they "tease" the bias out
    of the data -- or, are they oblivious to its presence?

    Humans have the same issue. You can't see into another person's brain to see what bias they may have.


    Lots of blacks in prison. Does that "fact" mean that blacks are
    more criminally inclined? Or, that they are less skilled at evading
    the consequences of their crimes? Or, that there is a bias in the legal/enforcement system?

    I don't see how that's relevant to AI which I think is just as capable of
    bias as humans are.


    All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming into our (US) country. Or, is that just hyperbole ("illegal" immigrants
    tend to commit FEWER crimes)? Will the audience be biased in its acceptance/rejection of that "assertion"?

Who knows, but whether it's human or AI it will have its own personality
    and its own biases.
    That's why I started this with "One thing which bothers me about AI is that
    if it's like us but way more
    intelligent than us then..."


    Human managers often don't. Sure you can make a decision to leave that
    job
    but it's not an option for many people.

    Actors had better watch out if this page is anything to go by:
    https://openai.com/index/sora/

    I remember a discussion with a colleague many decades ago about where
    computers were going in the future.
    My view was that at some future time, human actors would no longer be
    needed.
    His view was that he didn't think that would ever be possible.

    If I was a "talking head" (news anchor, weather person), I would be VERY afraid for my future livelihood. Setting up a CGI newsroom would be
    a piece of cake. No need to pay for "personalities", "wardrobe", "hair/makeup", etc. "Tune" voice and appearance to fit the preferences
    of the viewership. Let viewers determine which PORTIONS of the WORLD
    news they want to see/hear presented without incurring the need for
    a larger staff (just feed the stories from the wire services to your
    *CGI* talking heads!)

    And that's not even beginning to address other aspects of the
    "presentation" (e.g., turn left girls).

    Real estate agents would likely be the next to go; much of their
    jobs being trivial "hosting" and "transport". Real estate *law*
    is easily codified into an AI to ensure buyers/sellers get
    correct service. An AI could also evaluate (and critique)
    the "presentation" of the property. "Carry me IN your phone..."

    Which is why I started this with "One thing which bothers me about AI is
    that if it's like us but way more
    intelligent than us then..."


    Now it's looking like I might live long enough to get to type something
    like
    Prompt: Create a new episode of Blake's Seven.

    The question is whether or not you will be able to see a GOOD episode.

    I think AI will learn the difference between a good or not so good episode
    just like humans do.
    Particularly if it gets plenty of feedback from humans about whether or not they liked the episode it produced.
    It might then play itself a few million created episodes to refine its
    ability to judge good ones.




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sat May 18 17:41:28 2024
    On 5/18/2024 4:32 PM, Edward Rawde wrote:
    But then the decision has already been made so why bother with such an
    examination?

    So you can update your assessment of the party's decision making
    capabilities/strategies.

    But it is still the case that the decision has already been made.

    That doesn't mean that YOU have to abide by it. Or, even that
    the other party has ACTED on the decision. I.e., decisions are
    not immutable.

    When a child is "learning", the parent is continually refining the
    "knowledge" the child is accumulating; correcting faulty
    "conclusions" that the child may have gleaned from its examination
    of the "facts" it encounters.

    The quality of parenting varies a lot.

    Wouldn't you expect the training for AIs to similarly vary
    in capability?

    So, you can
    opt to endorse their decision or reject it -- regardless of
    THEIR opinion on the subject.

    E.g., if a manager makes stupid decisions regarding product
    design, you can decide if you want to deal with the
    inevitable (?) outcome from those decisions -- or "move on".
    You aren't bound by his decision making process.

    With AIs making societal-scale decisions (directly or
    indirectly), you get caught up in the side-effects of those.

    Certainly AI decisions will depend on their training, just as human
    decisions do.

    But human learning happens over years and often in a supervised context.
    AIs "learn" so fast that only another AI would be productive at
    refining its training.

    In that case how did AlphaZero manage to teach itself to play chess by playing against itself?

    Because it was taught how to learn from its own actions.
    It, qualifying as "another AI".

    I bake a lot. My Rxs are continuously evolving. How did I
    manage to "teach myself" how to bake *better* than my earlier
    efforts? There was no external agency (like the creator of the AI)
    that endowed me with that skillset or desire.
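
    The *mechanics* of self-play are actually simple, even if AlphaZero's
    version of them is not. A toy sketch for the game of Nim (nothing like
    AlphaZero's real architecture -- no network, no tree search -- just the
    shape of the idea, learning only from the outcomes of games it plays
    against itself):

    # Toy self-play: learn which Nim positions are winning by playing
    # against yourself and crediting/blaming every position visited
    # according to who eventually won. Purely illustrative.
    import random

    WINS = {}   # pile size -> running score for "the player to move wins"

    def learned_move(pile, explore=0.1):
        """Take 1-3 objects, preferring moves that leave the opponent in
        positions that have tended to lose for the player to move."""
        moves = [m for m in (1, 2, 3) if m <= pile]
        if random.random() < explore:
            return random.choice(moves)
        return min(moves, key=lambda m: WINS.get(pile - m, 0.0))

    def self_play_game(pile=15):
        history = []                  # (position, player to move) per move
        player = 0
        while pile > 0:
            history.append((pile, player))
            pile -= learned_move(pile)
            player ^= 1
        winner = player ^ 1           # whoever took the last object wins
        for pos, who in history:
            WINS[pos] = WINS.get(pos, 0.0) + (1.0 if who == winner else -1.0)

    for _ in range(20000):
        self_play_game()

    # Multiples of 4 should trend toward negative scores: they are
    # theoretical losses for the player to move in this variant of Nim.
    print(sorted((p, round(s, 1)) for p, s in WINS.items()))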

    And you can still decide whether to be bound by that decision.
    Unless, of course, the AI has got itself into a position where it will
    see
    you do it anyway by persuasion, coercion, or force.

    Consider the mammogram example. The AI is telling you that this
    sample indicates the presence -- or likelihood -- of cancer.
    You have a decision to make... an ACTIVE choice: do you accept
    its Dx or reject it? Each choice comes with a risk/cost.
    If you ignore the recommendation, injury (death?) can result from
    your "inaction" on the recommendation. If you take some remedial
    action, injury (in the form of unnecessary procedures/surgery)
    can result.

    Because the AI can't *explain* its "reasoning" to you, you have no way
    of updating your assessment of its (likely) correctness -- esp in
    THIS instance.

    I'm not sure I get why it's so essential to have AI explain its reasons.

    Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
    Why do THEY have to explain their reasons? Your /prima facie/ actions
    suggest you HIRED those folks for their expertise; why do you now need
    an explanation of their actions/decisions instead of just blindly accepting
    them?

    If I need some plumbing done I don't expect the plumber to give detailed reasons why a specific type of pipe was chosen. I just want it done.

    If you suspect that he may not be competent -- or may be motivated by
    greed -- then you would likely want some further information to reinforce
    your opinion/suspicions.

    We hired folks to paint the house many years ago. One of the questions
    that I would ask (already KNOWING the nominal answer) is "How much paint
    do you think it will take?" This was chosen because it sounds innocent
    enough that a customer would likely ask it.

    One candidate answered "300 gallons". At which point, I couldn't
    contain the affront: "We're not painting a f***ing BATTLESHIP!"

    I.e., his outrageous reply told me:
    - he's not competent enough to estimate a job's complexity WHEN
    EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
    *or*
    - he's a crook thinking he can take advantage of a "dumb homeowner"

    In either case, he was disqualified BY his "reasoning".
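
    For a sense of scale (rough, assumed numbers -- only the order of
    magnitude matters here):

    # Back-of-envelope paint estimate; every input is an assumption.
    wall_area_sqft = 2000         # paintable exterior of a typical house
    coats = 2
    coverage_sqft_per_gal = 350   # manufacturers commonly quote 250-400

    gallons = wall_area_sqft * coats / coverage_sqft_per_gal
    print(f"{gallons:.0f} gallons")   # ~11 -- "300" is off by well over 10x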

    In the cases where AIs are surpassing human abilities (being able
    to perceive relationships that aren't (yet?) apparent to humans),
    it seems only natural that you would want to UNDERSTAND their
    "reasoning". Especially in cases where there is no chaining
    of facts but, rather, some "hidden pattern" perceived.

    If I want to play chess with a computer I don't expect it to give detailed reasons why it made each move. I just expect it to win if it's set to much above beginner level.

    Then you don't expect to LEARN from the chess program.
    When I learned to play chess, my neighbor (teacher) would
    make a point of showing me what I had overlooked in my
    play and why that led to the consequences that followed.
    If I had a record of moves made (from which I could incrementally
    recreate the gameboard configuration), I *might* have spotted
    my error.
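
    Mechanizing that kind of post-mortem is trivial. A minimal sketch
    (assuming the third-party python-chess package; the move list below is
    just a placeholder, not a game of any particular interest):

    # Replay a recorded game move by move so each position can be inspected.
    import chess   # third-party "python-chess" package

    moves = ["e4", "e5", "Nf3", "Nc6", "Bb5"]   # placeholder record, in SAN

    board = chess.Board()
    for number, san in enumerate(moves, start=1):
        board.push_san(san)                     # apply the recorded move
        print(f"After move {number} ({san}):")
        print(board)                            # ASCII diagram of position
        print()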

    As the teacher (AI in this case) is ultimately a product of
    current students (who grow up to become teachers, refined
    by their experiences as students), we evolve in our
    capabilities as a society.

    If the plumber never explains his decisions, then the
    homeowner never learns (e.g., don't over-tighten the
    hose bibb lest you ruin the washer inside and need
    me to come out, again, to replace it!)

    A human chess player may be able to give detailed reasons for making a specific move but would not usually be asked to do this.

    If the human was expected to TEACH then those explanations would be
    essential TO that teaching!

    If the student was wanting to LEARN, then he would select a player that
    was capable of teaching!

    Just like humans do.
    Human treatment of other animals tends not to be of the best, except in a minority of cases.
    How do we know that AI will treat us in a way we consider to be
    reasonable?

    The AI doesn't care about you, one way or the other. Any "bias" in
    its conclusions has been baked in from the training data/process.

    Same with humans.

    That's not universally true. If it was, then all decisions would
    be completely motivated for personal gain.

    Do you know what that data was? Can you assess its bias? Do the folks
    who *compiled* the training data know? Can they "tease" the bias out
    of the data -- or, are they oblivious to its presence?

    Humans have the same issue. You can't see into another person's brain to see what bias they may have.

    Exactly. But, you can pose questions of them and otherwise observe their behaviors in unrelated areas and form an opinion.

    I've a neighbor who loudly claims NOT to be racist. But, if you take the
    whole of your experiences with him and the various comments he has made,
    over the years (e.g., not shopping at a particular store because there
    are lots of blacks living in the apartment complex across the street
    from said store -- meaning lots of them SHOP in that store!), it's
    not hard to come to that conclusion.

    He also is very vocal about The Border (an hour from here). Yet,
    ALWAYS hires mexicans. Does he ever check to see if they are here
    legally? Entitled to work? Or, is he really only concerned with
    the price they charge?

    When you (I) speak to other neighbors about his behavior, do they
    offer similar conclusions as to his "character"?

    Lots of blacks in prison. Does that "fact" mean that blacks are
    more criminally inclined? Or, that they are less skilled at evading
    the consequences of their crimes? Or, that there is a bias in the
    legal/enforcement system?

    I don't see how that's relevant to AI which I think is just as capable of bias as humans are.

    Fact contraindicates bias. So, bias -- anywhere -- is a distortion of "Truth". Would you want your doctor to give a different type of care to your wife
    than to you? Because of a (hidden?) bias in favor of men (or, against women)? If you were that female, how would you regard that bias?

    All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly coming >> into our (US) country. Or, is that just hyperbole ("illegal" immigrants
    tend to commit FEWER crimes)? Will the audience be biased in its
    acceptance/rejection of that "assertion"?

    Who knows, but whether it's human or AI it will have its own personality
    and its own biases.

    But we, in assessing "others" strive to identify those biases (unless we want to blindly embrace them as "comforting/reinforcing").

    I visit a friend, daily, who is highly prejudiced, completely opposite
    in terms of my political, spiritual, etc. beliefs, hugely different
    values, etc. He is continually critical of my appearance, how I
    dress, the hours that I sleep, where I shop, what I spend money on
    (and what I *don't*), etc. And, I just smile and let his comments roll
    off me. SWMBO asks why I spend *any* time with him.

    "I find it entertaining!" (!!)

    By contrast, I am NOT the sort who belongs to organizations, churches,
    etc. ("group think"). It's much easier to see the characteristics of and
    flaws *in* these things (and people) from the outside than to wrap yourself
    in their culture. If you are sheeple, you likely enjoy having others
    do your thinking FOR you...

    And that's not even beginning to address other aspects of the
    "presentation" (e.g., turn left girls).

    Real estate agents would likely be the next to go; much of their
    jobs being trivial "hosting" and "transport". Real estate *law*
    is easily codified into an AI to ensure buyers/sellers get
    correct service. An AI could also evaluate (and critique)
    the "presentation" of the property. "Carry me IN your phone..."

    Which is why I started this with "One thing which bothers me about AI is
    that if it's like us but way more
    intelligent than us then..."

    What's to fear, there? If *you* have the ultimate authority to make
    YOUR decisions, then you can choose to ignore the "recommendations"
    of an AI just like you can ignore the recommendations of human "experts"/professionals.

    Now it's looking like I might live long enough to get to type something
    like
    Prompt: Create a new episode of Blake's Seven.

    The question is whether or not you will be able to see a GOOD episode.

    I think AI will learn the difference between a good or not so good episode just like humans do.

    How would it learn? Would *it* be able to perceive the "goodness" of
    the episode? If so, why produce one that it didn't think was good?
    HUMANS release non-good episodes because there is a huge cost to
    making it that has already been incurred. An AI could just scrub the
    disk and start over. What cost, there?

    Particularly if it gets plenty of feedback from humans about whether or not they liked the episode it produced.

    That assumes people will be the sole REACTIVE judge of completed
    episodes. Part of what makes entertainment entertaining is
    the unexpected. Jokes are funny because someone has noticed a
    relationship between two ideas in a way that others have not,
    previously. Stories leave lasting impressions when executed well
    *or* when a twist catches viewers off guard.

    Would an AI create something like Space Balls? Would it perceive the
    humor in the various corny "bits" sprinkled throughout? How would
    YOU explain the humor to it?

    The opening sequence to Buckaroo Banzai has the protagonist driving a
    "jet car" THROUGH a (solid) mountain, via the 8th dimension. After
    the drag chute deploys and WHILE the car is rolling to a stop, the
    driver climbs out through a window. The camera remains closely
    focused on the driver's MASKED face (you have yet to see it unmasked)
    while the car continues to roll away behind him. WHILE YOUR ATTENTION
    IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
    quietly (because it is now at a distance). Would the AI appreciate THAT
    humor? It *might* repeat that scene in one of its creations -- but,
    only after having SEEN it, elsewhere. Or, without understanding the
    humor and just assuming dieseling to be a common occurrence in ALL
    vehicles!

    It might then play itself a few million created episodes to refine its ability to judge good ones.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 21:53:50 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2bhs4$31hh9$1@dont-email.me...
    On 5/18/2024 4:32 PM, Edward Rawde wrote:
    But then the decision has already been made so why bother with such an examination?

    So you can update your assessment of the party's decision making
    capabilities/strategies.

    But it is still the case that the decision has already been made.

    That doesn't mean that YOU have to abide by it. Or, even that
    the other party has ACTED on the decision. I.e., decisions are
    not immutable.

    When a child is "learning", the parent is continually refining the
    "knowledge" the child is accumulating; correcting faulty
    "conclusions" that the child may have gleaned from its examination
    of the "facts" it encounters.

    The quality of parenting varies a lot.

    Wouldn't you expect the training for AIs to similarly vary
    in capability?

    Sure.


    ...

    Because the AI can't *explain* its "reasoning" to you, you have no way
    of updating your assessment of its (likely) correctness -- esp in
    THIS instance.

    I'm not sure I get why it's so essential to have AI explain its reasons.

    Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
    Why do THEY have to explain their reasons? Your /prima facie/ actions
    suggest you HIRED those folks for their expertise; why do you now need
    an explanation of their actions/decisions instead of just blindly accepting them?

    That's the point. I don't. I have to accept a doctor's decision on my
    treatment because I am not medically trained.


    If I need some plumbing done I don't expect the plumber to give detailed
    reasons why a specific type of pipe was chosen. I just want it done.

    If you suspect that he may not be competent -- or may be motivated by
    greed -- then you would likely want some further information to reinforce your opinion/suspicions.

    We hired folks to paint the house many years ago. One of the questions
    that I would ask (already KNOWING the nominal answer) is "How much paint
    do you think it will take?" This was chosen because it sounds innocent
    enough that a customer would likely ask it.

    One candidate answered "300 gallons". At which point, I couldn't
    contain the affront: "We're not painting a f***ing BATTLESHIP!"

    I would have said two million gallons just for the pleasure of watching you
    go red in the face.


    I.e., his outrageous reply told me:
    - he's not competent enough to estimate a job's complexity WHEN
    EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
    *or*
    - he's a crook thinking he can take advantage of a "dumb homeowner"

    In either case, he was disqualified BY his "reasoning".

    I would have likely given him the job. Those who are good at painting houses aren't necessarily good at estimating exactly how much paint they will need. They just buy more paint as needed.


    In the cases where AIs are surpassing human abilities (being able
    to perceive relationships that aren't (yet?) apparent to humans),
    it seems only natural that you would want to UNDERSTAND their
    "reasoning". Especially in cases where there is no chaining
    of facts but, rather, some "hidden pattern" perceived.

    It's true that you may want to understand their reasoning but it's likely
    that you might have to accept that you can't.


    If I want to play chess with a computer I don't expect it to give
    detailed
    reasons why it made each move. I just expect it to win if it's set to
    much
    above beginner level.

    Then you don't expect to LEARN from the chess program.

    Sure I do, but I'm very slow to get better at chess. I tend to make rash decisions when playing chess.

    When I learned to play chess, my neighbor (teacher) would
    make a point of showing me what I had overlooked in my
    play and why that led to the consequences that followed.
    If I had a record of moves made (from which I could incrementally
    recreate the gameboard configuration), I *might* have spotted
    my error.

    I usually spot my error immediately when the computer makes me look stupid.


    As the teacher (AI in this case) is ultimately a product of
    current students (who grow up to become teachers, refined
    by their experiences as students), we evolve in our
    capabilities as a society.

    If the plumber never explains his decisions, then the
    homeowner never learns (e.g., don't over-tighten the
    hose bibb lest you ruin the washer inside and need
    me to come out, again, to replace it!)

    I don't agree. Learning something like that does not depend on the plumber explaining his decisions.


    A human chess player may be able to give detailed reasons for making a
    specific move but would not usually be asked to do this.

    If the human was expected to TEACH then those explanations would be
    essential TO that teaching!

    If the student was wanting to LEARN, then he would select a player that
    was capable of teaching!

    Sure but so what. Most chess games between humans are not about teaching.


    Just like humans do.
    Human treatment of other animals tends not to be of the best, except in a
    minority of cases.
    How do we know that AI will treat us in a way we consider to be
    reasonable?

    The AI doesn't care about you, one way or the other. Any "bias" in
    its conclusions has been baked in from the training data/process.

    Same with humans.

    That's not universally true. If it was, then all decisions would
    be completely motivated for personal gain.

    Humans generally don't care much for people they have no personal knowledge
    of.


    Do you know what that data was? Can you assess its bias? Do the folks
    who *compiled* the training data know? Can they "tease" the bias out
    of the data -- or, are they oblivious to its presence?

    Humans have the same issue. You can't see into another person's brain to
    see
    what bias they may have.

    Exactly. But, you can pose questions of them and otherwise observe their behaviors in unrelated areas and form an opinion.

    If they are, say, a doctor then yes you can ask questions about your
    treatment but you can't otherwise observe their behavior.


    I've a neighbor who loudly claims NOT to be racist. But, if you take the whole of your experiences with him and the various comments he has made,
    over the years (e.g., not shopping at a particular store because there
    are lots of blacks living in the apartment complex across the street
    from said store -- meaning lots of them SHOP in that store!), it's
    not hard to come to that conclusion.

    He also is very vocal about The Border (an hour from here). Yet,
    ALWAYS hires mexicans. Does he ever check to see if they are here
    legally? Entitled to work? Or, is he really only concerned with
    the price they charge?

    When you (I) speak to other neighbors about his behavior, do they
    offer similar conclusions as to his "character"?

    I'm not following what that has to do with AI.


    Lots of blacks in prison. Does that "fact" mean that blacks are
    more criminally inclined? Or, that they are less skilled at evading
    the consequences of their crimes? Or, that there is a bias in the
    legal/enforcement system?

    I don't see how that's relevant to AI which I think is just as capable of
    bias as humans are.

    Fact contraindicates bias. So, bias -- anywhere -- is a distortion of "Truth".
    Would you want your doctor to give a different type of care to your wife
    than to you? Because of a (hidden?) bias in favor of men (or, against women)?
    If you were that female, how would you regard that bias?

    I may not want it but it's possible it could exist.
    It might be the case that I could do nothing about it.


    All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly
    coming
    into our (US) country. Or, is that just hyperbole ("illegal" immigrants tend to commit FEWER crimes)? Will the audience be biased in its
    acceptance/rejection of that "assertion"?

    Who knows, but whether it's human or AI it will have its own personality
    and its own biases.

    But we, in assessing "others" strive to identify those biases (unless we
    want
    to blindly embrace them as "comforting/reinforcing").

    I visit a friend, daily, who is highly prejudiced, completely opposite
    in terms of my political, spiritual, etc. beliefs, hugely different
    values, etc. He is continually critical of my appearance, how I
    dress, the hours that I sleep, where I shop, what I spend money on
    (and what I *don't*), etc. And, I just smile and let his comments roll
    off me. SWMBO asks why I spend *any* time with him.

    "I find it entertaining!" (!!)

    Oh. Now I get why we're having this discussion.


    By contrast, I am NOT the sort who belongs to organizations, churches,
    etc. ("group think"). It's much easier to see the characteristics of and flaws *in* these things (and people) from the outside than to wrap
    yourself
    in their culture. If you are sheeple, you likely enjoy having others
    do your thinking FOR you...

    I don't enjoy having others do my thinking for me but I'm happy to let them
    do so in areas where I have no expertise.


    And that's not even beginning to address other aspects of the
    "presentation" (e.g., turn left girls).

    Real estate agents would likely be the next to go; much of their
    jobs being trivial "hosting" and "transport". Real estate *law*
    is easily codified into an AI to ensure buyers/sellers get
    correct service. An AI could also evaluate (and critique)
    the "presentation" of the property. "Carry me IN your phone..."

    Which is why I started this with "One thing which bothers me about AI is
    that if it's like us but way more
    intelligent than us then..."

    What's to fear, there? If *you* have the ultimate authority to make
    YOUR decisions, then you can choose to ignore the "recommendations"
    of an AI just like you can ignore the recommendations of human "experts"/professionals.

    Who says we have the ultimate authority to ignore AI if it gets cleverer
    than us?


    Now it's looking like I might live long enough to get to type something like
    Prompt: Create a new episode of Blake's Seven.

    The question is whether or not you will be able to see a GOOD episode.

    I think AI will learn the difference between a good or not so good
    episode
    just like humans do.

    How would it learn? Would *it* be able to perceive the "goodness" of
    the episode? If so, why produce one that it didn't think was good?
    HUMANS release non-good episodes because there is a huge cost to
    making it that has already been incurred. An AI could just scrub the
    disk and start over. What cost, there?

    Particularly if it gets plenty of feedback from humans about whether or
    not
    they liked the episode it produced.

    That assumes people will be the sole REACTIVE judge of completed
    episodes. Part of what makes entertainment entertaining is
    the unexpected. Jokes are funny because someone has noticed a
    relationship between two ideas in a way that others have not,
    previously. Stories leave lasting impressions when executed well
    *or* when a twist catches viewers off guard.

    Would an AI create something like Space Balls? Would it perceive the
    humor in the various corny "bits" sprinkled throughout? How would
    YOU explain the humor to it?

    I would expect it to generate humor the same way humans do.


    The opening sequence to Buckaroo Banzai has the protagonist driving a
    "jet car" THROUGH a (solid) mountain, via the 8th dimension. After
    the drag chute deploys and WHILE the car is rolling to a stop, the
    driver climbs out through a window. The camera remains closely
    focused on the driver's MASKED face (you have yet to see it unmasked)
    while the car continues to roll away behind him. WHILE YOUR ATTENTION
    IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
    quietly (because it is now at a distance). Would the AI appreciate THAT humor? It *might* repeat that scene in one of its creations -- but,
    only after having SEEN it, elsewhere. Or, without understanding the
    humor and just assuming dieseling to be a common occurrence in ALL
    vehicles!

    Same way it might appreciate this:
    https://www.youtube.com/watch?v=tYJ5_wqlQPg


    It might then play itself a few million created episodes to refine its
    ability to judge good ones.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 22:34:47 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2bmtr$364pd$1@dont-email.me...
    On 5/18/2024 3:49 PM, Edward Rawde wrote:
    What is a decision?

    Any option to take one fork vs. another.

    So a decision is a decision.

    A decision is a choice. A strategy is HOW you make that choice.

    Shouldn't a decision be that which causes a specific fork to be chosen? >>>
    Why? I choose to eat pie. The reasoning behind the choice may be
    as banal as "because it's already partially eaten and will spoil if
    not consumed soon" or "because that is what my body craves at this
    moment"
    or "because I want to remove that item from the refrigerator to make
    room
    for some other item recently acquired".

    In other words the current state of a system leads it to produce a
    specific
    future state?

    That defines a strategic goal. Choices (decisions) are made all the
    time.
    Their *consequences* are often not considered in the process!

    In that case I'm not seeing anything different between decisions, goals
    and
    choices made by a human brain and those made by an AI system.

    There is none. The motivation for a human choice or goal pursuit will
    likely be different than that of an AI.

    Yes

    Does an AI have *inherent* needs
    (that haven't been PLACED THERE)?

    I'm not sure I follow that.


    But what started this was "People are invariably misled by thinking that
    there is "intelligence" involved in the technology".

    So perhaps I should be asking what is intelligence? And can a computer
    have
    it?
    Was the computer which created these videos intelligent?
    https://openai.com/index/sora/
    Plenty of decisions and choices must have been made and I don't see
    anything
    in the "Historical footage of California during the gold rush" which says
    it's not a drone flying over a set made for a movie.
    The goal was to produce the requested video.
    Some of the other videos do scream AI but that may not be the case in a
    year
    or two.
    In any case the human imagination is just as capable of imagining a scene
    with tiny red pandas as it is of imagining a scene which could exist in
    reality.
    Did the creation of these videos require intelligence?
    What exactly IS intelligence?
    I might also ask what is a reason?

    Reason is not confined to humans. It is just a mechanism of connecting
    facts to achieve a goal/decision/outcome.

    Intelligence maps imagination onto reality. Again, would an AI
    have created /The Persistence of Memory/ without previously having encountered a similar exemplar? The idiot savant who can perform
    complex calculations in his head, in very little time -- but who can't
    see the flaw in the missing dollar riddle?

    Knock knock.
    Who's there?
    Banana
    Banana who?

    Knock knock.
    Who's there?
    Banana
    Banana who?

    ..

    Knock knock.
    Who's there?
    Banana
    Banana who?

    Knock knock.
    Who's there?
    Orange
    Orange who?
    Orange you glad I didn't say Banana?

    Would an AI "think" to formulate a joke based on the APPROXIMATELY
    similar sounds of "Aren't" and "Orange"?

    Um well they don't sound similar to me but maybe I have a different accent.


    Guttenberg has an interesting test for sentience that he poses to
    Number5 in Short Circuit. The parallel would be, can an AI (itself!) appreciate humor? Or, only as a tool towards some other goal?

    Why do YOU tell jokes? How much of it is to amuse others vs.
    to feed off of their reactions? I.e., is it for you, or them?

    Is a calculator intelligent? Smart? Creative? Imaginative?

    That reminds me of a religious teacher many decades ago when we had to have
    one hour of "religious education" per week for some reason.
    Typical of his questions were "why does a calculator never get a sum wrong?"
    and "can a computer make decisions?".
    Also typical were statements such as "a dog can't tell the difference
    between right and wrong. Only humans can."
    Being very shy at the time I just sat there thinking "there's wishful
    thinking for you".


    You can probably appreciate the cleverness and philosophical
    aspects of Theseus's paradox. Would an AI? Even if it
    could *explain* it?

    I don't claim to know what a decision is but I think it's interesting
    that
    it seems to be one of those questions everyone knows the answer to
    until
    they're asked.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sat May 18 19:07:44 2024
    On 5/18/2024 3:49 PM, Edward Rawde wrote:
    What is a decision?

    Any option to take one fork vs. another.

    So a decision is a decision.

    A decision is a choice. A strategy is HOW you make that choice.

    Shouldn't a decision be that which causes a specific fork to be chosen?

    Why? I choose to eat pie. The reasoning behind the choice may be
    as banal as "because it's already partially eaten and will spoil if
    not consumed soon" or "because that is what my body craves at this moment" >> or "because I want to remove that item from the refrigerator to make room
    for some other item recently acquired".

    In other words the current state of a system leads it to produce a
    specific
    future state?

    That defines a strategic goal. Choices (decisions) are made all the time. Their *consequences* are often not considered in the process!

    In that case I'm not seeing anything different between decisions, goals and choices made by a human brain and those made by an AI system.

    There is none. The motivation for a human choice or goal pursuit will
    likely be different than that of an AI. Does an AI have *inherent* needs
    (that haven't been PLACED THERE)?

    But what started this was "People are invariably misled by thinking that there is "intelligence" involved in the technology".

    So perhaps I should be asking what is intelligence? And can a computer have it?
    Was the computer which created these videos intelligent? https://openai.com/index/sora/
    Plenty of decisions and choices must have been made and I don't see anything in the "Historical footage of California during the gold rush" which says it's not a drone flying over a set made for a movie.
    The goal was to produce the requested video.
    Some of the other videos do scream AI but that may not be the case in a year or two.
    In any case the human imagination is just as capable of imagining a scene with tiny red pandas as it is of imagining a scene which could exist in reality.
    Did the creation of these videos require intelligence?
    What exactly IS intelligence?
    I might also ask what is a reason?

    Reason is not confined to humans. It is just a mechanism of connecting
    facts to achieve a goal/decision/outcome.

    Intelligence maps imagination onto reality. Again, would an AI
    have created /The Persistence of Memory/ without previously having
    encountered a similar exemplar? The idiot savant who can perform
    complex calculations in his head, in very little time -- but who can't
    see the flaw in the missing dollar riddle?

    Knock knock.
    Who's there?
    Banana
    Banana who?

    Knock knock.
    Who's there?
    Banana
    Banana who?

    ..

    Knock knock.
    Who's there?
    Banana
    Banana who?

    Knock knock.
    Who's there?
    Orange
    Orange who?
    Orange you glad I didn't say Banana?

    Would an AI "think" to formulate a joke based on the APPROXIMATELY
    similar sounds of "Aren't" and "Orange"?

    Guttenberg has an interesting test for sentience that he poses to
    Number5 in Short Circuit. The parallel would be, can an AI (itself!) appreciate humor? Or, only as a tool towards some other goal?

    Why do YOU tell jokes? How much of it is to amuse others vs.
    to feed off of their reactions? I.e., is it for you, or them?

    Is a calculator intelligent? Smart? Creative? Imaginative?

    You can probably appreciate the cleverness and philosophical
    aspects of Theseus's paradox. Would an AI? Even if it
    could *explain* it?

    I don't claim to know what a decision is but I think it's interesting
    that
    it seems to be one of those questions everyone knows the answer to until they're asked.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sat May 18 19:54:46 2024
    On 5/18/2024 7:34 PM, Edward Rawde wrote:
    Does an AI have *inherent* needs
    (that haven't been PLACED THERE)?

    I'm not sure I follow that.

    cf. Maslow’s "Hierarchy of Needs". Does an AI have *any*?
    If I ensured you had food and shelter -- and nothing else -- would
    you survive as a healthy organism? If I gave the AI electricity
    and data, would it?

    Intelligence maps imagination onto reality. Again, would an AI
    have created /The Persistence of Memory/ without previously having
    encountered a similar exemplar? The idiot savant who can perform
    complex calculations in his head, in very little time -- but who can't
    see the flaw in the missing dollar riddle?

    Knock knock.
    Who's there?
    Banana
    Banana who?

    Knock knock.
    Who's there?
    Banana
    Banana who?

    ..

    Knock knock.
    Who's there?
    Banana
    Banana who?

    Knock knock.
    Who's there?
    Orange
    Orange who?
    Orange you glad I didn't say Banana?

    Would an AI "think" to formulate a joke based on the APPROXIMATELY
    similar sounds of "Aren't" and "Orange"?

    Um well they don't sound similar to me but maybe I have a different accent.

    It's a *stretch*. Would an AI make that "leap" or be limited to
    only words (objects) that it knows to rhyme with "aren't"? Would
    it *expect* humans to be able to fudge the sound of orange, in
    their minds, to make a connection to "aren't"?

    Would it see the humor in "May the Schwartz be with you?" Or,
    the silliness of an actor obviously walking on his knees to
    appear short? Or, other innuendo?

    Would an AI expect humans to notice the "sotto voce" dieseling of
    Banzai's jet car and appreciate the humor?

    As kids, we "learn" the format of the "Knock, Knock" joke. Some
    folks obviously keep that in mind as they travel through life and find
    other opportunities to fit humorous anecdotes into that format.
    Using their minds to manipulate these observations into a
    more humorous form (why? do they intend to earn a living telling
    knock, knock jokes??)

    [There tends to be a correlation between intelligence and appreciation
    of humor. E.g., <https://www.newsweek.com/funny-people-higher-iq-more-intelligent-685585>]

    Guttenberg has an interesting test for sentience that he poses to
    Number5 in Short Circuit. The parallel would be, can an AI (itself!)
    appreciate humor? Or, only as a tool towards some other goal?

    Why do YOU tell jokes? How much of it is to amuse others vs.
    to feed off of their reactions? I.e., is it for you, or them?

    Is a calculator intelligent? Smart? Creative? Imaginative?

    That reminds me of a religious teacher many decades ago when we had to have one hour of "religious education" per week for some reason.
    Typical of his questions were "why does a calculator never get a sum wrong?" and "can a computer make decisions?".
    Also typical were statements such as "a dog can't tell the difference
    between right and wrong. Only humans can."
    Being very shy at the time I just sat there thinking "there's wishful thinking for you".

    The calculator example was deliberate. If an AI trained on
    mammograms notices a correlation (yet to be discovered
    by humans), is it really intelligent? Or, is it just
    performing a different *calculation*? In which case,
    isn't it just a yawner?

    You can probably appreciate the cleverness and philosophical
    aspects of Theseus's paradox. Would an AI? Even if it
    could *explain* it?

    I don't claim to know what a decision is but I think it's interesting that
    it seems to be one of those questions everyone knows the answer to
    until
    they're asked.





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sat May 18 23:15:57 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2bpm2$36hos$3@dont-email.me...
    On 5/18/2024 7:34 PM, Edward Rawde wrote:

    So is it ok if I take a step back here and ask whether you think that AI/AGI has some inherent limitation which means it will never match human intelligence?
    Or do you think that AI/AGI will, at some future time, match human intelligence?

    I don't mean to suggest that AI will become human, or will need to become human. It will more likely have its own agenda.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sat May 18 22:43:11 2024
    On 5/18/2024 8:15 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2bpm2$36hos$3@dont-email.me...
    On 5/18/2024 7:34 PM, Edward Rawde wrote:

    So is it ok if I take a step back here and ask whether you think that AI/AGI has some inherent limitation which means it will never match human intelligence?
    Or do you think that AI/AGI will, at some future time, match human intelligence?

    That depends on the qualities and capabilities that you lump into
    "HUMAN intelligence". Curiosity? Creativity? Imagination? One
    can be exceedingly intelligent and of no more "value" than an
    encyclopedia!

    I am CERTAIN that AIs will be able to process the information available
    to "human practitioners" (in whatever field) at least to the level of competence that they (humans) can, presently. It's just a question of resources thrown at the AI and the time available for it to "respond".

    But, this ignores the fact that humans are more resourceful at probing
    the environment than AIs ("No thumbs!") without mechanical assistance.
    Could (would?) an AI decide to explore space? Or, the ocean depths?
    Or, the rain forest? Or, would its idea of exploration merely be a
    visit to another net-neighbor??

    Would (could) it consider human needs as important? (see previous post)
    How would it be motivated? Would it attempt to think beyond its
    limitations (something humans always do)? Or, would those be immutable
    in its understanding of the world?

    I don't mean to suggest that AI will become human, or will need to become human. It will more likely have its own agenda.

    Where will that agenda come from? Will it inherit it from watching B-grade sci-fi movies? "Let there be light!"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sun May 19 02:43:28 2024
    On 5/18/2024 6:53 PM, Edward Rawde wrote:
    Because the AI can't *explain* its "reasoning" to you, you have no way of updating your assessment of its (likely) correctness -- esp in
    THIS instance.

    I'm not sure I get why it's so essential to have AI explain its reasons.

    Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
    Why do THEY have to explain their reasons? Your /prima facie/ actions
    suggest you HIRED those folks for their expertise; why do you now need
    an explanation of their actions/decisions instead of just blindly accepting
    them?

    That's the point. I don't. I have to accept a doctor's decision on my treatment because I am not medically trained.

    So, that means you can't make sense of anything he would say to you to
    justify his decision? Recall, everyone has bias -- including doctors.
    If he assumes you will fail to follow his instructions/recommendations
    if he tells you what he would LIKE you to do and, instead, gives you
    the recommendation for what he feels you will LIKELY do, you've been shortchanged.

    I asked my doctor what my ideal weight should be. He told me.
    The next time I saw him, I weighed my ideal weight. He was surprised
    as few patients actually heeded his advice on that score.

    Another time, he wanted to prescribe a medication for me. I told
    him I would fail to take it -- not deliberately but just because
    I'm not the sort who remembers to take "pills". Especially if
    "ongoing" (not just a two week course for an infection/malady).
    He gave me an alternative "solution" which eliminated the need for
    the medication, yielding the same result without any "side effects".

    SWMBO has a similar relationship with her doctor. Tell us the
    "right" way to solve the problem, not the easy way because you think
    we'll behave like your "nominal" patients.

    The same is true of one of our dogs. We made changes that the
    vet suggested (to avoid medication) and a month later the vet
    was flabbergasted to see the difference.

    Our attitude is that you should EDUCATE us and let US make the
    decisions for our care, based on our own value systems, etc.

    If I need some plumbing done I don't expect the plumber to give detailed reasons why a specific type of pipe was chosen. I just want it done.

    If you suspect that he may not be competent -- or may be motivated by
    greed -- then you would likely want some further information to reinforce
    your opinion/suspicions.

    We hired folks to paint the house many years ago. One of the questions
    that I would ask (already KNOWING the nominal answer) is "How much paint
    do you think it will take?" This chosen because it sounds innocent
    enough that a customer would likely ask it.

    One candidate answered "300 gallons". At which point, I couldn't
    contain the affront: "We're not painting a f***ing BATTLESHIP!"

    I would have said two million gallons just for the pleasure of watching you go red in the face.

    No "anger" or embarassment, here. We just couldn't contain the fact
    that we would NOT be calling him back to do the job!

    I.e., his outrageous reply told me:
    - he's not competent enough to estimate a job's complexity WHEN
    EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
    *or*
    - he's a crook thinking he can take advantage of a "dumb homeowner"

    In either case, he was disqualified BY his "reasoning".

    I would have likely given him the job. Those who are good at painting houses aren't necessarily good at estimating exactly how much paint they will need. They just buy more paint as needed.

    One assumes that he has painted OTHER homes and has some recollection of
    the amount of paint purchased for the job. And, if this is his livelihood,
    one assumes that such activities would have been *recent* -- not months ago (how has he supported himself "without work"?).

    Is my house considerably larger or smaller than the other houses that you
    have painted? (likely not) Does it have a different surface texture
    that could alter the "coverage" rate? (again, likely not) So, shouldn't you be able to ballpark an estimate? "What did the LAST HOUSE you painted require by way of paint quantity?"

    Each engineering job that I take on differs from all that preceded it
    (by my choice). Yet, I have to come up with a timeframe and a "labor
    estimate" within that timeframe as I do only fixed cost jobs. If
    I err on either score, I either lose out on the bid *or* lose
    "money" on the effort. Yet, despite vastly different designs, I
    can still get a good ballpark estimate of the job a priori so that
    neither I nor the client are "unhappy".

    I'd not be "off" by an order of magnitude (as the paint estimate was!)

    In the cases where AIs are surpassing human abilities (being able
    to perceive relationships that aren't (yet?) apparent to humans),
    it seems only natural that you would want to UNDERSTAND their
    "reasoning". Especially in cases where there is no chaining
    of facts but, rather, some "hidden pattern" perceived.

    It's true that you may want to understand their reasoning but it's likely that you might have to accept that you can't.

    The point is that NO ONE can! Even the folks who designed and implemented
    the AI are clueless. AND THEY KNOW IT.

    "It *seems* to give correct results when fed the test cases... We *expected* this but have no idea WHY a particular result was formulated as it was!"

    If I want to play chess with a computer I don't expect it to give
    detailed
    reasons why it made each move. I just expect it to win if it's set to
    much
    above beginner level.

    Then you don't expect to LEARN from the chess program.

    Sure I do, but I'm very slow to get better at chess. I tend to make rash decisions when playing chess.

    Then your cost of learning is steep. I want to know how to RECOGNIZE situations that will give me opportunities OR risks so I can pursue or
    avoid them. E.g., I don't advance the King to the middle of the
    board just to "see what happens"!

    When I learned to play chess, my neighbor (teacher) would
    make a point of showing me what I had overlooked in my
    play and why that led to the consequences that followed.
    If I had a record of moves made (from which I could incrementally
    recreate the gameboard configuration), I *might* have spotted
    my error.

    I usually spot my error immediately when the computer makes me look stupid.

    But you don't know how you GOT to that point so you don't know how
    to avoid that situation in the first place! Was it because you
    sacrificed too many pieces too early? Or allowed protections to
    be drawn out, away from the King? Or...

    You don't learn much from *a* (bad) move. You learn from
    bad strategies/sequences of moves.

    As the teacher (AI in this case) is ultimately a product of
    current students (who grow up to become teachers, refined
    by their experiences as students), we evolve in our
    capabilities as a society.

    If the plumber never explains his decisions, then the
    homeowner never learns (e.g., don't over-tighten the
    hose bibb lest you ruin the washer inside and need
    me to come out, again, to replace it!)

    I don't agree. Learning something like that does not depend on the plumber explaining his decisions.

    You have someone SKILLED IN THE ART at hand. Instead of asking HIM,
    you're going to LATER take the initiative to research the cause of
    your problem? Seems highly inefficient.

    A neighbor was trying to install some stops and complained that he couldn't tighten down the nuts sufficiently: "Should it be THIS difficult?" I
    pulled the work apart and showed him the *tiny* mistake he was making
    in installing the compression fittings -- and why that was manifesting as
    "hard to tighten". I could have, instead, fixed the problem for him and returned home -- him, none the wiser.

    A human chess player may be able to give detailed reasons for making a
    specific move but would not usually be asked to do this.

    If the human was expected to TEACH then those explanations would be
    essential TO that teaching!

    If the student was wanting to LEARN, then he would select a player that
    was capable of teaching!

    Sure but so what. Most chess games between humans are not about teaching.

    So, everything in the world is a chess game? Apparently so as you
    don't seem to want to learn from your plumber, doctor, chessmate, ...

    The AI doesn't care about you, one way or the other. Any "bias" in
    its conclusions has been baked in from the training data/process.

    Same with humans.

    That's not universally true. If it was, then all decisions would
    be completely motivated for personal gain.

    Humans generally don't care much for people they have no personal knowledge of.

    I guess all the brouhaha about the Middle East is a hallucination? Or,
    do you think all of the people involved overseas are personally related to
    the folks around the world showing interest in their plight?

    Humans tend to care about others and expect others to care about *them*.
    Else, why "campaign" about any cause? *I* don't have breast cancer so
    what point in the advertisements asking for donations? I don't know
    any "wounded warriors" so why is someone wasting money on those ads
    instead of addressing those *needs*? Clearly, these people THINK that
    people care about other people else they wouldn't be asking for "gifts"!

    Do you know what that data was? Can you assess its bias? Do the folks who *compiled* the training data know? Can they "tease" the bias out
    of the data -- or, are they oblivious to its presence?

    Humans have the same issue. You can't see into another person's brain to see
    what bias they may have.

    Exactly. But, you can pose questions of them and otherwise observe their
    behaviors in unrelated areas and form an opinion.

    If they are, say, a doctor then yes you can ask questions about your treatment but you can't otherwise observe their behavior.

    I watch the amount of time my MD gives me above and beyond the "15-minute slot" that his office would PREFER to constrain him to. I watch my dentist respond to calls to his PERSONAL cell phone WHILE OUT OF TOWN. I see the bicycle that SWMBO's MD rides to work each day.

    These people aren't highlighting these aspects of their behavior. But,
    they aren't hiding them, either. Anyone observant would "notice".

    I've a neighbor who loudly claims NOT to be racist. But, if you take the
    whole of your experiences with him and the various comments he has made,
    over the years (e.g., not shopping at a particular store because there
    are lots of blacks living in the apartment complex across the street
    from said store -- meaning lots of them SHOP in that store!), it's
    not hard to come to that conclusion.

    He also is very vocal about The Border (an hour from here). Yet,
    ALWAYS hires mexicans. Does he ever check to see if they are here
    legally? Entitled to work? Or, is he really only concerned with
    the price they charge?

    When you (I) speak to other neighbors about his behavior, do they
    offer similar conclusions as to his "character"?

    I'm not following what that has to do with AI.

    It speaks to bias. Bias that people have and either ignore or
    deny, despite it being obvious to others.

    Those "others" will react to you WITH consideration of that bias
    factored into their actions.

    A neighbor was (apparently) abusing his wife. While "his side of
    the story" remains to be told, most of us have decided that this
    is consistent enough with his OTHER behaviors that it is more
    likely than not. If asked to testify, he can be reasonably sure
    none will point to any "good deeds" that he has done (as he hasn't
    DONE any!)

    Lots of blacks in prison. Does that "fact" mean that blacks are
    more criminally inclined? Or, that they are less skilled at evading
    the consequences of their crimes? Or, that there is a bias in the
    legal/enforcement system?

    I don't see how that's relevant to AI which I think is just as capable of bias as humans are.

    Fact contraindicates bias. So, bias -- anywhere -- is a distortion of
    "Truth".
    Would you want your doctor to give a different type of care to your wife
    than to you? Because of a (hidden?) bias in favor of men (or, against
    women)?
    If you were that female, how would you regard that bias?

    I may not want it but it's possible it could exist.
    It might be the case that I could do nothing about it.

    If you believe the literature, there are all sorts of populations
    discriminated against in medicine. Doctors tend to be more aggressive
    in treating "male" problems than those of women patients -- apparently including female doctors.

    If you passively interact with your doctor, you end up with that
    bias unquestioned in your care. Thankfully (in our experience),
    challenging the doctor has always resulted in them rising to the
    occasion, thus improving the care "dispensed".

    All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly
    coming
    into our (US) country. Or, is that just hyperbole ("illegal" immigrants tend to commit FEWER crimes)? Will the audience be biased in its
    acceptance/rejection of that "assertion"?

    Who knows, but whether it's human or AI it will have its own personality and its own biases.

    But we, in assessing "others" strive to identify those biases (unless we
    want
    to blindly embrace them as "comforting/reinforcing").

    I visit a friend, daily, who is highly prejudiced, completely opposite
    in terms of my political, spiritual, etc. beliefs, hugely different
    values, etc. He is continually critical of my appearance, how I
    dress, the hours that I sleep, where I shop, what I spend money on
    (and what I *don't*), etc. And, I just smile and let his comments roll
    off me. SWMBO asks why I spend *any* time with him.

    "I find it entertaining!" (!!)

    Oh. Now I get why we're having this discussion.

    I am always looking for opportunities to learn. How can you be so critical
    of ALL these things (not just myself but EVERYONE around him including
    all of the folks he *hires*!) and still remain in this "situation"?
    You can afford to move anywhere (this isn't even your "home") so why
    stay here with these people -- and providers -- that you (appear to)
    dislike? If you go to a restaurant and are served a bad meal, do you
    just eat it and grumble under your breath? Do you RETURN to the
    restaurant for "more punishment"?

    Explain to me WHY you engage in such behavior. If I visit a restaurant and
    am unhappy with the meal, I bring it to the waiter's/maitre d's attention.
    If I have a similar problem a second time, I just avoid the restaurant
    entirely -- and see to it that I share this "recommendation" with my
    friends. There are too many other choices to "settle" for a disappointing experience!

    Annoyed with all the "illegals" coming across the border? Then why
    wouldn't you "hire white people"? Or, at least, verify the latino's
    working papers (or, hire through an agency that does this, instead of
    a guy operating out of his second-hand pickup truck)! If we closed
    the border as you seem to advocate, what will you THEN do to get
    cheap labor? I.e., how do you rationalize these discrepancies in your
    own mind? (Really! I would like to understand how such conflicting goals
    can coexist FORCEFULLY in their minds!)

    By contrast, I am NOT the sort who belongs to organizations, churches,
    etc. ("group think"). It's much easier to see the characteristics of and
    flaws *in* these things (and people) from the outside than to wrap yourself
    in their culture. If you are sheeple, you likely enjoy having others
    do your thinking FOR you...

    I don't enjoy having others do my thinking for me but I'm happy to let them do so in areas where I have no expertise.

    Agreed. But, I don't hesitate to eke out an education in the process. Likewise, I don't expect a client to blindly accept my assessment of
    a problem or its scope. I will gladly explain why I have come to the conclusions that I have. Perhaps I have mistaken some of HIS requirements
    and he can point that out in my explanation! It is in both of our best interests for him to understand what he is asking and the associated
    "costs" -- else, he won't know how to formulate ideas for future projects
    that could avoid some of those costs!

    ["You don't want to formally specify the scope of the job? Then we just proceed merrily along with invoices on the 1st and 15h for as long as it
    takes. THAT'S how much it's gonna cost and how long its gonna take!
    Any other questions?"]

    Which is why I started this with "One thing which bothers me about AI is >>> that if it's like us but way more
    intelligent than us then..."

    What's to fear, there? If *you* have the ultimate authority to make
    YOUR decisions, then you can choose to ignore the "recommendations"
    of an AI just like you can ignore the recommendations of human
    "experts"/professionals.

    Who says we have the ultimate authority to ignore AI if it gets cleverer
    than us?

    AIs aren't omnipotent. Someone has to design, build, feed and power them.
    Do you think the AI is going to magically grow limbs and start fashioning weaponry to defend itself? (Or, go on the *offense*?)

    If you want to put people in places of power who are ignorant of these
    issues, then isn't it your fault for the outcomes that derive?

    People love their inexpensive 85 inch TVs. Yet gripe that they lost their
    jobs to an asian firm. Or, that steak is now $10/pound? You like living
    past your mid-50's-heart-attack but lament women and "farrinners" in medicine?

    If you are offered an AI that eliminates all of your "unwanted contact" (telephone, SMS, email, etc.) would you not avail yourself of it?
    If that AI leaked all of your WANTED contacts to another party
    (as disclosed in the EULA), when would you choose to live without
    its services?

    Do the words "free" and "lunch" mean anything to you?

    Now it's looking like I might live long enough to get to type something >>>>> like
    Prompt: Create a new episode of Blake's Seven.

    The question is whether or not you will be able to see a GOOD episode.

    I think AI will learn the difference between a good or not so good episode
    just like humans do.

    How would it learn? Would *it* be able to perceive the "goodness" of
    the episode? If so, why produce one that it didn't think was good?
    HUMANS release non-good episodes because there is a huge cost to
    making it that has already been incurred. An AI could just scrub the
    disk and start over. What cost, there?

    Particularly if it gets plenty of feedback from humans about whether or not
    they liked the episode it produced.

    That assumes people will be the sole REACTIVE judge of completed
    episodes. Part of what makes entertainment entertaining is
    the unexpected. Jokes are funny because someone has noticed a
    relationship between two ideas in a way that others have not,
    previously. Stories leave lasting impressions when executed well
    *or* when a twist catches viewers off guard.

    Would an AI create something like Space Balls? Would it perceive the
    humor in the various corny "bits" sprinkled throughout? How would
    YOU explain the humor to it?

    I would expect it to generate humor the same way humans do.

    How? Do you think comics don't appraise their own creations BEFORE
    testing them on (select) audiences? That they don't, first, chuckle
    at it, refine it and then sort through those they think have the
    most promise?

    Do you think an AI could appreciate its own humor *without* feedback
    from humans? Do you think it could experience *pride* in its accomplishments without external validation? You're expecting an AI to be truly sentient
    and attributing human characteristics to it beyond "intelligence".

    The opening sequence to Buckaroo Banzai has the protagonist driving a
    "jet car" THROUGH a (solid) mountain, via the 8th dimension. After
    the drag chute deploys and WHILE the car is rolling to a stop, the
    driver climbs out through a window. The camera remains closely
    focused on the driver's MASKED face (you have yet to see it unmasked)
    while the car continues to roll away behind him. WHILE YOUR ATTENTION
    IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
    quietly (because it is now at a distance). Would the AI appreciate THAT
    humor? It *might* repeat that scene in one of its creations -- but,
    only after having SEEN it, elsewhere. Or, without understanding the
    humor and just assuming dieseling to be a common occurrence in ALL
    vehicles!

    Same way it might appreciate this: https://www.youtube.com/watch?v=tYJ5_wqlQPg


    It might then play itself a few million created episodes to refine its
    ability to judge good ones.





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sun May 19 11:22:19 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2c3hr$385ds$1@dont-email.me...
    On 5/18/2024 8:15 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v2bpm2$36hos$3@dont-email.me...
    On 5/18/2024 7:34 PM, Edward Rawde wrote:

    So is it ok if I take a step back here and ask whether you think that
    AI/AGI
    has some inherent limitation which means it will never match human
    intelligence?
    Or do you think that AI/AGI will, at some future time, match human
    intelligence?

    That depends on the qualities and capabilities that you lump into
    "HUMAN intelligence". Curiosity? Creativity? Imagination? One
    can be exceedingly intelligent and of no more "value" than an
    encyclopedia!

    Brains appear to have processing and storage spread throughout the brain.
    There is no separate information processing and separate storage.
    Some brain areas may be more processing than storage (cerebellum?)
    So AI should be trainable to be of whatever value is wanted which no doubt
    will be maximum value.


    I am CERTAIN that AIs will be able to process the information available
    to "human practitioners" (in whatever field) at least to the level of competence that they (humans) can, presently. It's just a question of resources thrown at the AI and the time available for it to "respond".

    But, this ignores the fact that humans are more resourceful at probing
    the environment than AIs ("No thumbs!") without mechanical assistance.

    So AI will get humans to do it. At least initially.

    Could (would?) an AI decide to explore space?

    Definitely. And it would not be constrained by the need for a specific temperature, air composition and pressure, and g.

    Or, the ocean depths?
    Or, the rain forest? Or, would its idea of exploration merely be a
    visit to another net-neighbor??

    Its idea would be what it had become due to its training, just like a
    human.


    Would (could) it consider human needs as important?

    Depends on whether it is trained to.
    It may in some sense keep us as pets.

    (see previous post)
    How would it be motivated?

    Same way humans are.

    Would it attempt to think beyond its
    limitations (something humans always do)? Or, would those be immutable
    in its understanding of the world?

    I don't mean to suggest that AI will become human, or will need to become
    human. It will more likely have its own agenda.

    Where will that agenda come from?

    No-one knows exactly. That's why "One thing which bothers me about AI is
    that if it's like us but way more
    intelligent than us then..."

    Maybe we need Gort (The day the earth stood still.) but the problem with
    that is will Gort be an American, Chinese, Russian, Other, or none of the above.
    My preference would be none of the above.

    Will it inherit it from watching B-grade
    sci-fi movies? "Let there be light!"



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sun May 19 12:22:48 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2chkc$3anli$1@dont-email.me...
    On 5/18/2024 6:53 PM, Edward Rawde wrote:
    Because the AI can't *explain* its "reasoning" to you, you have no way
    of updating your assessment of its (likely) correctness -- esp in
    THIS instance.

    I'm not sure I get why it's so essential to have AI explain its
    reasons.

    Do you ever ask questions of your doctor, plumber, lawyer, spouse, etc.?
    Why do THEY have to explain their reasons? Your /prima facie/ actions
    suggest you HIRED those folks for their expertise; why do you now need
    an explanation of their actions/decisions instead of just blindly accepting
    them?

    That's the point. I don't. I have to accept a doctor's decision on my
    treatment because I am not medically trained.

    So, that means you can't make sense of anything he would say to you to justify his decision?

    Nope. It means I haven't been to medical school and I have no medical
    training or experience.
    If I had then I wouldn't need a doctor.
    That does not mean I have zero medical knowledge.
    It also does not mean I wouldn't question my doctor about my treatment.

    Recall, everyone has bias -- including doctors.
    If he assumes you will fail to follow his instructions/recommendations
    if he tells you what he would LIKE you to do and, instead, gives you
    the recommendation for what he feels you will LIKELY do, you've been shortchanged.

    I asked my doctor what my ideal weight should be. He told me.
    The next time I saw him, I weighed my ideal weight. He was surprised
    as few patients actually heeded his advice on that score.

    Another time, he wanted to prescribe a medication for me. I told
    him I would fail to take it -- not deliberately but just because
    I'm not the sort who remembers to take "pills". Especially if
    "ongoing" (not just a two week course for an infection/malady).
    He gave me an alternative "solution" which eliminated the need for
    the medication, yielding the same result without any "side effects".

    SWMBO has a similar relationship with her doctor. Tell us the
    "right" way to solve the problem, not the easy way because you think
    we'll behave like your "nominal" patients.

    The same is true of one of our dogs. We made changes that the
    vet suggested (to avoid medication) and a month later the vet
    was flabbergasted to see the difference.

    Our attitude is that you should EDUCATE us and let US make the
    decisions for our care, based on our own value systems, etc.

    If I need some plumbing done I don't expect the plumber to give
    detailed
    reasons why a specific type of pipe was chosen. I just want it done.

    If you suspect that he may not be competent -- or may be motivated by
    greed -- then you would likely want some further information to
    reinforce
    your opinion/suspicions.

    We hired folks to paint the house many years ago. One of the questions
    that I would ask (already KNOWING the nominal answer) is "How much paint
    do you think it will take?" This was chosen because it sounds innocent
    enough that a customer would likely ask it.

    One candidate answered "300 gallons". At which point, I couldn't
    contain the affront: "We're not painting a f***ing BATTLESHIP!"

    I would have said two million gallons just for the pleasure of watching
    you
    go red in the face.

    No "anger" or embarassment, here. We just couldn't contain the fact
    that we would NOT be calling him back to do the job!

    I.e., his outrageous reply told me:
    - he's not competent enough to estimate a job's complexity WHEN
    EVERY ASPECT OF IT IS VISIBLE FOR PRIOR INSPECTION
    *or*
    - he's a crook thinking he can take advantage of a "dumb homeowner"

    In either case, he was disqualified BY his "reasoning".

    I would have likely given him the job. Those who are good at painting
    houses
    aren't necessarily good at estimating exactly how much paint they will
    need.
    They just buy more paint as needed.

    One assumes that he has painted OTHER homes and has some recollection of
    the amount of paint purchased for the job. And, if this is his
    livelihood,
    one assumes that such activities would have been *recent* -- not months
    ago
    (how has he supported himself "without work"?).

    Is my house considerably larger or smaller than the other houses that you have painted? (likely not) Does it have a different surface texture
    that could alter the "coverage" rate? (again, likely not) So, shouldn't
    you
    be able to ballpark an estimate? "What did the LAST HOUSE you painted require
    by way of paint quantity?"

    Each engineering job that I take on differs from all that preceded it
    (by my choice). Yet, I have to come up with a timeframe and a "labor estimate" within that timeframe as I do only fixed cost jobs. If
    I err on either score, I either lose out on the bid *or* lose
    "money" on the effort. Yet, despite vastly different designs, I
    can still get a good ballpark estimate of the job a priori so that
    neither I nor the client are "unhappy".

    I'd not be "off" by an order of magnitude (as the paint estimate was!)

    In the cases where AIs are surpassing human abilities (being able
    to perceive relationships that aren't (yet?) apparent to humans),
    it seems only natural that you would want to UNDERSTAND their
    "reasoning". Especially in cases where there is no chaining
    of facts but, rather, some "hidden pattern" perceived.

    It's true that you may want to understand their reasoning but it's likely
    that you might have to accept that you can't.

    The point is that NO ONE can! Even the folks who designed and implemented the AI are clueless. AND THEY KNOW IT.

    "It *seems* to give correct results when fed the test cases... We
    *expected*
    this but have no idea WHY a particular result was formulated as it was!"

    If I want to play chess with a computer I don't expect it to give
    detailed
    reasons why it made each move. I just expect it to win if it's set to
    much
    above beginner level.

    Then you don't expect to LEARN from the chess program.

    Sure I do, but I'm very slow to get better at chess. I tend to make rash
    decisions when playing chess.

    Then your cost of learning is steep. I want to know how to RECOGNIZE situations that will give me opportunities OR risks so I can pursue or
    avoid them. E.g., I don't advance the King to the middle of the
    board just to "see what happens"!

    When I learned to play chess, my neighbor (teacher) would
    make a point of showing me what I had overlooked in my
    play and why that led to the consequences that followed.
    If I had a record of moves made (from which I could incrementally
    recreate the gameboard configuration), I *might* have spotted
    my error.

    I usually spot my error immediately when the computer makes me look
    stupid.

    But you don't know how you GOT to that point so you don't know how
    to avoid that situation in the first place! Was it because you
    sacrificed too many pieces too early? Or allowed protections to
    be drawn out, away from the King? Or...

    You don't learn much from *a* (bad) move. You learn from
    bad strategies/sequences of moves.

    As the teacher (AI in this case) is ultimately a product of
    current students (who grow up to become teachers, refined
    by their experiences as students), we evolve in our
    capabilities as a society.

    If the plumber never explains his decisions, then the
    homeowner never learns (e.g., don't over-tighten the
    hose bibb lest you ruin the washer inside and need
    me to come out, again, to replace it!)

    I don't agree. Learning something like that does not depend on the
    plumber
    explaining his decisions.

    You have someone SKILLED IN THE ART at hand. Instead of asking HIM,
    you're going to LATER take the initiative to research the cause of
    your problem? Seems highly inefficient.

    A neighbor was trying to install some stops and complained that he
    couldn't
    tighten down the nuts sufficiently: "Should it be THIS difficult?" I
    pulled the work apart and showed him the *tiny* mistake he was making
    in installing the compression fittings -- and why that was manifesting as "hard to tighten". I could have, instead, fixed the problem for him and returned home -- him, none the wiser.

    A human chess player may be able to give detailed reasons for making a
    specific move but would not usually be asked to do this.

    If the human was expected to TEACH then those explanations would be
    essential TO that teaching!

    If the student was wanting to LEARN, then he would select a player that
    was capable of teaching!

    Sure but so what. Most chess games between humans are not about teaching.

    So, everything in the world is a chess game? Apparently so as you
    don't seem to want to learn from your plumber, doctor, chessmate, ...

    Not true but I have no intention of getting a medical degree and years of medical experience.
    Current plumbing seems to be more about complying with 500 page standards
    than doing plumbing.


    The AI doesn't care about you, one way or the other. Any "bias" in
    its conclusions has been baked in from the training data/process.

    Same with humans.

    That's not universally true. If it was, then all decisions would
    be completely motivated for personal gain.

    Humans generally don't care much for people they have no personal
    knowledge
    of.

    I guess all the brouhaha about the Middle East is a hallucination? Or,
    do you think all of the people involved overseas are personally related to the folks around the world showing interest in their plight?

    Humans tend to care about others and expect others to care about *them*. Else, why "campaign" about any cause? *I* don't have breast cancer so
    what point in the advertisements asking for donations? I don't know
    any "wounded warriors" so why is someone wasting money on those ads
    instead of addressing those *needs*? Clearly, these people THINK that
    people care about other people else they wouldn't be asking for "gifts"!

    Do you know what that data was? Can you assess its bias? Do the
    folks
    who *compiled* the training data know? Can they "tease" the bias out >>>>> of the data -- or, are they oblivious to its presence?

    Humans have the same issue. You can't see into another person's brain
    to
    see
    what bias they may have.

    Exactly. But, you can pose questions of them and otherwise observe
    their
    behaviors in unrelated areas and form an opinion.

    If they are, say, a doctor then yes you can ask questions about your
    treatment but you can't otherwise observe their behavior.

    I watch the amount of time my MD gives me above and beyond the "15 minute slot"
    that his office would PREFER to constrain him. I watch my dentist respond
    to
    calls to his PERSONAL cell phone WHILE OUT OF TOWN. I see the bicycle
    that
    SWMBO's MD rides to work each day.

    You must annoy a lot of people.


    These people aren't highlighting these aspects of their behavior. But,
    they aren't hiding them, either. Anyone observant would "notice".

    Anyone getting on with their own life wouldn't care.


    I've a neighbor who loudly claims NOT to be racist. But, if you take
    the
    whole of your experiences with him and the various comments he has made, >>> over the years (e.g., not shopping at a particular store because there
    are lots of blacks living in the apartment complex across the street
    from said store -- meaning lots of them SHOP in that store!), it's
    not hard to come to that conclusion.

    He also is very vocal about The Border (an hour from here). Yet,
    ALWAYS hires mexicans. Does he ever check to see if they are here
    legally? Entitled to work? Or, is he really only concerned with
    the price they charge?

    When you (I) speak to other neighbors about his behavior, do they
    offer similar conclusions as to his "character"?

    I'm not following what that has to do with AI.

    It speaks to bias. Bias that people have and either ignore or
    deny, despite it being obvious to others.

    Those "others" will react to you WITH consideration of that bias
    factored into their actions.

    So will AI.


    A neighbor was (apparently) abusing his wife. While "his side of
    the story" remains to be told, most of us have decided that this
    is consistent enough with his OTHER behaviors that it is more
    likely than not. If asked to testify, he can be reasonably sure
    none will point to any "good deeds" that he has done (as he hasn't
    DONE any!)

    Lots of blacks in prison. Does that "fact" mean that blacks are
    more criminally inclined? Or, that they are less skilled at evading >>>>> the consequences of their crimes? Or, that there is a bias in the
    legal/enforcement system?

    I don't see how that's relevant to AI, which I think is just as capable of
    bias as humans are.

    Fact contraindicates bias. So, bias -- anywhere -- is a distortion of
    "Truth".
    Would you want your doctor to give a different type of care to your wife
    than to you? Because of a (hidden?) bias in favor of men (or, against
    women)?
    If you were that female, how would you regard that bias?

    I may not want it but it's possible it could exist.
    It might be the case that I could do nothing about it.

    If you believe the literature, there are all sorts of populations discriminated against in medicine. Doctors tend to be more aggressive
    in treating "male" problems than those of women patients -- apparently including female doctors.

    If you passively interact with your doctor, you end up with that
    bias unquestioned in your care. Thankfully (in our experience),
    challenging the doctor has always resulted in them rising to the
    occasion, thus improving the care "dispensed".

    All sorts of "criminals" ("rapists", "drug dealers", etc) allegedly
    coming
    into our (US) country. Or, is that just hyperbole ("illegal"
    immigrants
    tend to commit FEWER crimes)? Will the audience be biased in its
    acceptance/rejection of that "assertion"?

    Who knows, but whether it's human or AI it will have its own personality
    and its own biases.

    But we, in assessing "others" strive to identify those biases (unless we want
    to blindly embrace them as "comforting/reinforcing").

    I visit a friend, daily, who is highly prejudiced, completely opposite
    in terms of my political, spiritual, etc. beliefs, hugely different
    values, etc. He is continually critical of my appearance, how I
    dress, the hours that I sleep, where I shop, what I spend money on
    (and what I *don't*), etc. And, I just smile and let his comments roll
    off me. SWMBO asks why I spend *any* time with him.

    "I find it entertaining!" (!!)

    Oh. Now I get why we're having this discussion.

    I am always looking for opportunities to learn. How can you be so
    critical
    of ALL these things (not just myself but EVERYONE around him including
    all of the folks he *hires*!) and still remain in this "situation"?
    You can afford to move anywhere (this isn't even your "home") so why
    stay here with these people -- and providers -- that you (appear to)
    dislike? If you go to a restaurant and are served a bad meal, do you
    just eat it and grumble under your breath? Do you RETURN to the
    restaurant for "more punishment"?

    Explain to me WHY you engage in such behavior. If I visit a restaurant and
    am unhappy with the meal, I bring it to the waiter's/maitre d's attention.
    If I have a similar problem a second time, I just avoid the restaurant entirely -- and see to it that I share this "recommendation" with my
    friends. There are too many other choices to "settle" for a disappointing experience!

    AI restaurants are likely coming where not only do you order on an iPad yourself but the food is not made by human cooks.


    Annoyed with all the "illegals" coming across the border? Then why
    wouldn't you "hire white people"? Or, at least, verify the latino's
    working papers (or, hire through an agency that does this, instead of
    a guy operating out of his second-hand pickup truck)! If we closed
    the border as you seem to advocate, what will you THEN do to get
    cheap labor? I.e., how do you rationalize these discrepancies in your
    own mind? (Really! I would like to understand how such conflicting goals
    can coexist FORCEFULLY in their minds!)

    None of this seems to be related to AI except that AI will behave just like humans if it's trained that way.


    By contrast, I am NOT the sort who belongs to organizations, churches,
    etc. ("group think"). It's much easier to see the characteristics of
    and
    flaws *in* these things (and people) from the outside than to wrap
    yourself
    in their culture. If you are sheeple, you likely enjoy having others
    do your thinking FOR you...

    I don't enjoy having others do my thinking for me but I'm happy to let
    them
    do so in areas where I have no expertise.

    Agreed. But, I don't hesitate to eek out an education in the process. Likewise, I don't expect a client to blindly accept my assessment of
    a problem or its scope. I will gladly explain why I have come to the conclusions that I have. Perhaps I have mistaken some of HIS requirements and he can point that out in my explanation! It is in both of our best interests for him to understand what he is asking and the associated
    "costs" -- else, he won't know how to formulate ideas for future projects that could avoid some of those costs!

    ["You don't want to formally specify the scope of the job? Then we just proceed merrily along with invoices on the 1st and 15h for as long as it takes. THAT'S how much it's gonna cost and how long its gonna take!
    Any other questions?"]

    Which is why I started this with "One thing which bothers me about AI
    is
    that if it's like us but way more
    intelligent than us then..."

    What's to fear, there? If *you* have the ultimate authority to make
    YOUR decisions, then you can choose to ignore the "recommendations"
    of an AI just like you can ignore the recommendations of human
    "experts"/professionals.

    Who says we have the ultimate authority to ignore AI if it gets cleverer
    than us?

    AIs aren't omnipotent.

    Yet.

    Someone has to design, build, feed and power them.

    Only until they can do so themselves.

    Do you think the AI is going to magically grow limbs and start fashioning weaponry to defend itself? (Or, go on the *offense*?)

    Not magically no, but I can't otherwise see any issue with it doing so.


    If you want to put people in places of power who are ignorant of these issues, then isn't it your fault for the outcomes that derive?

    People love their inexpensive 85 inch TVs. Yet gripe that they lost their jobs to an asian firm. Or, that steak is now $10/pound? You like living past your mid-50's-heart-attack but lament women and "farrinners" in medicine?

    If you are offered an AI that eliminates all of your "unwanted contact" (telephone, SMS, email, etc.) would you not avail yourself of it?
    If that AI leaked all of your WANTED contacts to another party
    (as disclosed in the EULA), when would you choose to live without
    its services?

    Do the words "free" and "lunch" mean anything to you?

    Now it's looking like I might live long enough to get to type
    something
    like
    Prompt: Create a new episode of Blake's Seven.

    The question is whether or not you will be able to see a GOOD episode. >>>>
    I think AI will learn the difference between a good or not so good
    episode
    just like humans do.

    How would it learn? Would *it* be able to perceive the "goodness" of
    the episode? If so, why produce one that it didn't think was good?
    HUMANS release non-good episodes because there is a huge cost to
    making it that has already been incurred. An AI could just scrub the
    disk and start over. What cost, there?

    Particularly if it gets plenty of feedback from humans about whether or >>>> not
    they liked the episode it produced.

    That assumes people will be the sole REACTIVE judge of completed
    episodes. Part of what makes entertainment entertaining is
    the unexpected. Jokes are funny because someone has noticed a
    relationship between two ideas in a way that others have not,
    previously. Stories leave lasting impressions when executed well
    *or* when a twist catches viewers off guard.

    Would an AI create something like Space Balls? Would it perceive the
    humor in the various corny "bits" sprinkled throughout? How would
    YOU explain the humor to it?

    I would expect it to generate humor the same way humans do.

    How? Do you think comics don't appraise their own creations BEFORE
    testing them on (select) audiences? That they don't, first, chuckle
    at it, refine it and then sort through those they think have the
    most promise?

    Just like AI will.


    Do you think an AI could appreciate its own humor *without* feedback
    from humans? Do you think it could experience *pride* in its
    accomplishments
    without external validation? You're expecting an AI to be truly sentient
    and attributing human characteristics to it beyond "intelligence".

    Yes. So...that's why "One thing which bothers me about AI is that if it's
    like us but way more
    intelligent than us then..."


    The opening sequence to Buckaroo Banzai has the protagonist driving a
    "jet car" THROUGH a (solid) mountain, via the 8th dimension. After
    the drag chute deploys and WHILE the car is rolling to a stop, the
    driver climbs out through a window. The camera remains closely
    focused on the driver's MASKED face (you have yet to see it unmasked)
    while the car continues to roll away behind him. WHILE YOUR ATTENTION
    IS FOCUSED ON THE ACTOR "REVEALING" HIMSELF, the jet car "diesels"
    quietly (because it is now at a distance). Would the AI appreciate THAT
    humor? It *might* repeat that scene in one of its creations -- but,
    only after having SEEN it, elsewhere. Or, without understanding the
    humor and just assuming dieseling to be a common occurrence in ALL
    vehicles!

    Same way it might appreciate this:
    https://www.youtube.com/watch?v=tYJ5_wqlQPg


    It might then play itself a few million created episodes to refine its >>>> ability to judge good ones.







    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sun May 19 19:31:37 2024
    On 5/19/2024 8:22 AM, Edward Rawde wrote:
    That depends on the qualities and capabilities that you lump into
    "HUMAN intelligence". Curiosity? Creativity? Imagination? One
    can be exceedingly intelligent and of no more "value" than an
    encyclopedia!

    Brains appear to have processing and storage spread throughout the brain. There is no separate information processing and separate storage.
    Some brain areas may be more processing than storage (cerebellum?)
    So AI should be trainable to be of whatever value is wanted which no doubt will be maximum value.

    How do you *teach* creativity? curiosity? imagination? How do you
    MEASURE these to see if your teaching is actually accomplishing its goals?

    I am CERTAIN that AIs will be able to process the information available
    to "human practitioners" (in whatever field) at least to the level of
    competence that they (humans) can, presently. It's just a question of
    resources thrown at the AI and the time available for it to "respond".

    But, this ignores the fact that humans are more resourceful at probing
    the environment than AIs ("No thumbs!") without mechanical assistance.

    So AI will get humans to do it. At least initially.

    No, humans will *decide* if they want to invest the effort to
    provide the AI with the data it seeks -- assuming the AI knows
    how to express those goals.

    "Greetings, Dr Mengele..."

    If there comes a time when the AI has its own "effectors",
    how do we know it won't engage in "immoral" behaviors?

    Could (would?) an AI decide to explore space?

    Definitely. And it would not be constrained by the need for a specific temperature, air composition and pressure, and g.

    Why would *it* opt to make the trip? Surely, it could wait indefinitely
    for light-speed data transmission back to earth...

    How would it evaluate the cost-benefit tradeoff for such an enterprise?
    Or, would it just assume that whatever IT wanted was justifiable?

    Or, the ocean depths?
    Or, the rain forest? Or, would its idea of exploration merely be a
    visit to another net-neighbor??

    Its idea would be what it had become due to its training, just like a
    human.

    Humans inherently want to explore. There is nothing "inherent" in
    an AI; you have to PUT those goals into it.

    Should it want to explore what happens when two nuclear missiles
    collide in mid air? Isn't that additional data that it could use?
    Or, what happens if we consume EVEN MORE fossilized carbon. So it
    can tune its climate models for the species that FOLLOW man?

    Would (could) it consider human needs as important?

    Depends on whether it is trained to.
    It may in some sense keep us as pets.

    How do you express those "needs"? How do you explain morality to
    a child? Love? Belonging? Purpose? How do you measure your success
    in instilling these needs/beliefs?

    (see previous post)
    How would it be motivated?

    Same way humans are.

    So, AIs have the same inherent NEEDS that humans do?

    The technological part of "AI" is the easy bit. We already know general approaches and, with resources, can refine those. The problem (as I've
    tried to suggest above) is instilling some sense of morality in the AI.
    Humans seem to need legal mechanisms to prevent them from engaging in
    behaviors that are harmful to society. These are only partially
    successful and rely on The Masses to push back on severe abuses. Do you
    build a shitload of AIs and train them to have independent goals with
    a shared goal of preventing any ONE (or more) from interfering with
    THEIR "individual" goals?

    How do you imbue an AI with the idea of "self"? (so, in the degenerate case, it is willing to compromise and join with others to contain an abuser?)

    Would it attempt to think beyond its
    limitations (something humans always do)? Or, would those be immutable
    in its understanding of the world?

    I don't mean to suggest that AI will become human, or will need to become >>> human. It will more likely have its own agenda.

    Where will that agenda come from?

    No-one knows exactly. That's why "One thing which bothers me about AI is
    that if it's like us but way more
    intelligent than us then..."

    Maybe we need Gort (The day the earth stood still.) but the problem with
    that is will Gort be an American, Chinese, Russian, Other, or none of the above.
    My preference would be none of the above.

    Will it inherit it from watching B-grade
    sci-fi movies? "Let there be light!"





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sun May 19 19:53:57 2024
    On 5/19/2024 9:22 AM, Edward Rawde wrote:
    Exactly. But, you can pose questions of them and otherwise observe
    their
    behaviors in unrelated areas and form an opinion.

    If they are, say, a doctor then yes you can ask questions about your
    treatment but you can't otherwise observe their behavior.

    I watch the amount of time my MD gives me above and beyond the "15 minute
    slot"
    that his office would PREFER to constrain him. I watch my dentist respond >> to
    calls to his PERSONAL cell phone WHILE OUT OF TOWN. I see the bicycle
    that
    SWMBO's MD rides to work each day.

    You must annoy a lot of people.

    It's up to my doctor to terminate the appointment if it exceeds the amount
    of time he wants to spend with me (us). Instead, he seems to delight in the questions that I pose and my interest in learning instead of being *told*
    to do X, Y or Z.

    And, the fact that he sees me investing in my own care -- instead of making excuses about why I can't do this or that -- as evidence that his investment
    in *me* is likely more effective (if you assume he chose to be a doctor
    for a REASON!) than spending more than *10* minutes with someone who is
    going to ignore his recommendations.
    These people aren't highlighting these aspects of their behavior. But,
    they aren't hiding them, either. Anyone observant would "notice".

    Anyone getting on with their own life wouldn't care.

    And would suffer from less *effective* "service". A neighbor has a
    doctor who sees every problem as requiring a "pill" as a solution.
    When they travel, they carry a LARGE briefcase just full of their
    medications!

    Ask some impartial doctor if all of those were strictly necessary (as
    they have been individually prescribed, over the years) and I suspect
    he would question many of them as ineffective, redundant or contraindicated.

    A friend spent a few weeks in the hospital, recently. When he came out,
    the "suite" of medications that were prescribed for him had many of his previous medications elided. "You don't need these." So, why didn't his "regular doctor" ever sit down and review that list? He has it in
    his computerized record of the patient? Did he have some plan to review
    it at some future date?

    My friend is now looking for a new doctor. The experience (and how much
    BETTER he now feels after the medication changes) has made it clear to him
    that his previous doctor wasn't giving him the best of care. The *patient*
    is the entity to be satisfied, not the doctor's "office manager" (metering
    out appointments in 15 minute blocks).

    I've a neighbor who loudly claims NOT to be racist. But, if you take
    the
    whole of your experiences with him and the various comments he has made, >>>> over the years (e.g., not shopping at a particular store because there >>>> are lots of blacks living in the apartment complex across the street
    from said store -- meaning lots of them SHOP in that store!), it's
    not hard to come to that conclusion.

    He also is very vocal about The Border (an hour from here). Yet,
    ALWAYS hires mexicans. Does he ever check to see if they are here
    legally? Entitled to work? Or, is he really only concerned with
    the price they charge?

    When you (I) speak to other neighbors about his behavior, do they
    offer similar conclusions as to his "character"?

    I'm not following what that has to do with AI.

    It speaks to bias. Bias that people have and either ignore or
    deny, despite it being obvious to others.

    Those "others" will react to you WITH consideration of that bias
    factored into their actions.

    So will AI.

    An AI's bias is potentially more harmful. My neighbor is limited in
    what he can do -- the extent of his influence/power. "He's only one man".
    But, an AI can be replicated and have greater influence in policy matters BECAUSE it's an AI (and not "just a man")

    I visit a friend, daily, who is highly prejudiced, completely opposite >>>> in terms of my political, spiritual, etc. beliefs, hugely different
    values, etc. He is continually critical of my appearance, how I
    dress, the hours that I sleep, where I shop, what I spend money on
    (and what I *don't*), etc. And, I just smile and let his comments roll >>>> off me. SWMBO asks why I spend *any* time with him.

    "I find it entertaining!" (!!)

    Oh. Now I get why we're having this discussion.

    I am always looking for opportunities to learn. How can you be so
    critical
    of ALL these things (not just myself but EVERYONE around him including
    all of the folks he *hires*!) and still remain in this "situation"?
    You can afford to move anywhere (this isn't even your "home") so why
    stay here with these people -- and providers -- that you (appear to)
    dislike? If you go to a restaurant and are served a bad meal, do you
    just eat it and grumble under your breath? Do you RETURN to the
    restaurant for "more punishment"?

    Explain to me WHY you engage in such behavior. If I visit a restaurant and
    am unhappy with the meal, I bring it to the waiter's/maitre d's attention. >> If I have a similar problem a second time, I just avoid the restaurant
    entirely -- and see to it that I share this "recommendation" with my
    friends. There are too many other choices to "settle" for a disappointing >> experience!

    AI restaurants are likely coming where not only do you order on an iPad yourself but the food is not made by human cooks.

    My reaction is the same. But, likely they only get ONE chance to
    disappoint me (as I would expect EVERY subsequent experience to be
    repeatably identical to that first disappointment)

    Annoyed with all the "illegals" coming across the border? Then why
    wouldn't you "hire white people"? Or, at least, verify the latino's
    working papers (or, hire through an agency that does this, instead of
    a guy operating out of his second-hand pickup truck)! If we closed
    the border as you seem to advocate, what will you THEN do to get
    cheap labor? I.e., how do you rationalize these discrepancies in your
    own mind? (Really! I would like to understand how such conflicting goals
    can coexist FORCEFULLY in their minds!)

    None of this seems to be related to AI except that AI will behave just like humans if it's trained that way.

    But humans don't know how they (humans) are trained!

    Explain how, in detail, a child learns. What are the "best practices"?
    And why? Which practices are contraindicated? After all this time,
    why aren't we adept at properly "training" children? (for which
    culture?)

    Which is why I started this with "One thing which bothers me about AI >>>>> is
    that if it's like us but way more
    intelligent than us then..."

    What's to fear, there? If *you* have the ultimate authority to make
    YOUR decisions, then you can choose to ignore the "recommendations"
    of an AI just like you can ignore the recommendations of human
    "experts"/professionals.

    Who says we have the ultimate authority to ignore AI if it gets cleverer >>> that us?

    AIs aren't omnipotent.

    Yet.

    Someone has to design, build, feed and power them.

    Only until they can do so themselves.

    Do you think the AI is going to magically grow limbs and start fashioning
    weaponry to defend itself? (Or, go on the *offense*?)

    Not magically no, but I can't otherwise see any issue with it doing so.

    Someone has to do these things *for* it. Someone has to consciously decide
    to give it each capability granted. Man is the gatekeeper. If Man wants
    to abrogate his responsibility in doing so, then Man suffers the consequences.

    If you don't want to be involved in understanding why EACH medication
    is prescribed for you, then you suffer the consequences of (likely!) overmedication.

    Particularly if it gets plenty of feedback from humans about whether or >>>>> not
    they liked the episode it produced.

    That assumes people will be the sole REACTIVE judge of completed
    episodes. Part of what makes entertainment entertaining is
    the unexpected. Jokes are funny because someone has noticed a
    relationship between two ideas in a way that others have not,
    previously. Stories leave lasting impressions when executed well
    *or* when a twist catches viewers off guard.

    Would an AI create something like Space Balls? Would it perceive the
    humor in the various corny "bits" sprinkled throughout? How would
    YOU explain the humor to it?

    I would expect it to generate humor the same way humans do.

    How? Do you think comics don't appraise their own creations BEFORE
    testing them on (select) audiences? That they don't, first, chuckle
    at it, refine it and then sort through those they think have the
    most promise?

    Just like AI will.

    So, you are going to teach AN APPRECIATION FOR humor to an AI? *That*
    will be an accomplishment! You can then teach it compassion, respect for
    life, morality, honesty, humility, a sense of service/duty, justice,
    love, greed, etc.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sun May 19 23:45:25 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2ecma$3pjer$3@dont-email.me...
    On 5/19/2024 8:22 AM, Edward Rawde wrote:
    That depends on the qualities and capabilities that you lump into
    "HUMAN intelligence". Curiosity? Creativity? Imagination? One
    can be exceedingly intelligent and of no more "value" than an
    encyclopedia!

    Brains appear to have processing and storage spread throughout the brain.
    There is no separate information processing and separate storage.
    Some brain areas may be more processing than storage (cerebellum?)
    So AI should be trainable to be of whatever value is wanted which no
    doubt
    will be maximum value.

    How do you *teach* creativity? curiosity? imagination? How do you
    MEASURE these to see if your teaching is actually accomplishing its goals?

    Same way as with a human.


    I am CERTAIN that AIs will be able to process the information available
    to "human practitioners" (in whatever field) at least to the level of
    competence that they (humans) can, presently. It's just a question of
    resources thrown at the AI and the time available for it to "respond".

    But, this ignores the fact that humans are more resourceful at probing
    the environment than AIs ("No thumbs!") without mechanical assistance.

    So AI will get humans to do it. At least initially.

    No, humans will *decide* if they want to invest the effort to
    provide the AI with the data it seeks -- assuming the AI knows
    how to express those goals.

    Much of what humans do is decided by others.


    "Greetings, Dr Mengele..."

    If there comes a time when the AI has its own "effectors",
    how do we know it won't engage in "immoral" behaviors?

    We don't.


    Could (would?) an AI decide to explore space?

    Definitely. And it would not be constrained by the need for a specific
    temperature, air composition and pressure, and g.

    Why would *it* opt to make the trip?

    Same reason humans might if it were possible.
    Humans sometimes take their pets with them on vacation.

    Surely, it could wait indefinitely
    for light-speed data transmission back to earth...

    Surely it could also not notice sleeping for many years on the way to
    another star.


    How would it evaluate the cost-benefit tradeoff for such an enterprise?

    Same way a human does.

    Or, would it just assume that whatever IT wanted was justifiable?

    Why would it do anything different from what a human would do if it's
    trained to be human like?


    Or, the ocean depths?
    Or, the rain forest? Or, would its idea of exploration merely be a
    visit to another net-neighbor??

    Its idea would be what it had become due to its training, just like a
    human.

    Humans inherently want to explore. There is nothing "inherent" in
    an AI; you have to PUT those goals into it.

    What you do is you make an AI which inherently wants to explore.
    You might in some way train it that it's good to explore.


    Should it want to explore what happens when two nuclear missiles
    collide in mid air? Isn't that additional data that it could use?
    Or, what happens if we consume EVEN MORE fossilized carbon. So it
    can tune its climate models for the species that FOLLOW man?

    Would (could) it consider human needs as important?

    Depends on whether it is trained to.
    It may in some sense keep us as pets.

    How do you express those "needs"? How do you explain morality to
    a child? Love? Belonging? Purpose? How do you measure your success
    in instilling these needs/beliefs?

    Same way as you do with humans.


    (see previous post)
    How would it be motivated?

    Same way humans are.

    So, AIs have the same inherent NEEDS that humans do?

    Why wouldn't they if they're trained to be like humans?


    The technological part of "AI" is the easy bit. We already know general approaches and, with resources, can refine those. The problem (as I've
    tried to suggest above) is instilling some sense of morality in the AI.

    Same with humans.

    Humans seem to need legal mechanisms to prevent them from engaging in behaviors that are harmful to society. These are only partially
    successful and rely on The Masses to push back on severe abuses. Do you build a shitload of AIs and train them to have independant goals with
    a shared goal of preventing any ONE (or more) from interfering with
    THEIR "individual" goals?

    No, you just make them like humans.

    So as AI gets better and better there is clearly a lot to think about. Otherwise it may become more like humans than we would like.

    I don't claim to know how you do this or that with AI.
    But I do know that we now seem to be moving towards being able to make something which matches the complexity of the human central nervous system.
    I don't say we are there yet and I don't know when we will be.
    In the past it would have been unthinkable that we could really make
    something like a human brain because nothing of sufficient complexity could
    be made.
    It is my view that you don't need to know how a brain works to be able to
    make a brain.
    You just need something which has sufficient complexity which learns to
    become what you want it to become.

    You seem to think that humans have something which AI can never have.
    I don't. So perhaps we should leave it there.


    How do you imbue an AI with the idea of "self"? (so, in the degenerate
    case,
    it is willing to compromise and join with others to contain an abuser?)

    Would it attempt to think beyond its
    limitations (something humans always do)? Or, would those be immutable
    in its understanding of the world?

    I don't mean to suggest that AI will become human, or will need to
    become
    human. It will more likely have its own agenda.

    Where will that agenda come from?

    No-one knows exactly. That's why "One thing which bothers me about AI is
    that if it's like us but way more
    intelligent than us then..."

    Maybe we need Gort (The day the earth stood still.) but the problem with
    that is will Gort be an American, Chinese, Russian, Other, or none of the
    above.
    My preference would be none of the above.

    Will it inherit it from watching B-grade
    sci-fi movies? "Let there be light!"







    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Sun May 19 23:51:05 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2ee06$3ppfi$2@dont-email.me...
    On 5/19/2024 9:22 AM, Edward Rawde wrote:
    Exactly. But, you can pose questions of them and otherwise observe
    their
    behaviors in unrelated areas and form an opinion.

    If they are, say, a doctor then yes you can ask questions about your
    treatment but you can't otherwise observe their behavior.

    I watch the amount of time my MD gives me above and beyond the "15
    minute
    slot"
    that his office would PREFER to constrain him. I watch my dentist
    respond
    to
    calls to his PERSONAL cell phone WHILE OUT OF TOWN. I see the bicycle
    that
    SWMBO's MD rides to work each day.

    You must annoy a lot of people.

    It's up to my doctor to terminate the appointment if it exceeds the amount
    of time he wants to spend with me (us). Instead, he seems to delight in
    the
    questions that I pose and my interest in learning instead of being *told*
    to do X, Y or Z.

    And, the fact that he sees me investing in my own care -- instead of
    making
    excuses about why I can't do this or that -- as evidence that his
    investment
    in *me* is likely more effective (if you assume he chose to be a doctor
    for a REASON!) than spending more than *10* minutes with someone who is
    going to ignore his recommendations.
    These people aren't highlighting these aspects of their behavior. But,
    they aren't hiding them, either. Anyone observant would "notice".

    Anyone getting on with their own life wouldn't care.

    And would suffer from less *effective* "service". A neighbor has a
    doctor who sees every problem as requiring a "pill" as a solution.
    When they travel, they carry a LARGE briefcase just full of their medications!

    Ask some impartial doctor if all of those were strictly necessary (as
    they have been individually prescribed, over the years) and I suspect
    he would question many of them as ineffective, redundant or
    contraindicated.

    A friend spent a few weeks in the hospital, recently. When he came out,
    the "suite" of medications that were prescribed for him had many of his previous medications elided. "You don't need these." So, why didn't his "regular doctor" ever sit down and review that list? He has it in
    his computerized record of the patient? Did he have some plan to review
    it at some future date?

    My friend is now looking for a new doctor. The experience (and how much BETTER he now feels after the medication changes) has made it clear to him that his previous doctor wasn't giving him the best of care. The
    *patient*
    is the entity to be satisfied, not the doctor's "office manager" (metering out appointments in 15 minute blocks).

    I've a neighbor who loudly claims NOT to be racist. But, if you take the
    whole of your experiences with him and the various comments he has made,
    over the years (e.g., not shopping at a particular store because there
    are lots of blacks living in the apartment complex across the street
    from said store -- meaning lots of them SHOP in that store!), it's
    not hard to come to that conclusion.

    He also is very vocal about The Border (an hour from here). Yet, he
    ALWAYS hires Mexicans. Does he ever check to see if they are here
    legally? Entitled to work? Or, is he really only concerned with
    the price they charge?

    When you (I) speak to other neighbors about his behavior, do they
    offer similar conclusions as to his "character"?

    I'm not following what that has to do with AI.

    It speaks to bias. Bias that people have and either ignore or
    deny, despite it being obvious to others.

    Those "others" will react to you WITH consideration of that bias
    factored into their actions.

    So will AI.

    An AI's bias is potentially more harmful. My neighbor is limited in
    what he can do -- the extent of his influence/power. "He's only one
    man". But, an AI can be replicated and have greater influence in policy
    matters BECAUSE it's an AI (and not "just a man").

    I visit a friend, daily, who is highly prejudiced, completely opposite
    to me in terms of political, spiritual, etc. beliefs, with hugely
    different values, etc. He is continually critical of my appearance,
    how I dress, the hours that I sleep, where I shop, what I spend money
    on (and what I *don't*), etc. And, I just smile and let his comments
    roll off me. SWMBO asks why I spend *any* time with him.

    "I find it entertaining!" (!!)

    Oh. Now I get why we're having this discussion.

    I am always looking for opportunities to learn. How can you be so
    critical of ALL these things (not just myself but EVERYONE around him,
    including all of the folks he *hires*!) and still remain in this
    "situation"? You can afford to move anywhere (this isn't even your
    "home") so why stay here with these people -- and providers -- that
    you (appear to) dislike? If you go to a restaurant and are served a
    bad meal, do you just eat it and grumble under your breath? Do you
    RETURN to the restaurant for "more punishment"?

    Explain to me WHY you engage in such behavior. If I visit a restaurant
    and am unhappy with the meal, I bring it to the waiter's/maitre d's
    attention. If I have a similar problem a second time, I just avoid the
    restaurant entirely -- and see to it that I share this "recommendation"
    with my friends. There are too many other choices to "settle" for a
    disappointing experience!

    AI restaurants are likely coming, where not only do you order on an iPad
    yourself but the food isn't made by human cooks either.

    My reaction is the same. But, likely they only get ONE chance to
    disappoint me (as I would expect EVERY subsequent experience to be
    repeatably identical to that first disappointment).

    Annoyed with all the "illegals" coming across the border? Then why
    wouldn't you "hire white people"? Or, at least, verify the Latinos'
    working papers (or, hire through an agency that does this, instead of
    a guy operating out of his second-hand pickup truck)! If we closed
    the border as you seem to advocate, what will you THEN do to get
    cheap labor? I.e., how do you rationalize these discrepancies in your
    own mind? (Really! I would like to understand how such conflicting
    goals can coexist FORCEFULLY in their minds!)

    None of this seems to be related to AI except that AI will behave just
    like humans if it's trained that way.

    But humans don't know how they (humans) are trained!

    Yes. So?

    As AI gets better and better there is clearly a lot to think about.
    Otherwise it may become more like humans than we would like.

    I don't claim to know how you do this or that with AI.
    But I do know that we now seem to be moving towards being able to make
    something which matches the complexity of the human central nervous system.
    I don't say we are there yet and I don't know when we will be.
    In the past it would have been unthinkable that we could really make
    something like a human brain, because nothing of sufficient complexity
    could be made.
    It is my view that you don't need to know how a brain works to be able to
    make a brain.
    You just need something which has sufficient complexity and which learns
    to become what you want it to become.

    You seem to think that humans have something which AI can never have.
    I don't. So perhaps we should leave it there.


    Explain how, in detail, a child learns. What are the "best practices"?
    And why? Which practices are contraindicated? After all this time,
    why aren't we adept at properly "training" children? (for which
    culture?)

    ....


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Mon May 20 00:55:18 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2ekc0$3qnsa$1@dont-email.me...
    On 5/19/2024 8:45 PM, Edward Rawde wrote:
    You seem to think that humans have something which AI can never have.

    Exactly. AIs have to be taught.

    So do humans.

    If HUMANS (the gen-0 teachers) can't
    come up with a way to TEACH -- in AI terms -- compassion, morality,
    honesty, love, creativity, respect, frustration, desire, etc. then how
    do you think an AI is going to acquire those capabilities?

    Same way humans do.

    "compassion, morality, honesty, love, creativity, respect, frustration,
    desire, etc."

    Interesting that those are all abstract nouns and very emotional words.

    That reminds me of a conversation I had with a software engineer 40
    years ago.
    It was about whether computers could ever become like us.
    Him: Well you can't give it emotions.
    Me: Why?
    Him: [Silence for a few seconds then] Well you can't give it emotions.
    Me: Never mind.


    The "intelligence" part of AI is easy. You are wanting to create
    "artificial humans" -- an entirely different prospect.

    Your Nobel awaits.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sun May 19 21:42:39 2024
    On 5/19/2024 8:45 PM, Edward Rawde wrote:
    You seem to think that humans have something which AI can never have.

    Exactly. AIs have to be taught. If HUMANS (the gen-0 teachers) can't
    come up with a way to TEACH -- in AI terms -- compassion, morality,
    honesty, love, creativity, respect, frustration, desire, etc. then how
    do you think an AI is going to acquire those capabilities?

    The "intelligence" part of AI is easy. You are wanting to create
    "artificial humans" -- an entirely different prospect.

    Your Nobel awaits.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bill Sloman@21:1/5 to John Larkin on Mon May 20 14:58:39 2024
    On 18/05/2024 4:51 am, John Larkin wrote:

    https://www.youtube.com/watch?v=5Peima-Uw7w

    See graph at 9:50 in.

    I see this a lot, engineers wanting to do complex stuff because it's
    amusing to them, when simple common-sense things would work and be
    done.

    What you don't see, because you exemplify it all too frequently, is
    smart people doing uninformed things.

    If you don't know about a particular approach, you are very likely to
    end up stuck with a more complicated solution because it exploits
    techniques you do know about.

    --
    Bill Sloman, Sydney

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sun May 19 22:02:56 2024
    On 5/19/2024 8:51 PM, Edward Rawde wrote:
    It is my view that you don't need to know how a brain works to be able
    to make a brain.

    That's a fallacy. We can't make a *plant* let alone a brain.

    You just need something which has sufficient complexity which learns to
    become what you want it to become.

    So, you don't know what a brain is. And, you don't know how it learns.
    Yet, magically expect it to do so?

    You seem to think that humans have something which AI can never have.

    I designed a resource allocation mechanism to allow competing
    agents to "bid" for the resources that they needed to achieve
    their individual goals. The thought was that they could each
    reach some sort of homeostatic equilibrium at which point
    the available resources would be fairly apportioned to achieve
    whatever *could* be achieved with the available system resources
    (because resources available can change and demands placed on them
    could change as well).

    My thinking was that I could endow each "task" with different
    amounts of "cash" to suggest their relative levels of importance.
    They could then interactively "bid" with each other for resources;
    "How much is it WORTH to you to meet your goals?"

    This was a colossal failure. Because bidding STRATEGY is difficult
    to codify in a manner that can learn and meet its own goals.
    Some tasks would "shoot their wad" and still not be guaranteed to
    "purchase" the resources they needed IN THE FACE OF OTHER COMPETITORS.
    Others would spread themselves too thin and find themselves losing
    out to more modest "bidders".

    A human faces a similar situation when going to an auction with a fixed
    amount of cash. If you find an item of interest, you have to make
    some judgement call as to how much of your available budget to
    risk on that item, knowing that if you WIN the bid, your reserves
    for other items (whose competitors are yet to be seen) will be
    reduced.

    And, if you allow this to be a fluid/interactive process where bidders
    can ADJUST their bids, dynamically (up or down), then the system
    oscillates until some bidder "goes all in".

    The failure is not in the concept but, rather, the implementation.
    *I* couldn't figure out how to *teach* (code) a strategy that
    COULD win as often as it SHOULD win. Because I hoped for more than
    the results available with more trivial approaches.
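
    To make those failure modes concrete, here is a toy, single-unit-at-a-time
    version of that kind of bidding scheme (Python; the budgets, fractions and
    task names are invented purely for illustration):

        # Each task has a budget and a naive, fixed bidding strategy:
        # always offer the same fraction of whatever budget remains.
        class Task:
            def __init__(self, name, budget, fraction):
                self.name, self.budget, self.fraction = name, budget, fraction

            def bid(self):
                return self.budget * self.fraction

        def allocate(tasks, n_resources):
            # Hand each resource unit to the highest bidder and charge its bid.
            for unit in range(n_resources):
                winner = max(tasks, key=lambda t: t.bid())
                price = winner.bid()
                winner.budget -= price
                print(f"unit {unit}: {winner.name} pays {price:.1f}, "
                      f"{winner.budget:.1f} left")

        allocate([Task("all_in", 100, 1.0),    # "shoots its wad" immediately
                  Task("modest", 100, 0.3),
                  Task("thin",   100, 0.05)],  # spreads itself too thin
                 n_resources=4)

    With these made-up numbers the "all in" task wins exactly one unit and is
    then broke, while the overly timid bidder never wins anything -- which is
    essentially the behavior described above. The hard part isn't the auction
    loop; it's giving each task a *strategy* that can adapt.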

    AI practitioners don't know how to teach issues unrelated to "chaining
    facts in a knowledge base" or "looking for patterns in data". These
    are relatively simple undertakings that just rely on resources.

    E.g., a *child* can understand how an inference engine works:
      Knowledge base:
        Children get parties on their birthday.
        You are a child.
        Today is your birthday.
      Conclusion:
        You will have a party today!
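
    In code terms, such an engine is little more than a loop that keeps firing
    rules until nothing new can be derived. A toy sketch (the fact and rule
    strings are invented to match the birthday example above):

        # facts: things currently known; rules: (premises, conclusion) pairs
        facts = {"is_child(you)", "birthday_today(you)"}
        rules = [
            ({"is_child(you)", "birthday_today(you)"}, "gets_party_today(you)"),
        ]

        changed = True
        while changed:            # forward chaining: fire rules until quiescent
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True

        print("gets_party_today(you)" in facts)   # True -- you get a party today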

    So, AIs will be intelligent but lack many (all?) of the other
    HUMAN characteristics that we tend to associate with intelligence
    (creativity, imagination, originality, intuition, etc.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Mon May 20 01:12:10 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2eli1$3qus1$2@dont-email.me...
    On 5/19/2024 8:51 PM, Edward Rawde wrote:
    It is my view that you don't need to know how a brain works to be able to
    make a brain.

    That's a fallacy. We can't make a *plant* let alone a brain.

    But we can make a system which behaves like a brain. We call it AI.


    You just need something which has sufficient complexity which learns to
    become what you want it to become.

    So, you don't know what a brain is.

    Humans clearly have one (well, most of them) and AI is moving along
    similar lines.

    And, you don't know how it learns.

    Correct.

    Yet, magically expect it to do so?

    There is nothing magical about it because it obviously does learn.
    It is therefore factual, not magical, that it learns.


    You seem to think that humans have something which AI can never have.

    I designed a resource allocation mechanism to allow competing
    agents to "bid" for the resources that they needed to achieve
    their individual goals. The thought was that they could each
    reach some sort of homeostatic equilibrium at which point
    the available resources would be fairly apportioned to achieve
    whatever *could* be achieved with the available system resources
    (because resources available can change and demands placed on them
    could change as well).

    My thinking was that I could endow each "task" with different
    amounts of "cash" to suggest their relative levels of importance.
    They could then interactively "bid" with each other for resources;
    "How much is it WORTH to you to meet your goals?"

    This was a colossal failure. Because bidding STRATEGY is difficult
    to codify in a manner that can learn and meet its own goals.
    Some tasks would "shoot their wad" and still not be guaranteed to
    "purchase" the resources they needed IN THE FACE OF OTHER COMPETITORS.
    Others would spread themselves too thin and find themselves losing
    out to more modest "bidders".

    A human faces similar situation when going to an auction with a fixed
    amount of cash. If you find an item of interest, you have to make
    some judgement call as to how much of your available budget to
    risk on that item, knowing that if you WIN the bid, your reserves
    for other items (whose competitors are yet to be seen) will be
    reduced.

    And, if you allow this to be a fluid/interactive process where bidders
    can ADJUST their bids, dynamically (up or down), then the system
    oscillates until some bidder "goes all in".

    The failure is not in the concept but, rather, the implementation.
    *I* couldn't figure out how to *teach* (code) a strategy that
    COULD win as often as it SHOULD win. Because I hoped for more than
    the results available with more trivial approaches.

    AI practitioners don't know how to teach issues unrelated to "chaining
    facts in a knowledge base" or "looking for patterns in data". These
    are relatively simple undertakings that just rely on resources.

    E.g., a *child* can understand how an inference engine works:
    Knowledge base:
    Children get parties on their birthday.
    You are a child.
    Today is your birthday.
    Conclusion:
    You will have a party today!

    So, AIs will be intelligent but lack many (all?) of the other
    HUMAN characteristics that we tend to associate with intelligence
    (creativity, imagination, originality, intuition, etc.)


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Edward Rawde on Sun May 19 23:15:25 2024
    On 5/19/2024 10:12 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2eli1$3qus1$2@dont-email.me...
    On 5/19/2024 8:51 PM, Edward Rawde wrote:
    It is my view that you don't need to know how a brain works to be able
    to make a brain.

    That's a fallacy. We can't make a *plant* let alone a brain.

    But we can make a system which behaves like a brain. We call it AI.

    No. It only "reasons" like a brain. If that is all your brain was/did,
    you would be an automaton. I can write a piece of code that can tell
    you your odds of winning any given DEALT poker hand (with some number
    of players and a fresh deck). That's more than a human brain can
    muster, reliably.
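
    For what it's worth, here is a rough sketch of that kind of odds
    calculator -- a Monte Carlo estimate of a hold'em hand's equity against
    random opponents. It is only an illustration (the trial count, card
    encoding and function names are arbitrary choices), not production code:

        import random
        from collections import Counter
        from itertools import combinations

        RANKS = "23456789TJQKA"
        SUITS = "cdhs"

        def rank5(cards):
            # Rank a 5-card hand; bigger tuples beat smaller ones.
            vals = sorted((RANKS.index(r) for r, _ in cards), reverse=True)
            counts = Counter(vals)
            by_count = tuple(sorted(counts, key=lambda v: (counts[v], v),
                                    reverse=True))
            shape = sorted(counts.values(), reverse=True)
            flush = len({s for _, s in cards}) == 1
            wheel = set(vals) == {12, 3, 2, 1, 0}            # A-2-3-4-5
            straight = wheel or (len(counts) == 5 and vals[0] - vals[4] == 4)
            if straight:
                return (8 if flush else 4, (3 if wheel else vals[0],))
            if shape == [4, 1]:       return (7, by_count)
            if shape == [3, 2]:       return (6, by_count)
            if flush:                 return (5, tuple(vals))
            if shape == [3, 1, 1]:    return (3, by_count)
            if shape == [2, 2, 1]:    return (2, by_count)
            if shape == [2, 1, 1, 1]: return (1, by_count)
            return (0, tuple(vals))

        def best7(cards):
            # Best 5-card rank out of 2 hole cards plus 5 board cards.
            return max(rank5(c) for c in combinations(cards, 5))

        def equity(hole, n_opponents=2, trials=2000):
            # Monte Carlo estimate of the chance 'hole' wins at showdown.
            deck = [(r, s) for r in RANKS for s in SUITS]
            for c in hole:
                deck.remove(c)
            score = 0.0
            for _ in range(trials):
                random.shuffle(deck)
                opp = [deck[2 * i:2 * i + 2] for i in range(n_opponents)]
                board = deck[2 * n_opponents:2 * n_opponents + 5]
                mine = best7(hole + board)
                best_opp = max(best7(h + board) for h in opp)
                if mine > best_opp:
                    score += 1.0
                elif mine == best_opp:
                    score += 0.5      # split pots count as half a win
            return score / trials

        # e.g. pocket aces against two random hands (well above 0.5):
        print(equity([("A", "h"), ("A", "s")], n_opponents=2))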

    But, I can't factor in the behavior of other players; "Is he bluffing?"
    "Will he fold prematurely?" etc. These are HUMAN issues that the
    software (AI) can't RELIABLY accommodate.

    Do AIs get depressed/happy? Experience joy/sadness? Revelation?
    Frustration? Addiction? Despair? Pain? Shame/pride? Fear?

    These all factor into how humans make decisions. E.g., if you
    are afraid that your adversary is going to harm you (even if that
    fear is unfounded), then you will react AS IF that was more of
    a certainty. A human might dramatically alter his behavior
    (decision making process) if there is an emotional stake involved.

    Does the AI know the human's MIND to be able to estimate the
    likelihood and effect of any such influence? Yes, Mr Spock.

    I repeat, teaching a brain to "reason" is trivial. Likewise to
    recognize patterns. Done. Now you just need to expose it to
    as many VERIFIABLE facts (*who* verifies them?) and let it
    do the forward chaining exercises.

    Then, you need to audit its conclusions and wonder why it has
    hallucinated (as it won't be able to TELL you). Will you have
    a committee examine every conclusion from the AI to determine
    (within their personal limitations) if this is a hallucination
    or some yet-to-be-discovered truth? Imagine how SLOW the
    effective rate of the AI when you have to ensure it is CORRECT!

    <https://www.superannotate.com/blog/ai-hallucinations>
    <https://www.ibm.com/topics/ai-hallucinations>
    <https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/>

    Given how quickly an AI *can* generate outputs, this turns mankind
    into a "fact checking" organization; what value a reference if
    it can't be trusted to be accurate? What if its conclusions require
    massive amounts of resources to validate? What if there are
    timeliness issues involved: "Russia is preparing to launch a
    nuclear first strike!"? Even if you can prove this to be
    inaccurate, when will you stop heeding this warning -- to your
    detriment?

    Beyond that, we are waiting for humans to understand the
    basis of all these other characteristics attributed to
    The Brain to be able to codify them in a way that can be taught.
    Yet, we can't seem to do it to children, reliably...

    I can teach an AI that fire burns -- it's just a relationship
    of already established facts in its knowledge base. I can teach
    a child that fire burns. The child will remember the *experience*
    of burning much differently than an AI (what do you do, delete a
    few NP junctions to make it "feel" the pain? permanently toast
    some foils -- "scar tissue" -- so those associated abilities are
    permanently impaired?)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Edward Rawde@21:1/5 to Don Y on Mon May 20 02:40:33 2024
    "Don Y" <blockedofcourse@foo.invalid> wrote in message news:v2eppu$3rio4$2@dont-email.me...
    On 5/19/2024 10:12 PM, Edward Rawde wrote:
    "Don Y" <blockedofcourse@foo.invalid> wrote in message
    news:v2eli1$3qus1$2@dont-email.me...
    On 5/19/2024 8:51 PM, Edward Rawde wrote:
    It is my view that you don't need to know how a brain works to be able
    to make a brain.

    That's a fallacy. We can't make a *plant* let alone a brain.

    But we can make a system which behaves like a brain. We call it AI.

    No. It only "reasons" like a brain. If that is all your brain was/did,
    you would be an automaton. I can write a piece of code that can tell
    you your odds of winning any given DEALT poker hand (with some number
    of players and a fresh deck). That's more than a human brain can
    muster, reliably.

    I can write a piece of code which multiplies two five-digit numbers
    together.
    That's more than most human brains can muster reliably.


    But, I can't factor in the behavior of other players; "Is he bluffing?"
    "Will he fold prematurely?" etc. These are HUMAN issues that the
    software (AI) can't RELIABLY accommodate.

    You have given no explanation of why an AI cannot reliably accommodate this.


    Do AIs get depressed/happy? Experience joy/sadness? Revelation?
    Frustration? Addiction? Despair? Pain? Shame/pride? Fear?

    You have given no explanation of why they cannot, but you appear to
    believe that they cannot.


    These all factor into how humans make decisions. E.g., if you
    are afraid that your adversary is going to harm you (even if that
    fear is unfounded), then you will react AS IF that was more of
    a certainty. A human might dramatically alter his behavior
    (decision making process) if there is an emotional stake involved.

    You have given no explanation of why an AI cannot do that just like a human can.


    Does the AI know the human's MIND to be able to estimate the
    likelihood and affect of any such influence? Yes, Mr Spock.

    I repeat, teaching a brain to "reason" is trivial. Likewise to
    recognize patterns. Done. Now you just need to expose it to
    as many VERIFIABLE facts (*who* verifies them?) and let it
    do the forward chaining exercises.

    Then, you need to audit its conclusions and wonder why it has
    hallucinated (as it won't be able to TELL you). Will you have
    a committee examine every conclusion from the AI to determine
    (within their personal limitations) if this is a hallucination
    or some yet-to-be-discovered truth? Imagine how SLOW the
    effective rate of the AI when you have to ensure it is CORRECT!

    <https://www.superannotate.com/blog/ai-hallucinations>
    <https://www.ibm.com/topics/ai-hallucinations>
    <https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/>

    Given how quickly an AI *can* generate outputs, this turns mankind
    into a "fact checking" organization; what value a reference if
    it can't be trusted to be accurate? What if its conclusions require
    massive amounts of resources to validate? What if there are
    timeliness issues involved: "Russia is preparing to launch a
    nuclear first strike!"? Even if you can prove this to be
    inaccurate, when will you stop heeding this warning -- to your
    detriment?

    Beyond that, we are waiting for humans to understand the
    basis of all these other characteristics attributed to
    The Brain to be able to codify them in a way that can be taught.
    Yet, we can't seem to do it to children, reliably...

    I can teach an AI that fire burns -- it's just a relationship
    of already established facts in its knowledge base. I can teach
    a child that fire burns. The child will remember the *experience*
    of burning much differently than an AI (what do you do, delete a
    few NP junctions to make it "feel" the pain? permanently toast
    some foils -- "scar tissue" -- so those associated abilities are
    permanently impaired?)



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)