• ChatGPT fails at algebra

    From Sylvia Else@21:1/5 to All on Thu Feb 9 16:10:31 2023
    Mind you, I'm surprised that a text based AI can even get this far.

    My comments with ** **

    Q. In Miss Q's class, 1/3 of the students brought a teddy bear to
    school. Each boy took 12 books, each girl took 17 books, and each teddy
    bear took 9 books. A total of 305 books were taken out. How many girls
    were there in Miss Q's class?

    A. Let's call the number of students in Miss Q's class "S". We know that
    1/3 of the students brought a teddy bear, so that means there were S/3
    teddy bears. Each teddy bear took 9 books, so the total number of books
    taken by the teddy bears is (S/3) * 9 = 9S/3.

    ** OK **

    Each boy took 12 books, so the total number of books taken by the boys
    is 12 * the number of boys = 12B.

    ** Yes **

    Similarly, each girl took 17 books, so the total number of books taken
    by the girls is 17 * the number of girls = 17G.

    ** Good so far **

    Finally, we know that the total number of books taken out is 305, so we
    can set up an equation to represent this:

    9S/3 + 12B + 17G = 305

    ** Looks promising **

    Now we can solve for the number of girls, G.

    First, we'll isolate G by subtracting 9S/3 and 12B from both sides:

    17G = 305 - 9S/3 - 12B

    ** Might simplify 9S/3 to 3S, but never mind **

    Expanding the right side:

    17G = 305 - (3 * 9S/3) - 12B = 305 - 27S/3 - 12B

    ** Huh, what? Where did that extra 3 come from? **

    Next, we'll simplify the right side of the equation:

    ** Not much point now. **

    17G = 305 - 27S/3 - 12B = 305 - (3 * 9S/3) - 12B = 305 - 27S/3 - 12B

    Finally, we'll substitute S = B + G into the equation to eliminate S:

    17G = 305 - 27(B + G)/3 - 12B

    Expanding the right side:

    17G = 305 - 27B/3 - 27G/3 - 12B

    Combining like terms:

    17G = 305 - 27B/3 - 27G/3 - 12B

    Finally, dividing both sides by 17:

    G = (305 - 27B/3 - 12B) / 17

    So, the number of girls, G, depends on the number of boys, B. To get an
    exact answer, we would need to know the number of boys.
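    For reference, the correct setup reduces to 3S + 12B + 17G = 305 with
    S = B + G, i.e. 15B + 20G = 305. A brute-force sketch (assuming, as the
    puzzle presumably intends, that the number of students is divisible by 3
    so that S/3 teddy bears is a whole number) pins down a unique answer:

```python
# Brute-force check of Miss Q's problem.
# S = B + G students; S/3 teddy bears at 9 books each, so (S/3)*9 = 3S.
# Total books: 12B + 17G + 3(B + G) = 15B + 20G = 305.
solutions = []
for B in range(0, 21):          # 15*B <= 305 caps B at 20
    for G in range(0, 16):      # 20*G <= 305 caps G at 15
        if 15 * B + 20 * G == 305 and (B + G) % 3 == 0:
            solutions.append((B, G))
print(solutions)  # [(11, 7)] -> 7 girls, 11 boys, 18 students
```

    So the answer ChatGPT never reached is 7 girls (check: 6 bears take 54
    books, 11 boys take 132, 7 girls take 119, and 54 + 132 + 119 = 305).
    Without the divisibility constraint the problem has several solutions.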

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andy Burns@21:1/5 to Sylvia Else on Thu Feb 9 08:25:15 2023
    Sylvia Else wrote:

    Mind you, I'm surprised that a text based AI can even get this far.


    On maths questions, I've seen it give the wrong method, yet the correct
    result; I've also seen it give the correct method with the wrong result
    ... it's almost like it hasn't got a clue what it's doing :-P

    The most impressive thing I've seen it do was when someone asked it to
    pretend to be a SQL server, and then gave it commands to create a table,
    insert data into it and query it, which it "emulated" properly; they
    then asked it to write a stored procedure to perform an "upsert". The
    worst "mistake" it made during the SQL demo was they asked it not to
    keep explaining itself, but apparently it couldn't resist.
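    For readers unfamiliar with the term, the "upsert" in that demo (insert a
    row, or update it if the key already exists) can be sketched in a few
    lines against a real database. This is plain SQLite driven from Python,
    not the stored procedure from the demo, and the table and column names
    here are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v INTEGER)")
con.execute("INSERT INTO kv VALUES ('a', 1)")

def upsert(con, k, v):
    # Insert a new row, or update the existing one on a key collision.
    con.execute(
        "INSERT INTO kv (k, v) VALUES (?, ?) "
        "ON CONFLICT(k) DO UPDATE SET v = excluded.v",
        (k, v),
    )

upsert(con, "a", 2)   # key 'a' exists: updates its value
upsert(con, "b", 3)   # key 'b' is new: inserts a row
print(sorted(con.execute("SELECT k, v FROM kv")))  # [('a', 2), ('b', 3)]
```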

    I still don't like the idea of AI, how will we ever be able to trust
    that search engine results aren't made-up bollocks? Oh wait ...

  • From Sylvia Else@21:1/5 to Andy Burns on Thu Feb 9 22:08:06 2023
    On 09-Feb-23 7:25 pm, Andy Burns wrote:
    Sylvia Else wrote:

    Mind you, I'm surprised that a text based AI can even get this far.


    On maths questions, I've seen it give the wrong method, yet the correct result; I've also seen it give the correct method with the wrong result
    ... it's almost like it hasn't got a clue what it's doing :-P

    The most impressive thing I've seen it do was when someone asked it to pretend to be a SQL server, and then gave it commands to create a table, insert data into it and query it, which it "emulated" properly; they
    then asked it to write a stored procedure to perform an "upsert".  The
    worst "mistake" it made during the SQL demo was they asked it not to
    keep explaining itself, but apparently it couldn't resist.

    I still don't like the idea of AI, how will we ever be able to trust
    that search engine results aren't made-up bollocks?  Oh wait ...


    We may find that AIs make the same mistakes as humans, just faster.

    In the above example, I doubt it's possible to understand why it made
    this mistake. All one can do is train it some more on algebra, and hope
    for the best.

    Sylvia.

  • From Julio Di Egidio@21:1/5 to Sylvia Else on Thu Feb 9 05:58:53 2023
    On Thursday, 9 February 2023 at 12:08:09 UTC+1, Sylvia Else wrote:

    In the above example, I doubt it's possible to understand why it made
    this mistake. All one can do is train it some more on algebra, and hope
    for the best.

    Or read the fine print and realize why it is called "chat" to begin with.

    Indeed, we remain far from inventing any strong AI, and the fact that
    ML and, more broadly, statistical methods (i.e. generalized lying with
    numbers) are becoming ubiquitous and even strategic is quite the
    measure of how globally insane and demented our society is becoming.

    Julio

  • From Julio Di Egidio@21:1/5 to Rich on Thu Feb 9 06:52:33 2023
    On Thursday, 9 February 2023 at 15:34:55 UTC+1, Rich wrote:
    Andy Burns <use...@andyburns.uk> wrote:

    I still don't like the idea of AI, how will we ever be able to trust
    that search engine results aren't made-up bollocks? Oh wait ...

    Yes, it is not like current search engine results aren't made-up
    bollocks. Although at least today one understands why. The bollocks
    were paid for by advertisers who paid for "search placement".

    Nope, it's way worse than that. In the early 90's experts were
    rather pointing out that "if you look at garbage with a super
    powerful lens you are still looking at garbage", and that, for
    the web in particular not to quickly explode with everything
    and the contrary of everything, game-theoretic protocols for
    collaborative relevance and selection were critical: so Google
    rather hijacked and destroyed Usenet... Which is just the tip
    of the beginning of our globally insane and demented story.

    Julio

  • From Rich@21:1/5 to Andy Burns on Thu Feb 9 14:34:54 2023
    Andy Burns <usenet@andyburns.uk> wrote:

    I still don't like the idea of AI, how will we ever be able to trust
    that search engine results aren't made-up bollocks? Oh wait ...


    Yes, it is not like current search engine results aren't made-up
    bollocks. Although at least today one understands why. The bollocks
    were paid for by advertisers who paid for "search placement".

    With AI, the bollocks will become "weird mystery bollocks".

  • From Computer Nerd Kev@21:1/5 to Sylvia Else on Fri Feb 10 08:38:46 2023
    Sylvia Else <sylvia@email.invalid> wrote:

    We may find that AIs make the same mistakes as humans, just faster.

    Based on what I read about an earlier GPT incarnation, it seems to
    make somewhat different mistakes as well. It can remember much more
    than a (normal?) human, so it tends to slip into quoting things
    verbatim or with minor tweaks, which is easily mistaken for real
    understanding because we can't remember all the source material
    that it's reading from. As a result it tends towards detailed and
    believable answers to a slightly different question (or answers
    wrongly derived from an answer to another question).

    In the above example, I doubt it's possible to understand why it made
    this mistake. All one can do is train it some more on algebra, and hope
    for the best.

    That's potentially like training a mouse in nuclear physics: you'll
    never get there no matter how long you try. I'm sure there are
    improvements to the core system design which could produce better
    results, hence the different versions (I think I first read about
    GPT-2, while I was intending to find info on the GUID Partition
    Table).

    --
    __ __
    #_ < |\| |< _#

  • From Julio Di Egidio@21:1/5 to Computer Nerd Kev on Fri Feb 10 05:09:18 2023
    On Thursday, 9 February 2023 at 23:38:55 UTC+1, Computer Nerd Kev wrote:
    Sylvia Else <syl...@email.invalid> wrote:

    All one can do is train it some more on algebra, and hope for the best.

    That's potentially like training a mouse in nuclear physics,

    No, it isn't at all, e.g. you can exasperate a mouse, but you
    cannot "overfit it". ML has just fuck all to do with any actual
    learning and/or understanding. -- But people won't get it...

    Julio
