• ChatGPT

    From Anatoly Chernyshev@21:1/5 to All on Thu Mar 30 14:49:43 2023
    Just had an exercise with ChatGPT for Ada.

    Here is what my request was: "Produce Ada code for solving a quadratic equation."

    And here's the output from ChatGPT (quite flawless, if one forgives the absence of a math library in the context clause):
    ----------------------------
    with Ada.Text_IO; use Ada.Text_IO;

    procedure Quadratic_Equation is
       A, B, C, D : Float;
       X1, X2 : Float;
    begin
       -- read coefficients A, B, and C from the user
       Put("Enter coefficient A: ");
       Get(A);

       Put("Enter coefficient B: ");
       Get(B);

       Put("Enter coefficient C: ");
       Get(C);

       -- calculate the discriminant
       D := B*B - 4.0*A*C;

       -- check if the discriminant is negative
       if D < 0.0 then
          Put_Line("No real roots exist.");
       else
          -- calculate the roots
          X1 := (-B + Sqrt(D)) / (2.0*A);
          X2 := (-B - Sqrt(D)) / (2.0*A);

          -- output the roots
          Put_Line("Root 1 = " & Float'Image(X1));
          Put_Line("Root 2 = " & Float'Image(X2));
       end if;
    end Quadratic_Equation;

    and an extra remark:

    Note that this implementation assumes that the discriminant (B^2 - 4AC) is positive or zero. If the discriminant is negative, then no real roots exist.
    -----------------------------------------------
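
    For completeness, here is roughly what it takes to get the same program past GNAT: add Ada.Numerics.Elementary_Functions for Sqrt and Ada.Float_Text_IO for Get on Float (my patch, not ChatGPT's; the logic, including the missing guard against A = 0, is untouched):

    with Ada.Text_IO;                       use Ada.Text_IO;
    with Ada.Float_Text_IO;                 use Ada.Float_Text_IO;
    with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

    procedure Quadratic_Equation is
       A, B, C, D : Float;
       X1, X2 : Float;
    begin
       -- read coefficients A, B, and C from the user
       Put ("Enter coefficient A: ");  Get (A);
       Put ("Enter coefficient B: ");  Get (B);
       Put ("Enter coefficient C: ");  Get (C);

       -- the discriminant decides whether real roots exist
       D := B*B - 4.0*A*C;

       if D < 0.0 then
          Put_Line ("No real roots exist.");
       else
          -- still no guard against A = 0, just like the original
          X1 := (-B + Sqrt (D)) / (2.0*A);
          X2 := (-B - Sqrt (D)) / (2.0*A);
          Put_Line ("Root 1 = " & Float'Image (X1));
          Put_Line ("Root 2 = " & Float'Image (X2));
       end if;
    end Quadratic_Equation;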

    Before crying wolf, my impression is that ChatGPT might be useful for drafting simple pieces of code (say, homework). Very soon, though, one will arrive at a situation where the effort of explaining the task to ChatGPT is the same as the effort of writing the code yourself (I shall call it the limiting Kolmogorov complexity).

    What do you think?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeffrey R.Carter@21:1/5 to Anatoly Chernyshev on Fri Mar 31 01:00:44 2023
    On 2023-03-30 23:49, Anatoly Chernyshev wrote:

    What do you think?

    No doubt there are a large number of such programs in the training data. If it had simply regurgitated one of those, at least the program would have compiled. That it couldn't even do as well as that is not impressive.

    --
    Jeff Carter
    "Fundamental improvements in performance
    are most often made by algorithm changes,
    not by tuning."
    Elements of Programming Style
    201

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jerry@21:1/5 to Anatoly Chernyshev on Thu Mar 30 15:32:22 2023
    On Thursday, March 30, 2023 at 2:49:45 PM UTC-7, Anatoly Chernyshev wrote:
    What do you think?

    ChatGPT demands my phone number, which it will not get. Has anyone found a workaround?
    Jerry

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dmitry A. Kazakov@21:1/5 to Jeffrey R.Carter on Fri Mar 31 08:54:59 2023
    On 2023-03-31 01:00, Jeffrey R.Carter wrote:
    On 2023-03-30 23:49, Anatoly Chernyshev wrote:

    What do you think?

    No doubt there are a large number of such programs in the training data.
    If it had simply regurgitated one of those, at least the program would
    have compiled. That it couldn't even do as good as that is not impressive.

    Right. The fun part would be adding qualifiers to the request, e.g. "in extended precision", "taking arguments from user input", etc. Parroting only works up to a point.

    What I find interesting is that the whole swindle highlights that some human activities considered creative are in fact not. BTW, it repeats the much earlier dethroning of chess playing as an indicator of human intellect. The machine easily beats us at chess.

    Actually, it is the ability to sort out garbage and pick out vegetables that makes us intelligent! (:-))

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From magardner2010@21:1/5 to Dmitry A. Kazakov on Fri Mar 31 14:04:55 2023
    On 31/03/2023 09:54, Dmitry A. Kazakov wrote:
    On 2023-03-31 01:00, Jeffrey R.Carter wrote:
    On 2023-03-30 23:49, Anatoly Chernyshev wrote:

    What do you think?

    No doubt there are a large number of such programs in the training
    data. If it had simply regurgitated one of those, at least the program
    would have compiled. That it couldn't even do as good as that is not
    impressive.

    Right. Fun would be adding qualifiers to the request. E.g. "in extended precision", "taking arguments from user input" etc. Parroting works up
    to some limit.


    I have been trying to get ChatGPT to help me with a larger Ada project on and off for the past few months. Not only does it have no idea which functions do and do not exist in a given library, it sometimes doesn't even understand how private types or tasks work. I will admit those are not especially common features, but given how easy Ada makes it to use them, I can honestly say that Ada is the only language in which I am comfortable writing multithreaded programs that require more sophistication than a simple fork() call.
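
    To show what I mean by low ceremony, here is a minimal sketch (just an illustration, nothing from my actual project) of two tasks running side by side:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Two_Workers is

       --  A task type: every object of the type runs concurrently
       task type Worker (Id : Natural);

       task body Worker is
       begin
          for Step in 1 .. 3 loop
             --  Output from the two tasks may interleave
             Put_Line ("Worker" & Natural'Image (Id) & " step" & Integer'Image (Step));
          end loop;
       end Worker;

       W1 : Worker (1);
       W2 : Worker (2);

    begin
       null;  --  nothing to do here; the procedure cannot return until W1 and W2 terminate
    end Two_Workers;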

    So, yeah. ChatGPT knows the syntax of Ada, and it is familiar with Ada.Text_IO's more common subprograms. It knows that GNAT.Sockets and AdaSockets exist, but it has no idea how to use them.
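
    For the record, basic use of GNAT.Sockets doesn't take much code either; a rough client-side sketch (host and port invented for illustration):

    with GNAT.Sockets; use GNAT.Sockets;

    procedure Hello_Socket is
       Client  : Socket_Type;
       Server  : Sock_Addr_Type;
       Channel : Stream_Access;
    begin
       Create_Socket (Client);                  --  a TCP stream socket by default
       Server.Addr := Inet_Addr ("127.0.0.1");  --  placeholder host
       Server.Port := 8080;                     --  placeholder port
       Connect_Socket (Client, Server);
       Channel := Stream (Client);
       String'Write (Channel, "hello");         --  raw write over the socket stream
       Close_Socket (Client);
    end Hello_Socket;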

    I am quite confident that that is pretty much it as far as ChatGPT and Ada go.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anatoly Chernyshev@21:1/5 to All on Fri Mar 31 14:44:26 2023
    Data science people swear it's just a matter of the size of the training set used...
    I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who has memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, and m3) are completely off-limits as of now.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dmitry A. Kazakov@21:1/5 to Anatoly Chernyshev on Sat Apr 1 09:39:49 2023
    On 2023-03-31 23:44, Anatoly Chernyshev wrote:
    Data science people swear it's just a matter of the size of training set used...

    They lie. In machine learning, overtraining is as much a problem as undertraining. The simplest example from mathematics is polynomial interpolation becoming unstable at higher orders.
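
    To make the interpolation example concrete: interpolate Runge's function 1/(1 + 25*X**2) through equally spaced nodes and watch the value near the edge of the interval go wild as the order grows. A quick sketch, purely for illustration:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Runge_Demo is
       --  The classic example where high-order interpolation misbehaves
       function F (X : Long_Float) return Long_Float is
         (1.0 / (1.0 + 25.0 * X * X));

       --  Lagrange interpolation of F through N equally spaced nodes on [-1, 1],
       --  evaluated at the point X
       function Interpolate (N : Positive; X : Long_Float) return Long_Float is
          Nodes : array (1 .. N) of Long_Float;
          Sum   : Long_Float := 0.0;
          Term  : Long_Float;
       begin
          for I in Nodes'Range loop
             Nodes (I) := -1.0 + 2.0 * Long_Float (I - 1) / Long_Float (N - 1);
          end loop;
          for I in Nodes'Range loop
             Term := F (Nodes (I));
             for J in Nodes'Range loop
                if J /= I then
                   Term := Term * (X - Nodes (J)) / (Nodes (I) - Nodes (J));
                end if;
             end loop;
             Sum := Sum + Term;
          end loop;
          return Sum;
       end Interpolate;

       X_Eval : constant Long_Float := 0.95;  --  near the edge, where it hurts most
    begin
       Put_Line ("f(0.95) =" & Long_Float'Image (F (X_Eval)));
       for K in 1 .. 5 loop
          Put_Line ("N =" & Integer'Image (5 * K)
                    & "  interpolant(0.95) =" & Long_Float'Image (Interpolate (5 * K, X_Eval)));
       end loop;
    end Runge_Demo;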

    And this does not even touch contradictory samples requiring retraining, time-constrained samples, etc.

    I did also a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student, who memorized the formulas, but has no clue how to use them. Specifically, units conversions (e.g. between mL, L, m3) is completely off-limits as
    of now.

    One must remember that ChatGPT is nothing but ELIZA on steroids.

    https://en.wikipedia.org/wiki/ELIZA

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hou Van Boere@21:1/5 to All on Sat Apr 1 05:10:37 2023
    I have the same concerns. There are websites that offer temporary phone numbers, but they never work. Get a burner phone with a pay-as-you-go plan that does not auto-renew. Maybe you already have a retired phone. A SIM card will be $10 and a month's worth of talk time perhaps another $10. Wish it were easier :(

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ken Burtch@21:1/5 to Dmitry A. Kazakov on Thu Apr 6 18:51:26 2023
    On Saturday, April 1, 2023 at 3:39:51 AM UTC-4, Dmitry A. Kazakov wrote:
    On 2023-03-31 23:44, Anatoly Chernyshev wrote:
    Data science people swear it's just a matter of the size of training set used...
    They lie. In machine learning overtraining is as much a problem as undertraining. The simplest example from mathematics is polynomial interpolation becoming unstable with higher orders.

    And this does not even touch contradictory samples requiring retraining
    or time constrained samples etc.
    I did also a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student, who memorized the formulas, but has no clue how to use them. Specifically, units conversions (e.g. between mL, L, m3) is completely off-limits as
    of now.
    One must remember that ChatGPT is nothing but ELIZA on steroids.

    https://en.wikipedia.org/wiki/ELIZA
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    For what it's worth on the subject of the chatbot, "Produce Ada code for solving a quadratic equation" is a terrible choice for a test of ChatGPT, as one is asking whether it can do a Google search. To test its abilities, you have to pick a challenge that cannot be solved with a Google search.

    My short assessment of ChatGPT, along with some history of chatbots, is available in my February blog post. I gave it a simple programming problem and it failed 3 times out of 4. That's not surprising, as I've learned since February that the chatbot doesn't actually understand programming: it uses examples off the Internet and tries to predict what you might have typed based on keyword patterns. It is an imitation of an imitation, smoke and mirrors. This is why Vint Cerf denounced it. You can read my thoughts on my blog:

    https://www.pegasoft.ca/coder/coder_february_2023.html

    Ken Burtch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)