On 2023-03-30 23:49, Anatoly Chernyshev wrote:
What do you think?
No doubt there are a large number of such programs in the training data.
If it had simply regurgitated one of those, at least the program would
have compiled. That it couldn't even do as well as that is not impressive.
On 2023-03-31 01:00, Jeffrey R. Carter wrote:
On 2023-03-30 23:49, Anatoly Chernyshev wrote:
What do you think?
No doubt there are a large number of such programs in the training
data. If it had simply regurgitated one of those, at least the program
would have compiled. That it couldn't even do as well as that is not
impressive.
Right. It would be fun to add qualifiers to the request, e.g. "in extended precision", "taking arguments from user input", etc. Parroting works up to some limit.
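Not from the thread, but a hypothetical sketch of what such a qualified request could ask for (the original task isn't shown here): take the argument from user input and work in extended precision, here via GNAT's Long_Long_Float.

with Ada.Text_IO;

procedure Qualified_Request is
   --  Long_Long_Float is GNAT-specific extended precision (80-bit on x86)
   package LLF_IO is new Ada.Text_IO.Float_IO (Long_Long_Float);
   X : Long_Long_Float;
begin
   Ada.Text_IO.Put ("X = ");
   LLF_IO.Get (X);                                        --  argument from user input
   LLF_IO.Put (X * X, Fore => 1, Aft => 18, Exp => 0);    --  e.g. print X squared
   Ada.Text_IO.New_Line;
end Qualified_Request;

Whether a model gets the Float_IO instantiation and the Get/Put formals right is exactly the kind of detail such qualifiers would probe.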
Data science people swear it's just a matter of the size of the training set used...
I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who has memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, m3) are completely off-limits as of now.
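For reference, the conversions in question are plain factors of 1000 (1 L = 1000 mL, 1 m3 = 1000 L). A minimal Ada sketch of the correct arithmetic (mine, not ChatGPT output):

with Ada.Text_IO; use Ada.Text_IO;

procedure Volume_Units is
   type Real is digits 12;

   --  1 L = 1000 mL and 1 m3 = 1000 L, so each step is a division by 1000
   function ML_To_L (ML : Real) return Real is (ML / 1000.0);
   function L_To_M3 (L  : Real) return Real is (L  / 1000.0);

   Sample_ML : constant Real := 2_500.0;   --  2500 mL
begin
   Put_Line ("mL :" & Real'Image (Sample_ML));
   Put_Line ("L  :" & Real'Image (ML_To_L (Sample_ML)));             --  2.5 L
   Put_Line ("m3 :" & Real'Image (L_To_M3 (ML_To_L (Sample_ML))));   --  0.0025 m3
end Volume_Units;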
On 2023-03-31 23:44, Anatoly Chernyshev wrote:
Data science people swear it's just a matter of the size of the training set used...
They lie. In machine learning, overtraining is as much a problem as undertraining. The simplest example from mathematics is polynomial interpolation becoming unstable at higher orders.
And this does not even touch contradictory samples requiring retraining, time-constrained samples, etc.
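Not part of the original post, but a minimal sketch of that instability: interpolating Runge's function 1/(1 + 25*x**2) at equally spaced nodes on [-1, 1], where the error near the endpoints grows as the degree rises instead of shrinking.

with Ada.Text_IO; use Ada.Text_IO;

procedure Runge_Demo is
   type Real is digits 15;

   --  Runge's function, the classic counterexample for equispaced nodes
   function F (X : Real) return Real is (1.0 / (1.0 + 25.0 * X * X));

   --  Lagrange interpolation through N + 1 equally spaced nodes on [-1, 1]
   function Interpolate (N : Positive; X : Real) return Real is
      Sum : Real := 0.0;
   begin
      for I in 0 .. N loop
         declare
            Xi   : constant Real := -1.0 + 2.0 * Real (I) / Real (N);
            Term : Real := F (Xi);
         begin
            for J in 0 .. N loop
               if J /= I then
                  declare
                     Xj : constant Real := -1.0 + 2.0 * Real (J) / Real (N);
                  begin
                     Term := Term * (X - Xj) / (Xi - Xj);
                  end;
               end if;
            end loop;
            Sum := Sum + Term;
         end;
      end loop;
      return Sum;
   end Interpolate;

   X : constant Real := 0.96;   --  near the endpoint, inside the divergence region
begin
   for K in 1 .. 5 loop
      Put_Line ("degree" & Integer'Image (5 * K) & "  error ="
                & Real'Image (abs (Interpolate (5 * K, X) - F (X))));
   end loop;
end Runge_Demo;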
I also did a few tests on some simple chemistry problems. ChatGPT looks like a bad but diligent student who has memorized the formulas but has no clue how to use them. Specifically, unit conversions (e.g. between mL, L, m3) are completely off-limits as of now.
One must remember that ChatGPT is nothing but ELIZA on steroids.
https://en.wikipedia.org/wiki/ELIZA
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
2. Performance Overhead:
The safety features inherent in Ada, such as range checking and bounds checking, introduce additional overhead that can affect performance. This overhead is crucial for safety-critical applications but may not be well handled by ChatGPT when generating code.
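For what it's worth, a minimal Ada sketch (assumptions mine, not from the quoted text) of the checks being described: the assignment below is index and range checked at run time, and pragma Suppress can remove the checks where the measured overhead matters, at the cost of losing the safety net.

with Ada.Text_IO;      use Ada.Text_IO;
with Ada.Command_Line; use Ada.Command_Line;

procedure Checks_Demo is
   subtype Percent  is Integer range 0 .. 100;
   type    Readings is array (1 .. 10) of Percent;

   --  pragma Suppress (Index_Check);   --  uncomment to trade safety for speed
   --  pragma Suppress (Range_Check);

   Data : Readings := (others => 0);
   I    : Integer  := 3;
begin
   if Argument_Count > 0 then
      I := Integer'Value (Argument (1));   --  untrusted input
   end if;
   Data (I) := 100;   --  index check: I must be in 1 .. 10
   Put_Line ("stored at" & Integer'Image (I));
exception
   when Constraint_Error =>
      Put_Line ("a range or index check caught the bad value");
end Checks_Demo;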