On 11-Feb-23 11:15 pm, Adrian Caspersz wrote:
> On 10/02/2023 23:13, Sylvia Else wrote:
>> On 11-Feb-23 9:45 am, Sylvia Else wrote:
>>> I was making an attempt to formalise algebra as a series of text
>>> A FAC is an optional MO followed by a VV"
>>
>> Oops - I got this wrong too. It was meant to be followed by optional
>> repeated terms. This makes its later determination that "2 M 3" is a
>> FAC rather strange.
>
> Is this training applicable to only the session between you and it, or
> is it going to reference this knowledge, while we are all huddled
> frightened in a corner trying to find ways to dismantle it without the
> assistance of rogue terminators, time travel and advanced weapons?
Yes, nothing seems to be preserved beyond the session. Just as well,
since otherwise getting a clean slate is difficult.
On the other hand, each new session seems to bring its own
idiosyncrasies, which can be somewhat exasperating.
I've pretty much hit a wall with this experiment. Even within the same session, getting ChatGPT to recognise that it's made a mistake does not
mean it won't make the same mistake again.
It's like trying to teach a dumb student something that is beyond them.
Even when you think they've finally got it, it turns out that they haven't.
And this is just with easy stuff. I have no hope that it would ever
learn to apply more complicated manipulations correctly.
Perhaps my whole approach is misconceived.
Sylvia.
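For what it's worth, the corrected FAC rule discussed above can be written down as a small recursive-descent check. FAC, MO and VV are Sylvia's own token names and their exact definitions aren't given in the thread, so the crude classifier below (digits are VVs, anything else is an MO) and the reading of "optional repeated terms" as repeated MO–VV pairs are both assumptions made purely for illustration.

```python
# Hypothetical sketch of the corrected rule:
#   FAC ::= [MO] VV { MO VV }
# Token categories are assumptions: "VV" for a value token such as "2",
# "MO" for an operator token such as "M".

def classify(tok):
    """Crude token classifier (an assumption): digits are VVs, else MOs."""
    return "VV" if tok.isdigit() else "MO"

def is_fac(tokens):
    i = 0
    # optional leading MO
    if i < len(tokens) and classify(tokens[i]) == "MO":
        i += 1
    # mandatory VV
    if i >= len(tokens) or classify(tokens[i]) != "VV":
        return False
    i += 1
    # optional repeated MO-VV terms
    while (i + 1 < len(tokens)
           and classify(tokens[i]) == "MO"
           and classify(tokens[i + 1]) == "VV"):
        i += 2
    return i == len(tokens)

print(is_fac("2 M 3".split()))  # True  - under this reading "2 M 3" is a FAC
print(is_fac("2 3".split()))    # False - two VVs with no operator between
```

Under this (assumed) reading, ChatGPT's determination that "2 M 3" is a FAC is exactly what the corrected rule predicts, which may be why it only looked strange against the uncorrected version.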
On further research[*] I think that last comment is correct. One is not
actually teaching it anything during one of these sessions. One is
merely adding to the text that it will use as input to its neural
network when determining the next word to output. I had wondered why its
output arrives as a slowish sequence of words, separated by noticeable
pauses. I believe this is because during those pauses it is computing
the most probable next word given all the previous words in the session
(both the user's inputs and the AI's previous outputs).

So it can sometimes appear to be following instructions, but it's not
really doing that, and the more complicated the instruction, the less
likely the answer is to be correct.
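That word-at-a-time mechanism can be sketched in a few lines. The hard-coded bigram table below is a stand-in for the neural network (an assumption for illustration only); the loop is the real point: each output word is chosen by looking at everything emitted so far, then appended to the context before the next word is chosen.

```python
# Toy sketch of autoregressive generation. The "model" is a bigram table
# (a stand-in for a neural network, assumed for illustration) mapping a
# word to the probabilities of its possible successors.

BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(context):
    """Pick the most probable continuation of the last word (greedy decoding)."""
    options = BIGRAMS.get(context[-1], {})
    if not options:
        return None  # no known continuation: stop generating
    return max(options, key=options.get)

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok is None:
            break
        tokens.append(tok)  # the new word becomes part of the context
    return " ".join(tokens)

print(generate("the"))  # the cat sat down
```

Nothing here is "learned" during the run: correcting a mistake only adds more words to the context, nudging which continuation looks most probable next time, which fits the observation that the same mistake can recur within a session.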