On 5/31/2023 9:50 AM, peps...@gmail.com wrote:
> ChatGPT doesn't seem to have any "intelligence" about backgammon at
> all and doesn't even see that its own answers make no sense.

There's no reason to expect ChatGPT to reason accurately or get its
facts right. It's a large language model, so what it's designed to
do is speak *fluently*, not do the usual things we've come to expect
from computers, such as perform correct logical calculations or act
as a search engine.
Understood. Ironically, the way it sounds perfectly coherent while getting
everything completely wrong, from a backgammon standpoint, is actually far
more impressive than if it had simply stated the rule correctly. In the
quote on this thread, ChatGPT is very fluent indeed!
If it gave the 8/9/12 rule correctly, it would just be (or at least seem
like) a very minor tweak on current search engines.
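For reference, the 8/9/12 rule alluded to above is usually stated along these lines: in a pure race (no contact), the leader should offer a double when ahead by about 8% of his own pip count, redouble at about 9%, and the trailer should pass at about 12%. A minimal sketch of that rule of thumb; the function name and the exact threshold handling are my own framing, not something from this thread:

```python
def race_cube_action(my_pips, opp_pips):
    """Sketch of the 8/9/12 rule of thumb for pure-race cube decisions.

    Thresholds are the lead expressed as a percentage of the
    leader's own pip count: ~8% double, ~9% redouble, ~12% pass.
    """
    lead = opp_pips - my_pips            # positive when we are ahead
    pct = 100.0 * lead / my_pips         # lead as % of our own count
    actions = []
    if pct >= 8:
        actions.append("double")
    if pct >= 9:
        actions.append("redouble")
    actions.append("opponent passes" if pct >= 12 else "opponent takes")
    return actions

# Example: leading 100 pips to 110 (a 10% lead) suggests
# double/redouble, and the opponent still has a take.
print(race_cube_action(100, 110))
```

This is only a heuristic for non-contact races; real cube decisions also depend on match score and position type.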