On Thu, 1 Jun 2023 14:33:13 +0100, Sn!pe wrote:
vallor <vallor@vallor.earth> wrote:
On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:
Lawyer admits using AI for research after citing 'bogus' cases from
ChatGPT.
<https://tinyurl.com/yntupbe4>
Poor example of "don't trust, do verify".
Hence my earlier (unchallenged) point that ChatGPT
does not provide citations.
It does provide citations sometimes. And sometimes, those citations
are bogus, which is what happened to our hero the cyberlawyer...
vallor <vallor@vallor.earth> wrote:
[...]
It does provide citations sometimes. And sometimes, those citations
are bogus, which is what happened to our hero the cyberlawyer...
You'd think that an educated person like a lawyer would check.
I wonder how many naïve people would bother, rather than just accept
the results as facts. I hear that some people even believe what they
read in the papers or see on TV (strange but true).
[...]
So is that the last word on these AI shells? I still
use them, even though the novelty has worn off a bit. (I'd
still be much more interested if they were "answer machines"
instead of "say what sounds good" machines. :)
I don't trust them, but do verify them -- and I recommend
others do the same.
vallor wrote:
So is that the last word on these AI shells? I still
use them, even though the novelty has worn off a bit. (I'd
still be much more interested if they were "answer machines"
instead of "say what sounds good" machines. :)
I don't trust them, but do verify them -- and I recommend
others do the same.
I don't use them, though I do notice the people who test them out or
believe the answers blindly, and also those who used to be paid to
develop them now turning against them ... maybe it'll take a
multi-million-dollar lawsuit to make them go away for a decade?
vallor wrote:
[...]
I don't use them, though I do notice the people who test them out or
believe the answers blindly, and also those who used to be paid to
develop them now turning against them ... maybe it'll take a
multi-million-dollar lawsuit to make them go away for a decade?
I think the goal for some is a computer like HAL or the computer on the
USS Enterprise (Picard).
| [...] oversight. Integrating domain expertise into chatbot models
| can reduce the likelihood of producing bogus citations.