• Re: Chatbots produce bogus citations

  • From Sn!pe@21:1/5 to vallor on Thu Jun 1 20:45:36 2023
    vallor <vallor@vallor.earth> wrote:

    On Thu, 1 Jun 2023 14:33:13 +0100, Sn!pe wrote:

    vallor <vallor@vallor.earth> wrote:

    On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:

    Lawyer admits using AI for research after citing 'bogus' cases from
    ChatGPT.

    <https://tinyurl.com/yntupbe4>

    Poor example of "don't trust, do verify".


    Hence my earlier (unchallenged) point that ChatGPT
    does not provide citations.


    It does provide citations sometimes. And sometimes, those citations
    are bogus, which is what happened to our hero the cyberlawyer...


    You'd think that an educated person like a lawyer would check.
    I wonder how many naïve people would bother, rather than just
    accept the results as facts. I hear that some people even believe
    what they read in the papers or see on TV (strange but true).

    [...]

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon just is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to All on Wed Jun 7 23:32:21 2023
    On Thu, 1 Jun 2023 20:45:36 +0100, Sn!pe wrote:

    vallor <vallor@vallor.earth> wrote:

    On Thu, 1 Jun 2023 14:33:13 +0100, Sn!pe wrote:

    vallor <vallor@vallor.earth> wrote:

    On Sun, 28 May 2023 06:17:15 +0100, Sn!pe wrote:

    Lawyer admits using AI for research after citing 'bogus' cases
    from ChatGPT.

    <https://tinyurl.com/yntupbe4>

    Poor example of "don't trust, do verify".


    Hence my earlier (unchallenged) point that ChatGPT does not provide
    citations.


    It does provide citations sometimes. And sometimes, those citations
    are bogus, which is what happened to our hero the cyberlawyer...


    You'd think that an educated person like a lawyer would check.
    I wonder how many naïve people would bother, rather than just
    accept the results as facts. I hear that some people even believe
    what they read in the papers or see on TV (strange but true).

    [...]


    So is that the last word on these AI shells? I still
    use them, even though the novelty has worn off a bit. (I'd
    still be much more interested if they were "answer machines"
    instead of "say what sounds good" machines. :)

    I don't trust them, but do verify them -- and I recommend
    others do the same.

    (Still, is it not amusing to see what it comes up with on its own?)
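    [The "don't trust, do verify" advice above can be made mechanical:
    never pass a chatbot-supplied citation along without checking it
    against an independent source. A minimal sketch in Python, where
    TRUSTED_INDEX is a hypothetical stand-in for a real lookup such as
    a court records database:]

    ```python
    # "Don't trust, do verify": accept a chatbot citation only if it
    # can be found in an independent, trusted source. TRUSTED_INDEX
    # is a hypothetical stand-in for a real database query.
    TRUSTED_INDEX = {
        "Marbury v. Madison",
        "Brown v. Board of Education",
    }

    def verify_citations(citations):
        """Split chatbot-supplied citations into verified and suspect."""
        verified = [c for c in citations if c in TRUSTED_INDEX]
        suspect = [c for c in citations if c not in TRUSTED_INDEX]
        return verified, suspect

    ok, bogus = verify_citations([
        "Marbury v. Madison",
        "Varghese v. China Southern Airlines",  # fabricated case from the story
    ])
    ```

    [Anything landing in the suspect list goes back for human review;
    the point is that verification happens outside the chatbot.]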

    --
    -v

  • From Andy Burns@21:1/5 to vallor on Thu Jun 8 06:07:17 2023
    vallor wrote:

    So is that the last word on these AI shells? I still
    use them, even though the novelty has worn off a bit. (I'd
    still be much more interested if they were "answer machines"
    instead of "say what sounds good" machines. :)

    I don't trust them, but do verify them -- and I recommend
    others do the same.

    I don't use them, though I do notice people who test them out or
    believe the answers blindly, and also those who used to be paid to
    develop them now turning against them ... maybe it'll take a
    multi-million-dollar lawsuit to make them go away for a decade?

  • From Sn!pe@21:1/5 to Andy Burns on Thu Jun 8 10:37:39 2023
    Andy Burns <usenet@andyburns.uk> wrote:

    vallor wrote:

    So is that the last word on these AI shells? I still
    use them, even though the novelty has worn off a bit. (I'd
    still be much more interested if they were "answer machines"
    instead of "say what sounds good" machines. :)

    I don't trust them, but do verify them -- and I recommend
    others do the same.

    I don't use them, though I do notice people who test them out or
    believe the answers blindly, and also those who used to be paid to
    develop them now turning against them ... maybe it'll take a
    multi-million-dollar lawsuit to make them go away for a decade?

    It seems to me that they are potentially very powerful tools for
    misinformation. There was a story in the Times recently about an
    AI-generated deep-fake video. Foist some of that sort of output
    onto the uncritical masses and see what you get...

    (paywall)
    <https://www.thetimes.co.uk/article/ai-deepfake-avatar-donald-trump-htlxp77jl>
    (paywall defeated)
    <https://12ft.io/proxy?q=https%3A%2F%2Fwww.thetimes.co.uk%2Farticle%2Fai-deepfake-avatar-donald-trump-htlxp77jl>
    (tinyurl)
    <https://tinyurl.com/masshtx9>

    --
    ^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

    My pet rock Gordon just is.

  • From Nic@21:1/5 to Andy Burns on Thu Jun 8 10:46:49 2023
    On 6/8/23 1:07 AM, Andy Burns wrote:
    vallor wrote:

    So is that the last word on these AI shells?  I still
    use them, even though the novelty has worn off a bit.  (I'd
    still be much more interested if they were "answer machines"
    instead of "say what sounds good" machines. :)

    I don't trust them, but do verify them -- and I recommend
    others do the same.

    I don't use them, though I do notice those people who test them out,
    or believe in the answers blindly, also those who used to be paid to
    develop them turning against them ... maybe it'll take a multi-million
    dollar lawsuit to make them go away for a decade?

    I think the goal for some is a computer like HAL or the computer on
    the USS Enterprise (Picard): a computer that can draw on information
    from many disciplines to form a reasonable answer.

  • From Stefan Ram@21:1/5 to Nic on Thu Jun 8 16:54:16 2023
    Nic <Nic@none.net> writes:
    I think the goal for some is a computer like HAL or the computer on the
    USS Enterprise (Picard).

    Me: In which regards do current AI chatbots still not meet
    expectations? Chatbot (all chatbot answers were shortened by me):

    |AI chatbots still face limited understanding, lack of
    |creativity and slow learning.

    What about factual accuracy and logical deduction?

    |Achieving high levels of precision and consistency remains a
    |challenge for current AI chatbots.

    Do chatbots produce bogus citations?

    |It is possible for chatbots to generate incorrect or
    |misleading citations, especially when they rely solely on
    |machine learning techniques without proper validation or
    |oversight. Integrating domain expertise into chatbot models
    |can reduce the likelihood of producing bogus citations.

    What are the differences in capabilities between today's
    chatbots and HAL 9000?

    |Modern chatbots still fall short of matching HAL's level of
    |cognitive abilities.

    Me: Open the pod bay door. Chatbot:

    |I am sorry, but as an AI assistant, I don't have access to
    |physical doors or any control over them.

  • From Stefan Ram@21:1/5 to Stefan Ram on Fri Jun 9 12:59:18 2023
    ram@zedat.fu-berlin.de (Stefan Ram) writes:
    |oversight. Integrating domain expertise into chatbot models
    |can reduce the likelihood of producing bogus citations.

    And today it is reported that Bard indeed uses additional means,
    for example to calculate the results of an arithmetic task. But it
    does not always correctly detect where this is required.
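    [The tool-use idea described above, delegating arithmetic to an
    exact calculator instead of letting the model guess, can be
    sketched as a toy router. This is a minimal sketch, not how Bard
    actually works: detection by regex is deliberately naive, which
    mirrors the report that the system does not always detect
    correctly where a calculation is required.]

    ```python
    import ast
    import operator
    import re

    # Toy router: a pure-arithmetic prompt goes to an exact evaluator,
    # everything else would go to the language model.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def _eval(node):
        """Safely evaluate a parsed arithmetic expression tree."""
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("not a simple arithmetic expression")

    def route(prompt):
        """Return ('calculator', result) for arithmetic, ('model', prompt) otherwise."""
        if re.fullmatch(r"[\d\s+\-*/().]+", prompt.strip()):
            return ("calculator", _eval(ast.parse(prompt.strip(), mode="eval")))
        return ("model", prompt)
    ```

    [So route("12 * (3 + 4)") is answered exactly by the calculator,
    while route("Open the pod bay door") falls through to the model.]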
