• AI: If this made me laugh, should I feel guilty?

    From risky biz@21:1/5 to All on Mon Dec 26 05:05:37 2022
    ' Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.'
    https://en.wikipedia.org/wiki/GPT-3#Applications

  • From BTSinAustin@21:1/5 to risky biz on Mon Dec 26 07:53:45 2022
    On Monday, December 26, 2022 at 8:05:41 AM UTC-5, risky biz wrote:
    > ' Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.'
    > https://en.wikipedia.org/wiki/GPT-3#Applications

    Was it French Canadian?

  • From risky biz@21:1/5 to BTSinAustin on Mon Dec 26 12:35:12 2022
    On Monday, December 26, 2022 at 7:53:49 AM UTC-8, BTSinAustin wrote:
    > On Monday, December 26, 2022 at 8:05:41 AM UTC-5, risky biz wrote:
    > > ' Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.'
    > > https://en.wikipedia.org/wiki/GPT-3#Applications
    >
    > Was it French Canadian?


    Montreal, maybe.
