• Be very, very afraid...

    From Don Y@21:1/5 to All on Sat Oct 26 17:43:26 2024
    No, not that AI is going to take your job ("render you
    redundant") but, rather, that it is going to be relied upon
    for information that is provably incorrect:

    <https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>

    it seems the most expeditious way to get these sorts of problems
    fixed is NOT to regulate how it can be used but, rather, to
    hold firms using it financially (criminally?) accountable
    for its follies...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jan Panteltje@21:1/5 to blockedofcourse@foo.invalid on Sun Oct 27 05:53:10 2024
    On a sunny day (Sat, 26 Oct 2024 17:43:26 -0700) it happened Don Y <blockedofcourse@foo.invalid> wrote in <vfk2bp$3ueps$1@dont-email.me>:

    No, not that AI is going to take your job ("render you
    redundant") but, rather, that it is going to be relied upon
    for information that is provably incorrect:

    <https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>

    it seems the most expeditious way to get these sorts of problems
    fixed is NOT to regulate how it can be used but, rather, to
    hold firms using it financially (criminally?) accountable
    for its follies...

    Sure, like doctors that make mistakes:
    https://qualitysafety.bmj.com/content/33/2/109

Does their using AI reduce errors?
    If so they could be accountable for not using it...
    Like a second opinion, always required?

    It is complicated..
    hacked;-)
    Power outage...
    etc etc

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Cursitor Doom@21:1/5 to Jan Panteltje on Sun Oct 27 08:34:49 2024
    On Sun, 27 Oct 2024 05:53:10 GMT, Jan Panteltje wrote:

    On a sunny day (Sat, 26 Oct 2024 17:43:26 -0700) it happened Don Y <blockedofcourse@foo.invalid> wrote in <vfk2bp$3ueps$1@dont-email.me>:

No, not that AI is going to take your job ("render you redundant") but,
rather, that it is going to be relied upon for information that is
provably incorrect:

<https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>

    it seems the most expeditious way to get these sorts of problems fixed
    is NOT to regulate how it can be used but, rather, to hold firms using
    it financially (criminally?) accountable for its follies...

Sure, like doctors that make mistakes:
    https://qualitysafety.bmj.com/content/33/2/109

Does their using AI reduce errors?
    If so they could be accountable for not using it...
    Like a second opinion, always required?

    It is complicated..
    hacked;-)
    Power outage...
    etc etc

    Anyone concerned about medical confidentiality should *never* consent to
    having their records held in digital form anyway. Databases holding such
    info are prime targets for cyberattacks, as the UK's NHS is quickly
    finding out.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jan Panteltje@21:1/5 to Doom on Sun Oct 27 09:34:20 2024
    On a sunny day (Sun, 27 Oct 2024 08:34:49 -0000 (UTC)) it happened Cursitor Doom <cd999666@notformail.com> wrote in <vfktv9$7486$1@dont-email.me>:

    On Sun, 27 Oct 2024 05:53:10 GMT, Jan Panteltje wrote:

    On a sunny day (Sat, 26 Oct 2024 17:43:26 -0700) it happened Don Y
    <blockedofcourse@foo.invalid> wrote in <vfk2bp$3ueps$1@dont-email.me>:

No, not that AI is going to take your job ("render you redundant") but,
rather, that it is going to be relied upon for information that is
provably incorrect:

<https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>

    it seems the most expeditious way to get these sorts of problems fixed
    is NOT to regulate how it can be used but, rather, to hold firms using
    it financially (criminally?) accountable for its follies...

Sure, like doctors that make mistakes:
    https://qualitysafety.bmj.com/content/33/2/109

Does their using AI reduce errors?
    If so they could be accountable for not using it...
    Like a second opinion, always required?

    It is complicated..
    hacked;-)
    Power outage...
    etc etc

Anyone concerned about medical confidentiality should *never* consent to
having their records held in digital form anyway. Databases holding such
    info are prime targets for cyberattacks, as the UK's NHS is quickly
    finding out.

My medical records are held in digital form, and when I moved house I had to give consent for the records
to be made available to the local doctor here.
But those digital records only go back a few years... to when they started with it.
Not much in there.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lasse Langwadt@21:1/5 to Don Y on Mon Oct 28 23:10:38 2024
    On 10/27/24 02:43, Don Y wrote:
    No, not that AI is going to take your job ("render you
    redundant") but, rather, that it is going to be relied upon
    for information that is provably incorrect:

    <https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>

    it seems the most expeditious way to get these sorts of problems
    fixed is NOT to regulate how it can be used but, rather, to
    hold firms using it financially (criminally?) accountable
    for its follies...


    https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Lasse Langwadt on Mon Oct 28 15:32:21 2024
    On 10/28/2024 3:10 PM, Lasse Langwadt wrote:
    On 10/27/24 02:43, Don Y wrote:
    No, not that AI is going to take your job ("render you
    redundant") but, rather, that it is going to be relied upon
    for information that is provably incorrect:

    <https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>

    it seems the most expeditious way to get these sorts of problems
    fixed is NOT to regulate how it can be used but, rather, to
    hold firms using it financially (criminally?) accountable
    for its follies...

    https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling/

    Imagine an AI interpreting a CT scan and coming to a conclusion
    on which a human doctor would otherwise have "hedged".

    If the doctor *overrules* the AI and the pt suffers some "loss",
    he's screwed.

    If he *defers* to the AI and the pt suffers a loss, same outcome.

As to the originally cited issue, whose responsibility is it to
    ensure the accuracy of the transcription of a pt visit? Prior
    to AI, the pt would just see a *summary* of his visit -- no specific
    notes from the provider.

    Will the pt have to vouch for the transcription? Will the provider?

    It's a standing joke, here, that providers always want you to
    show up 15 minutes early -- "for paperwork":
    "Review these documents and make any corrections that are necessary..." Yet, the corrections are still not incorporated for your NEXT visit.

    Ah, here's a provider that lets you review the documents
    ELECTRONICALLY and make corrections to them "live"! Shirley,
    these updates will find their way into the record...?

    Nope.

    "Look, I have told you EACH TIME I VISITED about my allergies
    and contraindicated medications. Yet, each visit shows that
section as 'blank'... is there a problem with YOUR system?
    Perhaps I should ask YOU to initial my corrections so I have
    more conclusive proof if you screw the pooch?"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)