No, not that AI is going to take your job ("render you
redundant") but, rather, that it is going to be relied upon
for information that is provably incorrect:
<https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>
it seems the most expeditious way to get these sorts of problems
fixed is NOT to regulate how it can be used but, rather, to
hold firms using it financially (criminally?) accountable
for its follies...
On a sunny day (Sat, 26 Oct 2024 17:43:26 -0700) it happened Don Y <blockedofcourse@foo.invalid> wrote in <vfk2bp$3ueps$1@dont-email.me>:
>No, not that AI is going to take your job ("render you redundant") but,
>rather, that it is going to be relied upon for information that is
>provably incorrect:
><https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>
>it seems the most expeditious way to get these sorts of problems fixed
>is NOT to regulate how it can be used but, rather, to hold firms using
>it financially (criminally?) accountable for its follies...
Sure, like doctors that make mistakes:
https://qualitysafety.bmj.com/content/33/2/109
Does their using AI reduce errors?
If so, they could be held accountable for not using it...
Like a second opinion, always required?
It is complicated...
Hacked ;-)
Power outage...
etc., etc.
On Sun, 27 Oct 2024 05:53:10 GMT, Jan Panteltje wrote:
>On a sunny day (Sat, 26 Oct 2024 17:43:26 -0700) it happened Don Y
><blockedofcourse@foo.invalid> wrote in <vfk2bp$3ueps$1@dont-email.me>:
>
>>No, not that AI is going to take your job ("render you redundant") but,
>>rather, that it is going to be relied upon for information that is
>>provably incorrect:
>><https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>
>>it seems the most expeditious way to get these sorts of problems fixed
>>is NOT to regulate how it can be used but, rather, to hold firms using
>>it financially (criminally?) accountable for its follies...
>
>Sure, like doctors that make mistakes:
>https://qualitysafety.bmj.com/content/33/2/109
>Does their using AI reduce errors?
>If so, they could be held accountable for not using it...
>Like a second opinion, always required?
>It is complicated...
>Hacked ;-)
>Power outage...
>etc., etc.
Anyone concerned about medical confidentiality should *never* consent to
having their records held in digital form anyway. Databases holding such
info are prime targets for cyberattacks, as the UK's NHS is quickly
finding out.
On 10/27/24 02:43, Don Y wrote:

>No, not that AI is going to take your job ("render you
>redundant") but, rather, that it is going to be relied upon
>for information that is provably incorrect:
><https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14>
>it seems the most expeditious way to get these sorts of problems
>fixed is NOT to regulate how it can be used but, rather, to
>hold firms using it financially (criminally?) accountable
>for its follies...
https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling/