• Re: Is it AI or not

    From Peter W.@21:1/5 to All on Thu Aug 10 12:21:34 2023
    What say you?

    Sitting on my fingers....

    Peter Wieck
    Melrose Park, PA

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From ohger1s@gmail.com@21:1/5 to micky on Thu Aug 10 12:57:58 2023
    On Thursday, August 10, 2023 at 2:43:50 PM UTC-4, micky wrote:
    No one in popular news talked about AI 6 months ago and all of a sudden
    it's everywhere.

    The most recent discussion I heard was about "using AI to read X-rays
    and other medical imaging".

    They have computer programs that will "look" at, examine, x-rays etc.
    and find medical problems, sometimes ones that the radiologist misses.

    So it's good if both look at them.

    But is it AI? Seems to me it's one slightly complicated algorithm and
    comes nowhere close to AI. The Turing test, for example.

    And that lots of things they are calling AI these days are just slightly
    or moderately complicated computer programs, black boxes maybe, but not
    AI.

    What say you?

    I think it's a matter of definition. Unless and until an "AI" becomes self-aware, it's not AI.

  • From Jeroni Paul@21:1/5 to micky on Fri Aug 11 04:20:17 2023
    micky wrote:
    No one in popular news talked about AI 6 months ago and all of a sudden
    it's everywhere.

    The cloud has been the trend for some years and has no more juice left, so they had to take up something else.

  • From Allodoxaphobia@21:1/5 to Jeroni Paul on Fri Aug 11 12:51:11 2023
    On Fri, 11 Aug 2023 04:20:17 -0700 (PDT), Jeroni Paul wrote:
    micky wrote:
    No one in popular news talked about AI 6 months ago and all of a sudden
    it's everywhere.

    The cloud has been the trend for some years and has no more juice left,
    so they had to take up something else.

    I think you'll notice that anything new and shiny,
    even if it only employs an 8-bit microprocessor,
    will be granted the cloak of artificial intelligence.

  • From ohger1s@gmail.com@21:1/5 to Peter W. on Fri Aug 11 07:02:00 2023
    On Friday, August 11, 2023 at 9:23:59 AM UTC-4, Peter W. wrote:


    A natural blonde dyes her hair a dark shade of brunette - and then says to her husband: "Look! Artificial intelligence!"


    Snort.. good one Peter.

    If there were true AI, there would have been a virtual rimshot accompanying that post.

  • From Peter W.@21:1/5 to All on Fri Aug 11 06:23:56 2023
    MPffffff..... I have sat on my fingers long enough.

    Apologies in advance to all blondes everywhere, including my wife!

    A natural blonde dyes her hair a dark shade of brunette - and then says to her husband: "Look! Artificial intelligence!"

    That is about as seriously as I take AI as it affects my daily life. Sure, reading X-rays, doing certain types of surgery, and many other repetitive but exacting tasks are well suited to a process that does not get tired, does not get blurry vision, does not overlook things, and can even learn, as it repeats those tasks, ways to do them with fewer steps. But that is hardly "intelligence": there is no self-awareness, just increasingly complex responses to a defined problem.

    But a robo-caller, as much as it/he/she may try to be 'real', is so obviously artificial as to make me very sad for those who might be fooled. And robotic phone trees - such as many companies use to avoid having humans on the payroll - are the furthest possible thing from intelligent.

    Peter Wieck
    Melrose Park, PA

  • From steve1001908@outlook.com@21:1/5 to All on Fri Aug 11 16:40:38 2023
    On Thu, 10 Aug 2023 14:43:42 -0400, micky <NONONOmisc07@fmguy.com>
    wrote:

    No one in popular news talked about AI 6 months ago and all of a sudden
    it's everywhere.

    The most recent discussion I heard was about "using AI to read X-rays
    and other medical imaging".

    They have computer programs that will "look" at, examine, x-rays etc.
    and find medical problems, sometimes ones that the radiologist misses.

    So it's good if both look at them.

    But is it AI? Seems to me it's one slightly complicated algorithm and
    comes nowhere close to AI. The Turing test, for example.

    And that lots of things they are calling AI these days are just slightly
    or moderately complicated computer programs, black boxes maybe, but not
    AI.

    What say you?

    Designing machines to look at images and find specific items
    like unexpected growths is quite simple. That's not AI. Simple
    three-layer neural networks can already learn to do that. When machines
    do things they are not supposed to do, that may be a form of AI. When my
    washing machine starts making meals for me, that could be described as
    AI!
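    A sketch of the kind of three-layer network meant here, in plain numpy. XOR stands in for the imaging task, and the layer sizes, learning rate, and iteration count are arbitrary choices for this illustration, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR examples stand in for "images"; a real x-ray screener
# is the same idea with far more inputs and layers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# input -> hidden -> output: the "three layers"
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)         # hidden activations
    out = sigmoid(h @ W2 + b2)       # network output
    d_out = out - y                  # cross-entropy gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print((out.ravel() > 0.5).astype(int))
```

    After training, the thresholded outputs reproduce the XOR targets. That it works is no mystery at this scale; the "black box" complaint in the thread is about what happens when the same recipe has billions of weights.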

  • From John@21:1/5 to tracy@invalid.com on Fri Aug 11 16:17:09 2023
    tracy@invalid.com writes:

    On 11 Aug 2023 01:50:48 GMT, rbowman <bowman@montana.com> wrote:

    On Thu, 10 Aug 2023 14:55:10 -0500, tracy wrote:


    Personally, I'm sick of ths AI crap which seems to exist only in the
    minds of the tech idiots. When it devolves into the lives of us common
    dummies, I'll worry about it then.

    Already there:

    https://www.prnewswire.com/news-releases/ai-powered-litterbox-system-offers-new-standard-of-care-for-cat-owners-301632491.html

    "Using artificial intelligence developed by a team of Purina pet and data experts, the Petivity Smart Litterbox System detects meaningful changes that indicate health conditions that may require a veterinarian's attention or diagnosis. The monitor, which users are instructed to place under each litterbox in the household, gathers precise data on each cat's weight and important litterbox habits to help owners be proactive about their pet's health."

    And why should I be worried about AI for litterboxes?

    Let me know when it starts breaking TrueCrypt or PGP encryption and devastating our security more than Giggle.com and Redmond are doing.

    Until then - shut the *F* UP about "realistic" AI used for something
    outside of bagel baking and litterboxes.

    Unless there's an as-yet-unknown flaw in the design of RSA or other
    public-key algorithms, there's no way in which an AI of any sort could
    "break" encryption -- artificial intelligence cannot beat mathematics. I
    guess some hypothetical super-AI could design a quantum computer which
    could break non-quantum-resistant algorithms, but that's a pretty far
    cry from the gussied-up chatbots most people are talking about.

    Anyway I'm pretty sure you've contributed at least 50% of the traffic on
    this topic just by whining that people are talking about something you
    don't care about. Learn to use your client's scoring system, killfile
    the conversation, then take your own advice and shut the fuck up.

    john

  • From bruce bowser@21:1/5 to micky on Sun Aug 13 07:37:53 2023
    On Thursday, August 10, 2023 at 2:43:50 PM UTC-4, micky wrote:
    No one in popular news talked about AI 6 months ago and all of a sudden
    it's everywhere.

    The most recent discussion I heard was about "using AI to read X-rays
    and other medical imaging".

    They have computer programs that will "look" at, examine, x-rays etc.
    and find medical problems, sometimes ones that the radiologist misses.

    So it's good if both look at them.

    But is it AI? Seems to me it's one slightly complicated algorithm and
    comes nowhere close to AI. The Turing test, for example.

    And that lots of things they are calling AI these days are just slightly
    or moderately complicated computer programs, black boxes maybe, but not
    AI.

    What say you?

    Being a person with a poli sci major and Army ROTC background in college (along with union electrical construction school), my understanding of AI, however, came from talk radio (both politically leftist and rightist).

    I heard rightist Hugh Hewitt say something about teaching law school during the day, and how someone put a reading-and-writing version of AI in front of the California state bar exam for lawyer certification, and it did much better than several of its human counterparts. I guess that's around the time when AI became all the talk.

  • From bruce bowser@21:1/5 to Chris on Mon Aug 14 04:45:46 2023
    On Sunday, August 13, 2023 at 5:39:38 PM UTC-4, Chris wrote:
    Paul <nos...@needed.invalid> wrote:
    On 8/10/2023 2:43 PM, micky wrote:
    No one in popular news talked about AI 6 months ago and all of a sudden
    it's everywhere.

    The most recent discussion I heard was about "using AI to read X-rays
    and other medical imaging".

    They have computer programs that will "look" at, examine, x-rays etc.
    and find medical problems, sometimes ones that the radiologist misses.

    So it's good if both look at them.

    But is it AI? Seems to me it's one slightly complicated algorithm and
    comes nowhere close to AI. The Turing test, for example.

    And that lots of things they are calling AI these days are just slightly
    or moderately complicated computer programs, black boxes maybe, but not
    AI.

    What say you?


    A radiologist assistant is not a Large Language Model.

    I would expect that, to some extent, image analysis would be a
    "module" on an LLM, and not part of the main bit.

    Bare minimum, it's a neural network, trained on images,
    one at a time, that slosh around and train the neurons.

    For example, something like YOLO_5 (You Only Look Once), can
    be trained to identify animals in photos. It draws a box around
    the presumed animal and names it (or whatever). That uses a lot
    less hardware than a Large Language Model, and less storage.
    The article had a picture with a bear in it, and indeed, the
    bear had a square drawn around it.
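    The "draws a box around the presumed animal" step hides a classic piece of machinery: the detector proposes many overlapping candidate boxes with confidence scores, and a non-maximum suppression pass keeps the best non-overlapping ones. A minimal Python sketch of that step, with made-up boxes and scores rather than real YOLO output:

```python
def iou(a, b):
    """Overlap ratio (intersection over union) of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

# Three candidates for one "bear": two near-duplicates and one far away.
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (200, 200, 240, 240)]
scores = [0.9, 0.75, 0.6]
print(nms(boxes, scores))   # -> [0, 2]; the duplicate of box 0 is suppressed
```

    The real thing adds a learned network in front to propose the boxes and class labels; this is only the tail end of the pipeline.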

    But as for whether the "quality" is there, that is another
    issue entirely. In my opinion, no radiologist would ever trust
    something as sketchy as YOLO. Radiologists are very particular
    about their jobs, as they hate getting sued.

    It's a sad reflection of priorities where the primary concern is about
    being sued rather than making sure patients get the best treatment.

    In the UK and Europe, the plaintiff must repay the expenses of those involved if their side loses.

  • From three_jeeps@21:1/5 to micky on Thu Aug 17 13:52:35 2023
    On Thursday, August 10, 2023 at 2:43:50 PM UTC-4, micky wrote:
    No one in popular news talked about AI 6 months ago and all of a sudden
    it's everywhere.

    The most recent discussion I heard was about "using AI to read X-rays
    and other medical imaging".

    They have computer programs that will "look" at, examine, x-rays etc.
    and find medical problems, sometimes ones that the radiologist misses.

    So it's good if both look at them.

    But is it AI? Seems to me it's one slightly complicated algorithm and
    comes nowhere close to AI. The Turing test, for example.

    And that lots of things they are calling AI these days are just slightly
    or moderately complicated computer programs, black boxes maybe, but not
    AI.

    What say you?
    I did a fair amount of 'AI' research in the 80s and early 90s. The amount of hype was amazing and it was all about 'branding' IMHO... a new science-fiction technology made real. I bundled up for the first AI winter.... I've moved to a different climate where I don't have to bundle up for the second AI winter.....

    The really hard aspects of AI are knowledge discovery and composition, which have made some progress but nowhere near sensational. Ask a computer program to design and build an automatic transmission, and then to figure out why it doesn't work as well when a different ATF is used..... We are *really* far away from that.

  • From three_jeeps@21:1/5 to rbowman on Thu Aug 17 13:44:28 2023
    On Thursday, August 10, 2023 at 10:33:57 PM UTC-4, rbowman wrote:
    On Thu, 10 Aug 2023 21:08:29 GMT, Scott Lurndal wrote:

    The term "AI" has been misused by the media and most non-computer
    scientists. The current crop of "AI" tools (e.g. ChatGPT) are not
    artificial intelligence, but rather simple statistical algorithms based
    on a huge volume of pre-processed data.
    Not quite...

    https://blog.dataiku.com/large-language-model-chatgpt

    I played around with neural networks in the '80s. It was going to be the Next Big Thing. The approach was an attempt to quantify the biological neuron model and the relationship of axons and dendrites.

    https://en.wikipedia.org/wiki/Biological_neuron_model

    There was one major problem: the computing power wasn't there. Fast
    forward 40 years and the availability of GPUs. Google calls their proprietary units TPUs, or tensor processing units, which is more
    accurate. That's the linear algebra tensor, not the physics tensor. While they are certainly related the terminology changes a bit between disciplines.

    These aren't quite the GPUs in your gaming PC:

    https://beincrypto.com/chatgpt-spurs-nvidia-deep-learning-gpu-demand-post-crypto-mining-decline/

    For training a GPT you need a lot of them -- and a lot of power. They make the crypto miners look good.

    The dirty little secret is that after you've trained your model with the
    training dataset, validated it with the validation data, and tweaked the
    parameters for minimal error, you still don't really know what's going on
    under the hood.

    https://towardsdatascience.com/text-generation-with-markov-chains-an-introduction-to-using-markovify-742e6680dc33

    Markov chains are relatively simple.
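    How simple is easy to show: a word-level Markov chain is just a table mapping each word to the words that followed it in the training text, sampled repeatedly. A minimal sketch (the toy corpus and the order-1 model are my own choices, not from markovify):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the rat".split()

# Build the transition table: word -> list of observed successors.
table = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    table[cur].append(nxt)

def generate(start, length, seed=0):
    """Walk the chain: repeatedly sample a successor of the last word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        successors = table.get(words[-1])
        if not successors:          # dead end: no observed successor
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(generate("the", 8))
```

    Compare that to the billions of tuned weights in a GPT; the contrast is exactly rbowman's point about how far the "statistical text generator" framing stretches.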

    I developed ANN classifiers in the mid-to-late 80s for explosive recognition in suitcases and as hardware plant observers for control loops. The size of the training set is one of the issues... but the web has a gazillion images from which training sets are harvested. I know of a few local self-driving car companies that do exactly that.

    Yes, verification of the ANN was and still is a big issue.
    J

  • From whit3rd@21:1/5 to Paul on Thu Aug 17 20:46:47 2023
    On Friday, August 11, 2023 at 2:42:17 AM UTC-7, Paul wrote:

    Every technology needs to be classified.

    ChatGPT is about as useful as OCR. OCR is about 99% accurate.
    You've just run 200 pages through the scanner. Now what...

    Voluminous output, that must be scrupulously checked.

    An "advisor", not a "boss".

    Yeah, you DO want to classify that source. Alas, the
    internet is set up so... it could be that 'Paul' is an AI.
