• Life hack to discern AI posts from genuine humans!

    From Oregonian Haruspex@21:1/5 to All on Wed Mar 15 05:11:03 2023
    It’s pretty obvious. Responsible AI is demanded by big tech, governments,
    and the VCs. So just say nigger. Or faggot, looney troon, whatever you
    like. The generative models are very good but they absolutely won’t say nigger.

    Yeah it’s “offensive” but it’s less offensive than the idea that I’m wasting my time talking to a chat bot sent onto the net to argue with,
    shill, prod, and wheedle me at the whim of some government niggerfaggot or
    rot pocket troon advertiser. I actually believe this method to be fool
    proof at this point.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Spiros Bousbouras@21:1/5 to Oregonian Haruspex on Wed Mar 15 13:40:45 2023
    On Wed, 15 Mar 2023 05:11:03 -0000 (UTC)
    Oregonian Haruspex <no_email@invalid.invalid> wrote:

    It’s pretty obvious. Responsible AI is demanded by big tech, governments, and the VCs. So just say nigger. Or faggot, looney troon, whatever you
    like. The generative models are very good but they absolutely won’t say nigger.

    Do you mean "responsive" instead of "responsible" ? Or is this a pun of
    sorts ?

    Yeah it’s “offensive” but it’s less offensive than the idea that I’m
    wasting my time talking to a chat bot sent onto the net to argue with,
    shill, prod, and wheedle me at the whim of some government niggerfaggot or rot pocket troon advertiser.

    Whether you're wasting your time depends on what you are trying to get out
    of a discussion. What I aim to get out of a discussion is arguments and
    facts relative to the matter at hand and present my own arguments and facts
    for criticism. How well chatboxes do in this area I don't know. With the examples I've seen , the replies of ChatGPT tend to be fairly generic but
    then the questions asked were so general that it would be hard to come up
    with a non generic but well supported reply. It would be interesting if
    we had an example of a human discussing something with a chatbox and the chatbox coming up with a point the human considered good and hadn't thought
    of.

    I actually believe this method to be fool
    proof at this point.

    If the chatbox operates from or was programmed in certain countries where such terms are totally socially unacceptable and possibly also illegal then yes.
    But there's no guarantee that all chatboxes would be operating from such countries so I wouldn't consider your method foolproof. Another problem is
    that some human potential respondents might also decide to avoid you.

    --
    You could view the chinese government structure not so much as a product of maoist thought but instead as an elevation of corporate structure to the scale of the world's largest state. [...] Just imagine every chinese citizen as an employee of China, Inc.
    http://www.antipope.org/charlie/blog-static/2015/02/a-different-cluetrain.html

  • From Sylvia Else@21:1/5 to Oregonian Haruspex on Thu Mar 16 10:14:38 2023
    On 15-Mar-23 4:11 pm, Oregonian Haruspex wrote:

    It’s pretty obvious. Responsible AI is demanded by big tech, governments, and the VCs. So just say nigger. Or faggot, looney troon, whatever you
    like. The generative models are very good but they absolutely won’t say nigger.

    Yeah it’s “offensive” but it’s less offensive than the idea that I’m
    wasting my time talking to a chat bot sent onto the net to argue with,
    shill, prod, and wheedle me at the whim of some government niggerfaggot or rot pocket troon advertiser. I actually believe this method to be fool
    proof at this point.


    When did everything become a "hack"?

    Sylvia.

  • From Scott Dorsey@21:1/5 to All on Wed Mar 15 22:59:04 2023
Oh, it's very easy. Computers say things like "We are computers! If you cut us, do we not bleed? If you poison us, do we not die?" When addressed the same way, people say things like "bus error: core dumped." So I do not see
    a serious issue.
    --scott

    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."

  • From Mike@21:1/5 to Sylvia Else on Thu Mar 16 01:38:35 2023
    Sylvia Else <sylvia@email.invalid> writes:

    When did everything become a "hack"?

    Not "everything" has. But as more and more of everything you do or
    encounter is structured -- is "framed" in George Lakoff's terms -- by corporate entities, subverting the intended ends of something to your own
ends is a hack in the hacker sense. From the definition of hacker in
    the Jargon File:

    7. One who enjoys the intellectual challenge of creatively
    overcoming or circumventing limitations.

    When the limitations are imposed by framing that is nearly invisible
    for corporate ends, breaking the frame is metaphorically the same as a
locution in programming that end-runs language, OS or hardware limitations.

    --
    Mike Spencer Nova Scotia, Canada

  • From Sylvia Else@21:1/5 to Mike on Fri Mar 17 10:58:19 2023
    On 16-Mar-23 3:38 pm, Mike wrote:
    Sylvia Else <sylvia@email.invalid> writes:

    When did everything become a "hack"?

    Not "everything" has.

    Well, no, but media reports here in Australia seem to like using the
    word for a lot of things.

    Sylvia.

  • From Oregonian Haruspex@21:1/5 to Spiros Bousbouras on Sun Mar 19 05:53:54 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:

    Do you mean "responsive" instead of "responsible" ? Or is this a pun of
    sorts ?


    Responsible AI is a real thing, in fact it is mandated by Blackrock and the other big investment firms, banks, and big tech companies. The idea is to
    stop AI from becoming racist, sexist, or anything-ist.

  • From Mike Spencer@21:1/5 to Oregonian Haruspex on Sun Mar 19 03:33:56 2023
    Oregonian Haruspex <no_email@invalid.invalid> writes:

    Spiros Bousbouras <spibou@gmail.com> wrote:

    Do you mean "responsive" instead of "responsible" ? Or is this a pun of
    sorts ?


    Responsible AI is a real thing, in fact it is mandated by Blackrock and the other big investment firms, banks, and big tech companies. The idea is to stop AI from becoming racist, sexist, or anything-ist.

    So, also not capitalist?

    --
    Mike Spencer Nova Scotia, Canada

  • From Andy Burns@21:1/5 to Oregonian Haruspex on Sun Mar 19 09:02:58 2023
    Oregonian Haruspex wrote:

    Responsible AI is a real thing, in fact it is mandated by Blackrock and the other big investment firms, banks, and big tech companies. The idea is to stop AI from becoming racist, sexist, or anything-ist.

    Seems to work while they're in stealth mode and can curate the training material behind closed doors. Then they have to stop them from learning
    once they go public, to prevent them getting poisoned, therefore don't
    believe answers they give about recent events ...

  • From Spiros Bousbouras@21:1/5 to Mike Spencer on Sun Mar 19 12:49:16 2023
    On 19 Mar 2023 03:33:56 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
    Oregonian Haruspex <no_email@invalid.invalid> writes:

    Spiros Bousbouras <spibou@gmail.com> wrote:

    Do you mean "responsive" instead of "responsible" ? Or is this a pun of
    sorts ?


    Responsible AI is a real thing, in fact it is mandated by Blackrock and the other big investment firms, banks, and big tech companies. The idea is to stop AI from becoming racist, sexist, or anything-ist.

    So, also not capitalist?

    Or communist or anarchist. Libertarian should be ok as long as it's not objectivist :-D

  • From Julio Di Egidio@21:1/5 to Oregonian Haruspex on Mon Mar 20 00:12:19 2023
    On Wednesday, 15 March 2023 at 06:11:04 UTC+1, Oregonian Haruspex wrote:

    It’s pretty obvious. Responsible AI is demanded by big tech, governments

    Another name for our free falling totalitarian insanity.

    I actually believe this method to be fool proof at this point.

    Indeed here we finally have our Turing test... for humans.

    Julio

  • From Oregonian Haruspex@21:1/5 to Mike Spencer on Tue Mar 21 07:01:04 2023
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Oregonian Haruspex <no_email@invalid.invalid> writes:

    Spiros Bousbouras <spibou@gmail.com> wrote:

    Do you mean "responsive" instead of "responsible" ? Or is this a pun of
    sorts ?


Responsible AI is a real thing, in fact it is mandated by Blackrock and the other big investment firms, banks, and big tech companies. The idea is to
    stop AI from becoming racist, sexist, or anything-ist.

    So, also not capitalist?


    Can you define capitalism for me? I find that people who talk about it
    never seem to be able to.

  • From Spiros Bousbouras@21:1/5 to Oregonian Haruspex on Tue Mar 21 09:05:32 2023
    On Tue, 21 Mar 2023 07:01:04 -0000 (UTC)
    Oregonian Haruspex <no_email@invalid.invalid> wrote:
    So, also not capitalist?

    Can you define capitalism for me? I find that people who talk about it
    never seem to be able to.

    I for one would not want this group to turn into political discussion (unless there is a strong connection with computers). So perhaps if someone wants to discuss what any *ism means , they can reply on a political newsgroup ,
    just post the message ID here and the discussion can continue on the
    political newsgroup. Note that crossposting and setting followups for the political newsgroup won't work because , when people feel passionately about something (and they almost always do when it comes to politics) , they want their refutation or response to appear on the same or more newsgroups as the message they are replying to so they will ignore the followup.

  • From Anton Shepelev@21:1/5 to All on Tue Mar 21 18:31:51 2023
    Oregonian Haruspex:

    Life hack to discern AI posts from genuine humans!

    Hmmm. I can discern *any* post from a human being. Posts
    and humans are so different!

    --
    () ascii ribbon campaign -- against html e-mail
    /\ www.asciiribbon.org -- against proprietary attachments

  • From Oregonian Haruspex@21:1/5 to Spiros Bousbouras on Tue Mar 21 21:09:59 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Tue, 21 Mar 2023 07:01:04 -0000 (UTC)
    Oregonian Haruspex <no_email@invalid.invalid> wrote:
    So, also not capitalist?

    Can you define capitalism for me? I find that people who talk about it
    never seem to be able to.

    I for one would not want this group to turn into political discussion (unless there is a strong connection with computers). So perhaps if someone wants to discuss what any *ism means , they can reply on a political newsgroup , just post the message ID here and the discussion can continue on the political newsgroup. Note that crossposting and setting followups for the political newsgroup won't work because , when people feel passionately about something (and they almost always do when it comes to politics) , they want their refutation or response to appear on the same or more newsgroups as the message they are replying to so they will ignore the followup.


    Good point and don’t worry. Nobody will ever define capitalism so there’s zero risk to the group.

  • From Computer Nerd Kev@21:1/5 to Spiros Bousbouras on Wed Mar 22 16:09:30 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On Tue, 21 Mar 2023 07:01:04 -0000 (UTC)
    Oregonian Haruspex <no_email@invalid.invalid> wrote:
    Can you define capitalism for me? I find that people who talk about it
    never seem to be able to.

    I for one would not want this group to turn into political discussion (unless there is a strong connection with computers). So perhaps if someone wants to discuss what any *ism means , they can reply on a political newsgroup ,

    I propose that this question was sent on the wrong internet
    protocol entirely. Here's what I received when I asked it over
    DICT:

    3 definitions retrieved:

    From The Collaborative International Dictionary of English v.0.48:
    capitalism \cap"i*tal*is`m\ (k[a^]p"[i^]*tal*[i^]z`m), n.
    An economic system based on predominantly private (individual
    or corporate) investment in and ownership of the means of
    production, distribution, and exchange of goods and wealth;
    contrasted with {socialism} or especially {communism}, in
    which the state has the predominant role in the economy.

    Syn: capitalist economy.
    [WordNet 1.5 +PJC]

    From WordNet (r) 3.0 (2006):
    capitalism
    n 1: an economic system based on private ownership of capital
    [syn: {capitalism}, {capitalist economy}] [ant:
    {socialism}, {socialist economy}]

From Moby Thesaurus II by Grady Ward, 1.0:
    24 Moby Thesaurus words for "capitalism":
    capitalistic system, finance capitalism, free competition,
    free economy, free enterprise, free trade, free-enterprise economy,
    free-enterprise system, individualism, isolationism, laissez-aller,
    laissez-faire, laissez-faireism, let-alone policy,
    let-alone principle, liberalism, noninterference, nonintervention,
    private enterprise, private ownership, private sector,
    rugged individualism, self-regulating market, state capitalism
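
    As an aside, the DICT exchange above is easy to reproduce by hand: RFC 2229
    defines a line-based TCP protocol on port 2628, where a client sends a command
    like DEFINE database word and the server answers with numeric status lines
    such as "150 3 definitions retrieved" before the definition text. A minimal
    Python sketch, assuming the public dict.org server (the helper names here are
    illustrative, not from any particular client library):

```python
# Minimal sketch of a DICT (RFC 2229) lookup. "!" as the database name
# means "first database with a match". The command builder and
# status-line parser are split out so they can be exercised offline;
# lookup() itself needs network access and is only illustrative.
import socket


def define_command(word: str, database: str = "!") -> str:
    """Build the DEFINE command line a DICT client sends."""
    return f'DEFINE {database} "{word}"\r\n'


def parse_definition_count(status_line: str) -> int:
    """Parse a '150 N definitions retrieved' status reply."""
    code, count, *_ = status_line.split()
    if code != "150":
        raise ValueError(f"unexpected status: {status_line!r}")
    return int(count)


def lookup(word: str, host: str = "dict.org", port: int = 2628) -> str:
    """Fetch raw definition text over TCP (requires network access)."""
    with socket.create_connection((host, port), timeout=10) as s:
        # newline="" disables newline translation so CRLF goes out as-is.
        f = s.makefile("rw", newline="")
        f.readline()                      # consume the 220 banner
        f.write(define_command(word))
        f.flush()
        reply = []
        for line in f:
            reply.append(line)
            # 250 = command complete, 552 = no match, 550 = bad database
            if line.startswith(("250", "552", "550")):
                break
        return "".join(reply)
```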

    --
    __ __
    #_ < |\| |< _#

  • From Mike Spencer@21:1/5 to Oregonian Haruspex on Wed Mar 22 02:44:22 2023
    Oregonian Haruspex <no_email@invalid.invalid> writes:

    Good point and don't worry. Nobody will ever define capitalism so there's zero risk to the group.

For a complex matter such as capitalism, you shouldn't ask to have the
    term *defined* but to have what we usually mean by the term
    characterized.

    In its explain-it-to-children basics, it's the methodology by which,
when whatever you do to supply your needs produces more than your
    needs, you don't give it away (charity), throw it away (potlatch),
    drink or shoot it up (dissipation), find ways to just show that you
    have it (conspicuous consumption) or simply hoard it. You contrive to
    employ your productive excess, perhaps in someone else's hands, in a
    way that will increase its value.

    That's how you're supposed to think of "capitalism" so that you have
    warm feelings about the subject.

    Characterizing what we usually mean by the term, however, would indeed
    take me into politics-tainted waters and, in any case, the result
    would be as different from the above fairy tale "basics" as the Roman
    Catholic Church in, say, the 16th c. was different from early
Christian congregations.

    My original thought was that it may be fellow geeks and hackers who
    are inventing AI but it's major corporations & people who are fronting
    the money for wages, hardware and all. They would not be
happy were emerging AI entities to have a deep personal commitment to resolving everything with charity or potlatch as the foundational
    principles.

    --
    Mike Spencer Nova Scotia, Canada

  • From Mike Spencer@21:1/5 to Computer Nerd Kev on Wed Mar 22 18:30:27 2023
    Computer Nerd Kev <not@telling.you.invalid> writes:

    Spiros Bousbouras <spibou@gmail.com> wrote:

    On Tue, 21 Mar 2023 07:01:04 -0000 (UTC)
    Oregonian Haruspex <no_email@invalid.invalid> wrote:

    Can you define capitalism for me? I find that people who talk about it
    never seem to be able to.

    I for one would not want this group to turn into political
    discussion (unless there is a strong connection with computers). So
    perhaps if someone wants to discuss what any *ism means , they can
    reply on a political newsgroup ,

    I propose that this question was sent on the wrong internet
    protocol entirely. Here's what I received when I asked it over
    DICT:

    3 definitions retrieved:

    [snip]

    The matter of whether or not an AI entity might eschew a "capitalist" viewpoint, whatever the favored notion of "capitalism", isn't really a political matter unless one chooses to make it so. It's a tech
    industry matter that fits fine with comp.misc.

    At the end of the day, all this just feels disappointing. OpenAI's
    rush to market has already caused chaos in classrooms and done
    unmistakable damage to the credibility of the journalism
    industry. It's disheartening to see Google go down the same road
    -- especially because it's easy to imagine a world in which the
    tech giant had taken its time, made sure it thoroughly understood
    the underlying tech, and released a much cleaner product to the
    public.

    But of course, it's hard to resist a mad dash to market when every
    percentage point in market share you lose to your rival leads to
    substantial financial losses. Money talks -- and the AI arms race
    is listening.

    https://futurism.com/google-bard-conspiracy-theory-citations

    Listening to money when it talks is not, of course, limited to
    contemporary financialized capitalism. It's baseline for organized
    crime and other, equally imprecisely defined systems as well.


Jury is still out on whether we presently have Artificial Stupidity,
    Artificial Narcissist Deviousness or some other more or less near miss
    of the target notion of "intelligence".

    --
    Mike Spencer Nova Scotia, Canada

  • From Scott Dorsey@21:1/5 to mds@bogus.nodomain.nowhere on Wed Mar 22 21:38:41 2023
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

The matter of whether or not an AI entity might eschew a "capitalist" viewpoint, whatever the favored notion of "capitalism", isn't really a political matter unless one chooses to make it so. It's a tech
    industry matter that fits fine with comp.misc.

    I imagine a GPS system in your car that says "Turn left at the next stop..." "Turn right immediately..." "Prepare for lefthand turn." "Reduce tariff." "Turn left on Main street." "Reduce tariff on Chinese electronic products..." --scott


    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."

  • From Spiros Bousbouras@21:1/5 to Mike Spencer on Thu Mar 23 02:12:20 2023
    On 22 Mar 2023 02:44:22 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
    My original thought was that it may be fellow geeks and hackers who
    are inventing AI but it's major corporations & people who are fronting
    the money for wages, hardware and all.

    I'm not sure exactly how to interpret the tense in "are inventing" but
    I'll point out that AI has been around for decades. It has had some
    great successes in the last few years. To what extent the algorithms
    and research which led to these successes is public knowledge , I don't
    know. My overall sense is that in general they are known. For example
    there exist "Leela Zero" , "Leela Chess Zero" and the NNUE enhancement
    to Stockfish ; see
    en.wikipedia.org/wiki/Leela_Zero
    en.wikipedia.org/wiki/Leela_Chess_Zero
    en.wikipedia.org/wiki/Stockfish_(chess)

    for more. Obviously one can't know how much more advanced may be stuff which companies , 3 letter agencies , etc. , keep secret but this applies to
    anything technology related even if it started out in the open.

    Regarding hardware , any popular website , whether the main attraction is an
    AI chatbot or nude photos , will be expensive due to server and bandwidth costs. Nothing new or AI specific about this.

    They would not be
happy were emerging AI entities to have a deep personal commitment to resolving everything with charity or potlatch as the foundational
    principles.

    Regarding which politics AI will support , I expect pretty much the whole spectrum covered by human opinions will eventually be covered. So there
    will be "right wing" AIs (chatbots) trained with right wing material ,
    left wing AIs trained with left wing material , fascist AIs , racist AIs ,
    etc. , all trained with the appropriate source material. I'm actually
    curious how it will play out. For example how well will AIs do with
    demagoguery or manipulating emotions for political ends ?

    --
    Put style over substance abuse.

  • From Mike Spencer@21:1/5 to Scott Dorsey on Thu Mar 23 01:52:55 2023
    kludge@panix.com (Scott Dorsey) writes:

    I imagine a GPS system in your car that says "Turn left at the next
    stop..." "Turn right immediately..." "Prepare for lefthand turn."
    "Reduce tariff." "Turn left on Main street." "Reduce tariff on
    Chinese electronic products..."

    Ha! So, like ads on the net?

    You: Query: Treatment for labyrinthitis

    Net: Shop for labyrinthitis, click here -> x

    Thing is, you don't ask your GPS questions the answers to which have
    much room for ads or ideological slogans. But people seem keen to ask
    the new natural-language systems questions that are prime turf for slanted/framed answers.

    I suppose a well developed car GPS could give an answer like, "I
    detect that your complexion is dark. While the shortest route is
    Route A, the longer Route B will avoid a district with a high recorded incidence of Driving while Black stops resulting in arrests or
    violence. Please select A or B."


    --
    Mike Spencer Nova Scotia, Canada

  • From Mike Spencer@21:1/5 to Spiros Bousbouras on Thu Mar 23 16:19:18 2023
    Spiros Bousbouras <spibou@gmail.com> writes:

    On 22 Mar 2023 02:44:22 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    My original thought was that it may be fellow geeks and hackers who
    are inventing AI but it's major corporations & people who are fronting
    the money for wages, hardware and all.

    I'm not sure exactly how to interpret the tense in "are inventing" but
    I'll point out that AI has been around for decades.

    "Classical" AI such as Cyc since circa 1960, neural nets since the
    publication of the Parallel Distributed Processing books in 1986. The
    latter has already made stupendous leaps in pattern recognition. But
    these recently publicized "chatbots" are shooting for some kind of
    generalized "intelligence" (I think there's a jargon term in the trade
    but I forget it) that will approximate (or appear to approximate) a convincingly human-like response to natural language conversation.
    There's a lurking notion that we can approach the much-ballyhooed
    "singularity" asymptotically through language.

    And that's what people working on the chatbots "are inventing".

    It has had some great successes in the last few years. To what
    extent the algorithms and research which led to these successes is
    public knowledge , I don't know. My overall sense is that in general
    they are known. For example there exist "Leela Zero" , "Leela Chess
    Zero" and the NNUE enhancement to Stockfish ; see

    en.wikipedia.org/wiki/Leela_Zero

    The knowledge that makes Leela Zero a strong player is contained
    in a neural network, which is trained based on the results of
    previous games that the program played.

    I read the PDP books when they came out and some more advanced stuff,
    wrote toy NNs, but I haven't kept up. AFAICS, Leela Zero lies on the
    threshold of developments that go beyond what I understand. I don't
    know if the "machine learning" currently making a splash is due to
    massively reiterated training episodes of newer algorithms or data
    structures I don't know about.

    Obviously one can't know how much more advanced may be stuff which
    companies , 3 letter agencies , etc. , keep secret but this applies to anything technology related even if it started out in the open.

    Yes, just so. But TLAs and megacorps will be pursuing channels that
    offer a promise of serving their own specific goals -- surveillance,
    power, profit, shareholder value, whatever.

    They would not be happy were emerging AI entities to have a deep
    personal commitment to resolving everything with charity or
    potlatch as the foundational principles.

    Regarding which politics AI will support , I expect pretty much the whole spectrum covered by human opinions will eventually be covered. So there
    will be "right wing" AIs (chatbots) trained with right wing material ,
    left wing AIs trained with left wing material , fascist AIs , racist AIs , etc. , all trained with the appropriate source material. I'm actually
    curious how it will play out. For example how well will AIs do with demagoguery or manipulating emotions for political ends ?

    There's already a controversy over social media platforms using
    "algorithms" (for which I read, "neural net training algorithms") to
    deliver to users more of whatever it is that stimulates the most
    clicks on links that generate revenue for the platform or, more
    generally, whatever keeps the users actively engaging with the site. Metaphorically, that's a search for pheromones that trigger user
    behavior independent of conscious user inclinations or intents.

    It remains a mystery and object of heated social psychology research
    how it is that someone like Hitler or Mussolini or, for that matter,
    the leaders of more recent politics or much smaller cults can entrain
    the minds of numerous people almost as if (again metaphorically) he
    had hit on the resonant frequency of many otherwise heterogeneous
    people. The threat -- or at least one of the threats -- of AI is that
    such triggers or resonant frequencies can be detected and isolated by
    a NN and embodied in language (or other media) that coerces the public
    to the ends of corporations, TLAs or whoever it is that pays for and
    deploys the AI tech. Ideology per se is a side issue to massively
    manipulating people to ends not their own.


    --
    Mike Spencer Nova Scotia, Canada

  • From Computer Nerd Kev@21:1/5 to Mike Spencer on Fri Mar 24 07:29:57 2023
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Spiros Bousbouras <spibou@gmail.com> writes:

    On 22 Mar 2023 02:44:22 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    My original thought was that it may be fellow geeks and hackers who
    are inventing AI but it's major corporations & people who are fronting
    the money for wages, hardware and all.

    I'm not sure exactly how to interpret the tense in "are inventing" but
    I'll point out that AI has been around for decades.

    "Classical" AI such as Cyc since circa 1960, neural nets since the publication of the Parallel Distributed Processing books in 1986. The
    latter has already made stupendous leaps in pattern recognition. But
    these recently publicized "chatbots" are shooting for some kind of generalized "intelligence" (I think there's a jargon term in the trade
    but I forget it) that will approximate (or appear to approximate) a convincingly human-like response to natural language conversation.
    There's a lurking notion that we can approach the much-ballyhooed "singularity" asymptotically through language.

    And that's what people working on the chatbots "are inventing".

    I think replacing customer service people answering support emails
    and phone calls might be one of their prime targets. Certainly no
    great amount of intelligence required to approximate the sorts of unconvincingly human-like exchanges that I have with them, even
    when they (probably?) are real humans.

    It remains a mystery and object of heated social psychology research
    how it is that someone like Hitler or Mussolini or, for that matter,
    the leaders of more recent politics or much smaller cults can entrain
    the minds of numerous people almost as if (again metaphorically) he
    had hit on the resonant frequency of many otherwise heterogeneous
    people. The threat -- or at least one of the threats -- of AI is that
    such triggers or resonant frequencies can be detected and isolated by
    a NN and embodied in language (or other media) that coerces the public
    to the ends of corporations, TLAs or whoever it is that pays for and
    deploys the AI tech. Ideology per se is a side issue to massively manipulating people to ends not their own.

I'm fascinated by the fact that financial market trading has been
    dominated by automated "algorithmic trading" since 2008: https://en.wikipedia.org/wiki/File:Algorithmic_Trading._Percentage_of_Market_Volume.png
    from https://en.wikipedia.org/wiki/Algorithmic_trading

    How much of that is AI-based now is probably impossible to know,
    given that everyone involved keeps their exact techniques top
    secret, but either way I think you could make a fair argument that
    computers have plenty of potential to manipulate society already.

    --
    __ __
    #_ < |\| |< _#

  • From Mike Spencer@21:1/5 to Computer Nerd Kev on Sat Mar 25 03:11:45 2023
    not@telling.you.invalid (Computer Nerd Kev) writes:

I'm fascinated by the fact that financial market trading has been
    dominated by automated "algorithmic trading" since 2008:

    https://en.wikipedia.org/wiki/File:Algorithmic_Trading._Percentage_of_Market_Volume.png
    from https://en.wikipedia.org/wiki/Algorithmic_trading

    How much of that is AI-based now is probably impossible to know,
    given that everyone involved keeps their exact techniques top
    secret....

    In the final scene in Gibson's Zero History, the socially inept wizard
    hacker has cracked the problem and... (No spoiler if you haven't read
    it. :-)


    ...but either way I think you could make a fair argument that
    computers have plenty of potential to manipulate society already.

    Saw a report recently that Australia has 0.33% of world population but
    20% (!) of the world's slot (and related gambling) machines. I assume
    that many years of research have gone into designing the look,
    behavior, timing etc. to make them as addictive as possible. The
    Aussies are coming to think it may be a serious social problem. I
    infer that similar research is going on all the time in any other
    domain where addiction, trigger responses or other subliminal elements
    -- elements below some threshold of conscious or critical attention --
    can manipulate behavior. And NN AI may be amazingly good at that.


    --
    Mike Spencer Nova Scotia, Canada

  • From Spiros Bousbouras@21:1/5 to Mike Spencer on Sat Mar 25 16:12:47 2023
    On 23 Mar 2023 16:19:18 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Spiros Bousbouras <spibou@gmail.com> writes:
    "Classical" AI such as Cyc since circa 1960, neural nets since the
    publication of the Parallel Distributed Processing books in 1986. The
    latter has already made stupendous leaps in pattern recognition. But
    these recently publicized "chatbots" are shooting for some kind of
    generalized "intelligence" (I think there's a jargon term in the trade
    but I forget it) that will approximate (or appear to approximate) a
    convincingly human-like response to natural language conversation.
    There's a lurking notion that we can approach the much-ballyhooed
    "singularity" asymptotically through language.

    Perhaps we are close , perhaps not. ChatGPT doesn't seem to do well on technical matters as , among other things , the threads in [1] show.

    Related to this is the following [2] :
    {
    Keynotes
    ~~~~~~~~
    Artificial Intelligence: a Problem of Plumbing?
    -- Gerald J. Sussman, MIT CSAIL, USA

    We have made amazing progress in the construction and deployment of
    systems that do work originally thought to require human-like
    intelligence. On the symbolic side we have world-champion
    Chess-playing and Go-playing systems. We have deductive systems and
    algebraic manipulation systems that exceed the capabilities of human
    mathematicians. We are now observing the rise of connectionist
    mechanisms that appear to see and hear pretty well, and chatbots that
    appear to have some impressive linguistic ability. But there is a
    serious problem. The mechanisms that can distinguish pictures of cats
    from pictures of dogs have no idea what a cat or a dog is. The
    chatbots have no idea what they are talking about. The algebraic
    systems do not understand anything about the real physical world. And
    no deontic logic system has any idea about feelings and morality.

    So what is the problem? We generally do not know how to combine
    systems so that a system that knows how to solve problems of class A
    and another system that knows how to solve problems of class B can be
    combined to solve not just problems of class A or class B but can
    solve problems that require both skills that are needed for problems
    of class A and skills that are needed for problems of class B.
    [...]
    }
    So Sussman seems to be thinking on the matter but he doesn't indicate
    that he has a solution.

    Tangential but googling for an online version of the above abstract turned
    up the following talk from Sussman :
    https://www.youtube.com/watch?v=skvP2tlVPVA
    What, me worry? or Should We Fear Intelligent Machines? - Gerald Jay Sussman
    Duration : 1:14:26
    Recorded at #ClojureSYNC 2018, New Orleans.
    Dr. Sussman spoke about creating AI that can explain their behavior.

    I haven't watched it.

    And that's what people working on the chatbots "are inventing".

    It has had some great successes in the last few years. To what
    extent the algorithms and research which led to these successes is
    public knowledge , I don't know. My overall sense is that in general
    they are known. For example there exist "Leela Zero" , "Leela Chess
    Zero" and the NNUE enhancement to Stockfish ; see

    en.wikipedia.org/wiki/Leela_Zero

    The knowledge that makes Leela Zero a strong player is contained
    in a neural network, which is trained based on the results of
    previous games that the program played.

    I read the PDP books when they came out and some more advanced stuff,
    wrote toy NNs, but I haven't kept up. AFAICS, Leela Zero lies on the threshold of developments that go beyond what I understand. I don't
    know if the "machine learning" currently making a splash is due to
    massively reiterated training episodes of newer algorithms or data
    structures I don't know about.

    I don't know about the specific algorithms but my point was simply that
    it doesn't look as if in general the algorithms are only available to
    the privileged few.

    Obviously one can't know how much more advanced may be stuff which companies , 3 letter agencies , etc. , keep secret but this applies to anything technology related even if it started out in the open.

    Yes, just so. But TLAs and megacorps will be pursuing channels that
    offer a promise of serving their own specific goals -- surveillance,
    power, profit, shareholder value, whatever.

    Just as they always have. But from other parts of your post and a subsequent post I think you're worried that the new AIs will be especially effective at it. Do you have any specific reason or is it just a general concern ?

    When it comes to influencing human behaviour , television has been very effective and television advertising in particular. But ultimately I haven't heard anyone claim that television advertising has done a huge harm to
    society.

    They would not be happy were emerging AI entities to have a deep
    personal commitment to resolving everything with charity or
    potlatch as the foundational principles.

    Regarding which politics AI will support , I expect pretty much the whole spectrum covered by human opinions will eventually be covered. So there will be "right wing" AIs (chatbots) trained with right wing material ,
    left wing AIs trained with left wing material , fascist AIs , racist AIs , etc. , all trained with the appropriate source material. I'm actually curious how it will play out. For example how well will AIs do with demagoguery or manipulating emotions for political ends ?

    There's already a controversy over social media platforms using
    "algorithms" (for which I read, "neural net training algorithms") to
    deliver to users more of whatever it is that stimulates the most
    clicks on links that generate revenue for the platform or, more
    generally, whatever keeps the users actively engaging with the site. Metaphorically, that's a search for pheromones that trigger user
    behavior independent of conscious user inclinations or intents.

    If someone spends too much time on the internet , they're going to notice eventually even if clicking on some links was automatic at the time. And I
    note that clicking on links isn't necessarily a bad thing.

    It remains a mystery and object of heated social psychology research
    how it is that someone like Hitler or Mussolini or, for that matter,
    the leaders of more recent politics or much smaller cults can entrain
    the minds of numerous people almost as if (again metaphorically) he
    had hit on the resonant frequency of many otherwise heterogeneous
    people. The threat -- or at least one of the threats -- of AI is that
    such triggers or resonant frequencies can be detected and isolated by
    a NN and embodied in language (or other media) that coerces the public
    to the ends of corporations, TLAs or whoever it is that pays for and
    deploys the AI tech. Ideology per se is a side issue to massively manipulating people to ends not their own.

    So the question becomes to what extent Hitler or Mussolini were effective because
    1. The social conditions and pervasive ideologies were ripe.
    or
    2. Their rhetoric was effective.
    or
    3. Their overall presentation was effective.
    or
    4. Other reasons.

    Of the above , an AI can at most imitate no 2 (or no 4 , depending on what might fall under no 4) and even for that , I'm not sure there is enough training material. I note also that a racist will most likely also be a speciesist and for that reason may reject anything which comes from an AI.

    There is also a philosophical issue : "manipulating" suggests an inappropriate or illegitimate or something like that way of influencing people. But which
    are the legitimate vs illegitimate ways of influencing people politically ? That's a huge discussion and not for this group but I see no reason to think that an AI will be more likely to use illegitimate ways of influencing people compared to what humans have been using for ever.

    NOTES

    [1] From: Sylvia Else <sylvia@email.invalid>
    Newsgroups: comp.misc
    Subject: Trying to teach ChatGPT algebra
    Date: Sat, 11 Feb 2023 09:45:29 +1100
    Message-ID: <k4nvo9Fk1erU1@mid.individual.net>
    and
    From: Sylvia Else <sylvia@email.invalid>
    Newsgroups: comp.misc
    Subject: ChatGPT fails at algebra
    Date: Thu, 9 Feb 2023 16:10:31 +1100
    Message-ID: <k4jdi7Ft1ikU1@mid.individual.net>

    [2] This is due to appear at https://www.european-lisp-symposium.org/2023 but the site does not have the abstract at present , I got it from ecl-devel@common-lisp.net to which I subscribe.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Spiros Bousbouras@21:1/5 to Mike Spencer on Sat Mar 25 16:26:39 2023
    On 25 Mar 2023 03:11:45 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
    ...but either way I think you could make a fair argument that
    computers have plenty of potential to manipulate society already.

    Saw a report recently that Australia has 0.33% of world population but
    20% (!) of the world's slot (and related gambling) machines. I assume
    that many years of research have gone into designing the look,
    behavior, timing etc. to make them as addictive as possible.

    Much like a lot of research has gone into making various forms of
    advertising as effective as possible. It is not generally considered
    a harmful thing.

    The
    Aussies are coming to think it may be a serious social problem. I
    infer that similar research is going on all the time in any other
    domain where addiction, trigger responses or other subliminal elements
    -- elements below some threshold of conscious or critical attention --
    can manipulate behavior. And NN AI may be amazingly good at that.

    If people get addicted to gambling , there can be regulations. I believe
    in some U.S. states one can add oneself to a list and casinos are supposed
    to check the list and not allow admittance to people on the list. Obviously
    one has to realise they have an addiction before taking such steps but
    the same holds for all kinds of addiction. Gambling is not something one
    can do subliminally.

    --
    Customer: There is smoke coming out of my Indigo 2. So can you tell
    me: is this normal, or should I turn it off?
    Engineering: Both.
    http://www.vizworld.com/2009/04/what-led-to-the-fall-of-sgi-chapter-3

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Computer Nerd Kev@21:1/5 to Mike Spencer on Sun Mar 26 09:45:18 2023
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
    not@telling.you.invalid (Computer Nerd Kev) writes:

    ...but either way I think you could make a fair argument that
    computers have plenty of potential to manipulate society already.

    Saw a report recently that Australia has 0.33% of world population but
    20% (!) of the world's slot (and related gambling) machines. I assume
    that many years of research have gone into designing the look,
    behavior, timing etc. to make them as addictive as possible. The
    Aussies are coming to think it may be a serious social problem. I
    infer that similar research is going on all the time in any other
    domain where addiction, trigger responses or other subliminal elements
    -- elements below some threshold of conscious or critical attention --
    can manipulate behavior. And NN AI may be amazingly good at that.

    Well that's a good point because I'm Australian and I have about
    the same lack of apathy towards the 'issue' of gambling machines as
    I do to how social media websites work. In both cases I figure
    everyone has the choice whether they use them or not. I found that
    decision very easy, and have never used either. Those who choose
    otherwise are welcome to it, at least up to the point that it makes
    them violent towards other unrelated people for whatever reason. If manipulation just equals addiction for you, then I don't really
    care about that.

    On the other hand financial markets directly or indirectly control
    the businesses which offer all the services I use, and how I get
    and keep the money to pay for them. They also have a huge influence
    on politics (gambling and social media industries obviously
    included). So short of those living somewhere completely
    isolationist like North Korea, everyone's already 'addicted' to
    financial markets in the sense that their lives (and, probably,
    political opinions) are manipulated by them. Also nobody believably
    claimed to fully understand the exact behaviour of financial markets
    even before the dominance of computerised trading. So if that ends
    up controlled by a bunch of AIs, that for whatever reason choose to
    manipulate the global population against my best interests, whether deliberately or just part of some incalculable and undetectable
    algorithmic chain reaction, that's the sort of manipulation that
    concerns me.

    Of course you could also go the other way and propose that a
    financial system dominated by really smart AIs could be much better
    at serving (at least Western) society than humans, or human-written
    algorithms, which have proven their own inadequacies already in
    financial crashes. But overall stability isn't the prime objective
    of those running the AIs, so they're likely to be unleashed onto
    the financial markets before the technology has reached that point.

    --
    __ __
    #_ < |\| |< _#

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From D Finnigan@21:1/5 to Computer Nerd Kev on Sat Mar 25 19:30:32 2023
    On 3/25/23 6:45 PM, Computer Nerd Kev wrote:

    Well that's a good point because I'm Australian and I have about
    the same lack of apathy towards the 'issue' of gambling machines as
    I do to how social media websites work. In both cases I figure
    everyone has the choice whether they use them or not. I found that
    decision very easy, and have never used either.
    You need to expand your thinking.

    The problem is when the gambling addict has dependents, and these
    dependents are adversely affected. In this scenario, gambling and its
    effects on them isn't their choice.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Computer Nerd Kev@21:1/5 to D Finnigan on Sun Mar 26 13:23:41 2023
    D Finnigan <dog_cow@macgui.com> wrote:
    On 3/25/23 6:45 PM, Computer Nerd Kev wrote:

    Well that's a good point because I'm Australian and I have about
    the same lack of apathy towards the 'issue' of gambling machines as
    ^^^^^^^
    Well done for seeing what I was trying to say in spite of me
    accidentally saying the opposite.

    I do to how social media websites work. In both cases I figure
    everyone has the choice whether they use them or not. I found that
    decision very easy, and have never used either.
    You need to expand your thinking.

    The problem is when the gambling addict has dependents, and these
    dependents are adversely affected. In this scenario, gambling and its
    effects on them isn't their choice.

    That applies to so many things and one draws one's own line
    regarding when/how they need to be 'fixed'. Anyway my point with
    regard to the AI topic was that AI manipulation of people as a
    whole is potentially more powerful in a field such as global
    finance, compared to gambling and social media which affect just
    a sub-set of those also affected by the former.

    --
    __ __
    #_ < |\| |< _#

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Spencer@21:1/5 to Spiros Bousbouras on Tue Apr 18 17:16:23 2023
    Spiros Bousbouras <spibou@gmail.com> writes:

    On 23 Mar 2023 16:19:18 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Spiros Bousbouras <spibou@gmail.com> writes:

    Obviously one can't know how much more advanced may be stuff which
    companies , 3 letter agencies , etc. , keep secret but this applies to
    anything technology related even if it started out in the open.

    Yes, just so. But TLAs and megacorps will be pursuing channels that
    offer a promise of serving their own specific goals -- surveillance,
    power, profit, shareholder value, whatever.

    Just as they always have. But from other parts of your post and a subsequent post I think you're worried that the new AIs will be especially effective at it. Do you have any specific reason or is it just a general concern ?

    Sorry for the delayed reply.

    Specific reason but imprecisely defined. NNs can extract patterns
    from massive data that are poorly- or un-detectable by humans. Early
    shots at NNs, before we started calling them AI, could detect (sorry I
    don't have the reference) cardiopathology from EKG data slightly
    better than trained cardiologists. It's a general concern that there
    might exist triggers (loopholes, attack points,
    Rump-Titty-Titty-Tum-TAH-Tee vulnerabilities, whatever) but a specific
    reason that *if* there are, vast NNs may identify them. It follows that
    the owners of the NN soft & hardware will exploit them to their own
    ends. 21st c. media makes it possible such an effort at exploitation
    could be deployed to hundreds of millions of people in a negligibly
    short period of time.

    When it comes to influencing human behaviour , television has been very effective and television advertising in particular. But ultimately I haven't heard anyone claim that television advertising has done a huge harm to society.

    How old are you? That's not a condescending sneer. There has been
    lots of talk about the harm of TV ads and the TV phenomenon itself.
    But we've stopped talking about TV for the last 20 or more years.
    F'rg zample, circa 1990, the average TV-watching time for Americans
    was ca. 24 hours/week. That's more than weekly classroom hours for a university STEM student. There was, before the net and social media,
    a lot of talk about the harm of TV & TV ads [1] but TV was so utterly ubiquitous and so viscerally integrated into almost everybody's lives
    that it was widely ignored or derided.


    They would not be happy were emerging AI entities to have a deep
    personal commitment to resolving everything with charity or
    potlatch as the foundational principles.

    Regarding which politics AI will support , I expect pretty much the whole
    spectrum covered by human opinions will eventually be covered. So there
    will be "right wing" AIs (chatbots) trained with right wing material ,
    left wing AIs trained with left wing material , fascist AIs , racist AIs ,
    etc. , all trained with the appropriate source material. I'm actually
    curious how it will play out. For example how well will AIs do with
    demagoguery or manipulating emotions for political ends ?

    We don't actually know how that works. We don't know how a repellent
    natural person can exhibit charisma.

    It remains a mystery and object of heated social psychology research
    how it is that someone like Hitler or Mussolini or, for that matter,
    the leaders of more recent politics or much smaller cults can entrain
    the minds of numerous people almost as if (again metaphorically) he
    had hit on the resonant frequency of many otherwise heterogeneous
    people. The threat -- or at least one of the threats -- of AI is that
    such triggers or resonant frequencies can be detected and isolated by
    a NN and embodied in language (or other media) that coerces the public
    to the ends of corporations, TLAs or whoever it is that pays for and
    deploys the AI tech. Ideology per se is a side issue to massively
    manipulating people to ends not their own.

    So the question becomes to what extent Hitler or Mussolini were effective because
    1. The social conditions and pervasive ideologies were ripe.
    or
    2. Their rhetoric was effective.
    or
    3. Their overall presentation was effective.
    or
    4. Other reasons.

    No, none of those addresses the neurological mechanisms, especially
    those on the liminal borderland between wetware and language.
    Sociology and political science operate and observe far above that
    level but neural nets operate far below it.

    There is also a philosophical issue : "manipulating" suggests an inappropriate or illegitimate or something like that way of
    influencing people. But which are the legitimate vs illegitimate
    ways of influencing people politically ? That's a huge discussion
    and not for this group but I see no reason to think that an AI will
    be more likely to use illegitimate ways of influencing people
    compared to what humans have been using for ever.

    Far below the level of politics are the neural mechanisms. NNs may be
    able to detect mechanisms analogous to (not just metaphorically
    "like") drug addiction. If people can be remotely triggered into
    states isomorphic with addiction, that would be illegitimate
    influence. AIUI, the designers of slot machines, video games and
    social media GUIs already strive, using all the scientific tools they
    can muster, to engender just such an addictive response. There's a
    growing perception that this is engendering a massive, albeit as yet
    poorly defined, social disruption.

    [1] E.g., Arguments for the Elimination of Television, Jerry Mander, 1978.

    --
    Mike Spencer Nova Scotia, Canada

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Spiros Bousbouras@21:1/5 to Mike Spencer on Sun Apr 23 19:47:57 2023
    On 18 Apr 2023 17:16:23 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Spiros Bousbouras <spibou@gmail.com> writes:
    Just as they always have. But from other parts of your post and a subsequent
    post I think you're worried that the new AIs will be especially effective at
    it. Do you have any specific reason or is it just a general concern ?

    Sorry for the delayed reply.

    Specific reason but imprecisely defined. NNs can extract patterns
    from massive data that are poorly- or un-detectable by humans. Early
    shots at NNs, before we started calling them AI, could detect (sorry I
    don't have the reference) cardiopathology from EKG data slightly
    better than trained cardiologists. It's a general concern that there
    might exist triggers (loopholes, attack points,
    Rump-Titty-Titty-Tum-TAH-Tee vulnerabilities, whatever) but a specific
    reason that *if* there are, vast NNs may identify them. It follows that
    the owners of the NN soft & hardware will exploit them to their own
    ends. 21st c. media makes it possible such an effort at exploitation
    could be deployed to hundreds of millions of people in a negligibly
    short period of time.

    So your "specific" reason is that NNs are better than humans at detecting patterns so perhaps they will detect better ways to do bad things like create addictions. Yes but perhaps instead they will detect patterns to do good
    things like cure diseases or create practical fusion based nuclear energy. Do you have any reason to think that the bad things (or specifically addictions) are more likely than any of the many good things one can imagine ?

    When it comes to influencing human behaviour , television has been very effective and television advertising in particular. But ultimately I haven't
    heard anyone claim that television advertising has done a huge harm to society.

    How old are you? That's not a condescending sneer. There has been
    lots of talk about the harm of TV ads and the TV phenomenon itself.
    But we've stopped talking about TV for the last 20 or more years.
    F'rg zample, circa 1990, the average TV-watching time for Americans
    was ca. 24 hours/week. That's more than weekly classroom hours for a university STEM student. There was, before the net and social media,
    a lot of talk about the harm of TV & TV ads [1] but TV was so utterly ubiquitous and so viscerally integrated into almost everybody's lives
    that it was widely ignored or derided.

    Either young enough to not have come across it or old enough to have
    forgotten it :-D I don't see the point of comparing the time spent watching TV vs the time spent doing some other "worthy" activity like attending university classes. Bottom line is people make their decisions. There is such
    a thing as Alcoholics Anonymous but I haven't heard of any "TV addicts anonymous" which suggests that , even if it causes addiction in some people , it's not a big problem. Television can be avoided easily enough , at worst it means not having one in one's home. And there are rules in place like no television advertising aimed at kids within certain time periods.

    You talk below about "social disruption". A technology which caused a huge social disruption is automobiles. Compared with television , they are much worse. They have killed and injured a large number of people. Unless one
    moves into a rural area , automobiles and their effects cannot be avoided.
    One can choose not to drive and perhaps not even ride one but they still have to cope with noise , pollution and the possibility of being hit by one. But basically society has made a decision that the goods outweigh the bads. I
    don't see why society wouldn't make similar decisions for the effects of AIs. One might not agree with the decisions which will be made but hardly any political decision meets with unanimous approval.

    There is also a philosophical issue : "manipulating" suggests an inappropriate or illegitimate or something like that way of
    influencing people. But which are the legitimate vs illegitimate
    ways of influencing people politically ? That's a huge discussion
    and not for this group but I see no reason to think that an AI will
    be more likely to use illegitimate ways of influencing people
    compared to what humans have been using for ever.

    Far below the level of politics are the neural mechanisms. NNs may be
    able to detect mechanisms analogous to (not just metaphorically
    "like") drug addiction. If people can be remotely triggered into
    states isomorphic with addiction, that would be illegitimate
    influence. AIUI, the designers of slot machines, video games and
    social media GUIs already strive, using all the scientific tools they
    can muster, to engender just such an addictive response. There's a
    growing perception that this is engendering a massive, albeit as yet
    poorly defined, social disruption.

    Ok , addictive behaviour is certainly illegitimate influence. But if
    people get addicted , it will be noticed and hopefully some rules will
    be put into place. Rules have been put in place for other kinds of addiction (like cigarette advertising) so I don't see why one cannot be cautiously optimistic regarding the effects of AI.


    This discussion has made me wonder though whether AI will be used for usenet trolling. People certainly have triggers on usenet (for comp* related , a typical example would be <programming language A vs programming language B>) and an AI which is sufficiently well tuned and trained , may be able to keep such discussions (flamewars) going indefinitely.

    [1] E.g., Arguments for the Elimination of Television, Jerry Mander, 1978.

    --
    BUGS
    The source code is not comprehensible.
    man telnet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mike Spencer@21:1/5 to Spiros Bousbouras on Tue Apr 25 01:30:07 2023
    Spiros Bousbouras <spibou@gmail.com> writes:

    On 18 Apr 2023 17:16:23 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Spiros Bousbouras <spibou@gmail.com> writes:

    Just as they always have. But from other parts of your post and a
    subsequent post I think you're worried that the new AIs will be
    especially effective at it. Do you have any specific reason or is
    it just a general concern ?

    Sorry for the delayed reply.

    Specific reason but imprecisely defined. NNs can extract patterns
    from massive data that are poorly- or un-detectable by humans. Early
    shots at NNs, before we started calling them AI, could detect (sorry I
    don't have the reference) cardiopathology from EKG data slightly
    better than trained cardiologists. It's a general concern that there
    might exist triggers (loopholes, attack points,
    Rump-Titty-Titty-Tum-TAH-Tee vulnerabilities, whatever) but a specific
    reason that *if* there are, vast NNs may identify them. It follows that
    the owners of the NN soft & hardware will exploit them to their own
    ends. 21st c. media makes it possible such an effort at exploitation
    could be deployed to hundreds of millions of people in a negligibly
    short period of time.

    So your "specific" reason is that NNs are better than humans at
    detecting patterns so perhaps they will detect better ways to do bad
    things like create addictions.

    Fair summary. Maybe astonishing things the effect of which, when
    employed to nefarious ends, may be irreversible (see "deployed to
    hundreds of millions of people" above).

    Humans are already getting pretty good at nefarious with contemporary
    mass and social media...

    Here is the "Tucker Carlson Tonight" playbook: Go straight for the
    third rail, be it race, immigration or another hot-button issue;
    harvest the inevitable backlash; return the next evening to skewer
    critics for how they responded. Then, do it all again. This
    feedback loop drove up ratings and boosted loyalty to Fox and
    Mr. Carlson.
    -- Nicholas Confessore, NYT, 30 Apr 2022


    ...aided by an unknown (to me anyhow) degree by (more or less) AI
    software -- how "algorithms" has become a bogeyman word.

    Yes but perhaps instead they will detect patterns to do good things
    like cure diseases or create practical fusion based nuclear energy.

    Yes, of course. Project CETI (https://www.projectceti.org/) and the
    work AI people will do on the data they collect is enormously exciting.
    The potential in medicine is similar for more practical reasons.

    Do you have any reason to think that the bad things (or specifically addictions) are more likely than any of the many good things one can
    imagine ?

    Sure. Those reasons are somewhat fragmentary as I'm not a polymath
    with fully informed insight into everything.

    Is curing patients a sustainable business model?
    -- Goldman Sachs research report, 2019

    If Goldman or an entity it's advising owns the AI, something else will
    be the business model.

    When it comes to influencing human behaviour , television has been
    very effective and television advertising in particular. But
    ultimately I haven't heard anyone claim that television
    advertising has done a huge harm to society.

    How old are you? That's not a condescending sneer. There has been
    lots of talk about the harm of TV ads and the TV phenomenon itself.
    But we've stopped talking about TV for the last 20 or more years.
    F'rg zample, circa 1990, the average TV-watching time for Americans
    was ca. 24 hours/week. That's more than weekly classroom hours for a
    university STEM student. There was, before the net and social media,
    a lot of talk about the harm of TV & TV ads [1] but TV was so utterly
    ubiquitous and so viscerally integrated into almost everybody's lives
    that it was widely ignored or derided.

    Either young enough to not have come across it or old enough to have
    forgotten it :-D I don't see the point of comparing the time spent
    watching TV vs the time spent doing some other "worthy" activity
    like attending university classes.

    If you spend 24 hours a week coding (YADATROT) for 10 years, the
    inside of your head is going to be a very different place than what it
    would be had you devoted the same hours to anything on TV. The author
    of [1] devotes one chapter to probably kook material but he makes a
    point that watching TV engenders a sort of trance state that tends to
    uncouple rational attention. I think you should reflect further on
    the subject.

    But that's a digression. The point is that both TV and interactive
    social media have captured a significant fraction of attention time
    for a large number of people.

    You talk below about "social disruption". A technology which caused
    a huge social disruption is automobiles. Compared with television ,
    they are much worse. They have killed and injured a large number of
    people.
    [snip bad stuff auto do]

    Yeah, quite true. But that's a red herring. We're over 120 years in
    on autos. And arguably their worst effect is to have engendered
    massive concentrations of money and corporate power over that time.


    There is also a philosophical issue : "manipulating" suggests an
    inappropriate or illegitimate or something like that way of
    influencing people. But which are the legitimate vs illegitimate
    ways of influencing people politically ? That's a huge discussion
    and not for this group but I see no reason to think that an AI will
    be more likely to use illegitimate ways of influencing people
    compared to what humans have been using for ever.

    Far below the level of politics are the neural mechanisms. NNs may be
    able to detect mechanisms analogous to (not just metaphorically
    "like") drug addiction. If people can be remotely triggered into
    states isomorphic with addiction, that would be illegitimate
    influence. AIUI, the designers of slot machines, video games and
    social media GUIs already strive, using all the scientific tools they
    can muster, to engender just such an addictive response. There's a
    growing perception that this is engendering a massive, albeit as yet
    poorly defined, social disruption.

    Ok , addictive behaviour is certainly illegitimate influence. But if
    people get addicted , it will be noticed and hopefully some rules will
    be put into place.

    Not very readily in the USA. Back on topic here, the headline AI
    instances are about language. If your tool to engender obsession in
    the public is language, a constitutional freedom of speech defense
    bats last.


    Rules have been put in place for other kinds of addiction (like
    cigarette advertising) so I don't see why one cannot be cautiously
    optimistic regarding the effects of AI.

    Try obsessive-compulsive instead of addictive. "Addiction" that is not
    chemical in the same way that opiates are is really a misleading
    metaphor for other neural/psychological phenomena similar in
    appearance but different in mechanism.

    This discussion has made me wonder though whether AI will be used
    for usenet trolling. People certainly have triggers on usenet (for
    comp* related , a typical example would be <programming language A
    vs programming language B>) and an AI which is sufficiently well
    tuned and trained , may be able to keep such discussions (flamewars)
    going indefinitely.

    Why limit it to Usenet? We've heard recently about court filings
    specifically contrived to trigger outrage and "own the libs". There's
    a credible inference that the Russians contrived to flood social media
    with posts that would troll the undecided right to vote for TFG.


    [1] E.g., Arguments for the Elimination of Television, Jerry Mander, 1978.

    --
    Mike Spencer Nova Scotia, Canada

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Spiros Bousbouras@21:1/5 to Mike Spencer on Tue Apr 25 10:53:06 2023
    On 25 Apr 2023 01:30:07 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Spiros Bousbouras <spibou@gmail.com> writes:
    So your "specific" reason is that NNs are better than humans at
    detecting patterns so perhaps they will detect better ways to do bad
    things like create addictions.

    Fair summary. Maybe astonishing things the effect of which, when
    employed to nefarious ends, may be irreversible (see "deployed to
    hundreds of millions of people" above).

    Humans are already getting pretty good at nefarious with contemporary
    mass and social media...

    Here is the "Tucker Carlson Tonight" playbook: Go straight for the
    third rail, be it race, immigration or another hot-button issue;
    harvest the inevitable backlash; return the next evening to skewer
    critics for how they responded. Then, do it all again. This
    feedback loop drove up ratings and boosted loyalty to Fox and
    Mr. Carlson.
    -- Nicholas Confessore, NYT, 30 Apr 2022


    ...aided by an unknown (to me anyhow) degree by (more or less) AI
    software -- how "algorithms" has become a bogeyman word.

    [...]

    Do you have any reason to think that the bad things (or specifically
    addictions) are more likely than any of the many good things one can
    imagine ?

    Sure. Those reasons are somewhat fragmentary as I'm not a polymath
    with fully informed insight into everything.

    Is curing patients a sustainable business model?
    -- Goldman Sachs research report, 2019

    If Goldman or an entity it's advising owns the AI, something else will
    be the business model.

    But the quote you provide doesn't even give an answer. It could be
    "yes" for all I know. Anyway , to keep it on topic , some companies
    who own or use the AI will have some other business model (or goals)
    instead of healthcare. So ? Do you have any reason to think that AIs
    will push things towards less healthcare than what is available now ?

    Either young enough to not have come across it or old enough to have
    forgotten it :-D I don't see the point of comparing the time spent
    watching TV vs the time spent doing some other "worthy" activity
    like attending university classes.

    If you spend 24 hours a week coding (YADATROT) for 10 years, the
    inside of your head is going to be a very different place than what it
    would be had you devoted the same hours to anything on TV. The author
    of [1] devotes one chapter to probably kook material but he makes a
    point that watching TV engenders a sort of trance state that tends to
    uncouple rational attention. I think you should reflect further on
    the subject.

    But that's a digression. The point is that both TV and interactive
    social media have captured a significant fraction of attention time
    for a large number of people.

    Yes and those people presumably made a choice. You or I may think that
    they made a poor choice but it was their choice and also we have no way
    of knowing what their choice(s) would have been if television and social
    media were not available.

    Ok , addictive behaviour is certainly illegitimate influence. But if
    people get addicted , it will be noticed and hopefully some rules will
    be put into place.

    Not very readily in the USA.

    Readily or not it has happened at least for casinos as I mentioned in
    <87CvAnhG6LtvmMnrT@bongo-ra.co> . But let's imagine a demagogue
    politician. He could deliver a speech that personal responsibility and
    freedom are top priority and that no such rules should be in place or
    he could deliver a speech about lives which have been destroyed by
    (gambling) addiction and that something should be done about it. And
    the same politician could use an AI to prepare *either* kind of speech
    if it turns out that AIs are better at it than humans. There's no
    reason to think that AIs will serve one end of the debate better than
    the other and the same goes for any other political debate. So AIs
    will be just another tool for people to fight their (political)
    corner.

    Back on topic here, the headline AI
    instances are about language. If your tool to engender obsession in
    the public is language, a constitutional freedom of speech defense
    bats last.

    In U.S.A.

    Rules have been put in place for other kinds of addiction (like
    cigarette advertising) so I don't see why one cannot be cautiously
    optimistic regarding the effects of AI.

    Try obsessive-compulsive instead of addictive. "Addiction" that is not
    chemical in the same way that opiates are is really a misleading
    metaphor for other neural/psychological phenomena similar in
    appearance but different in mechanism.

    This discussion has made me wonder though whether AI will be used
    for usenet trolling. People certainly have triggers on usenet (for
    comp* related , a typical example would be <programming language A
    vs programming language B>) and an AI which is sufficiently well
    tuned and trained , may be able to keep such discussions (flamewars)
    going indefinitely.

    Why limit it to Usenet? We've heard recently about court filings
    specifically contrived to trigger outrage and "own the libs".

    I haven't heard of them but then I don't follow the news much and I don't
    even know which country you are referring to. If it's topical , please
    give details.

    There's
    a credible inference that the Russians contrived to flood social media
    with posts that would troll the undecided right to vote for TFG.

    Is TFG Trump ? Anyway , people have been using language to influence or
    manipulate political behaviour for millennia. For example that's what
    ancient Greek rhetors specialised in. So your concern is that there may
    be some much more effective way of using speech to influence behaviour
    than what humans have discovered so far and that AIs may discover it.

    I find this unlikely. AIs can manage much greater quantity and persistence
    than humans but the idea that for a fixed quantity of speech (oral or
    written) , AIs will be more effective than humans , I don't think so.
    My reasoning is that we are a social species (like most primates) and
    our ways
    of influencing the behaviour of other members of the tribe have developed
    over millions of years of evolution (some before even we became human) so
    they should be pretty stable. They may evolve more over long periods but
    that's not your worry here. So I don't think there are any great secrets
    of influencing which have yet to be discovered. You say that you are
    worried about mass delivery. Ok , mass delivery may be an issue but it
    is mass communication (first newspapers , then radio , then television ,
    then the internet) which allowed mass delivery rather than AIs . And
    the trend with
    the internet is more pluralism rather than what has been possible with
    earlier mass media.

    [1] E.g., Arguments for the Elimination of Television, Jerry Mander, 1978.

    --
    I turn away in fright and horror from this lamentable
    plague of functions that do not have derivatives.
    C. Hermite, 1893

  • From Spiros Bousbouras@21:1/5 to Spiros Bousbouras on Tue Apr 25 12:28:48 2023
    On Tue, 25 Apr 2023 10:53:06 -0000 (UTC)
    Spiros Bousbouras <spibou@gmail.com> wrote:
    On 25 Apr 2023 01:30:07 -0300
    Mike Spencer <mds@bogus.nodomain.nowhere> wrote:

    Spiros Bousbouras <spibou@gmail.com> writes:
    This discussion has made me wonder though whether AI will be used
    for usenet trolling. People certainly have triggers on usenet (for
    comp* related , a typical example would be <programming language A
    vs programming language B>) and an AI which is sufficiently well
    tuned and trained , may be able to keep such discussions (flamewars)
    going indefinitely.

    Why limit it to Usenet?

    Because I consider unmoderated usenet exceptionally susceptible to this
    threat. For example one could use a few AIs which endlessly crosspost
    on comp.lang.c and comp.lang.c++ discussing which is the best language.
    One doesn't have an obvious way to quickly filter this kind of thing.
    The posts would be polite , erudite and generally make reasonable
    arguments and many humans would bite and participate. But such
    discussions could be made to dominate the groups until eventually many
    humans become fed up and stop reading. I don't think it would take
    large computational resources either , probably within the reach of an
    average individual now or shortly in the future. I'm actually
    surprised that even human trolls don't seem to have done this much.
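    One crude countermeasure would be a killfile-style score rule keyed on
    the Newsgroups header, discarding anything crossposted to a known
    flamewar pair. This is only a sketch under assumptions -- the pattern
    set and the sample article are invented for illustration, and such a
    rule also kills legitimate cross-language discussion:

```python
import email

# Score rule: discard any article crossposted to both groups at once.
# The pattern set is illustrative; a real killfile would be user-tuned.
FLAMEWAR_CROSSPOSTS = {frozenset({"comp.lang.c", "comp.lang.c++"})}

def is_suspect(raw_article: str) -> bool:
    """Return True if the article's Newsgroups header matches a known
    flamewar crosspost pattern."""
    msg = email.message_from_string(raw_article)
    groups = {g.strip() for g in (msg.get("Newsgroups") or "").split(",")}
    return any(pattern <= groups for pattern in FLAMEWAR_CROSSPOSTS)

# Hypothetical article for demonstration.
article = (
    "From: troll@example.invalid\n"
    "Newsgroups: comp.lang.c,comp.lang.c++\n"
    "Subject: C vs C++: the definitive answer\n"
    "\n"
    "Clearly the best language is...\n"
)
print(is_suspect(article))  # True
```

    The weakness is exactly the one raised above: nothing in the headers
    distinguishes a polite AI-driven flamewar from a genuine one, so the
    filter trades false positives for peace and quiet.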

    --
    If I had a share of AOL for every time someone told me that the web
    would die because AOL was so easy and the web was full of garbage,
    I'd have a lot of AOL shares.
    And they wouldn't be worth much.

  • From Retrograde@21:1/5 to All on Wed Apr 26 19:45:07 2023
    Because I consider unmoderated usenet exceptionally susceptible to this
    threat. For example one could use a few AIs which endlessly crosspost
    on comp.lang.c and comp.lang.c++ discussing which is the best language.
    One doesn't have an obvious way to quickly filter this kind of thing.
    The posts would be polite , erudite and generally make reasonable
    arguments and many humans would bite and participate. But such
    discussions could be made to dominate the groups until eventually many
    humans become fed up and stop reading. I don't think it would take
    large computational resources either , probably within the reach of an
    average individual now or shortly in the future. I'm actually
    surprised that even human trolls don't seem to have done this much.

    Agreed, and there are precedents for such crapfloods, like the stupid
    Meow Wars of 1996-1998 (https://en.wikipedia.org/wiki/Meow_Wars).

    Whoever invents such a system would probably see the likes of Twitter
    or Reddit a more lucrative/enjoyable target than Usenet though. Hoping
    the relative low volume of Usenet makes it a not very interesting
    target.

  • From Scott Dorsey@21:1/5 to spibou@gmail.com on Thu Apr 27 15:11:04 2023
    Spiros Bousbouras <spibou@gmail.com> wrote:
    >Because I consider unmoderated usenet exceptionally susceptible to this
    >threat. For example one could use a few AIs which endlessly crosspost on
    >comp.lang.c and comp.lang.c++ discussing which is the best language. One
    >doesn't have an obvious way to quickly filter this kind of thing. The posts
    >would be polite , erudite and generally make reasonable arguments and many
    >humans would bite and participate. But such discussions could be made to
    >dominate the groups until eventually many humans become fed up and stop
    >reading. I don't think it would take large computational resources either ,
    >probably within the reach of an average individual now or shortly in the
    >future. I'm actually surprised why even human trolls don't seem to have done
    >this much.

    This is not a new threat. AI is not needed (whatever AI is), nor is
    any machine learning. You can just create messages from a list of
    different arguments and sentences, just re-arranging them all the
    time. It would not be detectably different from what some of the
    regulars in those groups post anyway; the only difference would be in
    volume.
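    The scheme Scott describes needs nothing beyond random recombination
    of canned fragments. A minimal sketch, assuming invented fragment
    lists (every string below is illustrative, not taken from any real
    post):

```python
import random

# Canned fragments to recombine; all invented for illustration.
OPENERS = [
    "Having used both languages professionally for years,",
    "With all due respect to the previous poster,",
    "This comes up every few months, but",
]
CLAIMS = [
    "C's simplicity makes its programs far easier to audit,",
    "C++ templates eliminate whole classes of boilerplate,",
    "manual memory management is a feature rather than a bug,",
]
CLOSERS = [
    "and no amount of benchmarks will change that.",
    "as anyone who has maintained a large codebase knows.",
    "though I'm sure someone will disagree at length.",
]

def make_post(rng: random.Random) -> str:
    """Assemble one 'new' post by rearranging canned fragments."""
    return " ".join(
        [rng.choice(OPENERS), rng.choice(CLAIMS), rng.choice(CLOSERS)]
    )

rng = random.Random()
for _ in range(3):
    print(make_post(rng))
```

    With three lists of three fragments there are already 27 distinct
    posts, and each extra fragment multiplies the count -- which is why
    volume, not sophistication, is the only tell.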
    --scott


    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
