It’s pretty obvious. Responsible AI is demanded by big tech, governments, and the VCs. So just say nigger. Or faggot, looney troon, whatever you
like. The generative models are very good but they absolutely won’t say nigger.
Yeah it’s “offensive” but it’s less offensive than the idea that I’m
wasting my time talking to a chat bot sent onto the net to argue with,
shill, prod, and wheedle me at the whim of some government niggerfaggot or rot pocket troon advertiser.
I actually believe this method to be foolproof at this point.
When did everything become a "hack"?
Sylvia Else <sylvia@email.invalid> writes:
When did everything become a "hack"?
Not "everything" has.
Do you mean "responsive" instead of "responsible" ? Or is this a pun of
sorts ?
Spiros Bousbouras <spibou@gmail.com> wrote:
Do you mean "responsive" instead of "responsible" ? Or is this a pun of
sorts ?
Responsible AI is a real thing, in fact it is mandated by Blackrock and the other big investment firms, banks, and big tech companies. The idea is to stop AI from becoming racist, sexist, or anything-ist.
Oregonian Haruspex <no_email@invalid.invalid> writes:
Spiros Bousbouras <spibou@gmail.com> wrote:
Do you mean "responsive" instead of "responsible" ? Or is this a pun of
sorts ?
Responsible AI is a real thing, in fact it is mandated by Blackrock and the other big investment firms, banks, and big tech companies. The idea is to stop AI from becoming racist, sexist, or anything-ist.
So, also not capitalist?
Oregonian Haruspex <no_email@invalid.invalid> writes:
Spiros Bousbouras <spibou@gmail.com> wrote:
Do you mean "responsive" instead of "responsible" ? Or is this a pun of
sorts ?
Responsible AI is a real thing, in fact it is mandated by Blackrock and the
other big investment firms, banks, and big tech companies. The idea is to
stop AI from becoming racist, sexist, or anything-ist.
So, also not capitalist?
Can you define capitalism for me? I find that people who talk about it
never seem to be able to.
On Tue, 21 Mar 2023 07:01:04 -0000 (UTC)
Oregonian Haruspex <no_email@invalid.invalid> wrote:
So, also not capitalist?
Can you define capitalism for me? I find that people who talk about it
never seem to be able to.
I for one would not want this group to turn into political discussion (unless there is a strong connection with computers). So perhaps if someone wants to discuss what any *ism means , they can reply on a political newsgroup , just post the message ID here and the discussion can continue on the political newsgroup.

Note that crossposting and setting followups for the political newsgroup won't work because , when people feel passionately about something (and they almost always do when it comes to politics) , they want their refutation or response to appear on the same or more newsgroups as the message they are replying to so they will ignore the followup.
On Tue, 21 Mar 2023 07:01:04 -0000 (UTC)
Oregonian Haruspex <no_email@invalid.invalid> wrote:
Can you define capitalism for me? I find that people who talk about it
never seem to be able to.
I for one would not want this group to turn into political discussion (unless there is a strong connection with computers). So perhaps if someone wants to discuss what any *ism means , they can reply on a political newsgroup ,
Good point and don't worry. Nobody will ever define capitalism so there's zero risk to the group.
Spiros Bousbouras <spibou@gmail.com> wrote:
On Tue, 21 Mar 2023 07:01:04 -0000 (UTC)
Oregonian Haruspex <no_email@invalid.invalid> wrote:
Can you define capitalism for me? I find that people who talk about it
never seem to be able to.
I for one would not want this group to turn into political
discussion (unless there is a strong connection with computers). So
perhaps if someone wants to discuss what any *ism means , they can
reply on a political newsgroup ,
I propose that this question was sent on the wrong internet
protocol entirely. Here's what I received when I asked it over
DICT:
3 definitions retrieved:
[snip]
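(For the curious, a lookup like the one above is easy to reproduce: the DICT protocol (RFC 2229) is a plain text exchange on TCP port 2628. Below is a minimal Python sketch, assuming the public server at dict.org and using "capitalism" as the example word; it illustrates the protocol, not the exact client used above.)

import socket

def dict_define(word, host="dict.org", port=2628):
    """Fetch definitions of `word` over the DICT protocol (RFC 2229)."""
    defs = []
    with socket.create_connection((host, port), timeout=10) as s:
        r = s.makefile("r", encoding="utf-8", errors="replace")
        r.readline()                                    # "220 ..." greeting banner
        s.sendall(("DEFINE * %s\r\n" % word).encode())  # "*" = search all databases
        status = r.readline()                           # e.g. "150 3 definitions retrieved"
        if status.startswith("150"):
            while True:
                line = r.readline()
                if line.startswith("250"):              # "250 ok" ends the reply
                    break
                if line.startswith("151"):              # one definition follows
                    body = []
                    while True:
                        text = r.readline().rstrip("\r\n")
                        if text == ".":                 # a lone dot terminates the body
                            break
                        body.append(text)
                    defs.append("\n".join(body))
        s.sendall(b"QUIT\r\n")
    return defs

for d in dict_define("capitalism"):
    print(d)
    print("---")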
The matter of whether or not an AI entity might eschew a "capitalist"
viewpoint, whatever the favored notion of "capitalism", isn't really a
political matter unless one chooses to make it so. It's a tech
industry matter that fits fine with comp.misc.
My original thought was that it may be fellow geeks and hackers who
are inventing AI but it's major corporations & people who are fronting
the money for wages, hardware and all.
They would not be
happy were emerging AI entities to have a deep personal commitment to
resolving everything with charity or potlatch as the foundational
principles.
I imagine a GPS system in your car that says "Turn left at the next
stop..." "Turn right immediately..." "Prepare for lefthand turn."
"Reduce tariff." "Turn left on Main street." "Reduce tariff on
Chinese electronic products..."
On 22 Mar 2023 02:44:22 -0300
Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
My original thought was that it may be fellow geeks and hackers who
are inventing AI but it's major corporations & people who are fronting
the money for wages, hardware and all.
I'm not sure exactly how to interpret the tense in "are inventing" but
I'll point out that AI has been around for decades.
It has had some great successes in the last few years. To what
extent the algorithms and research which led to these successes is
public knowledge , I don't know. My overall sense is that in general
they are known. For example there exist "Leela Zero" , "Leela Chess
Zero" and the NNUE enhancement to Stockfish ; see
en.wikipedia.org/wiki/Leela_Zero
Obviously one can't know how much more advanced may be stuff which
companies , 3 letter agencies , etc. , keep secret but this applies to anything technology related even if it started out in the open.
They would not be happy were emerging AI entities to have a deep
personal commitment to resolving everything with charity or
potlatch as the foundational principles.
Regarding which politics AI will support , I expect pretty much the whole
spectrum of human opinion will eventually be covered. So there
will be "right wing" AIs (chatbots) trained with right wing material ,
left wing AIs trained with left wing material , fascist AIs , racist AIs ,
etc. , all trained with the appropriate source material. I'm actually
curious how it will play out. For example how well will AIs do with
demagoguery or manipulating emotions for political ends ?
Spiros Bousbouras <spibou@gmail.com> writes:
On 22 Mar 2023 02:44:22 -0300
Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
My original thought was that it may be fellow geeks and hackers who
are inventing AI but it's major corporations & people who are fronting
the money for wages, hardware and all.
I'm not sure exactly how to interpret the tense in "are inventing" but
I'll point out that AI has been around for decades.
"Classical" AI such as Cyc since circa 1960, neural nets since the publication of the Parallel Distributed Processing books in 1986. The
latter has already made stupendous leaps in pattern recognition. But
these recently publicized "chatbots" are shooting for some kind of generalized "intelligence" (I think there's a jargon term in the trade
but I forget it) that will approximate (or appear to approximate) a convincingly human-like response to natural language conversation.
There's a lurking notion that we can approach the much-ballyhooed "singularity" asymptotically through language.
And that's what people working on the chatbots "are inventing".
It remains a mystery and object of heated social psychology research
how it is that someone like Hitler or Mussolini or, for that matter,
the leaders of more recent politics or much smaller cults can entrain
the minds of numerous people almost as if (again metaphorically) he
had hit on the resonant frequency of many otherwise heterogeneous
people. The threat -- or at least one of the threats -- of AI is that
such triggers or resonant frequencies can be detected and isolated by
a NN and embodied in language (or other media) that coerces the public
to the ends of corporations, TLAs or whoever it is that pays for and
deploys the AI tech. Ideology per se is a side issue to massively manipulating people to ends not their own.
I'm fascinated by the fact that financial market trading has been
dominated by automated "algorithmic trading" since 2008:
https://en.wikipedia.org/wiki/File:Algorithmic_Trading._Percentage_of_Market_Volume.png
from https://en.wikipedia.org/wiki/Algorithmic_trading
How much of that is AI-based now is probably impossible to know,
given that everyone involved keeps their exact techniques top
secret....
...but either way I think you could make a fair argument that
computers have plenty of potential to manipulate society already.
Spiros Bousbouras <spibou@gmail.com> writes:
"Classical" AI such as Cyc since circa 1960, neural nets since the publication of the Parallel Distributed Processing books in 1986. The
latter has already made stupendous leaps in pattern recognition. But
these recently publicized "chatbots" are shooting for some kind of generalized "intelligence" (I think there's a jargon term in the trade
but I forget it) that will approximate (or appear to approximate) a convincingly human-like response to natural language conversation.
There's a lurking notion that we can approach the much-ballyhooed "singularity" asymptotically through language.
And that's what people working on the chatbots "are inventing".
It has had some great successes in the last few years. To what
extent the algorithms and research which led to these successes is
public knowledge , I don't know. My overall sense is that in general
they are known. For example there exist "Leela Zero" , "Leela Chess
Zero" and the NNUE enhancement to Stockfish ; see
en.wikipedia.org/wiki/Leela_Zero
The knowledge that makes Leela Zero a strong player is contained
in a neural network, which is trained based on the results of
previous games that the program played.
I read the PDP books when they came out and some more advanced stuff,
wrote toy NNs, but I haven't kept up. AFAICS, Leela Zero lies on the threshold of developments that go beyond what I understand. I don't
know if the "machine learning" currently making a splash is due to
massively reiterated training episodes of newer algorithms or data
structures I don't know about.
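(A toy sketch of that self-play idea, for anyone who hasn't seen it: a plain value table stands in for Leela Zero's deep network, tic-tac-toe stands in for Go, and the exploration and learning constants are arbitrary. The only point is that the program's sole teacher is the outcomes of its own games.)

import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return "draw" if "." not in b else None

V = {}  # board state -> learned value for the player who just moved into it

def choose(board, player, eps=0.1):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < eps:          # sometimes explore a random move
        return random.choice(moves)
    # otherwise pick the move leading to the best-valued state for us
    return max(moves, key=lambda m: V.get(board[:m] + player + board[m+1:], 0.0))

def self_play_game():
    board, player, history = "." * 9, "X", []
    while True:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        history.append((board, player))
        w = winner(board)
        if w:
            return history, w
        player = "O" if player == "X" else "X"

ALPHA = 0.2                            # learning rate
for _ in range(20000):                 # training is nothing but playing itself
    history, w = self_play_game()
    for state, player in history:
        # nudge each visited state toward the final result, seen from the
        # side of the player who produced that state
        target = 0.0 if w == "draw" else (1.0 if w == player else -1.0)
        V[state] = V.get(state, 0.0) + ALPHA * (target - V.get(state, 0.0))

print("positions with learned values:", len(V))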
Obviously one can't know how much more advanced may be stuff which companies , 3 letter agencies , etc. , keep secret but this applies to anything technology related even if it started out in the open.
Yes, just so. But TLAs and megacorps will be pursuing channels that
offer a promise of serving their own specific goals -- surveillance,
power, profit, shareholder value, whatever.
They would not be happy were emerging AI entities to have a deep
personal commitment to resolving everything with charity or
potlatch as the foundational principles.
Regarding which politics AI will support , I expect pretty much the whole spectrum of human opinion will eventually be covered. So there will be "right wing" AIs (chatbots) trained with right wing material ,
left wing AIs trained with left wing material , fascist AIs , racist AIs , etc. , all trained with the appropriate source material. I'm actually curious how it will play out. For example how well will AIs do with demagoguery or manipulating emotions for political ends ?
There's already a controversy over social media platforms using
"algorithms" (for which I read, "neural net training algorithms") to
deliver to users more of whatever it is that stimulates the most
clicks on links that generate revenue for the platform or, more
generally, whatever keeps the users actively engaging with the site. Metaphorically, that's a search for pheromones that trigger user
behavior independent of conscious user inclinations or intents.
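(That feedback loop is easy to simulate. A toy sketch follows; the item names and click rates are invented and no real platform's ranking code is implied, this is only the shape of the mechanism.)

import random

TRUE_CLICK_RATE = {"news": 0.02, "cats": 0.05, "outrage": 0.12}  # made up

shown = dict.fromkeys(TRUE_CLICK_RATE, 0)
clicked = dict.fromkeys(TRUE_CLICK_RATE, 0)

def observed_ctr(item):                # click-through rate seen so far
    return clicked[item] / shown[item] if shown[item] else 0.0

def pick(eps=0.1):
    if random.random() < eps:          # occasionally try something else
        return random.choice(list(TRUE_CLICK_RATE))
    return max(TRUE_CLICK_RATE, key=observed_ctr)  # usually show the "winner"

for _ in range(100000):                # each iteration = one item shown
    item = pick()
    shown[item] += 1
    if random.random() < TRUE_CLICK_RATE[item]:    # simulated user reaction
        clicked[item] += 1

for item in TRUE_CLICK_RATE:
    print(item, shown[item], round(observed_ctr(item), 3))
# "outrage" ends up dominating the feed; nobody decided that editorially,
# the click loop found it.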
...but either way I think you could make a fair argument that
computers have plenty of potential to manipulate society already.
Saw a report recently that Australia has 0.33% of world population but
20% (!) of the world's slot (and related gambling) machines. I assume
that many years of research have gone into designing the look,
behavior, timing etc. to make them as addictive as possible.
The Aussies are coming to think it may be a serious social problem. I
infer that similar research is going on all the time in any other
domain where addiction, trigger responses or other subliminal elements
-- elements below some threshold of conscious or critical attention --
can manipulate behavior. And NN AI may be amazingly good at that.
not@telling.you.invalid (Computer Nerd Kev) writes:
...but either way I think you could make a fair argument that
computers have plenty of potential to manipulate society already.
Saw a report recently that Australia has 0.33% of world population but
20% (!) of the world's slot (and related gambling) machines. I assume
that many years of research have gone into designing the look,
behavior, timing etc. to make them as addictive as possible. The
Aussies are coming to think it may be a serious social problem. I
infer that similar research is going on all the time in any other
domain where addiction, trigger responses or other subliminal elements
-- elements below some threshold of conscious or critical attention --
can manipulate behavior. And NN AI may be amazingly good at that.
Well that's a good point because I'm Australian and I have about
the same apathy towards the 'issue' of gambling machines as
I do to how social media websites work. In both cases I figure
everyone has the choice whether they use them or not. I found that
decision very easy, and have never used either.
On 3/25/23 6:45 PM, Computer Nerd Kev wrote:
Well that's a good point because I'm Australian and I have about
the same apathy towards the 'issue' of gambling machines as
I do to how social media websites work. In both cases I figure
everyone has the choice whether they use them or not. I found that
decision very easy, and have never used either.
You need to expand your thinking.
The problem is when the gambling addict has dependents, and these
dependents are adversely affected. In this scenario, gambling and its
effects on them isn't their choice.
On 23 Mar 2023 16:19:18 -0300
Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
Spiros Bousbouras <spibou@gmail.com> writes:
Obviously one can't know how much more advanced may be stuff which
companies , 3 letter agencies , etc. , keep secret but this applies to
anything technology related even if it started out in the open.
Yes, just so. But TLAs and megacorps will be pursuing channels that
offer a promise of serving their own specific goals -- surveillance,
power, profit, shareholder value, whatever.
Just as they always have. But from other parts of your post and a subsequent post I think you're worried that the new AIs will be especially effective at it. Do you have any specific reason or is it just a general concern ?
When it comes to influencing human behaviour , television has been very effective and television advertising in particular. But ultimately I haven't heard anyone claim that television advertising has done a huge harm to society.
They would not be happy were emerging AI entities to have a deep
personal commitment to resolving everything with charity or
potlatch as the foundational principles.
Regarding which politics AI will support , I expect pretty much the whole
spectrum of human opinion will eventually be covered. So there
will be "right wing" AIs (chatbots) trained with right wing material ,
left wing AIs trained with left wing material , fascist AIs , racist AIs ,
etc. , all trained with the appropriate source material. I'm actually
curious how it will play out. For example how well will AIs do with
demagoguery or manipulating emotions for political ends ?
It remains a mystery and object of heated social psychology research
how it is that someone like Hitler or Mussolini or, for that matter,
the leaders of more recent politics or much smaller cults can entrain
the minds of numerous people almost as if (again metaphorically) he
had hit on the resonant frequency of many otherwise heterogeneous
people. The threat -- or at least one of the threats -- of AI is that
such triggers or resonant frequencies can be detected and isolated by
a NN and embodied in language (or other media) that coerces the public
to the ends of corporations, TLAs or whoever it is that pays for and
deploys the AI tech. Ideology per se is a side issue to massively
manipulating people to ends not their own.
So the question becomes to what extent Hitler or Mussolini were effective because
1. The social conditions and pervasive ideologies were ripe.
or
2. Their rhetoric was effective.
or
3. Their overall presentation was effective.
or
4. Other reasons.
There is also a philosophical issue : "manipulating" suggests an inappropriate or illegitimate or something like that way of
influencing people. But which are the legitimate vs illegitimate
ways of influencing people politically ? That's a huge discussion
and not for this group but I see no reason to think that an AI will
be more likely to use illegitimate ways of influencing people
compared to what humans have been using for ever.
Spiros Bousbouras <spibou@gmail.com> writes:
Just as they always have. But from other parts of your post and a subsequent
post I think you're worried that the new AIs will be especially effective at
it. Do you have any specific reason or is it just a general concern ?
Sorry for the delayed reply.
Specific reason but imprecisely defined. NNs can extract patterns
from massive data that are poorly- or un-detectable by humans. Early
shots at NNs, before we started calling them AI, could detect (sorry I
don't have the reference) cardiopathology from EKG data slightly
better than trained cardiologists. It's a general concern that there
might exist triggers (loopholes, attack points,
Rump-Titty-Titty-Tum-TAH-Tee vulnerabilities, whatever) but a specific
reason that *if* there are, vast NNs may identify them. It follows that
the owners of the NN soft & hardware will exploit them to their own
ends. 21st c. media makes it possible such an effort at exploitation
could be deployed to hundreds of millions of people in a negligibly
short period of time.
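(A miniature of the kind of pattern-digging meant here: synthetic "recordings" rather than real EKGs, one feature carrying a shift far too faint to see in any single record, and a one-neuron model standing in for a real NN. All numbers are invented.)

import math, random

def sample(sick):
    # twenty noisy measurements; feature 7 carries a faint shift when "sick"
    x = [random.gauss(0.0, 1.0) for _ in range(20)]
    if sick:
        x[7] += 0.4                    # invisible to the eye in any one record
    return x

w, b, LR = [0.0] * 20, 0.0, 0.05
for _ in range(50000):                 # online logistic regression
    y = random.random() < 0.5
    x = sample(y)
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))
    g = p - (1.0 if y else 0.0)        # gradient of the log-loss
    b -= LR * g
    w = [wi - LR * g * xi for wi, xi in zip(w, x)]

correct = 0
for _ in range(10000):
    y = random.random() < 0.5
    x = sample(y)
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    correct += int((z > 0) == y)
print("accuracy:", correct / 10000)    # reliably better than guessing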
When it comes to influencing human behaviour , television has been very effective and television advertising in particular. But ultimately I haven't
heard anyone claim that television advertising has done a huge harm to society.
How old are you? That's not a condescending sneer. There has been
lots of talk about the harm of TV ads and the TV phenomenon itself.
But we've stopped talking about TV for the last 20 or more years.
F'rg zample, circa 1990, the average TV-watching time for Americans
was ca. 24 hours/week. That's more than weekly classroom hours for a university STEM student. There was, before the net and social media,
a lot of talk about the harm of TV & TV ads [1] but TV was so utterly ubiquitous and so viscerally integrated into almost everybody's lives
that it was widely ignored or derided.
There is also a philosophical issue : "manipulating" suggests an inappropriate or illegitimate or something like that way of
influencing people. But which are the legitimate vs illegitimate
ways of influencing people politically ? That's a huge discussion
and not for this group but I see no reason to think that an AI will
be more likely to use illegitimate ways of influencing people
compared to what humans have been using for ever.
Far below the level of politics are the neural mechanisms. NNs may be
able to detect mechanisms analogous to (not just metaphorically
"like") drug addiction. If people can be remotely triggered into
states isomorphic with addiction, that would be illegitimate
influence. AIUI, the designers of slot machines, video games and
social media GUIs already strive, using all the scientific tools they
can muster, to engender just such an addictive response. There's a
growing perception that this is engendering a massive, albeit as yet
poorly defined, social disruption.
[1] E.g., Four Arguments for the Elimination of Television, Jerry Mander, 1978.
On 18 Apr 2023 17:16:23 -0300
Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
Spiros Bousbouras <spibou@gmail.com> writes:
Just as they always have. But from other parts of your post and a
subsequent post I think you're worried that the new AIs will be
especially effective at it. Do you have any specific reason or is
it just a general concern ?
Sorry for the delayed reply.
Specific reason but imprecisely defined. NNs can extract patterns
from massive data that are poorly- or un-detectable by humans. Early
shots at NNs, before we started calling them AI, could detect (sorry I
don't have the reference) cardiopathology from EKG data slightly
better than trained cardiologists. It's a general concern that there
might exist triggers (loopholes, attack points,
Rump-Titty-Titty-Tum-TAH-Tee vulnerabilities, whatever) but a specific
reason that *if* there are, vast NNs may identify them. It follows that
the owners of the NN soft & hardware will exploit them to their own
ends. 21st c. media makes it possible such an effort at exploitation
could be deployed to hundreds of millions of people in a negligibly
short period of time.
So your "specific" reason is that NNs are better than humans at
detecting patterns so perhaps they will detect better ways to do bad
things like create addictions.
Yes but perhaps instead they will detect patterns to do good things
like cure diseases or create practical fusion based nuclear energy.
Do you have any reason to think that the bad things (or specifically addictions) are more likely than any of the many good things one can
imagine ?
When it comes to influencing human behaviour , television has been
very effective and television advertising in particular. But
ultimately I haven't heard anyone claim that television
advertising has done a huge harm to society.
How old are you? That's not a condescending sneer. There has been
lots of talk about the harm of TV ads and the TV phenomenon itself.
But we've stopped talking about TV for the last 20 or more years.
F'rg zample, circa 1990, the average TV-watching time for Americans
was ca. 24 hours/week. That's more than weekly classroom hours for a
university STEM student. There was, before the net and social media,
a lot of talk about the harm of TV & TV ads [1] but TV was so utterly
ubiquitous and so viscerally integrated into almost everybody's lives
that it was widely ignored or derided.
Either young enough to not have come across it or old enough to have forgotten it :-D I don't see the point of comparing the time spent
watching TV vs the time spent doing some other "worthy" activity
like attending university classes.
You talk below about "social disruption". A technology which caused a huge social disruption is the automobile. Compared with television , automobiles are much worse. They have killed and injured a large number of people.
[snip bad stuff auto do]
There is also a philosophical issue : "manipulating" suggests an
inappropriate or illegitimate or something like that way of
influencing people. But which are the legitimate vs illegitimate
ways of influencing people politically ? That's a huge discussion
and not for this group but I see no reason to think that an AI will
be more likely to use illegitimate ways of influencing people
compared to what humans have been using for ever.
Far below the level of politics are the neural mechanisms. NNs may be
able to detect mechanisms analogous to (not just metaphorically
"like") drug addiction. If people can be remotely triggered into
states isomorphic with addiction, that would be illegitimate
influence. AIUI, the designers of slot machines, video games and
social media GUIs already strive, using all the scientific tools they
can muster, to engender just such an addictive response. There's a
growing perception that this is engendering a massive, albeit as yet
poorly defined, social disruption.
Ok , addictive behaviour is certainly illegitimate influence. But if
people get addicted , it will be noticed and hopefully some rules will
be put into place.
Rules have been put in place for other kinds of addiction (like
cigarette advertising) so I don't see why one cannot be cautiously
optimistic regarding the effects of AI.
This discussion has made me wonder though whether AI will be used
for usenet trolling. People certainly have triggers on usenet (for
comp* related , a typical example would be <programming language A
vs programming language B>) and an AI which is sufficiently well
tuned and trained , may be able to keep such discussions (flamewars)
going indefinitely.
[1] E.g., Four Arguments for the Elimination of Television, Jerry Mander, 1978.
Spiros Bousbouras <spibou@gmail.com> writes:
So your "specific" reason is that NNs are better than humans at
detecting patterns so perhaps they will detect better ways to do bad
things like create addictions.
Fair summary. Maybe astonishing things the effect of which, when
employed to nefarious ends, may be irreversible (see "deployed to
hundreds of millions of people" above).
Humans are already getting pretty good at nefarious with contemporary
mass and social media...
Here is the "Tucker Carlson Tonight" playbook: Go straight for the
third rail, be it race, immigration or another hot-button issue;
harvest the inevitable backlash; return the next evening to skewer
critics for how they responded. Then, do it all again. This
feedback loop drove up ratings and boosted loyalty to Fox and
Mr. Carlson.
-- Nicholas Confessore, NYT, 30 Apr 2022
...aided to an unknown (to me anyhow) degree by (more or less) AI
software -- note how "algorithms" has become a bogeyman word.
Do you have any reason to think that the bad things (or specifically addictions) are more likely than any of the many good things one can imagine ?
Sure. Those reasons are somewhat fragmentary as I'm not a polymath
with fully informed insight into everything.
Is curing patients a sustainable business model?
-- Goldman Sachs research report, 2018
If Goldman or an entity it's advising owns the AI, something else will
be the business model.
Either young enough to not have come across it or old enough to have forgotten it :-D I don't see the point of comparing the time spent
watching TV vs the time spent doing some other "worthy" activity
like attending university classes.
If you spend 24 hours a week coding (YADATROT) for 10 years, the
inside of your head is going to be a very different place than what it
would be had you devoted the same hours to anything on TV. The author
of [1] devotes one chapter to probably kook material but he makes a
point that watching TV engenders a sort of trance state that tends to uncouple rational attention. I think you should reflect further on
the subject.
But that's a digression. The point is that both TV and interactive
social media have captured a significant fraction of attention time
for a large number of people.
Ok , addictive behaviour is certainly illegitimate influence. But if
people get addicted , it will be noticed and hopefully some rules will
be put into place.
Not very readily in the USA.
Back on topic here, the headline AI
instances are about language. If your tool to engender obsession in
the public is language, a constitutional freedom of speech defense
bats last.
Rules have been put in place for other kinds of addiction (like
cigarette advertising) so I don't see why one cannot be cautiously optimistic regarding the effects of AI.
Try obsessive-compulsive instead of addictive. "Addiction" that is not chemical in the same way that opiates are is really a misleading
metaphor for other neural/psychological phenomena similar in
appearance but different in mechanism.
This discussion has made me wonder though whether AI will be used
for usenet trolling. People certainly have triggers on usenet (for
comp* related , a typical example would be <programming language A
vs programming language B>) and an AI which is sufficiently well
tuned and trained , may be able to keep such discussions (flamewars)
going indefinitely.
Why limit it to Usenet? We've heard recently about court filings specifically contrived to trigger outrage and "own the libs".
There's
a credible inference that the Russians contrived to flood social media
with posts that would troll the undecided into voting for TFG.
[1] E.g., Four Arguments for the Elimination of Television, Jerry Mander, 1978.
On 25 Apr 2023 01:30:07 -0300
Mike Spencer <mds@bogus.nodomain.nowhere> wrote:
Spiros Bousbouras <spibou@gmail.com> writes:
This discussion has made me wonder though whether AI will be used
for usenet trolling. People certainly have triggers on usenet (for
comp* related , a typical example would be <programming language A
vs programming language B>) and an AI which is sufficiently well
tuned and trained , may be able to keep such discussions (flamewars) going indefinitely.
Why limit it to Usenet?
Because I consider unmoderated usenet exceptionally susceptible to this threat. For example one could use a few AIs which endlessly crosspost on comp.lang.c and comp.lang.c++ discussing which is the best language. One doesn't have an obvious way to quickly filter this kind of thing. The posts would be polite , erudite and generally make reasonable arguments and many humans would bite and participate. But such discussions could be made to dominate the groups until eventually many humans become fed up and stop reading. I don't think it would take large computational resources either , probably within the reach of an average individual now or shortly in the future. I'm actually surprised that even human trolls don't seem to have done this much.