Hello, all.

No sooner was Usenet purged of the plague of GoogleGropus SPAM than
another appeared on the horizon. Since AI in general and LLMs in
particular are developing at break-neck speed, social platforms may
soon be infested by intelligent bots that will be rather hard to
distinguish from humans (e.g. when the LLM is uncensored). Will it
be the end of online group-based communication? Is there any hope of
preventing, or at least staving off, this new apocalypse?

I for one have only one idea: a heterarchical, redundant, mutual
cross-verification of users by each other via off-line meetings.
On 3/27/2024 5:57 AM, Anton Shepelev wrote:
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus
SPAM than another appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
"The boy was not sure what he was doing in the forest. He had
been hiking for hours and thought he was at the edge of his
endurance. The summer heat and humidity were oppressive and
had left him feeling weak. He was seeking peace and quiet, a
place to meditate and escape the distractions of his busy life.
Maybe he was looking for treasure, but he did not know it.
He was annoyed that his cell phone had no signal, but he was
even more upset that his GPS had malfunctioned, and he had lost his way.
He thought he should be back at his vehicle by now. Unfortunately,
he was not sure where he was, and he was becoming increasingly
frustrated. He started to worry that he was lost. He was not
worried about being eaten by wild animals. There were none in
this part of the forest. He was, however, concerned that the
sun would soon set and that he would become disoriented and lost at night.
He was feeling a bit less confident than he usually did when
he was on a mountain hike. He had felt more at home in the rugged,
beautiful surroundings of the Alps, but he was not sure that
he had the endurance to blast his way out of this particular
situation. He was happy to traverse the rugged trails of the
mountains, but he was not convinced that he could battle his
way out of this. He was grateful that he was a healthy man,
but he was not sure that he had the strength to hike his way out
of the jungle.
"
That's the current state of AI for you.
https://en.wikipedia.org/wiki/I_Can't_Believe_It's_Not_Butter!
Paul
In comp.misc Anton Shepelev <anton.txt@g{oogle}mail.com> wrote:
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus SPAM, than
another has appeared on the horison. Since AI in general and LLMs in
particular are developing at break-neck speed, social platforms may
soon be infested by intelligent bots that will be rather hard to
distinguish from humans (e.g. when the LLM is uncensored). Will it
be the end of online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Removing the profit motive from the spammers. So long as gullible
users buy the wares offered, or hand money over to the scams, the
spammers have a profit motive to continue to work around all attempts
to thwart them.
In message <uu19el$2sn32$1@dont-email.me>, Rich <rich@example.invalid>
writes
In comp.misc Anton Shepelev <anton.txt@g{oogle}mail.com> wrote:
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus SPAM
than another appeared on the horizon. Since AI in general and
LLMs in particular are developing at break-neck speed, social
platforms may soon be infested by intelligent bots that will be
rather hard to distinguish from humans (e.g. when the LLM is
uncensored). Will it be the end of online group-based
communication? Is there any hope of preventing or at least staving
off this new apocalypse?
Removing the profit motive from the spammers. So long as gullible
users buy the wares offered, or hand money over to the scams, the
spammers have a profit motive to continue to work around all attempts
to thwart them.
And what about cases where the motive isn't directly financial, e.g.
disinformation?
Anton Shepelev <anton.txt@g{oogle}mail.com> wrote in news:20240327125736.af9b279c995077aa3eccfee4@g{oogle}mail.com:
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus
SPAM than another appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
There are likely some already. I've met one that posts under several
names in several groups, whose entire output is an Eliza-like short
disagreeing answer to everything.
There's almost always some ultimate financial motive behind even those
things that are "disinformation". Find that underlying motive and snip
it off and the incentives go away. The underlying financial motive can
be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation' is relatively small vs. the huge pile of clearly sales/scam spamming
occurring. So it would be helpful overall if those had their oxygen
cut off, because that leaves only the smaller set of kooks with their disinformation to actively ignore.
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
Rich <rich@example.invalid> wrote:
[]
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There are likely some already. I've met one that posts under several
names in several groups, whose entire output is an Eliza-like short
disagreeing answer to everything.
There is a known drinker who does that, and is also a nym shifter.
No, he's not a bot.
In comp.misc Kerr-Mudd, John <admin@127.0.0.1> wrote:
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
Rich <rich@example.invalid> wrote:
[]
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay entrenched far longer than one would like.
On Wed, 27 Mar 2024 19:42:32 -0000 (UTC)
Rich <rich@example.invalid> wrote:
In comp.misc Kerr-Mudd, John <admin@127.0.0.1> wrote:
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
Rich <rich@example.invalid> wrote:
[]
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay
entrenched far longer than one would like.
I was thinking specifically of the Russian attempts at misinformation
about Ukraine. This, ISTM, is more about some "Greater Russia" plan than
pure economics.
Anton Shepelev <anton.txt@g{oogle}mail.com> wrote:
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus SPAM than
another appeared on the horizon. Since AI in general and LLMs in
particular are developing at break-neck speed, social platforms may
soon be infested by intelligent bots that will be rather hard to
distinguish from humans (e.g. when the LLM is uncensored). Will it
be the end of online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Removing the profit motive from the spammers. So long as gullible
users buy the wares offered, or hand money over to the scams, the
spammers have a profit motive to continue to work around all attempts
to thwart them.
I for one have only one idea: a heterarchical redundant mutual cross
verification of users by each other via off-line meetings.
I.e., the PGP web of trust. Technically it worked well. In practice
it never lived up to its potential, precisely because of the need for
those off-line meetings to make it truly workable.

So I see no reason to expect a new variant will fare any better.
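For what it's worth, the "redundant mutual cross-verification" Anton
proposes and the PGP web of trust both reduce to a small graph
computation. A toy sketch (not from any poster; the names, the data,
and the threshold k are all made up): a user counts as verified once
at least k already-verified users have vouched for them off-line,
iterated to a fixed point.

```python
def verified_users(meetings, seeds, k=2):
    """Toy web-of-trust closure.

    meetings maps each candidate to the set of users who have met
    and vouched for them off-line.  A candidate becomes verified
    once at least k already-verified users vouch for them.
    """
    verified = set(seeds)           # bootstrap: users trusted a priori
    changed = True
    while changed:                  # fixed-point iteration
        changed = False
        for user, vouchers in meetings.items():
            if user not in verified and len(vouchers & verified) >= k:
                verified.add(user)
                changed = True
    return verified

meetings = {
    "carol":   {"alice", "bob"},    # carol met both seeds: accepted
    "dave":    {"carol", "alice"},  # accepted once carol is verified
    "mallory": {"dave"},            # only one voucher: rejected
}
print(verified_users(meetings, seeds={"alice", "bob"}))
```

The weak point is exactly the one Rich names: the edges of the graph
only exist if people actually hold those off-line meetings.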
In comp.misc Kerr-Mudd, John <admin@127.0.0.1> wrote:
On Wed, 27 Mar 2024 19:42:32 -0000 (UTC)
Rich <rich@example.invalid> wrote:
In comp.misc Kerr-Mudd, John <admin@127.0.0.1> wrote:
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
Rich <rich@example.invalid> wrote:
[]
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying
motive and snip it off and the incentives go away. The
underlying financial motive can be difficult to discern in some
cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are
'disinformation' is relatively small vs. the huge pile of
clearly sales/scam spamming occurring. So it would be helpful
overall if those had their oxygen cut off, because that leaves
only the smaller set of kooks with their disinformation to
actively ignore.
But there are also political types and governments pushing their
own agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing
in, or keep their nice cushy job prospects open when they leave
their political seat.
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to
stay entrenched far longer than one would like.
I was thinking specifically of the Russian attempts at misinformation
about Ukraine. This, ISTM, is more about some "Greater Russia" plan
than pure economics.
However, a "Greater Russia" plan does bring more money to both the
Russian leaders (i.e. Putin and others) and the Russian Oligarchs
that support them. If "Russia" is "greater" then more money will flow
into the pockets of Putin and his allies, so there is still a
financial incentive at play.
This, however, is one of those financial incentives that is harder to
"cut off" without a lot of violence.
On 27/03/2024 09:57, Anton Shepelev wrote:
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus
SPAM than another appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
You need to fight them with your bow and arrows, like tribesmen used to
do when fighting the white Europeans who came to colonize them :).

AI is here, and Usenet/newsgroups are not able to defend themselves.
Sooner or later one or the other has to disappear from the surface of
this planet. We no longer have tribesmen fighting with rudimentary
weapons: even Islamists still living in caves have bombs and guns to
fight the imperialists who try to disrupt their way of living.
Oil and gas. Russia owns Europe when it comes to that.
grinch wrote:
Oil and gas. Russia owns Europe when it comes to that.
Not true. Since Russia started the war on Ukraine most European
countries completely stopped importing Russian gas, with the exception
of - funnily enough - small Austria.
-jw-
Not true. Since Russia started the war on Ukraine most European
countries completely stopped importing Russian gas, with the exception
of - funnily enough - small Austria.
This is true but it's been a hell of a sacrifice to do so, and a lot
of people in some countries are complaining.
On 27 Mar 2024, Rich <rich@example.invalid> posted some news:uu1sr8$31f26$1@dont-email.me:
In comp.misc Kerr-Mudd, John <admin@127.0.0.1> wrote:
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
Rich <rich@example.invalid> wrote:
[]
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Term limits and a two year hard ban from lobbying once exiting office.
If you want to cripple a country these days, just fire
some rockets into data centers. That is the inherent
weakness of the "cloud".
Hey there! I've got some great news to share with you all.
I've just added a brand new header to my posts, and I'm
planning to start incorporating it into the body of my
content as well. This header is going to indicate whether
the post was written by me or by a chatbot.
X-Content-By-Chatbot: False -- it was not written by a chatbot
X-Content-By-Chatbot: True -- it was written by a chatbot
X-Content-By-Chatbot: Edited -- it was written by a chatbot and
then edited by me
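A newsreader or filter could act on such a header mechanically. A
minimal sketch (the header name and its three values come from the
post above; the sample article text is made up) using Python's
standard email parser, which also handles Netnews-style headers:

```python
from email import message_from_string

# Hypothetical article carrying the proposed header.
raw = """From: someone@example.invalid
Subject: Re: AI bots overrunning social networks
X-Content-By-Chatbot: Edited

Hey there! I've got some great news to share with you all.
"""

msg = message_from_string(raw)
# Absent header defaults to "False", i.e. written by a human.
flag = msg.get("X-Content-By-Chatbot", "False").strip().lower()
if flag in ("true", "edited"):
    print("chatbot-assisted post:", flag)
```

Of course, as the follow-up notes, the scheme only works if readers
look at the headers at all, and if posters set the flag honestly.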
Stefan Ram:
Hey there! I've got some great news to share with you all.
I've just added a brand new header to my posts, and I'm
planning to start incorporating it into the body of my
content as well. This header is going to indicate whether
the post was written by me or by a chatbot.
X-Content-By-Chatbot: False -- it was not written by a chatbot
X-Content-By-Chatbot: True -- it was written by a chatbot
X-Content-By-Chatbot: Edited -- it was written by a chatbot and
then edited by me
Great: deception through inconspicuous placement of vital
information. After reading two more paragraphs, I got a
hunch and checked the headers in your message. Yeah, I was
right.
grinch:
If you want to cripple a country these days, just fire
some rockets into data centers. That is the inherent
weakness of the "cloud".
Then Russia has failed miserably in crippling Ukraine,
despite its overwhelming advantage in ballistic missiles. Or
did not try to.
Nearly 100% of NoCeM listings since Google left are of
computer-generated posts, but these posts started before
22 Feb.
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via
off- line meetings.
That's a great way to meet a lot of FBI agents.
There's so much propaganda that people don't understand
what it really is and what it can do and not do.
Retro Guy to Anton Shepelev:
Nearly 100% of NoCeM listings since Google left are of
computer-generated posts, but these posts started before
22 Feb.
I think they were blocked because they contain SPAM rather
than because they are computer-generated...
On 27 Mar 2024, Rich <rich@example.invalid> posted some news:uu1sr8$31f26$1@dont-email.me:
In comp.misc Kerr-Mudd, John <admin@127.0.0.1> wrote:
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
Rich <rich@example.invalid> wrote:
[]
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Term limits and a two year hard ban from lobbying once exiting office.
Pass the insider trading ban for everyone in government service, no exceptions.
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay
entrenched far longer than one would like.
Cap government jobs at 20 years. Eliminate government hog trough pensions where they get paid 130% of what they were making before retirement. Cap government pensions at 80% max.
Ballistic missiles are an end-all. Russia does not want that, as it
ruins their expansion plans for thousands of years. They are using
Ukraine and Israel to repay the USA for what Reagan did to them.

Russia has failed miserably in crippling Ukraine,
despite its overwhelming advantage in ballistic missiles. Or
did not try to.