========================================================================
Once I started to research the possibility that LLM interactions were a variation on the psychic's con, I began to see parallels everywhere in
the field of "AI".
* Hooking a language model up to an MRI and claiming that it can read
minds.
* Claiming to be able to discern criminality based on facial
expressions and gait.
* Proposing magical solutions to health problems.
* Literal predictions of the future.
* Claiming to be able to discern the honesty of potential employees.
All of these are proposed applications of "AI" systems, but they are
also all common psychic scams. Mind reading, police assistance, faith healing, prophecy, and even psychic employee vetting are all right out
of the mentalist playbook.
Even though I have no doubts that these efforts are sincere, it's
becoming more and more obvious that the tech industry has given itself wholesale to superstition and pseudoscience. They keep ignoring the
warnings coming from other fields and the concerns from critics in their
own camp.
Large Language Models don't have the functionality or features to make
up for this wave of superstition.
* "Hallucinations" are a pervasive flaw that's baked into how LLMs
work.
<https://needtoknow.fyi/card/hallucinations/>
* Summarisations are error-prone and prone to generalising about the
text being summarised.
<https://www.baldurbjarnason.com/2023/ai-summaries-unreliable/>
* Their "reasoning" is a statistical illusion.
* Their performance at natural language processing tasks is only
marginally better than that of smaller language models.
<http://opensamizdat.com/posts/chatgpt_survey/>
* They tend to memorise and copy text without attribution.
<https://needtoknow.fyi/card/copyright/>
Taken together, these flaws make LLMs look less like an information technology and more like a modern mechanisation of the psychic hotline.
Delegating your decision-making, ranking, assessment, strategising,
analysis, or any other form of reasoning to a chatbot becomes the
functional equivalent to phoning a psychic for advice.
Imagine Google or a major tech company trying to fix their search engine
by adding a psychic hotline to their front page? That's what they're
doing with Bard.
* "Our university students can't make heads nor tails of our website.
Let's add a psychic hotline!"
* "We need to improve our customer service portal. Let's add a
psychic hotline!"
* "We've added a psychic hotline button to your web browser! No, you
can't get rid of it. You're welcome!"
* "Can't understand a thing in our technical docs? Refer to our fancy
new psychic hotline!"
The AI bubble is going to be a tough one to weather.
More on "AI"
============
I've spent some time writing about the many flaws of language models and generative "AI".
* I've written about how language models are a backward-facing tool
in a novelty-seeking industry and why I think using language models
for programming is a bad idea.
<https://softwarecrisis.dev/letters/ai-code-quality/>
<https://softwarecrisis.dev/letters/ai-and-software-quality/>
* "AI" summaries are inherently unreliable.
<https://www.baldurbjarnason.com/2023/ai-summaries-unreliable/>
* Their tendency towards shortcuts makes them dangerous in healthcare.
<https://www.baldurbjarnason.com/2023/ai-in-healthcare/>
* Most of the research indicating a productivity benefit to "AI" is,
at best, flawed, and at worst are completely detached from the
reality of modern office work.
<https://www.baldurbjarnason.com/2023/ignore-most-ai-research/>
<https://www.baldurbjarnason.com/2023/ai-research-again/>
* AI vendors have a history of pseudoscience and snake oil.
<https://www.baldurbjarnason.com/2023/beware-of-ai-snake-oil/>
* Even if you do think that a language model's unsolvable tendency
towards ‘hallucinations' doesn't disqualify the technology from
replacing search engines, the many security issues that language
models suffer from should. The "write a prompt; get the output"
model is inherently insecure. These systems are also vulnerable to
a form of keyword manipulation exploit that's impossible to prevent.
<https://softwarecrisis.dev/letters/prompts-are-not-fit-for-purpose/>
<https://softwarecrisis.dev/letters/google-bard-seo/>
I've come to the conclusion that a language model is almost always the
wrong tool for the job.
***
I strongly advise against integrating an LLM or chatbot into your
product, website, or organisational processes.
***
If you do have to use generative AI, either because it's a mandate from
above your pay grade or some other requirement, I have written a book
that's specifically about the issues with using generative "AI" for
work:
The Intelligence Illusion: a practical guide to the business risks of Generative AI.
<https://illusion.baldurbjarnason.com/>
It's only $35 USD for EPUB and PDF, which is only 15% of the $240 USD
cost of twelve months of ChatGPT Plus.
But, again, I'd much rather you just avoid using a language model in
the first place and save both the cost of the ebook and the ChatGPT subscription.
References on the Psychic's Con
===============================
* Cold reading (Wikipedia)
<https://en.wikipedia.org/wiki/Cold_reading>
* How to Become Psychic and Cold Read People
<http://positivelybrainwashed.com/
how-to-become-psychic-and-cold-read-people/>
* Derren Brown Cold Reading revealed
<https://secrets-explained.com/derren-brown/cold-reading>
* Cold reading (Rational Wiki)
<https://secrets-explained.com/derren-brown/cold-reading>
* 7 Tricks Psychics Bullshit People With That Everyone Should Know
<https://www.thrillist.com/culture/7-tricks-psychics-and-mediums-use-
how-psychics-use-cold-reading-the-forer-effect>
* Should You Believe in Psychics? Psychology and logic join forces to
debunk psychics (Psychology Today)
<https://www.psychologytoday.com/us/blog/hot-thought/201904/
should-you-believe-in-psychics>
* Motivated reasoning (Wikipedia)
<https://en.wikipedia.org/wiki/Motivated_reasoning>
* Cold Reading: How I Made Others Believe I Had Psychic Powers
<https://medium.com/@chris.kirsch/cold-reading-how-i-
made-others-believe-i-had-psychic-powers-dc184879d264>
* Cold reading (Sceptic's Dictionary)
<https://www.skepdic.com/coldread.html>
* Subjective validation (Sceptic's Dictionary)
<https://www.skepdic.com/subjectivevalidation.html>
* Subjective validation (Wikipedia)
<https://en.wikipedia.org/wiki/Subjective_validation>
* Coincidences: Remarkable or Random?
<https://skepticalinquirer.org/1998/09/
coincidences-remarkable-or-random/>
* Psychic Experiences: Psychic Illusions
<https://www.susanblackmore.uk/articles/
psychic-experiences-psychic-illusions/>
* Guide to Cold Reading
<https://www.skeptics.com.au/resources/articles/
guide-to-cold-reading-ray-hyman/>
* The Cold Reading Technique
<http://www.denisdutton.com/cold_reading.htm>
* Forer effect (Sceptic's Dictionary)
<https://www.skepdic.com/forer.html>
* Tricks of the Psychic Trade (Psychology Today)
<https://www.psychologytoday.com/us/blog/speaking-in-tongues/
201201/tricks-the-psychic-trade>
* Psychic Scams
<https://www.aarp.org/money/scams-fraud/info-2022/psychic.html>
* Ten Tricks of the Psychics I Bet You Didn't Know (You Won't Believe #6!)
<https://skepticalinquirer.org/exclusive/
ten-tricks-of-the-psychics-i-bet-you-didnrsquot-know/>
From: <https://softwarecrisis.dev/letters/llmentalist/>
Sysop: | Keyop |
---|---|
Location: | Huddersfield, West Yorkshire, UK |
Users: | 403 |
Nodes: | 16 (2 / 14) |
Uptime: | 40:30:05 |
Calls: | 8,407 |
Calls today: | 2 |
Files: | 13,171 |
Messages: | 5,904,849 |