The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con
The new era of tech seems to be built on superstitious behaviour
The intelligence illusion seems to be based on the same mechanism as that of a psychic’s con, often called cold reading. It looks like an accidental automation of the same basic tactic.
The chatbot gives the impression of an intelligence that is specifically engaging with you and your work, but that impression is nothing more than a statistical trick.
People sceptical about "AI" chatbots are less likely to use them. Those who actively disbelieve in the possibility of chatbot "intelligence" won't get pulled in by the bot's tricks. The most active audience will be early adopters, tech enthusiasts, and genuine believers in AGI, who will all generally be less critical and more open-minded.
The chatbot’s answers sound extremely specific to the current context but are in fact statistically generic. The mathematical model behind the chatbot delivers a statistically plausible response to the question. The marks who find this convincing get pulled in.
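To make the “statistically plausible but generic” point concrete, here is a toy sketch of mine, not anything from a real chatbot: a bigram model trained on a few lines of generic, Barnum-style praise. The corpus, function names, and seed are all invented for illustration. Real LLMs are transformer networks trained on vastly more data, but the principle holds: they sample plausible continuations of the conversation rather than engage with its meaning.

```python
# Toy illustration: a bigram model trained on generic "psychic" praise.
# It emits fluent-sounding replies by pure word-frequency statistics,
# with no model of the reader or their work.
import random
from collections import defaultdict

CORPUS = (
    "your work shows real promise and with focus you will improve . "
    "you clearly care about quality and that will serve you well . "
    "your instincts are good but you sometimes doubt yourself . "
    "with a little more structure your ideas will really shine ."
)

# Count bigram transitions: which word tends to follow which.
transitions = defaultdict(list)
tokens = CORPUS.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 12) -> str:
    """Emit a statistically plausible word sequence, nothing more."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        # Duplicates in the list make this sample in proportion
        # to how often each word followed in the corpus.
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

random.seed(1)
print(generate("your"))  # a fluent-sounding but entirely generic reply
```

The output reads as if it were addressed to you, yet every word was chosen only because it frequently followed the previous one. Scale that mechanism up by many orders of magnitude and the generic starts to feel specific.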
The warnings also play a role in setting the stage. “It’s early days” means that when the statistically generic nature of the response is spotted, it’s easily dismissed as an “error”. Anthropomorphising terms such as “hallucination” help dismiss the fact that these statistical responses are completely disconnected from meaning and facts. The hype and mythology of AI primes the audience to think of these systems as persons to be understood and engaged with, all but guaranteeing subjective validation.