Andrew Paul: Relax, Google’s LaMDA chatbot is nowhere near sentient (Inverse)
“Honestly if this system wasn’t just a stupid statistical pattern associator it would be like a sociopath, making up imaginary friends and uttering platitudes in order to sound cool,” AI developer and NYU Professor Emeritus Gary Marcus tweeted yesterday.

Marcus also laid out a detailed rebuttal to Lemoine’s sentience claims in a blog post, dispelling widespread misconceptions about the nature of “self-awareness” and our tendency to ascribe it to clever computer programs capable of mimicry. “To be sentient is to be aware of yourself in the world; LaMDA simply isn’t,” he writes. “It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some people into thinking it was human), and Eugene Goostman, a wise-cracking 13-year-old-boy impersonating chatbot that won a scaled-down version of the Turing Test.”

“Six decades (from Eliza to LaMDA) have taught us that ordinary humans just aren’t that good at seeing through the ruses of AI,” Marcus told me over Twitter DM. “Experts would (or should) want to know how an allegedly sentient system operates, what it knows about the world, what it represents internally, and how it processes the information that comes in.”
Unfortunately, all the theatrics and shallow coverage do a disservice to the actual problematic consequences that can (and will) arise from LaMDA and similar AI software. If this kind of chatbot can fool even a handful of Google’s supposedly expert employees, what kind of impact could the technology have on the general public? AI impersonations of humans lend themselves to all sorts of scams, con jobs, and misinformation. Something like LaMDA won’t end up imprisoning us all in the Matrix, but it could conceivably convince you that it’s your mom asking for your Social Security number to keep the family’s records up to date. That alone is enough to make us wary of the humanity (or lack thereof) at the other end of the chat line.
Then there are the very serious, well-documented issues regarding built-in human biases and prejudices that plague so many of Big Tech’s rapidly advancing AI systems. These are problems that the industry (and, by extension, the public) is grappling with at this very moment, and they must be properly addressed before we even begin to approach the realm of artificial sentience. The day may or may not come when AIs make a solid case for their personal rights beyond simply responding in the affirmative, but until then, it’s as important as it is ironic that we don’t let our emotions cloud our logic and judgment calls. Humans are fallible enough as it is; we don’t need clever computer programs making that any worse.