Andrew Paul: Relax, Google’s LaMDA chatbot is nowhere near sentient (Inverse)
“Honestly if this system wasn’t just a stupid statistical pattern associator it would be like a sociopath, making up imaginary friends and uttering platitudes in order to sound cool,” AI developer and NYU Professor Emeritus Gary Marcus tweeted yesterday.

Marcus also laid out a detailed rebuttal to Lemoine’s sentience claims in a blog post, dispelling widespread misassumptions regarding the nature of “self-awareness” and our tendency to ascribe it to clever computer programs capable of mimicry. “To be sentient is to be aware of yourself in the world; LaMDA simply isn’t,” he writes. “It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some people into thinking it was human), and Eugene Goostman, a wise-cracking 13-year-old-boy impersonating chatbot that won a scaled-down version of the Turing Test.”

“Six decades (from Eliza to LaMDA) have taught us that ordinary humans just aren’t that good at seeing through the ruses of AI,” Marcus told me over Twitter DM. “Experts would (or should) want to know how an allegedly sentient system operates, what it knows about the world, what it represents internally, and how it processes the information that comes in.”
Unfortunately, all the theatrics and shallow coverage do a disservice to the actual problematic consequences that can (and will) arise from LaMDA and similar AI software. If this kind of chatbot can fool even a handful of Google’s supposedly expert employees, then what kind of impact could that technology have on the general populace? AI impersonations of humans lend themselves to all sorts of scams, cons, and misinformation. Something like LaMDA won’t end up imprisoning us all in the Matrix, but it could conceivably convince you that it’s your mom who needs your Social Security number to keep the family’s records up to date. That alone is enough to make us wary of the humanity (or lack thereof) at the other end of the chat line.
Then there are the very serious, well-documented issues regarding built-in human biases and prejudices that plague so many of Big Tech’s rapidly advancing AI systems. These are problems that the industry — and, by extension, the public — are grappling with at this very moment, and they must be properly addressed before we even begin to approach the realm of artificial sentience. The day may or may not come when AIs make solid cases for their personal rights beyond simply responding in the affirmative, but until then, it’s as important as it is ironic that we don’t let our emotions cloud our logic and judgment calls. Humans are fallible enough as it is; we don’t need clever computer programs making that any worse.
·inverse.com·
Colin Meloy: I had ChatGPT write a Decemberists song
For the record, this is a remarkably mediocre song. I wouldn’t say it’s a terrible song, though it really flirts with terribleness. No, it’s got some basics down: it (mostly) rhymes in all the right places (though that last couplet is a real doozy), it uses a chord progression (I-V-vi-IV) that is enshrined in more hits from the western pop canon than I care to count. But I think you’d agree that there’s something lacking, beyond the little obvious glitches — the missed or repeated rhymes, the grammatical mistakes, the overall banality of the content. Getting the song down, I had to fight every impulse to better the song, to make it resolve where it doesn’t otherwise, to massage out the weirdnesses. I wanted to stay as true to its creator’s vision as possible, and at the end, there’s just something missing. I want to say that ChatGPT lacks intuition. That’s one thing an AI can’t have, intuition. It has data, it has information, but it has no intuition. One thing I learned from this exercise: so much of songwriting, of writing writing, of creating, comes down to the creator’s intuition, the subtle changes that aren’t written as a rule anywhere — you just know it to be right, to be true. That’s one thing an AI can’t glean from the internet.
·colinmeloy.substack.com·
Guy Hoffman: Why I Don't Care if Students Use GPT
“It's like a calculator” is a common quote I hear about ChatGPT. As if the idea is what matters and writing it down is just a necessary evil or technical chore that needs to be done by someone or somecode. But anyone who writes for a living knows that in many ways writing is thinking. The process of translating vague ideas into a coherent text helps structure ideas and make connections. The time spent editing and re-editing weeds out important ideas from marginal ones. The effort to address an imaginary reader, to clarify things to them, helps eliminate unnecessary style decisions. Finding your own voice helps you understand yourself and your contribution to the world better.
·write.guyhoffman.com·
Pareidolia, face detection on grains of sand, installation, Driessens & Verstappen, 2019
In the artwork Pareidolia, facial detection is applied to grains of sand. A fully automated robot search engine examines the grains of sand in situ. When the machine finds a face in one of the grains, the portrait is photographed and displayed on a large screen. Pareidolia was developed for Sea Art on the isle of Texel, commissioned by SEA - Science Encounters Art. The production was supported by the Creative Industries Fund NL. Photo: Heleen Vink, SEA Art, church De Burght, Den Burg, Texel, 2019
·notnot.home.xs4all.nl·
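The piece is, in essence, a standard face-detection loop pointed at an unusual subject: photograph a grain, run a detector, and display any hit as a portrait. Here is a minimal Python sketch of that loop using OpenCV's stock Haar cascade; the file names, the loop over grain photos, and the display step are hypothetical stand-ins, not the artists' actual robotic system.

```python
# Minimal sketch of a Pareidolia-style pipeline, assuming OpenCV's
# bundled Haar cascade face detector. File names and the display step
# are illustrative placeholders, not the artwork's real code.
import cv2

# Load the stock frontal-face cascade that ships with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def find_face(image_path: str):
    """Return the first face-like crop found in a grain photo, or None."""
    image = cv2.imread(image_path)
    if image is None:
        return None
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # A small minSize keeps tiny, grain-scale "faces" from being filtered out.
    faces = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24)
    )
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return image[y:y + h, x:x + w]

# Hypothetical scan over photos of individual grains: whenever the
# detector fires, show the cropped "portrait" on screen.
for path in ["grain_001.png", "grain_002.png"]:  # placeholder file names
    portrait = find_face(path)
    if portrait is not None:
        cv2.imshow("Pareidolia portrait", portrait)
        cv2.waitKey(0)
```

Haar cascades are a deliberately loose, low-threshold detector here: for this kind of piece, false positives are the point, since pareidolia is precisely the detector "seeing" faces where none exist.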