David Chalmers Discusses the Hard Problem of Consciousness
What exactly is consciousness, and why is it such a hard problem to solve? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O’Reilly take you deep into the mysteries of consciousness and objective reality with David Chalmers, a philosopher and cognitive scientist.
How do we understand the core of subjective experience? We explore the philosophical and neuroscientific angles of consciousness. Why do some experts believe consciousness arises from the sensory cortex, while others point to the prefrontal cortex? And what does this mean for how we understand animals, AI, or even the very nature of our own minds? The conversation ranges from how consciousness evolved to the levels of consciousness in animals, babies, and humans under anesthesia.
As the conversation turns to AI, the group contemplates the future of artificial consciousness. Could large language models, like today's AI, eventually develop consciousness? If not now, will they in the near future? And if we can simulate human consciousness in machines, are we creating philosophical zombies—entities that behave like us but lack awareness? These questions spiral into debates about reality, VR, and whether we're already living in a simulation.
Could there be laws of consciousness waiting to be discovered, like the laws of physics? Or is consciousness simply an illusion, as some scientists propose? What are the core features of consciousness that scientists agree on? Buckle up for a fascinating journey that will leave you questioning not just what consciousness is, but whether any of this is even real.
Thanks to our Patrons Jay, Gregory Aronoff, Tom B. Night, Barnsley, Glenn, Hibachi Flamethrower, Crescencio Maximilian joseph Martinez, Micheal Gomez, Matthew Deane, James, Joe Chillemi, Thomas van Cleave, Kelsey Plugge, Jeff Jones, William Hamilton, and Kevin Cosg. for supporting us this week.
Check out our second channel, @StarTalkPlus
Get the NEW StarTalk book, 'To Infinity and Beyond: A Journey of Cosmic Discovery' on Amazon: https://amzn.to/3PL0NFn
Support us on Patreon: https://www.patreon.com/startalkradio
FOLLOW or SUBSCRIBE to StarTalk:
Twitter: http://twitter.com/startalkradio
Facebook: https://www.facebook.com/StarTalk
Instagram: https://www.instagram.com/startalk
About StarTalk:
Science meets pop culture on StarTalk! Astrophysicist & Hayden Planetarium director Neil deGrasse Tyson, his comic co-hosts, guest celebrities & scientists discuss astronomy, physics, and everything else about life in the universe. Keep Looking Up!
#StarTalk #neildegrassetyson
Timestamps:
00:00 - Introduction: David Chalmers
04:00 - What is Consciousness?
06:50 - Where is the Seat of Consciousness?
14:30 - Are Large Language Models Conscious?
22:47 - The Laws of Consciousness
31:40 - Does Consciousness Really Exist?
34:50 - The Reality of Virtual Reality
42:40 - The Future of Reality
Roaming RAG – Make the Model Find the Answers - Arcturus Labs
Roaming RAG offers a fresh take on Retrieval-Augmented Generation, letting LLMs navigate well-structured documents like a human—exploring outlines and diving into sections to find answers. Forget complex retrieval setups and vector databases; this streamlined approach delivers rich context and reliable answers with less hassle. It’s perfect for structured content like technical manuals, product guides, or the innovative llms.txt format designed to make websites LLM-friendly.
These shortcomings have led to sharp, even caustic criticism that AI cannot rival the human mind—the models are merely “stochastic parrots,” in Bender’s famous words, or supercharged versions of “autocomplete,” to quote the AI critic Gary Marcus.
I've been putting the [new o1 model](https://openai.com/index/openai-o1-system-card/) from OpenAI through its paces, in particular for code. I'm very impressed - it feels like it's giving me a similar code quality …
Don’t Throw the Baby Out With the Generative AI Bullshit Bathwater
If I had wanted to write a column about presidential pardons, I’d find ChatGPT’s assistance a far better starting point than I’d have gotten through any general web search. But to quote Reagan: “Trust, but verify.”
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach
Before we get going — please enjoy my speech from Web Summit, Why Are All Tech Products Now Shit? I didn’t write the title.
What if what we're seeing today isn't a glimpse of the future, but the new terms of the present? What if artificial intelligence isn't actually capable
AI Hallucinations: Why Large Language Models Make Things Up (And How to Fix It) - kapa.ai - Instant AI answers to technical questions
Kapa.ai turns your knowledge base into a reliable and production-ready LLM-powered AI assistant that answers technical questions instantly. Trusted by 100+ startups and enterprises incl. OpenAI, Docker, Mapbox, Mixpanel and NextJS.
Study of ChatGPT citations makes dismal reading for publishers | TechCrunch
As more publishers cut content licensing deals with ChatGPT-maker OpenAI, a study put out this week by the Tow Center for Digital Journalism -- looking at
I've been having fun playing with this new vision model from the Hugging Face team behind [SmolLM](https://simonwillison.net/2024/Nov/2/smollm2/). They describe it as: > [...] a 2B VLM, SOTA for its memory …
Model Context Protocol (MCP) is an open-source standard released by Anthropic in November 2024 that enables AI models to interact with external data sources through a unified interface.
Ask questions of SQLite databases and CSV/JSON files in your terminal
I built a new plugin for my sqlite-utils CLI tool that lets you ask human-language questions directly of SQLite databases and CSV/JSON files on your computer. It’s called sqlite-utils-ask. Here’s …
This "natural language interface for computers" open source ChatGPT Code Interpreter alternative has been around for a while, but today I finally got around to trying it out. Here's how …
Misinformation expert cites non-existent sources in Minnesota deep fake case • Minnesota Reformer
A leading misinformation expert is being accused of citing non-existent sources to defend Minnesota’s new law banning election misinformation. Professor Jeff Hancock, founding director of the Stanford Social Media Lab, is “well-known for his research on how people use deception with technology,” according to his Stanford biography. At the behest of Minnesota Attorney General Keith […]