Myths, magic, and metaphors: the language of generative AI
As part of my PhD studies, I read and write a lot of stuff that doesn’t really fit into my research, but which I find interesting anyway. I’m categorising these “spare parts” on my blog, and if you’re interested in following them you’ll find them all here. I’ve written a fair bit about AI ethics, […]
AI Metaphors We Live By: The Language of Artificial Intelligence
In "Metaphors We Live By," Lakoff and Johnson emphasise that metaphors are fundamental to human thought and language, not merely decorative. In this post, I've examined my own use of metaphors to describe AI and analysed their implications, highlighting the power and limitations of these metaphors in shaping our understanding of AI and its impact.
Discover how AI can help you explore careers, research companies, polish application materials, practice interviews, and negotiate salaries in today's job market
CHM Live | The Great Chatbot Debate: Do LLMs Really Understand?
[Recorded March 25, 2025]
Chatbots based on large language models (LLMs), like ChatGPT, answer sophisticated questions, pass professional exams, analyze texts, generate everything from poems to computer programs, and more. But is there genuine understanding behind what LLMs can do? Do they really understand our world? Or, are they a triumph of mathematics and masses of data and calculations simulating true understanding?
Join CHM, in partnership with IEEE Spectrum, for a fundamental debate on the nature of today’s AI: Do LLMs demonstrate genuine understanding, the “sparks” of true intelligence, or are they “stochastic parrots,” lacking understanding and meaning?
FEATURED PARTICIPANTS
Speaker
Emily M. Bender
Professor of Linguistics, University of Washington
Emily M. Bender is a professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington, where she also serves as faculty director of the CLMS program, and adjunct professor at the School of Computer Science and Engineering and the Information School. Known for her critical perspectives on AI language models, notably coauthoring the paper "On the Dangers of Stochastic Parrots," Bender is also the author of the forthcoming book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.
Speaker
Sébastien Bubeck
Member of Technical Staff, OpenAI
Sébastien Bubeck is a member of the technical staff at OpenAI. Previously, he served as VP, AI and distinguished scientist at Microsoft, where he spent a decade at Microsoft Research. Prior to that, he was an assistant professor at Princeton University. Bubeck's 2023 paper, "Sparks of Artificial General Intelligence: Early experiments with GPT-4," drove widespread discussion and debate about the evolution of AI both in the scientific community and mainstream media like the New York Times and Wired. Bubeck has been recognized with best paper awards at a number of conferences, and he is the author of the book Convex Optimization: Algorithms and Complexity.
Moderator
Eliza Strickland
Senior Editor, IEEE Spectrum
Eliza Strickland is a senior editor at IEEE Spectrum, where she covers artificial intelligence, biomedical technology, and other advanced technologies. In addition to her writing and editing work, she also hosts podcasts, creates radio segments, and moderates talks at events such as SXSW. Prior to joining IEEE Spectrum in 2011, she oversaw a daily science blog for Discover magazine and wrote for outlets including Wired, The New York Times, Sierra, and Foreign Policy. Strickland received her master’s degree in journalism from Columbia University.
My friend Sandie Morgan spent over a decade in Greece and knows languages far better than I do. She once taught me a word that is often misinterpreted as waiting to take action until you reach a distant land; a better translation is 'as we are going' (loving our neighbors within our current, messy contexts). While our conversation was about theology, I find it also fitting for my current situation. I'm in a 'figure things out as I go' phase as I create the Go Somewhere card sorting game. Its purpose is to help us take action on AI's impact on higher education without waiting until we have everything figured out.
I've been working on Go Somewhere for a while. Initially, I focused on game structures with right/wrong answers. But in Episode 527 of Teaching in Higher Ed, Alexis Peirce Caudell encouraged me to think non-dualistically and create a game with expansive possibilities.
Another influential voice was Autumm Caines, who was instrumental in my selection as Scholar in Residence at the University of Michigan-Dearborn in Fall 2023. On Episode 501, Autumm and Maya explain how my curiosity and willingness to explore liminal spaces related to AI's impact on higher education helped me be a resource for them.
So, I've *almost* invented a game with the following card sets:
1. Metaphors: Based on research by Anuj Gupta, Yasser Tamer, Anna Mills, and Maha Bali, exploring how discussing AI metaphors can build awareness.
2. Values: Based on Schwartz's value theory, exploring how values relate to motivations.
3. Actions: Based on suggestions from over 50 voices in higher education (see comments).
The week of August 12, I'll be in Nashville to play Go Somewhere with ~150 faculty at Belmont University. The week after, I'll facilitate Go Somewhere with ~130 colleagues in Vanguard University's Academic Affairs division. The cards are ready, and I have a handout for exploring metaphors. However, I still don't know how to turn this activity into a game. I'm realizing I don't know what makes something a game, which is probably why I'm struggling.
I never saw Frozen 2, but the song Into the Unknown keeps playing in my mind. I'm going to get as far as I can for these events, recognizing it might not quite end up being an actual game. The flow I'm experimenting with is:
1. Identify the AI metaphor you use. Compare it with the researchers' findings and others' views at your table.
2. Explore the value cards and find one that aligns with your metaphor.
3. Commit to one action this term/semester to Go Somewhere. Pick an action aligned with your values, recognizing others' choices may differ, and AI's impact is messy and complex.
If you have any ideas on making this more game-like, I'm all ears. I can't change the cards as they're already printed, but other suggestions are welcome.
Thanks, Sandie, for the reminder that adopting an "as we are going" mindset can take us to amazing places and help us bring others along for the adventure.
Prompts: "turn this paper into an interactive fun game, make sure the game mechanics are both fun and reflect key points from the paper [pasted in full text]"
"this is kinda lame though, there are no trade offs, no story, and no cool graphics"
Cognitive Laziness: The Real Risk of AI | LinkedIn
When it comes to AI in education, I've been seeing more and more conversations pop up about how we can prevent cheating, or even if that's possible anymore. But what if, in our focus on anti-cheating practices, we're missing something deeper? Something that has been on my mind even more than the iss…
As we approach May, alarm bells are ringing for colleges and universities to ensure that learners entering the job market this year and beyond have completed AI literacy programs.
Teaching Fact-Checking Through Deliberate Errors: An Essential AI Literacy Skill
This teaching resource focuses on cultivating AI literacy by teaching students to critically evaluate AI-generated content with deliberate inaccuracies. It emphasizes fact-checking skills through s…
To Ask or Not to Ask the Question in ChatGPT – Best Practices
This is the second post in the To Ask or Not to Ask the Question in ChatGPT series. The first post focused on using ChatGPT to ask questions and gather information, while this post highlights best pra…