Artificial intelligence: Supply chain constraints and energy implications
AI systems account for a rapidly increasing share of global data center power demand. As of 2025, AI is estimated to represent up to 20% of data center power demand, and, because of increasing production capacity in the AI hardware supply chain, it could account for almost half of that demand by the end of 2025. This rapid growth could exacerbate dependence on fossil fuels and undermine climate goals.
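The shares above can be turned into rough absolute figures with simple arithmetic. A minimal sketch, where the total annual data center demand is a purely illustrative assumption (not a figure from the text), and the 20% and "almost half" shares come from the paragraph above:

```python
# Back-of-envelope estimate of AI's implied electricity demand,
# given a share of total data center power demand.

def ai_power_demand(total_twh: float, ai_share: float) -> float:
    """Return AI's implied annual electricity demand in TWh."""
    return total_twh * ai_share

# Hypothetical total annual data center demand (TWh) -- an
# assumption for illustration, not a sourced figure.
TOTAL_TWH = 400.0

low = ai_power_demand(TOTAL_TWH, 0.20)   # "up to 20%" share
high = ai_power_demand(TOTAL_TWH, 0.48)  # "almost half" share

print(f"AI demand at 20% share:  {low:.0f} TWh")
print(f"AI demand at ~48% share: {high:.0f} TWh")
```

The point of the sketch is only that moving from a one-fifth share to a nearly one-half share more than doubles AI's implied demand, whatever the true total turns out to be.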
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.
The real cost of AI is being paid in deserts far from Silicon Valley
In Empire of AI, journalist Karen Hao reports on how Indigenous communities in Chile are fighting to protect their land from AI-driven resource extraction.
Claude 4 AI: Powerful New Features & How to Use Them Best
Discover practical ways to leverage Claude 4’s enhanced coding, nuanced analysis, and smarter editing features to streamline tasks and improve your workflow.
With Claude 4, we're showcasing how Claude can seamlessly integrate into your entire workday. In this demo, three Anthropic team members demonstrate Claude's ad...
When Good Ideas Meet Poor Execution: The Humane AI Pin and the Future of Language Translation
The Humane AI Pin aimed to revolutionize real-time language translation through wearable technology but failed due to execution issues like poor battery life and limited functionality. Despite its …
Meet Mahi-Bot 🐟, Cal State Channel Islands Extended University's friendly 24/7 AI guide. Inspired by Ekho the dolphin and powered by Playlab, this chatbot is designed to support prospect…
There is a deep disorder in the discourse of generative artificial intelligence (AI). When AI seems to make things up or distort reality — adding extra fingers
Myths, magic, and metaphors: the language of generative AI
As part of my PhD studies, I read and write a lot of stuff that doesn’t really fit into my research, but which I find interesting anyway. I’m categorising these “spare parts” on my blog, and if you’re interested in following them you’ll find them all here. I’ve written a fair bit about AI ethics, […]
AI Metaphors We Live By: The Language of Artificial Intelligence
In "Metaphors We Live By," Lakoff and Johnson emphasise that metaphors are fundamental to human thought and language, not just decorative language. In this post, I've examined my own use of metaphors to describe AI and analysed their implications, highlighting the power and limitations of these metaphors in shaping our understanding of AI and its impact.
Discover how AI can help you explore careers, research companies, polish application materials, practice interviews, and negotiate salaries in today's job market
CHM Live | The Great Chatbot Debate: Do LLMs Really Understand?
[Recorded March 25, 2025]
Chatbots based on large language models (LLMs), like ChatGPT, answer sophisticated questions, pass professional exams, analyze texts, generate everything from poems to computer programs, and more. But is there genuine understanding behind what LLMs can do? Do they really understand our world? Or, are they a triumph of mathematics and masses of data and calculations simulating true understanding?
Join CHM, in partnership with IEEE Spectrum, for a fundamental debate on the nature of today’s AI: Do LLMs demonstrate genuine understanding, the “sparks” of true intelligence, or are they “stochastic parrots,” lacking understanding and meaning?
FEATURED PARTICIPANTS
Speaker
Emily M. Bender
Professor of Linguistics, University of Washington
Emily M. Bender is a professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington, where she also serves as faculty director of the CLMS program, and adjunct professor at the School of Computer Science and Engineering and the Information School. Known for her critical perspectives on AI language models, notably coauthoring the paper "On the Dangers of Stochastic Parrots," Bender is also the author of the forthcoming book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.
Speaker
Sébastien Bubeck
Member of Technical Staff, OpenAI
Sébastien Bubeck is a member of the technical staff at OpenAI. Previously, he served as VP, AI and distinguished scientist at Microsoft, where he spent a decade at Microsoft Research. Prior to that, he was an assistant professor at Princeton University. Bubeck's 2023 paper, "Sparks of Artificial General Intelligence: Early experiments with GPT-4," drove widespread discussion and debate about the evolution of AI both in the scientific community and mainstream media like the New York Times and Wired. Bubeck has been recognized with best paper awards at a number of conferences, and he is the author of the book Convex Optimization: Algorithms and Complexity.
Moderator
Eliza Strickland
Senior Editor, IEEE Spectrum
Eliza Strickland is a senior editor at IEEE Spectrum, where she covers artificial intelligence, biomedical technology, and other advanced technologies. In addition to her writing and editing work, she also hosts podcasts, creates radio segments, and moderates talks at events such as SXSW. Prior to joining IEEE Spectrum in 2011, she oversaw a daily science blog for Discover magazine and wrote for outlets including Wired, The New York Times, Sierra, and Foreign Policy. Strickland received her master’s degree in journalism from Columbia University.
Catalog Number: 300000014
Acquisition Number: 2025.0036
My friend Sandie Morgan spent over a decade in Greece and knows languages far better than I do. She once taught me a word that is often misinterpreted to mean waiting to take action until you reach a distant land; a better translation is "as we are going" (loving our neighbors within our current, messy contexts). While our conversation was about theology, I find the phrase fitting for my current situation. I'm in a "figure things out as I go" phase as I create the Go Somewhere card sorting game. Its purpose is to help us take action without waiting until we have everything figured out about AI's impact on higher education.
I've been working on Go Somewhere for a while. Initially, I focused on game structures with right/wrong answers. But in Episode 527 of Teaching in Higher Ed, Alexis Peirce Caudell encouraged me to think non-dualistically and create a game with expansive possibilities.
Another influential voice was Autumm Caines, who played a part in my being selected as Scholar in Residence at the University of Michigan-Dearborn in Fall 2023. On Episode 501, Autumm and Maya explained how my curiosity and willingness to explore liminal spaces related to AI's impact on higher education helped me serve as a resource for them.
So, I've *almost* invented a game with the following card sets:
1. Metaphors: Based on research by Anuj Gupta, Yasser Tamer, Anna Mills, and Maha Bali, exploring how discussing AI metaphors can build awareness.
2. Values: Based on Schwartz's value theory, exploring how values relate to motivations.
3. Actions: Based on suggestions from over 50 voices in higher education (see comments).
The week of August 12, I'll be in Nashville to play Go Somewhere with ~150 faculty at Belmont University. The week after, I'll facilitate Go Somewhere with ~130 within Vanguard University's Academic Affairs. The cards are ready, and I have a handout for exploring metaphors. However, I still don't know how to turn this activity into a game. I'm realizing I don't know what makes something a game, which is probably why I'm struggling.
I never saw Frozen 2, but the song Into the Unknown keeps playing in my mind. I'm going to get as far as I can for these events, recognizing it might not quite end up being an actual game. The flow I'm experimenting with is:
1. Identify the AI metaphor you use. Compare it with the researchers' findings and others' views at your table.
2. Explore the value cards and find one that aligns with your metaphor.
3. Commit to one action this term/semester to Go Somewhere. Pick an action aligned with your values, recognizing others' choices may differ, and AI's impact is messy and complex.
If you have any ideas on making this more game-like, I'm all ears. I can't change the cards as they're already printed, but other suggestions are welcome.
Thanks, Sandie, for the reminder that adopting an "as we are going" mindset can take us to amazing places and help us bring others along for the adventure.