If you teach on a college campus, you likely have access to a slew of generative AI tools or features that have been quietly embedded in applications you use each day.
Artificial intelligence: Supply chain constraints and energy implications
AI systems account for a rapidly increasing share of global data center power demand. As of 2025, AI is estimated to represent up to 20% of that demand, and with production capacity in the AI hardware supply chain continuing to expand, AI systems could account for almost half of data center power demand by the end of 2025. This rapid growth risks deepening dependence on fossil fuels and undermining climate goals.
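A minimal back-of-envelope sketch of how those shares translate into absolute consumption. The total data center figure and both shares are illustrative assumptions, not numbers taken from the article, and total demand is held constant for simplicity.

```python
# Illustrative arithmetic only: all constants below are assumptions.
DATA_CENTER_DEMAND_TWH = 415.0   # assumed annual global data center electricity use
AI_SHARE_2025 = 0.20             # "up to 20%" of data center power demand today
AI_SHARE_LATE_2025 = 0.50        # "almost half" by the end of 2025

ai_now_twh = DATA_CENTER_DEMAND_TWH * AI_SHARE_2025
ai_late_2025_twh = DATA_CENTER_DEMAND_TWH * AI_SHARE_LATE_2025

print(f"AI demand at a 20% share: ~{ai_now_twh:.0f} TWh/yr")
print(f"AI demand at a 50% share: ~{ai_late_2025_twh:.0f} TWh/yr")
```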
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.
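A rough sketch of why small per-query figures add up. The per-query energy, traffic volume, and grid carbon intensity below are assumptions chosen to make the arithmetic concrete, not figures reported in the piece.

```python
# Illustrative aggregation of small per-query energy costs into annual totals.
WH_PER_TEXT_QUERY = 0.3            # assumed energy per chatbot text response (Wh)
QUERIES_PER_DAY = 1_000_000_000    # assumed global daily query volume
KG_CO2_PER_KWH = 0.4               # assumed average grid carbon intensity

daily_kwh = WH_PER_TEXT_QUERY * QUERIES_PER_DAY / 1_000
annual_gwh = daily_kwh * 365 / 1_000_000
annual_tonnes_co2 = daily_kwh * 365 * KG_CO2_PER_KWH / 1_000

print(f"~{annual_gwh:,.0f} GWh per year, ~{annual_tonnes_co2:,.0f} t CO2 per year")
```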
The real cost of AI is being paid in deserts far from Silicon Valley
In Empire of AI, journalist Karen Hao reports on how Indigenous communities in Chile are fighting to protect their land from AI-driven resource extraction.
CHM Live | The Great Chatbot Debate: Do LLMs Really Understand?
[Recorded March 25, 2025]
Chatbots based on large language models (LLMs), like ChatGPT, answer sophisticated questions, pass professional exams, analyze texts, and generate everything from poems to computer programs. But is there genuine understanding behind what LLMs can do? Do they really understand our world? Or are they a triumph of mathematics, masses of data, and calculation that merely simulates true understanding?
Join CHM, in partnership with IEEE Spectrum, for a fundamental debate on the nature of today’s AI: Do LLMs demonstrate genuine understanding, the “sparks” of true intelligence, or are they “stochastic parrots,” lacking understanding and meaning?
FEATURED PARTICIPANTS
Speaker
Emily M. Bender
Professor of Linguistics, University of Washington
Emily M. Bender is a professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington, where she also serves as faculty director of the CLMS program, and adjunct professor at the School of Computer Science and Engineering and the Information School. Known for her critical perspectives on AI language models, notably coauthoring the paper "On the Dangers of Stochastic Parrots," Bender is also the author of the forthcoming book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.
Speaker
Sébastien Bubeck
Member of Technical Staff, OpenAI
Sébastien Bubeck is a member of the technical staff at OpenAI. Previously, he served as VP, AI and distinguished scientist at Microsoft, where he spent a decade at Microsoft Research. Prior to that, he was an assistant professor at Princeton University. Bubeck's 2023 paper, "Sparks of Artificial General Intelligence: Early experiments with GPT-4," drove widespread discussion and debate about the evolution of AI both in the scientific community and mainstream media like the New York Times and Wired. Bubeck has been recognized with best paper awards at a number of conferences, and he is the author of the book Convex Optimization: Algorithms and Complexity.
Moderator
Eliza Strickland
Senior Editor, IEEE Spectrum
Eliza Strickland is a senior editor at IEEE Spectrum, where she covers artificial intelligence, biomedical technology, and other advanced technologies. In addition to her writing and editing work, she also hosts podcasts, creates radio segments, and moderates talks at events such as SXSW. Prior to joining IEEE Spectrum in 2011, she oversaw a daily science blog for Discover magazine and wrote for outlets including Wired, The New York Times, Sierra, and Foreign Policy. Strickland received her master’s degree in journalism from Columbia University.
Gemini Live: Breaking Educational Barriers with AI
Gemini Live is Google’s new conversational AI assistant that responds to voice commands in real time. Unlike text-based interactions, Gemini Live allows for natural, flowing conversations. Th…
A scoping review on how generative artificial intelligence transforms assessment in higher education - International Journal of Educational Technology in Higher Education
Generative artificial intelligence presents both opportunities and challenges for higher education. Existing literature has not adequately investigated how this technology will affect assessment in higher education. This scoping review took a forward-thinking approach to investigate how generative artificial intelligence transforms assessment in higher education. We used the PRISMA extension for scoping reviews to select articles for review and report the results. In the screening, we retrieved 969 articles and selected 32 empirical studies for analysis. Most of the articles were published in 2023. We analyzed the articles at three levels: students, teachers, and institutions. Our results suggested that assessment should be transformed to cultivate students’ self-regulated learning skills, responsible learning, and integrity. To successfully transform assessment in higher education, the review suggested that (i) teacher professional development activities for assessment, AI, and digital literacy should be provided, (ii) teachers’ beliefs about human and AI assessment should be strengthened, and (iii) teachers should be innovative and holistic in their teaching to reflect the assessment transformation. Educational institutions are recommended to review and rethink their assessment policies, as well as provide more interdisciplinary programs and teaching.
Ignite Innovation: Faculty Showcase Course Design Series. Tuesday, April 1, 2025. Link to these slides: https://docs.google.com/presentation/d/1Ezr6HgVb7IoMrIWJEsMIlp93HPig_FK08gXIYhlawgo/edit?usp=sharing
OpenAI’s Sora Is Plagued by Sexist, Racist, and Ableist Biases
WIRED tested the popular AI video generator from OpenAI and found that it amplifies sexist stereotypes and ableist tropes, perpetuating the same biases already present in AI image tools.
OpenAI Has Improved Its Image Gen - But Do We Need More "Offensive" AI?
OpenAI's recent updates to DALL-E 3 aim to enhance its image generation capabilities, but the focus on relaxing restrictions for "offensive" content raises concerns. The integration of more violent and explicit imagery could contribute to misinformation, and doesn't contribute to the future of the technology.