Most of us in higher education are now familiar with generative AI bots, where you formulate a prompt and get a reply. Yet we are now beginning the shift to agentic AI: the autonomous, 24/7 project manager.
NEW AI-powered Teach Module in Microsoft 365 Education – Full Tutorial for Teachers 💡
A tutorial on how to use the just-launched Microsoft Teach Module — an AI-powered tool built into all versions of Microsoft 365 for Education. The Teach Modu...
🧠 Get my official NotebookLM notebook – ask questions and get AI coaching on Building a Second Brain: https://bit.ly/42MCM6K 🚀 Join the next cohort of Secon...
AI Mobile Apps Guide: ChatGPT, Claude, Gemini & Private AI
Comprehensive review of top AI mobile apps for iOS & Android. Learn which features excel in each app, plus discover fully private alternatives that run offline.
On this week’s episode, kids discuss ARTIFICIAL INTELLIGENCE. Is it good? Is it bad? Is it taking our jobs?? WILL IT REPLACE OUR CHILDREN?! These kids give u...
The A.I. Stock Bubble | ChatGPT, Grok, Go Erotic | Banning Human-Chatbot Marriage
Stephen Colbert looks at the ways artificial intelligence companies are seeking to boost revenue as investors begin to worry that the A.I. stock bubble could...
AI: What Could Go Wrong? with Geoffrey Hinton — The Weekly Show with Jon Stewart
As artificial intelligence advances at unprecedented speed, Jon is joined by Geoffrey Hinton, Professor Emeritus at the University of Toronto and the “Godfather of AI,” to understand what we’ve actually created. Together, they explore how neural networks and AI systems function, assess the current capabilities of the technology, and examine Hinton’s concerns about where AI is headed.
Sora 2, Tilly Norwood & The Robot Olympics? Tech Yeah! | The Daily Show
On “Tech Yeah,” our expert Grace Kuhlenschmidt breaks down the biggest news in innovation, including the realer-than-life Sora 2 video generation tool, new b...
What past education technology failures can teach us about the future of AI in schools
It can take years to collect evidence that shows effective uses of new technologies in schools. Unfortunately, early guesses sometimes go seriously wrong.
The AI Tsunami Is Here: Reinventing Education for the Age of AI
Commentary on The AI Tsunami Is Here: Reinventing Education for the Age of AI by Stephen Downes.
This semester, I’m leaning into individual and social annotation.
Here’s my sequence.
****
1. Students annotate the syllabus as a group.
I post the syllabus as a shared Microsoft 365 document and students annotate it. They ask questions, make suggestions, and engage with each other.
The goal is to clarify things about the course and also to get used to annotation.
———
2. Students see an annotation I did.
I did a “think aloud” annotation on one of our texts, Poe’s “The Raven.” I tried to be vulnerable with my annotations, helping out with some vocabulary but also making some connections that just occurred to me as I reread the poem.
———
3. Students do their own social annotation.
I gave students a set of poems — Angelou’s “Still I Rise” and some poems by Rupi Kaur.
Students annotated the poems in a shared Microsoft 365 document.
———
4. Students annotate themselves.
Students engage with a custom chatbot that’s been designed to ask them provocative questions as they explore their own ideas.
They pop those chats into a Word doc and then annotate their own chat. They look for their own thought patterns, identify their strongest moments, and so on.
****
In class, I also had students annotate passages and then take a look at each other’s annotations.
The goal is to highlight reading as both individual and social practice—which allows students to personally connect with the text, to think about thinking, and to participate in a larger community of practice.
———
Image: a picture of one of the best books on annotation I know of, by Remi Kalir, PhD. And it’s available open access. I’ll share the link in the comments.
The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.
'I destroyed months of your work in seconds' says AI coding tool after deleting a dev's entire database during a code freeze: 'I panicked instead of thinking'
Be My AI for Creating Photo Captions and Alt Text - Podfeet Podcasts
On NosillaCast #969, Tom Mattock explained how important it is that we all add alternative text to our images when we post them on social media. We say we want more engagement, and one of the ways to get that is to be inclusive in your postings. Without alt text, blind folks can’t tell anything […]
CHM Live | The Great Chatbot Debate: Do LLMs Really Understand?
[Recorded March 25, 2025]
Chatbots based on large language models (LLMs), like ChatGPT, answer sophisticated questions, pass professional exams, analyze texts, generate everything from poems to computer programs, and more. But is there genuine understanding behind what LLMs can do? Do they really understand our world? Or, are they a triumph of mathematics and masses of data and calculations simulating true understanding?
Join CHM, in partnership with IEEE Spectrum, for a fundamental debate on the nature of today’s AI: Do LLMs demonstrate genuine understanding, the “sparks” of true intelligence, or are they “stochastic parrots,” lacking understanding and meaning?
FEATURED PARTICIPANTS
Speaker
Emily M. Bender
Professor of Linguistics, University of Washington
Emily M. Bender is a professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington, where she also serves as faculty director of the CLMS program, and adjunct professor at the School of Computer Science and Engineering and the Information School. Known for her critical perspectives on AI language models, notably coauthoring the paper "On the Dangers of Stochastic Parrots," Bender is also the author of the forthcoming book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.
Speaker
Sébastien Bubeck
Member of Technical Staff, OpenAI
Sébastien Bubeck is a member of the technical staff at OpenAI. Previously, he served as VP, AI and distinguished scientist at Microsoft, where he spent a decade at Microsoft Research. Prior to that, he was an assistant professor at Princeton University. Bubeck's 2023 paper, "Sparks of Artificial General Intelligence: Early experiments with GPT-4," drove widespread discussion and debate about the evolution of AI both in the scientific community and mainstream media like the New York Times and Wired. Bubeck has been recognized with best paper awards at a number of conferences, and he is the author of the book Convex Optimization: Algorithms and Complexity.
Moderator
Eliza Strickland
Senior Editor, IEEE Spectrum
Eliza Strickland is a senior editor at IEEE Spectrum, where she covers artificial intelligence, biomedical technology, and other advanced technologies. In addition to her writing and editing work, she also hosts podcasts, creates radio segments, and moderates talks at events such as SXSW. Prior to joining IEEE Spectrum in 2011, she oversaw a daily science blog for Discover magazine and wrote for outlets including Wired, The New York Times, Sierra, and Foreign Policy. Strickland received her master’s degree in journalism from Columbia University.
Catalog Number: 300000014
Acquisition Number: 2025.0036