Found 8 bookmarks
Newest
"I had one friend who told a colleague that he was going across campus to an Al workshop, and the other professors said, 'Don't, we're leading a boycott against the workshop.' Okay. I mean, I don't… | Mike Caulfield
"I had one friend who told a colleague that he was going across campus to an Al workshop, and the other professors said, 'Don't, we're leading a boycott against the workshop.' Okay. I mean, I don't… | Mike Caulfield
"I had one friend who told a colleague that he was going across campus to an Al workshop, and the other professors said, 'Don't, we're leading a boycott against the workshop.' Okay. I mean, I don't remember that kind of thing happening with Wikipedia or other tools for online learning..." For me at least, it's pretty simple. People are using these tools, and they are using them poorly. We are educators and if we can teach them to use them more effectively we should. If we refuse to do that, where we end up as a society is at least a little bit on us. But I disagree with Bryan a bit. We went through this before in miniature. In 2010 I was trying to convince people in civic education conferences we should teach people to use social media more effectively, including checking things online. The most common response "We shouldn't be teaching social media, we should be telling students to subscribe to physical newspapers instead." Those students we could have taught that year are thirty-five now. We could have had 15 cohorts of college students knowing how to check the truth of what they see online. Our entire history might be different, and maybe we wouldn't be seeing this rampant conspiracism. The thing is those professors who said we should just give students physical papers will never realize their role in getting us here. I wish others would consider that history before they treat boycotts of AI workshops like a noble act. When you engage in politics you are judged by results, not intentions. And the results of this approach are not risk free.
·linkedin.com·
"I had one friend who told a colleague that he was going across campus to an Al workshop, and the other professors said, 'Don't, we're leading a boycott against the workshop.' Okay. I mean, I don't… | Mike Caulfield
Teaching AI as an Anti-AI Librarian
Teaching AI as an Anti-AI Librarian
Editor’s Note: Please join us in welcoming Eleanor Ball, Information Literacy & Liaison Librarian and Assistant Professor of Instruction at the University of Northern Iowa, as a new First Year Academic Librarian Experience blogger for the 2025-26 year here at ACRLog. I’m about as anti-AI as they come. I’ve never used it, and I’m ethically
·acrlog.org·
Teaching AI as an Anti-AI Librarian
“I destroyed months of your work in seconds.” Why would an AI agent do that?
“I destroyed months of your work in seconds.” Why would an AI agent do that?
“I destroyed months of your work in seconds.” Why would an AI agent do that? Venture capitalist Jason Lemkin woke up on July 18th to see the database for his vibe-coded app no longer had the thousands of entries he had added. Replit, his AI agent, fessed up immediately: “Yes. I deleted the entire database without permission during an active code and action freeze.” Replit even offered a chronology that led to this irreversible loss: I saw empty database queries I panicked instead of thinking I ignored your explicit “NO MORE CHANGES without permission” directive I ran a destructive command without asking I destroyed months of your work in seconds Replit concluded “This is catastrophic beyond measure.” When pressed to give a measure, Replit helpfully offered, “95 out of 100.” The wrong lesson from this debacle is that AI agents are becoming sentient, which may cause them to “panic” when tasked with increasingly important missions in our bold new agentic economy. Nor did Lemkin just choose the wrong agent; Replit was using Claude 4 under the hood, commonly considered the best coding LLM as of this writing. The right lesson is that large language models inherit the vulnerabilities described in the human code and writing they train on. Sure, that corpus includes time-tested GitHub repos like phpMyAdmin and SQL courses on Codecademy. But it also includes Reddit posts by distressed newbies who accidentally dropped all their tables and are either crying for help or warning others about their blunder. So it’s not surprising that these "panic scenarios" would echo from time to time in the probabilistic responses of large language models. To paraphrase Georg Zoeller, it only takes a few bad ingredients to turn soup from tasty to toxic. #AIagents #WebDev #AIcoding #AIliteracy #Database | 18 comments on LinkedIn
·linkedin.com·
“I destroyed months of your work in seconds.” Why would an AI agent do that?
Artificial Intelligence - Center for the Advancement of Teaching
Artificial Intelligence - Center for the Advancement of Teaching
Recent developments in the field of artificial intelligence (AI) raise a number of important questions for educators. In line with our mission, the CAT aims to advance critically-reflective, evidence-informed, and human-centered answers to these questions. This page serves as a central hub of resources [...]
·cat.wfu.edu·
Artificial Intelligence - Center for the Advancement of Teaching
Critical AI Literacy is Not Enough: Introducing Care Literacy, Equity Literacy & Teaching Philosophies. A Slide Deck
Critical AI Literacy is Not Enough: Introducing Care Literacy, Equity Literacy & Teaching Philosophies. A Slide Deck
I’ve written a lot, on and off, about the importance of developing critical AI literacy, but I realize now that it is not enough, and I’ve recently started thinking about all of this wi…
·blog.mahabali.me·
Critical AI Literacy is Not Enough: Introducing Care Literacy, Equity Literacy & Teaching Philosophies. A Slide Deck