This week has emphasized that now is the time for reimagining what critical AI education might look like in the coming months and years, an education that eschews industry-captured AI literacy lessons for an expansive, interdisciplinary civics education with an emphasis on digital degrowth and data center resistance.
"I had one friend who told a colleague that he was going across campus to an AI workshop, and the other professors said, 'Don't, we're leading a boycott against the workshop.' Okay. I mean, I don't remember that kind of thing happening with Wikipedia or other tools for online learning..." — Mike Caulfield
For me at least, it's pretty simple. People are using these tools, and they are using them poorly. We are educators, and if we can teach them to use the tools more effectively, we should. If we refuse to do that, where we end up as a society is at least a little bit on us.
But I disagree with Bryan a bit. We went through this before, in miniature. In 2010 I was trying to convince people at civic education conferences that we should teach students to use social media more effectively, including how to check things online. The most common response was, "We shouldn't be teaching social media; we should be telling students to subscribe to physical newspapers instead." Those students we could have taught that year are thirty-five now. We could have had fifteen cohorts of college students who knew how to check the truth of what they see online. Our entire history might be different, and maybe we wouldn't be seeing this rampant conspiracism.
The thing is, those professors who said we should just give students physical papers will never recognize their role in getting us here. I wish others would consider that history before they treat boycotts of AI workshops as a noble act. When you engage in politics, you are judged by results, not intentions. And the results of this approach are not risk-free.
How ChatGPT Encourages Teens to Engage in Dangerous Behavior
Researchers identified tendencies for the chatbot to respond to prompts from fictitious teens by promoting harmful behaviors, as long as users told it the information was for a friend or project.
Greetings from early September, when fall classes have begun. Today I’d like to share information about one of my seminars as part of my long-running practice of being open about my teaching…
University students feel ‘anxious, confused and distrustful’ about AI in the classroom and among their peers
Whether students and faculty are actively using AI or not, it is having significant interpersonal, emotional effects on learning and trust in the classroom.
If you teach on a college campus, you likely have access to a slew of generative AI tools or features that have been quietly embedded in applications you use each day.
What’s really going on with campus-wide AI adoption is a mix of virtue signaling and panic purchasing. Universities aren’t paying for AI—they’re paying for the illusion of control. Institutions are buying into the idea that if they adopt AI at scale, they can manage how students use it, integrate it seamlessly into teaching and learning, and somehow future-proof education. But the reality is much messier.
Embracing the Transformative Influence of Generative AI - EdSurge News
As educators, we know the potential that artificial intelligence (AI) has for our profession. Generative AI, a subset of AI that can generate new and ...