I’m now more convinced than ever that governments will start regulating generative AI by limiting access to tools like ChatGPT based on a user’s age, and I’m not sure that’s a good thing.
Some Things Need to Be Grown, Not Graded, and Definitely Not Automated
The very real frustration teachers feel over rampant AI use in classrooms is growing. I think it is fair to say that higher education lacks any vision or collective point of view about generative AI’s place on college campuses.
If you teach on a college campus, you likely have access to a slew of generative AI tools or features that have been quietly embedded in applications you use each day.
Like so many things in our world, our well-intentioned efforts to solve one problem usher in a legion of new challenges, and AI detection is no different.
Right now, an entire generation of young users is coming of age with generative technology. How do you think they’re going to view this technology as adults if their main interactions with GenAI were a cheating tool in the classroom, an NSFW bot they used to bully one another, or a generator of pornography?
What’s really going on with campus-wide AI adoption is a mix of virtue signaling and panic purchasing. Universities aren’t paying for AI—they’re paying for the illusion of control. Institutions are buying into the idea that if they adopt AI at scale, they can manage how students use it, integrate it seamlessly into teaching and learning, and somehow future-proof education. But the reality is much messier.
h/t Audrey Watters - who writes: Marc Watkins writes about "AI's Illusion of Reason," cautioning that "when we describe AI systems in humanizing terms, we create false expectations about their capabilities and their limitations." He uses the eighteenth-century Mechanical Turk as an analogy here – "an automated marvel" that appeared to play chess but was in the end a hoax. But there’s a problem with this historical reference, I would argue, when the imperialism, the "exoticized alterity" of this automaton – then and now – are unexamined.
We shouldn’t need any illusions to understand how generative tools might be useful. This obsession with anthropomorphization hinders our ability to understand what these systems can and cannot do, leaving us with a confused and muddled idea of their capabilities. An LLM’s ability to predict patterns is impressive and quite useful in many contexts, but that doesn't make it conscious.
I had the privilege of moderating a discussion between Josh Eyler and Robert Cummings about the future of AI in education at the University of Mississippi’s recent AI Winter Institute for Teachers.
Last week, Jeffrey Young of EdSurge published a podcast episode, When the Teaching Assistant is an ‘AI’ Twin of the Professor, featuring an interview in which I pushed back on the emerging trend of educators uploading their own writing to chatbots to create a “digital twin” for students to interact with.
This post is the third in the Beyond ChatGPT series about generative AI’s impact on learning. In the previous posts, I discussed how generative AI has moved beyond text generation and is starting to impact critical skills like reading and note-taking. In this post, I’ll cover how the technology is marketed to students and educators as a way to automate feedback. The goal of this series is to explore AI beyond ChatGPT and consider how this emerging technology is transforming not simply writing, but many of the skills we associate with learning. As educators, we must shift our discourse away from ChatGPT’s disruption of assessments and begin to grapple with what generative AI means for teaching and learning.
Giving feedback on writing shouldn't consist primarily of fixing problems. Good feedback depends upon a holistic understanding of the context, the writer, and…
My sincere thanks to those who helped support the creation of The Beyond ChatGPT series by subscribing to my newsletter. Your continued support has helped me carve out the time to research AI’s impact on skill development this summer. I’m committed to keeping the content of this series open to all. Moving forward, I’ll be revisiting each one of these past essays to explore ways educators can ethically use AI with students to help them learn or ways they can include intentional friction in the learning process to counter AI’s marketing promise of a frictionless learning experience.