AI & LLMs Teaching & Education

89 bookmarks
Why We’re Not Using AI in This Course, Despite Its Obvious Benefits
On the other hand, universities aren’t vocational schools, merely training students for future jobs.
There’s a big difference between learning how to generate credible philosophical content and learning how to think like a philosopher. In one future where AI use is everywhere, it could be enough to generate content without actually understanding it, but in another future, you will still need to know stuff.
And because teachers are role models for what genuine intellectual engagement looks like, both online and in real life.
When we are stuck and can’t defend our positions with reasons, that’s a pretty good sign that our positions are weak or indefensible and should be reconsidered.
but to give up on reason and whatever other intellectual powers we possess is to give up on being human.
Out of all the things in this world that you could have been, you’re a person. This is a rare, special opportunity that shouldn’t be wasted; as far as we know, the light of human-level rationality and creativity exists only on one planet in this universe. So, it’s fine to disagree with one another, including your own instructors, but disagreements must be resolved with reason, not shouting, intimidation, cheating, trickery, corruption, combat, or other things from the worst of human nature. You’re here at a university to develop as a human being—to become a better, more educated person and citizen of the world—and learning how to productively disagree (and to resolve that) is a critical part of education.
Out of all the things in this world that you could have been, you’re a person
Shout this from the mountaintops - this is what we should say in every class, every day.
the things you’re supposed to be learning in the university you had worked so hard to get into.
Do they really want new college graduates who are less educated than their peers who had actually put in the work?
Argument against AI cheating: it might get you the credential, but not the skills that others are getting by doing the work.
just because it may be important to know how to use AI in the future doesn’t necessarily mean that university courses should be redesigned to give you training or practice at that.
Is that really the future you’re preparing for, where you have no competitive advantage in a generic job market? If so, your future is already lost, and you’re just studying to become another overworked cog in a machine. If AI becomes common in the future workplace, then you may become the tool for AI, instead of the other way around. To avoid that dystopia, a better strategy would be to develop your competitive advantage as early as you can, right now in college.
Having a different, special perspective will separate you from the masses who are using AI to produce more or less the same content with the same, ordinary, and generic voice. Being different and authentic would best help you to contribute new ideas to your work, not old ideas that are recycled and repackaged by AI, as well as to demonstrate your uniqueness that will be hard to replace.
This sounds like a Hallmark card or a wall stencil - but it is still true. That sound, though, is what makes it a hard sell to high school students.
Manipulated AI implies manipulated human audiences, and this violates our autonomy.
is it really necessary for everyone to know how to do basic math? If not, then does everyone really need to know how to read, write, and think for themselves?
·emergingethics.substack.com·
I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking. | HuffPost Personal
But I don’t know if I agree with the AHA. Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective.” Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.
I explained my disappointment to the students, pointing out that they cheated on a paper about a rebellion of the enslaved — people who sacrificed their lives in pursuit of freedom, including the freedom to learn to read and write. In fact, Virginia made it even harder for them to do so after the rebellion was put down.
I’m a historian. I am trained and paid to teach students how to understand a narrative, to derive meaning from it with textual analysis and to communicate that meaning in written word. I cannot force them to do any of those things, but I won’t be complicit in exposing them to even more AI in my classroom.
Regardless of their awareness or lack thereof, each one of my students made the decision to skip one of the many challenges of earning a degree — assuming they are only here to buy it (a very different cultural conversation we need to have). They also chose to actively avoid learning because it’s boring and hard.
A college degree is not just about a job afterward — you have to be able to think, solve problems and apply those solutions, regardless of the field. How do we teach that without institutional support? How do we teach that when a student doesn’t want to and AI enables it?
But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.
academic freedom itself is under assault by the weakest minds among us. AI has only made this worse. It is a crisis.
History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognized that oppression is best maintained by keeping the masses illiterate, and those oppressed recognized that literacy is liberation. To my students and to anyone who might listen, I say: Don’t surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.
·huffpost.com·
Statement on Educational Technologies and AI Agents | Modern Language Association
The recent and hasty integration of generative AI features into those systems is already redefining student and instructor relationships, evaluative standards, and instructional outcomes—with no compelling evidence that any of it is for the better
If we do not act, we risk seeing the development of a fully automated loop in which assignments are generated by AI with the support of a learning management system, AI-generated content is submitted by an agentic AI on behalf of the student, and AI-driven metrics evaluate the work on behalf of the instructor.
·mla.org·
10 AI Prompts for Academics
Interesting for high school teachers: check out this list of AI prompts that professors use.
·profserious.substack.com·
The Case Against Generative AI

The AI promoters and administrators don’t teach, so AI seems like an easy fit for teaching. After 30 years of tech infusion, no school wants to miss the boat, so they’re jumping in without thinking about it much. It’s new! It looks cool! It’s the wave of the future!
This long article examines the business hype and the failure to bring real results. Viewed through this lens, the AI-in-education rush seems like a considerable waste of time and money.

Throughout this era, investors and the media spoke with a sense of inevitability that they never really backed up with data. It was an era based on confidently-asserted “vibes.”
Artificial Intelligence is built and sold on not just faith, but a series of myths that the AI boosters expect us to believe with the same certainty that we treat things like gravity, or the boiling point of water.
Across the board, the people being “replaced” by AI are the victims of lazy, incompetent cost-cutters who don’t care if they ship poorly-translated text. To quote Merchant again, “[AI hype] has created the cover necessary to justify slashing rates and accepting ‘good enough’ automation output for video games and media products.”
The problem is that most jobs are not output-driven at all, and what we’re buying from a human being is a person’s ability to think.
Every CEO talking about AI replacing workers is an example of the real problem: that most companies are run by people who don’t understand or experience the problems they’re solving, don’t do any real work, don’t face any real problems, and thus can never be trusted to solve them.
A writer doesn’t just “write words.” They jostle ideas and ideals and emotions and thoughts and facts and feelings into a condensed piece of text, explaining both what’s happening and why it’s happening from their perspective, finding nuanced ways to convey large topics, none of which is the result of a single (or many) prompts but the ever-shifting sand of a writer’s brain.  Good writing is a fight between a bunch of different factors: structure, style, intent, audience, and prioritizing the things that you (or your client) care about in the text. It’s often emotive — or at the very least, driven or inspired by a given emotion — which is something that an AI simply can’t replicate in a way that’s authentic and believable.
The same extends to some members of the business and tech media that have, for the most part, gotten by without having to think too hard about the actual things the companies are saying.
This is nowhere more true than in education.
This is how the core myths of generative AI were built: by executives saying stuff and the media publishing it without thinking too hard.
Over half a trillion dollars has gone into an entire industry without a single profitable company developing models or products built on top of models
these models cannot think and do not know anything.
·wheresyoured.at·
Search LibGen, the Pirated-Books Database That Meta Used to Train AI
AI tools use LLMs that are built on stolen material. Schools, teachers, and students who promote the use of AI for learning are telling students that it is okay to use something that was stolen from someone else.
·theatlantic.com·
3 Ways to Save Yourself From AI’s Critical Thinking Decline - Psychology Today
AI makes thinking easier and our ability to do so weaker. Debate, philosophy, and creative problem-solving can save your brain from the critical thinking slide. Why are teachers incorporating a tool that Psychology Today says causes critical thinking decline?
·psychologytoday.com·
Guidelines for AI Use at Penn State

Like many AI materials and policies, some of what you find is useful and applicable, and some isn’t. In this case, the General Guidance section is concise and on point.

And the icons/images are excellent to copy and paste directly into assignments.

Penn State encourages safe exploration and use of generative AI tools to further our teaching, research, and service mission. Keep these guidelines in mind:
·ai.psu.edu·
AGAINST AI
Work in progress as of Aug. 21, but shows promise.
·against-a-i.com·
YouTube Is Using AI to Alter Content (and not telling us)
14-minute video of a YouTube creator showing how YouTube is infusing AI editing into content uploaded to its platform. The important piece for teachers is his explanation of why he never uses AI to help him produce content: it is not him, and it is not authentic. Could teachers help students think about their own writing the same way?
·youtube.com·
A.I. Is Poised to Rewrite History. Literally. - The New York Times
whose training allows it to understand almost the whole internet, as ChatGPT does, even if it constrains its answers to the uploaded sources
Fred Turner, communication department at Stanford.
Mark Humphries, a professor at Wilfrid Laurier University in Ontario,
used A.I. to analyze tens of thousands of handwritten records about late 18th- and early 19th-century fur trading, in order to better understand the far-flung community of traders
The goal is to find not just one-to-one transactions between specific voyageurs but chains of interconnection that would be hard for human researchers to make quickly.
— Steven Johnson, the technology journalist and historian
Humphries — like Steven Johnson — is swimming in the deep end of A.I. experimentation
Jefferson Cowie of Vanderbilt, who won a Pulitzer Prize in 2023 for his book “Freedom’s Dominion: A Saga of White Resistance to Federal Power.”
that their attitudes toward A.I. lived in the shadow of their students’ cheating with it, which simultaneously made them reluctant to touch it but also seemed to have made them understand just how powerful it was as a tool.
“I am haunted by the fact that it would be hypocritical for me to use A.I. given how concerned I am with my students’ use of it,”
Charles C. Mann, author of “1491” and, most recently, “The Wizard and the Prophet”
“That’s what A.I. can’t do. It has no bullshit detector.”
While the tool does occasionally misrepresent what’s in its sources (and passes along errors from those sources without much ability to fact-check them), constraining the research material does seem to cut down on the types of whole-cloth fabrications that still emerge from the major chatbots.
Stacy Schiff, the author of decorated biographies of Cleopatra and Véra Nabokov,
“To turn to A.I. for structure seems less like a cheat than a deprivation, like enlisting someone to eat your hot fudge sundae for you.”
third century B.C., when Callimachus wrote his “Pinakes,” a series of books (now lost) cataloging the holdings of the famous library (now lost) in Alexandria, humanity has devised increasingly sophisticated systems for navigating pools of information too large for any one individual to take in.
The rise of computers and the internet were of course an unprecedented turning point in the history of tools for writing history — exponentially increasing the quantity of information about the past and, at the same time, our power to sift and search that information. Psychologically, digital texts and tools have thrown us into an era, above all, of “availability”: both in the colloquial sense of that word (everything’s seemingly available) and in the social-scientific sense of “availability bias,” whereby we can fool ourselves into thinking that we have a clear and complete picture of a topic, buffaloed by the sheer quantity of supporting facts that can spring up with a single, motivated search.
“Technology has exploded the scope and speed of discovery. But our ability to read accurately the sources we find, and evaluate their significance, cannot magically accelerate apace. The more far-flung the locales linked through our discoveries, the less consistent our contextual knowledge. The place-specific learning that historical research in a predigital world required is no longer baked into the process. We make rookie mistakes.”
University of Pittsburgh historian Lara Putnam
But she worried about what was being lost, especially given that the pool of digitized sources, even as it keeps growing, remains stubbornly unrepresentative: biased toward the English language and toward wealthy nations over poor ones, but biased especially toward “official” sources (those printed rather than written, housed in institutional rather than smaller or less formal archives).
“Gazing at the past through the lens of the digitizable,” Putnam notes, “makes certain phenomena prominent and others less so, renders certain people vividly visible and others vanishingly less so.”
composing texts that are just plausible enough to make human work irrelevant.
The individual sources would fade yet further into the background, as users trust tools like NotebookLM to offer cogent-seeming summaries of enormous troves of texts without much attention to their origins or agendas. What becomes staggeringly “cheap,” in such a world, is work that attempts to synthesize astonishing amounts of material, perhaps drawing on sources far beyond what a single human could process in a lifetime, ranging promiscuously across languages, borders and time periods, at a speed that would allow a single human to complete multiple such projects in a career.
Josh Woodward, the head of Google Labs,
Beyond that, many of them — the mind-boggling video-generation engines, in particular — seem likely to accelerate the cultural changes that have made serious writing less and less relevant in the internet era. Perhaps it was naïve to even worry about A.I.’s competing with historians, when the typical user, amid a life increasingly consumed with other, nonverbal diversions, is satisfied to receive facts on demand in bite-size bursts.
What if e-books of history came enhanced with a NotebookLM-like interface?
That way, “instead of just a bibliography, you have a live collection of all the original sources” for a chatbot to explore: delivering timelines, “mind maps,” explanations of key themes, anything you can think to ask.
·nytimes.com·
Survey: 60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week – The 74
Almost hidden in this article is the survey result that 57% of teacher respondents said that AI would decrease independent thinking, and 52% said it would reduce critical thinking.
At least once a month, 37% of educators take advantage of tools to prepare to teach, including creating worksheets, modifying materials to meet student needs, doing administrative work and making assessments, the survey found. Less common uses include grading, providing one-on-one instruction and analyzing student data.
About 61% said they receive better insights about student learning or achievement data, while 57% said the tools help improve their grading and student feedback.
"They say" they receive better insights - but by what measure is this belief assessed?
But 57% said it would decrease students’ independent thinking, and 52% said it would decrease critical thinking. Nearly half said it would decrease student persistence in solving problems, ability to build meaningful relationships and resilience for overcoming challenges.
·the74million.org·
4 Principles for Classroom AI, From an Experienced Educator (Opinion)
Ignoring AI, even if it intimidates you, isn’t the path to take for you or your students. Instead, consider this advice: make exploration and reflection the centerpiece of AI use in your classroom. Make it the subject of critical-thinking conversations.
So when I talk to students about a writing assignment where they will be allowed to incorporate AI, I try to be brutally honest about my own experience and inexperience with this technology, always emphasizing what matters to me about their writing.
There is warmth in classroom communities where students know that their teacher knows their voice, where students get to know each other’s voices, so investing in learning how each student sounds when expressing themselves early in the academic term is more valuable in the age of AI than ever.
4. Be ruthlessly reflective. Many of my colleagues fear that artificial intelligence will ultimately erode human intelligence. I understand their fear. But if we require ourselves and our students to pause and reflect on each use, even write by hand some musings about the experience, we devote our brainpower to a substantive and important academic and life skill.
·edweek.org·
Artificial Intelligence and Academic Professions
The AAUP report makes the case for professors and teachers to be at the center of AI decisions at colleges and universities, consistent with each school’s mission. Yet, as with other tech products over the years, school administrations are making decisions guided by different goals.
·aaup.org·
Opinion | The Seductions of A.I. for the Writer’s Mind
We need to reckon with what ChatGPT is doing to the classroom and to human expression. Meghan O’Rourke’s essay is the most thoughtful and comprehensive I’ve seen yet. It is worth a read.
·nytimes.com·
Five things I believe about actually-existing AI today.
This essay is helpful for curriculum designers and teachers because it identifies AI’s ability to quickly produce products that are “good enough” to suffice for a given purpose. In that sense it helps crank out charts, worksheets, and questions that aren’t necessarily great, but are good enough. Students looking for quick, good-enough essays will always find AI attractive.
Satisficing
·davekarpf.substack.com·
Fuel of delusions
Teachers should read this essay because it raises a question they may not have considered: how many of their lonely students will be talking to chatbots more than to humans? What changes can this bring?
404 Media reported that moderators of a pro-AI Reddit group devoted to “the Singularity” had to remove more than 100 members for posting delusional content. Likewise, Rolling Stone recently ran a story headlined: “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies: Self-styled prophets are claiming they have ‘awakened’ chatbots and accessed the secrets of the universe through ChatGPT.”
Alex Hanna, director of research at the Distributed AI Research Institute
DAIR receives five to ten emails per day from people suffering from AI-related delusions.
François Chollet, formerly of Google DeepMind
·buildcognitiveresonance.substack.com·
Teaching with AI: strategies against cheating
Bryan Alexander’s summary of the current state of the academic integrity trench war is concise and comprehensive, describing the advantages and disadvantages of each approach.
My gut tells me that we need to rethink assessment from top to bottom, but that nearly all colleges lack the resources to do this.
Otherwise I think we’re in serious trouble. It’s possible that no combination of these practices will significantly dent the AI-enabled cheating problem, which will mean more students will fail to learn, which in turn depresses the value of post-secondary education.
·aiandacademia.substack.com·
AI competency framework for students
UNESCO’s guide defines 12 competencies across four dimensions (human-centered mindset, ethics of AI, AI techniques and applications, and AI system design). It’s worth a quick look to see how it aligns with other similar documents. Its greatest strength is probably its stress on human agency.
·unesdoc.unesco.org·
Meta staff torrented nearly 82TB of pirated books for AI training — court records reveal copyright violations
The same teachers who tell students that they should never steal anyone else’s work are using AI tools that are built on mountains of stolen work. The moral equation of using AI needs to be acknowledged.
Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.
·tomshardware.com·