There's a lot of confusion about AI and lesson planning.
Here's a super simple strategy (that I use every day):
My AI lesson planning revolves around this principle.
AI is great at tasks but bad at jobs.
What's the difference between tasks and jobs?
A "job" is the whole thing you’re trying to do. Each job is made up of "tasks" - the little steps within the job.
Moving through the planning process task by task is the best way to ensure you create high-quality, relevant resources.
These are the tasks I complete to plan my lesson:
- Engaging hook
- Direct instruction content
- Reading/video on the topic
- Differentiate the reading
- Retrieval practice questions
- Sentence starters for writing
- Closing discussion questions
When I use AI to plan my lessons, I go one task at a time.
This is the first prompt I use with ChatGPT:
(Role) You are an expert [INSERT SUBJECT] teacher.
(Task) Help me create a lesson plan step-by-step. I will ask for various resources. Every time I ask you for something, give me 3 versions of that thing. I will choose the best one and we will continue to plan from there. Start by giving me 3 lesson hooks for a lesson on [INSERT TOPIC].
(Format) Give me the resources I ask for at [INSERT READING LEVEL].
I start by choosing the best hook of the 3 provided.
Then I go step-by-step and build the resources I need.
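If you'd rather script the same task-by-task flow than work in the ChatGPT window, here is a minimal sketch using the OpenAI Python SDK. The model name ("gpt-4o"), the ask() helper, and the example subject, topic, and reading level are my placeholders, not part of the original post; the prompt text simply mirrors the Role/Task/Format structure above.

```python
# Minimal sketch of the task-by-task workflow (assumed details noted in comments).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SUBJECT = "history"                  # placeholder subject
TOPIC = "the Industrial Revolution"  # placeholder topic
READING_LEVEL = "Year 9"             # placeholder reading level

# One persistent message list = one chat session, so each new task
# builds on the resources already chosen.
messages = [
    {"role": "system", "content": f"You are an expert {SUBJECT} teacher."},
    {
        "role": "user",
        "content": (
            "Help me create a lesson plan step-by-step. I will ask for various resources. "
            "Every time I ask you for something, give me 3 versions of that thing. "
            "I will choose the best one and we will continue to plan from there. "
            f"Start by giving me 3 lesson hooks for a lesson on {TOPIC}. "
            f"Give me the resources I ask for at a {READING_LEVEL} reading level."
        ),
    },
]


def ask(prompt=None):
    """Send the next task (or the opening prompt) and keep the reply in the chat history."""
    if prompt:
        messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text


print(ask())  # 3 hooks to choose from
print(ask("I'll use hook 2. Now give me 3 versions of the direct instruction content."))
```

The one design choice that matters here is keeping a single message history for the whole planning session; that is what lets each task build on the resource you just chose.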
Now, using AI, in just a matter of minutes we can create lessons that are:
- Pedagogically rich
- Well resourced
- Ready to teach
Give it a try and let me know how you go!
P.S. Any other AI lesson planning tips you could share? Let me know!
How SIFT Toolbox helps chatbots with fact-checking
It's not often I get really excited about a new prompt. This one gives me hope that chatbots can be used in support of information literacy and a concern for truth. It highlights how a chatbot that can browse can assist us in sifting through competing claims from vetted sources to get perspective on a claim.
With his prompt, Mike Caulfield illustrates the principles in his recent article in The Atlantic. He explains, "SIFT Toolbox is a lengthy instruction prompt...You paste it in at the beginning of a chat session... With the prompt in place, your LLM will come to better conclusions, hallucinate less, and source conflicting perspectives more systematically. It also models an approach that is less chatbot, and more research assistant in a way that is appropriate for student researchers, who can use it to aid research while coming to their own conclusions."
Go to the website https://lnkd.in/gYBvdSA4 and copy all the text in that tiny little box on the right. Paste it into Claude (free version is okay).
Then, give it a claim, any claim. Maybe one you hear a lot and question. I tried "Exposure to a big change in temperature like a sudden cold snap can make someone more likely to catch a cold" because I've long wondered if there was anything to that. Here's the result: https://lnkd.in/g--mWnfy
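If you'd rather call this from code than paste into the Claude web app, here is a minimal sketch using the Anthropic Python SDK. The model name and the sift_toolbox.txt filename are placeholders, and feeding the copied Toolbox text in as a system prompt is my adaptation of "paste it in at the beginning of a chat session". Note that a plain API call won't browse the web the way the Claude app can, so treat this as a structural sketch only.

```python
# Sketch: run the SIFT Toolbox prompt against a claim via the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Text copied from the SIFT Toolbox page and saved locally (placeholder filename).
sift_prompt = open("sift_toolbox.txt", encoding="utf-8").read()

claim = (
    "Exposure to a big change in temperature like a sudden cold snap "
    "can make someone more likely to catch a cold"
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model name
    max_tokens=2000,
    system=sift_prompt,  # the pasted Toolbox text frames the whole session
    messages=[{"role": "user", "content": claim}],
)

print(response.content[0].text)  # the SIFT-style breakdown of the claim
```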
#informationliteracy #digitalliteracy #factchecking #ailiteracy #LLMs #chatbots #CCCAILearn
Teaching AI Ethics 2025: Introduction - Leon Furze
Over the next few months, I’ll be updating my 2023 Teaching AI Ethics collection. In this post, I’ll explain why the updates are necessary and give a recap on the nine original areas from the series. When I wrote the original series in 2023, ChatGPT was only just on people’s radars. I had started my […]
AI Assessment Scale (AIAS) Translations: 2025 Updates
This is an update of an earlier post on translations of the AI Assessment Scale. Please keep sending us your translations as we keep these resources up to date! The AI Assessment Scale has been adopted across the world in both the original (traffic lights) version, and the updated (bubblegum) version. We have been amazed […]
How might digital technology, and notably smart technologies based on artificial intelligence (AI), learning analytics, robotics, and others, transform education? This book explores this question. It focuses on how smart technologies currently change education in the classroom and the management of educational organisations and systems.
Digital Sobriety: An Ingredient for Success in Higher Education? | Observatoire sur la réussite en enseignement supérieur
Behavioural (lifestyle), physiological, biochemical or cognitive mechanisms are responsible for these impacts (Lemétayer, 2023). Most often, these mechanisms are interrelated, as in the case of sleep disorders, which can be caused by the encroachment of screen time on sleep time, by the effects of blue light and by cognitive overstimulation. Research also shows the psychosocial […]
Sobriety is rooted in an ethical reflection on digital technology, “so that individual actions are consistent with values such as preserving the environment, fighting climate change, and ensuring the physical and mental health of populations.”
We discuss how the human agency behind AI systems gets masked, and we've got a tool for you to play with that rephrases dodgy headlines.
All AI systems are designed by humans, are programmed and calibrated to achieve certain results, and the outputs they provide are therefore the result of multiple human decisions.
overcoming bias in artificial intelligence begins with rigorous and honest critique, and a recognition of “the long history of western technology being used to the disadvantage of Indigenous people, Black people, poor people, and so on.”
Digital sobriety: how can we adapt our uses for a positive impact on the environment?
Digital sobriety is an approach that aims to reduce the environmental impact of digital technology. The French expression “la sobriété numérique” was coined in 2008 by the association […]
AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.
Free, online. Learning outcomes:
- Construct a Definition of Artificial Intelligence
- Discover the Societal Impact of Artificial Intelligence
- Describe the Inner Workings of an Artificial Intelligence Project
First Nations and Artificial Intelligence Research Paper - Chiefs of Ontario
The Chiefs of Ontario’s Research and Data Management Sector has released a research paper analyzing the effects of Artificial Intelligence (AI) on First Nations in Ontario. AI is a powerful and disruptive technology. It has a great deal of potential, but this potential comes paired with serious risks for First Nations. This paper lays out […]
Human-centered AI refers to the development of AI technologies that prioritize human needs, values, and capabilities at the core of their design and operation.
Abundant Intelligences is an Indigenous-led research program that conceptualizes, designs, develops, and deploys Artificial Intelligence based on Indigenous Knowledge systems.