_EduAI

525 bookmarks
Why AI is So Hard (For Education)
What habits of inquiry, dialogue, and courage can we cultivate now so our students are ready to design the next civilization?
·stefanbauschard.substack.com·
‘Opposing the inevitability of AI at universities is possible and necessary’ | Radboud University
‘Our role is to foster critical thinking, not to follow industry trends uncritically. The uncritical adoption of AI can lead to students not developing essential academic skills such as critical thinking and writing. If students are taught to learn through automation, without learning about how and why things work, they won’t be able to solve problems when something actually breaks – which will be often, based on the AI output we now see.’
·ru.nl·
Stop Saying “Let’s Just Be Flexible with AI”

The tricky part is that AI changes weekly. So how can we be concrete about something so fluid?

Here’s how I’ve started to think about it: Be flexible about tools, but concrete about values.

Students don’t need us to predict the future of AI. They need us to articulate the principles that guide our choices. That might be things like:

Transparency: Always disclose when AI is used.

Integrity: Use AI to assist thinking, not replace it.

Learning: Choose methods that strengthen your own skills.

When students internalize these values, they can adapt them to whatever new tool emerges next semester: Claude, Gemini, Perplexity, or something we haven’t heard of yet.

A good AI policy, like a good syllabus, isn’t a list of prohibitions. It’s a shared framework for reasoning through change.

·substack.com·
Responsible AI in Research: Highlights from the NCRM Annual Lecture 2025
The NCRM Annual Lecture 2025 explored the topic of responsible AI in research. The free event took place on Wednesday, 1 October 2025 at The British Academy in London and was streamed online. Four panellists offered expert insight on this crucial topic and answered questions from the audience. The panellists were: Professor Dame Wendy Hall of the University of Southampton, Professor David De Roure of the University of Oxford, Dr Zeba Khanam of BT and Dr Mark Carrigan of The University of Manchester. This video features some of the highlights of the event. The National Centre for Research Methods (NCRM) delivers research methods training through short courses and free online resources: https://www.ncrm.ac.uk
·m.youtube.com·
Greentime

We help classroom and environmental educators ethically use AI to create human-centered learning experiences.
·greentime.ai·
On Bubbles and Burners: Teaching for Cognitive Friction in the Age of AI
We’re entering a moment in education where the learning process itself is up for renegotiation. With generative AI now accessible to every student with a keypad, the temptation is real: skip the ha…
·disruptedhistory.com·
Knowledge Boosters – TCEA
Discover how TCEA helps educators transform teaching and learning with Knowledge Boosters. Explore resources, strategies, and innovative tools for education.
·tcea.org·
How People Around the World View AI

Pew Research Center’s survey of adults in 25 countries shows concern outweighs enthusiasm toward AI’s growing presence in daily life. A median of 34% are more concerned than excited, while only 16% are more excited than concerned. Awareness is broad but uneven: 34% have heard a lot about AI and 47% a little, skewed heavily toward higher-income countries. On regulation, 53% trust the EU, 37% trust the U.S. and 27% trust China, while confidence in national governments ranges from 89% in India to 22% in Greece. Younger adults, men, the highly educated, and heavy internet users report higher awareness and greater excitement than older, less-connected groups. Political alignment also matters: U.S. Republicans and Europe’s right-leaning voters show more faith in the U.S., while younger respondents in 19 nations place greater trust in China as an AI regulator.

·pewresearch.org·
Inside San Francisco’s new AI school: is this the future of US education?

Alpha School promises kids can learn twice as fast with just two hours of daily academics powered by AI, but experts say the evidence is thin, the benefits uneven, and equity concerns loom.

More Insights:

AI mainly personalizes pacing and assignments; it’s guide-led, not chatbot-taught.

Model echoes older self-directed approaches (e.g., Montessori) and long-used tools (IXL, Khan, Duolingo, Math Academy).

Claims of top 1–2% scores and 90% satisfaction face selection-bias questions given affluent demographics and sky-high SF tuition.

Researchers urge rigorous trials and warn about hallucinations, bias, and risks for less-motivated or younger learners.

Public districts are cautiously integrating AI literacy and pilots, signaling inevitability—but not a one-size-fits-all solution.

·theguardian.com·
How to Use ChatGPT for AI Note Taking (2025)
Learn how to convert lecture audio into clean, structured notes with ChatGPT, then generate study guides, quizzes, lesson plans, and flashcards.
·kangaroos.ai·
🚀 Unlock the Power of #AI in #Education! #discount #PleaseShareAndRepost
Limited Time Offer: Get the TCEA AI Tools for Educators course for just $20! 🎉 Normally priced at $49, this incredible course is now available at a special promotional price. Don’t miss out o…
·mguhlin.org·
AI-Generated Lesson Plans Fall Short On Inspiring Students, Promoting Critical Thinking - Slashdot
An anonymous reader quotes a report from The Conversation: When teachers rely on commonly used artificial intelligence chatbots to devise lesson plans, it does not result in more engaging, immersive or effective learning experiences compared with existing techniques, we found in our recent study. Th...
·news.slashdot.org·
Another AI Side Effect: Erosion of Student-Teacher Trust (Greg Toppo)

Teachers can lessen the allure of taking shortcuts by solving for these conditions — figuring out, for instance, how to intrinsically motivate students to study by helping them connect with the material for its own sake. They can also help students see how an assignment will help them succeed in a future career. And they can design courses that prioritize deeper learning and competence.

To alleviate testing pressure, teachers can make assignments more low-stakes and break them up into smaller pieces. They can also give students more opportunities in the classroom to practice the skills and review the knowledge being tested.

And teachers should talk openly about academic honesty and the ethics of cheating. “I’ve found in my own teaching that if you approach your assignments in that way, then you don’t always have to be the police,” he said. Students are “more incentivized, just by the system, to not cheat.”

With writing, teachers can ask students to submit smaller “checkpoint” assignments, such as outlines and handwritten notes and drafts that classmates can review and comment on. They can also rely more on oral exams and handwritten blue book assignments.

·larrycuban.wordpress.com·
Learning Networks and the Age of AI with Stephen Downes
Commentary on Learning Networks and the Age of AI with Stephen Downes by Stephen Downes. Online learning, e-learning, new media, connectivism, MOOCs, personal learning environments, new literacy, and more
·downes.ca·
AI as a Classroom Teaching Assistant | edCircuit
AI is emerging as a classroom assistant, from lesson planning to tutoring. Educators stress it should amplify, not replace, the teacher’s voice.
·edcircuit.com·
Share of teens using ChatGPT for schoolwork doubled from 2023 to 2024…

The share of teens who say they use ChatGPT for their schoolwork has risen to 26%, according to a Pew Research Center survey of U.S. teens ages 13 to 17. That’s up from 13% in 2023. Still, most teens (73%) have not used the chatbot in this way. Teens’ use of ChatGPT for schoolwork increased across demographic groups. Black and Hispanic teens (31% each) are more likely than White teens (22%) to say they have used ChatGPT for their schoolwork. In 2023, similar shares of White (11%), Black (13%) and Hispanic teens (11%) said they used the chatbot for schoolwork.

Just over half of teens say it’s acceptable to use ChatGPT to research new topics (54%). Only 9% say it is not acceptable to use it for this. Far fewer support using the chatbot to do math or write essays: 29% of teens say it’s acceptable to use ChatGPT to solve math problems, while 28% say it’s not acceptable. 18% say it’s acceptable to use ChatGPT to write essays, and 42% say it’s not acceptable. Another 15% to 21% of teens are unsure whether it’s acceptable to use ChatGPT for these tasks.

·archive.md·
What Counts as Cheating with AI? Teachers Are Grappling with How to Draw the Line (Howard Blume and Jocelyn Gecker)

The Stanford researchers concluded that cheating was common before AI — and it remains so. It is the nature of cheating that is evolving.

“This year’s data is showing a decline in copying off a peer and it seems there is more use of AI instead,” said Lee, an associate professor at the Stanford Graduate School of Education.

In these surveys, about 3 in 4 students reported behaviors in the last month that qualify as cheating, figures similar to what was reported prior to AI.

·larrycuban.wordpress.com·
Prompt Like a Pro: A Teacher’s Guide - Educators Technology
Artificial intelligence has become part of our daily teaching routines, from planning lessons and creating quizzes to writing emails and generating classroom materials. Yet, the difference between a useful AI response and a confusing one often comes down to a single factor: how you prompt. Prompting remains one of the most important skills teachers can…
·educatorstechnology.com·