Found 1208 bookmarks
EduGems
Welcome to EduGems! This is a growing collection of pre-made prompts ("Gems") for educators to use with Google Gemini.
💎 Click on any item below to get details on that Gem, with options to use or copy the Gem.
💎 When you click to use a Gem, it will open in Gemini.
💎 Before you begin interacting
·edugems.ai·
A while back I released my "get it in, track it down, follow up" framework for teaching students to use AI to assist with critical thinking.
A while back I released my "get it in, track it down, follow up" framework for teaching students to use AI to assist with critical thinking. The framework is meant to address three major issues:
* Help students mitigate confirmation bias and sycophancy,
* Make sure that their answers are grounded in appropriate sources outside the LLM,
* and turn LLM sessions into interactive critical thinking exercises that not only mitigate the harms of cognitive off-loading, but scaffold their critical thinking development.
“Get it in” reflects two principles. First, just take the first step — as I say, the most important part of a gym routine is walking into the gym. You want to make it as fluid as possible to start. But the second principle deals with sycophancy and confirmation bias. I’ve found in general that a practice of just putting the claim in, either bare or with a dry “analyze this claim,” is a good way to avoid the pitfall of inadvertently signaling that you want it to take your side.
“Track it down” reflects my observation that when we use AI for information-seeking, it is best conceptualized as a “portal, not a portrait.” LLMs don’t return answers, exactly. They return knowledge maps, representations of discourse. For anything with stakes, you are going to want to ground your knowledge outside the LLM. You need to follow the links, you need to check the summaries.
I sometimes use the metaphor of those little mapping drones in science fiction that fly into a ship or a set of caves and produce a detailed map before Sigourney Weaver (if you’re lucky) or Vin Diesel (if you’re not) goes in. Like that little drone (which I guess is science fact now, isn’t it), a search-assisted LLM goes out and maps the discourse space, providing a representation of what people are saying (or would tend to say) about certain subjects. It’s a map of the discourse “out there.” But it’s still just a map. You’ve ultimately got to take it in hand and venture out, click the links, check the summaries, and see if the map matches the reality. You’ve got to get to real sources, written by real people. Track it down!
The final element, “follow up,” captures at the highest level that you have to steer the LLM as a tool or craft. Many people don’t like the idea of LLMs as “partners,” finding it too anthropomorphic. Fine. This undersells it, but sometimes I think of them as “Excel for critical thinking.” What do I mean by that? Just as knowing the right formulas in Excel (and understanding them) lets you model out different scenarios and shape presentation outputs, with LLMs you can use follow-ups to try different approaches to the information environment.
This can all seem very abstract, which is why I've created over 25 videos showing me walking through example information-seeking problems and demonstrating how these "moves" are applied. Check out the link in the comments for links to the videos, and more explanation.
·linkedin.com·
How AI is fueling an existential crisis in education — Decoder with Nilay Patel
We keep hearing over and over that generative AI is causing massive problems in education, both in K-12 schools and at the college level. Lots of people are worried about students using ChatGPT to cheat on assignments, and that is a problem. But really, the issues go a lot deeper, to the very philosophy of education itself. We sat down and talked to a lot of teachers — you’ll hear many of their voices throughout this episode — and we kept hearing one cri de cœur again and again: What are we even doing here? What’s the point?
Links:
Majority of high school students use gen AI for schoolwork | College Board
Quarter of teens have used ChatGPT for schoolwork | Pew Research
Your brain on ChatGPT | MIT Media Lab
My students think it’s fine to cheat with AI. Maybe they’re on to something. | Vox
How children understand & learn from conversational AI | McGill University
‘File not Found’ | The Verge
Subscribe to The Verge to access the ad-free version of Decoder!
Credits: Decoder is a production of The Verge and part…
·overcast.fm·
#dariobütler | Joshua Weidlich | 13 comments
Which feedback do students appreciate most — from teachers, peers, or large language models like ChatGPT? Which type actually helps them improve their work? And how do students’ feedback literacy and motivation influence these effects? The answers we found in our randomized, blinded field experiment at Universität Zürich are now published open access in Computers and Education Open: https://lnkd.in/eQgnNu5z
Thanks to my coauthors for the stellar collaboration: Flurin Gotsch, Kai Schudel, Claudia Marusic, Jennifer Mazzarella-Konstantynova, Hannah Bolten, #DarioBütler, Simon Luger, Bettina Wohlfender, Katharina Maag Merki. | 13 comments on LinkedIn
·linkedin.com·