AI

1257 bookmarks
A while back I released my "get it in, track it down, follow up" framework for teaching students to use AI to assist with critical thinking.
A while back I released my "get it in, track it down, follow up" framework for teaching students to use AI to assist with critical thinking. The framework is meant to address three major issues:
* Help students mitigate confirmation bias and sycophancy,
* Make sure that their answers are grounded in appropriate sources outside the LLM,
* and turn LLM sessions into interactive critical thinking exercises that not only mitigate the harms of cognitive off-loading, but scaffold their critical thinking development.
"Get it in" reflects two principles. First, just take the first step: as I say, the most important part of a gym routine is walking into the gym. You want to make it as fluid as possible to start. The second principle deals with sycophancy and confirmation bias. I've found in general that a practice of just putting the claim in, either bare or with a dry "analyze this claim," is a good way to avoid inadvertently signaling that you want the LLM to take your side.
"Track it down" reflects my observation that when we use AI for information-seeking, it is best conceptualized as a "portal, not a portrait." LLMs don't return answers, exactly. They return knowledge maps, representations of discourse. For anything with stakes, you are going to want to ground your knowledge outside the LLM. You need to follow the links, you need to check the summaries. I sometimes use the metaphor of those little mapping drones in science fiction that fly into a ship or set of caves and produce a detailed map before Sigourney Weaver (if you're lucky) or Vin Diesel (if you're not) goes in. Like that little drone (which I guess is science fact now, isn't it), a search-assisted LLM goes out and maps the discourse space, providing a representation of what people are saying (or would tend to say) about certain subjects. It's a map of the discourse "out there." But it's still just a map.
You've ultimately got to take it in hand and venture out, click the links, check the summaries, and see if the map matches the reality. You've got to get to real sources, written by real people. Track it down!
The final element, "follow up," captures at the highest level that you have to steer the LLM as a tool or craft. Many people don't like the idea of LLMs as "partners," finding it too anthropomorphic. Fine. It undersells them, but sometimes I think of LLMs as "Excel for critical thinking." What do I mean by that? Just as knowing the right formulas in Excel (and understanding them) lets you model out different scenarios and shape presentation outputs, with LLMs you can use follow-ups to try different approaches to the information environment.
This can all seem very abstract, which is why I've created over 25 videos showing me walking through example information-seeking problems and demonstrating how these "moves" are applied. Check out the link in the comments for the videos and more explanation.
·linkedin.com·
How AI is fueling an existential crisis in education — Decoder with Nilay Patel
We keep hearing over and over that generative AI is causing massive problems in education, both in K-12 schools and at the college level. Lots of people are worried about students using ChatGPT to cheat on assignments, and that is a problem. But really, the issues go a lot deeper, to the very philosophy of education itself. We sat down and talked to a lot of teachers — you’ll hear many of their voices throughout this episode — and we kept hearing one cri du coeur again and again: What are we even doing here? What’s the point? Links: Majority of high school students use gen AI for schoolwork | College Board Quarter of teens have used ChatGPT for schoolwork | Pew Research Your brain on ChatGPT | MIT Media Lab My students think it’s fine to cheat with AI. Maybe they’re on to something. | Vox How children understand & learn from conversational AI | McGill University ‘File not Found’ | The Verge Subscribe to The Verge to access the ad-free version of Decoder! Credits: Decoder is a production of The Verge and part…
·overcast.fm·
#dariobütler | Joshua Weidlich | 13 comments
Which feedback do students appreciate most — from teachers, peers, or large language models like ChatGPT? Which type actually helps them improve their work? And how do students’ feedback literacy and motivation influence these effects? The answers we found in our randomized, blinded field experiment at Universität Zürich are now published open access at Computers and Education Open: https://lnkd.in/eQgnNu5z Thanks to my coauthors for the stellar collaboration Flurin Gotsch Kai Schudel Claudia Marusic Jennifer Mazzarella-Konstantynova Hannah Bolten #DarioBütler Simon Luger Bettina Wohlfender Katharina Maag Merki | 13 comments on LinkedIn
·linkedin.com·
Academic Libraries Embrace AI
Libraries worldwide are exploring or ramping up their use of artificial intelligence, according to a new report by Clarivate, a global information services company.
·insidehighered.com·
How Silicon Valley enshittified the internet — Decoder with Nilay Patel
This is Sarah Jeong, features editor at The Verge. I’m standing in for Nilay for one final Thursday episode here as he settles back into full-time hosting duties. Today, we’ve got a fun one. I’m talking to Cory Doctorow, prolific author, internet activist, and arguably one of the fiercest tech critics writing today. He has a new book out called Enshittification: Why Everything Suddenly Got Worse and What to Do About It. So I sat down with Cory to discuss what enshittification is, why it’s happening, and how we might fight it. Links: Enshittification | Macmillan Why every website you used to love is getting worse | Vox The age of Enshittification | The New Yorker Yes, everything online sucks now — but it doesn’t have to | Ars Technica The enshittification of garage-door openers reveals vast, deadly rot | Cory Doctorow Mark Zuckerberg emails outline plan to neutralize competitors | The Verge Google gets to keep Chrome, judge rules in antitrust case | The Verge How Amazon wins: by steamrolling rivals and…
·overcast.fm·
Where does human thinking end and AI begin? An AI authorship protocol aims to show the difference
Students – and all manner of professionals – are tempted to outsource their thinking to AI, which threatens to undermine learning and credibility. A philosophy professor offers a solution.
·theconversation.com·
#facultydevelopment #ai #edtech | Daniel Stanford
In January 2024, I asked an auditorium full of community college instructors to tell me which stage of grief resonated most when they thought about their relationship with AI. Here's what they said:
7%: Denial - AI is overrated. I'll just wait it out.
15%: Anger - AI is running amok and undermines critical thinking.
38%: Bargaining - I'd learn more about AI if I wasn't so darn busy.
3%: Depression - What I love about teaching is slipping away.
38%: Acceptance - I'm ready! Where's my AI teaching assistant?
How do you think the results would shift if you posed this question to faculty today? Do you think your colleagues are feeling significantly *more* or *less* optimistic about AI than they were a year or two ago? If so, why? #facultydevelopment #ai #edtech
·linkedin.com·
Can AI Avatars Make Class Time More Human? — Learning Curve
Colleges are experimenting with online teaching videos featuring AI avatar versions of professors. Some students find the simulated likenesses of their instructors a bit creepy, but proponents say the technology could be key to making college courses more active and human. The idea is that AI will make it easy to create personalized teaching videos so that more teachers can adopt a “flipped classroom” approach — where students watch video lectures as homework so class time is spent on discussion or projects.
·overcast.fm·