"I had one friend who told a colleague that he was going across campus to an AI workshop, and the other professors said, 'Don't, we're leading a boycott against the workshop.' Okay. I mean, I don't… | Mike Caulfield
"I had one friend who told a colleague that he was going across campus to an AI workshop, and the other professors said, 'Don't, we're leading a boycott against the workshop.' Okay. I mean, I don't remember that kind of thing happening with Wikipedia or other tools for online learning..."

For me at least, it's pretty simple. People are using these tools, and they are using them poorly. We are educators, and if we can teach them to use the tools more effectively, we should. If we refuse to do that, where we end up as a society is at least a little bit on us.

But I disagree with Bryan a bit. We went through this before in miniature. In 2010 I was trying to convince people at civic education conferences that we should teach people to use social media more effectively, including checking things online. The most common response was, "We shouldn't be teaching social media, we should be telling students to subscribe to physical newspapers instead."

Those students we could have taught that year are thirty-five now. We could have had fifteen cohorts of college students who knew how to check the truth of what they see online. Our entire history might be different, and maybe we wouldn't be seeing this rampant conspiracism. The thing is, those professors who said we should just give students physical papers will never realize their role in getting us here.

I wish others would consider that history before they treat boycotts of AI workshops as a noble act. When you engage in politics you are judged by results, not intentions. And the results of this approach are not risk free.
·linkedin.com·
A while back I released my "get it in, track it down, follow up" framework for teaching students to use AI to assist with critical thinking.
A while back I released my "get it in, track it down, follow up" framework for teaching students to use AI to assist with critical thinking. The framework is meant to address three major issues:

* Help students mitigate confirmation bias and sycophancy,
* Make sure that their answers are grounded in appropriate sources outside the LLM,
* and turn LLM sessions into interactive critical thinking exercises that not only mitigate the harms of cognitive off-loading, but scaffold students' critical thinking development.

"Get it in" reflects two principles. First, just take the first step: as I say, the most important part of a gym routine is walking into the gym. You want to make it as fluid as possible to start. The second part deals with sycophancy and confirmation bias. I've found in general that the practice of just putting the claim in, either bare or with a dry "analyze this claim," is a good way to avoid the pitfall of inadvertently signaling that you want it to take your side.

"Track it down" reflects my observation that when we use AI for information-seeking, it is best conceptualized as a "portal, not a portrait." LLMs don't return answers, exactly. They return knowledge maps, representations of discourse. For anything with stakes, you are going to want to ground your knowledge outside the LLM. You need to follow the links; you need to check the summaries.

I sometimes use the metaphor of those little mapping drones in science fiction that fly into a ship or a set of caves and produce a detailed map before Sigourney Weaver (if you're lucky) or Vin Diesel (if you're not) goes in. Like that little drone (which I guess is science fact now, isn't it), a search-assisted LLM goes out and maps the discourse space, providing a representation of what people are saying (or would tend to say) about certain subjects. It's a map of the discourse "out there." But it's still just a map.

You've ultimately got to take it in hand and venture out, click the links, check the summaries, and see if the map matches the reality. You've got to get to real sources, written by real people. Track it down!

The final element, "follow up," captures at the highest level that you have to steer the LLM as a tool or craft. Many people don't like the idea of LLMs as "partners," finding it too anthropomorphic. Fine. This undersells it, but sometimes I think of them as "Excel for critical thinking." What do I mean by that? Just as knowing the right formulas in Excel (and understanding them) lets you model out different scenarios and shape presentation outputs, with LLMs you can use follow-ups to try different approaches to the information environment.

This can all seem very abstract, which is why I've created over 25 videos walking through example information-seeking problems and showing how these "moves" are applied. Check out the link in the comments for links to the videos, and more explanation.
·linkedin.com·
I see some instructors on here joking how they are going to add prompt injections into assignments as a defense against agentive browsers.
I see some instructors on here joking that they are going to add prompt injections to assignments as a defense against agentive browsers. I get the frustration, and maybe it's just jokes? Maybe I need to lighten up? But just in case: prompt injection applied to people to whom you have a duty of care is not funny, and it's not resistance. It's deeply messed up behavior that involves using your power to hijack the computers of people you force to consume your compromised materials. If that seems reasonable to you, you need to touch some grass.
·linkedin.com·
Using AI Mode: Two Moons
Do we have two moons? We explore that question while using a follow-up about definitions and measurement, and singing badly. As usual, everything I say here is...
·youtube.com·