Critical Thinking with AI Mode #44: Florida coastline
A prediction about Florida's coastline is wrongly portrayed by people engaged in climate change denial -- but AI Mode does mess up the summary in a way that r...
Critical Thinking with AI Mode #29: Consumer Sentiment
Is consumer sentiment really near an all-time low? And what does that mean? We get to use (at the very end) our definitional follow-up, which reveals an asto...
A while back I released my "get it in, track it down, follow up" framework for teaching students to use AI to assist with critical thinking. The framework is meant to address three major issues:
* Help students mitigate confirmation bias and sycophancy,
* Make sure that their answers are grounded in appropriate sources outside the LLM,
* Turn LLM sessions into interactive critical thinking exercises that not only mitigate the harms of cognitive off-loading, but scaffold their critical thinking development.
“Get it in” reflects two principles. First, just take the first step — as I say, the most important part of a gym routine is walking into the gym. You want to make starting as fluid as possible. But the second part deals with sycophancy and confirmation bias. I’ve found in general that the practice of just putting the claim in, either bare or with a dry “analyze this claim,” is a good way to avoid inadvertently signaling that you want the model to take your side.
“Track it down” reflects my observation that when we use AI for information-seeking, it is best conceptualized as a “portal, not a portrait.” LLMs don’t return answers, exactly. They return knowledge maps, representations of discourse. For anything with stakes, you are going to want to ground your knowledge outside the LLM. You need to follow the links, and you need to check the summaries. I sometimes use the metaphor of those little mapping drones in science fiction that fly into a ship or set of caves and produce a detailed map before Sigourney Weaver (if you’re lucky) or Vin Diesel (if you’re not) goes in.
Like that little drone (which I guess is science fact, now, isn’t it) a search-assisted LLM goes out and maps the discourse space, providing a representation of what people are saying (or would tend to say) about certain subjects. It’s a map of the discourse “out there”. But it’s still just a map. You’ve ultimately got to take it in hand and venture out, click the links, check the summaries, see if the map matches the reality. You’ve got to get to real sources, written by real people. Track it down!
The final element, “follow up,” captures at the highest level that you have to steer the LLM, like a tool or a craft. Many people don’t like the idea of LLMs as “partners,” finding it too anthropomorphic. Fine. It undersells them, but sometimes I think of them as “Excel for critical thinking.”
What do I mean by that? Just as knowing the right formulas in Excel (and understanding them) lets you model out different scenarios and shape your outputs, with LLMs you can use follow-ups to try different approaches to the information environment.
This can all seem very abstract, which is why I've created over 25 videos walking through example information-seeking problems and showing how these "moves" are applied. Check the link in the comments for the videos and more explanation.
In this video we show, for what feels like the hundredth time, that AI Overview is a product that should be taken off the market but that AI Mode is fairly r...
Chatbots Are Pushing Sanctioned Russian Propaganda
ChatGPT, Gemini, DeepSeek, and Grok are serving users propaganda from Russian-backed media when asked about the invasion of Ukraine, new research finds.
I investigate using the foundational question "Is this what people think it is?" as a follow-up -- and get strikingly good results. For people needing quick,...
I usually encourage people to jump straight to AI Mode, but you don't have to be an absolute robot about it. In this example we get such a great set of sourc...
I show a bit of how I make these examples, and then we get to the real question, which is "What did we do to upset Mister Rogers so much?" As usual, everythin...
A video exploring the etymology of "hoe down" (while debunking a folk etymology). A bit of "track it down" in it, as usual. And a good result! As usual, every...
Do we have two moons? We explore that question while using a follow-up about definitions and measurement and singing badly. As usual, everything I say here is...
Another video primarily about tracking it down. I use both follow-ups to request better links and use the little citation links to get me there. I eventually...
A simple one today, tracking down a quote by Emma Goldman. Since it turns out not to be hard, we mostly use it to show how to use the little quote icons in the...
On this page you’ll find information about deepfakes and other forms of synthetic media, including AI images, audio, and video. Before you go any further, make sure to grab the free 20+ page resource How to Spot a Deepfake by signing up here: What is a deepfake? A deepfake is an emergent type of synthetic […]
Is the LLM response wrong, or have you just failed to iterate it?
Many "errors" in search-assisted LLMs are not errors at all, but the result of an investigation aborted too soon. Here's how to up your LLM-based verification game by going to round two.
Live demo of Deep Background for Claude and ChatGPT o3
A walk through using this tool as a student, with some discussion on what excites me about it. Prompt available for free here: https://checkplease.neocities....