The future is already here! | Daniel Terhorst-North
The future is already here!
'Agentic' means making the suffix of your bash script '.ai' instead of '.sh', and piping through a non-deterministic LLM rather than sed.
You've got this.
Teaching students to use Google and digital archives.
Everybody wants to talk about teaching students how to use AI responsibly in research projects, and nobody wants to talk about how to teach students to use Google advanced search tools to find results from a specific website or date range; how to save a webpage as a PDF so you can access it later if a paywall goes up; or how to explore digital newspaper archives. Yet that's how I spent this week in my ninth-grade world history course, and I think maybe that was the better use of time.
Kids need fewer intellectual black boxes in their lives, and more understanding of how digital tools can help them break free of them.
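For readers who haven't used the advanced-search tools the post refers to, a query along the lines of site:nytimes.com "Berlin Wall" after:1989 before:1992 restricts Google results to a single website and a date range; the specific query is an illustration, not one taken from the post.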
Signal Desktop now includes support for a new “Screen security” setting that is designed to help prevent your own computer from capturing screenshots of your Signal chats on Windows. This setting is automatically enabled by default in Signal Desktop on Windows 11. If you’re wondering why we’re on...
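The excerpt doesn't say how the setting works under the hood, but on Windows this kind of screenshot blocking is typically built on the Win32 SetWindowDisplayAffinity call. A minimal Python sketch of that underlying mechanism, as an illustration of the general technique rather than Signal's actual implementation (assumes Windows 10 version 2004 or later):

    import ctypes

    user32 = ctypes.windll.user32

    # Exclude the window from screenshots and screen capture.
    # Illustrative Win32 constant; requires Windows 10 version 2004 or later.
    WDA_EXCLUDEFROMCAPTURE = 0x11

    hwnd = user32.GetForegroundWindow()  # handle of the window to protect
    if not user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE):
        raise ctypes.WinError()  # the call fails on older Windows versions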
The Veil of Ignorance is a thought experiment that helps people envision a fair society by asking them to imagine they are ignorant of their own personal circumstances.
This Protest Starter Pack has been on my mind for weeks, so I've finally drawn it!
The AI-generated starter packs so many people have been posting only exist because AI models were trained on our copyrighted and private data without consent or compensation, leading to AI companies trying to replace creators with their own work (!), wasting resources, reproducing bias, inviting fraud and congesting the internet. Their users support this with money and data.
Hashtags featuring self-portraits have been around for decades, and it's been such a delight to see human creators reclaim the stolen trend with illustrations that are thought through down to the very detail, as unique as their creators! AI-free starter packs not only showcase the skill, style and inner world of human artists, but also their wonderful, deeply human sense of humour.
So, here's my own take on the topic, the Protest Starter Pack:
"Creators and Allies unite, stand up and fight!
★ All major AI generators are trained on stolen content. That’s unethical and unfair.
★ They’re built to deskill and replace human creators. With their own work.
★ AI users support theft, data laundering and broligarchy. Don’t feed the bullies!
★ It’s not “just” artists. They steal your data as well, and will replace your job too.
★ We need consent, compensation and transparency. It’s called Opt-In.
★ AI enables fraud & destroys the web with false facts. Stop the slop!
★ If it’s in the training data, AI reproduces bias, stereotypes and prejudice.
★ Care about the climate? Stop AI wasting the resources of our planet!
★ It is not “time to adapt or die”. It is time to fight – or adapt and die.
★ So let’s unite, petition, protect, inform, explain and call out! Together, we are strong.
★ Thank you for fighting unethical generative AI! Or at least not using it.
Artists fight against generative AI, which – unlike technical or scientific AI – is trained on copyrighted and private data without consent or compensation, and used commercially, in unfair competition with our own work."
–––
#NoAIProtestPack #ProtestPack #StarterPack #StarterPackNoAIi #NoAIStarterPack #BuchBrauchtMensch #ZusammenGegenKI #MachMitGegenKI #SupportHumanArtists #ArtistsAgainstAI #NoAIArt #CreateDontScrape #MyArtMyChoice #OptInStattOptOut #CreativesAgainstAI #IllustratorenGegenKI #StopAI #StopTheSlop
Generalization bias in large language model summarization of scientific research | Royal Society Open Science
Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when ...
Combining Psychology with Artificial Intelligence: What could possibly go wrong?
The current AI hype cycle combined with Psychology's various crises makes for a perfect storm. Psychology, on the one hand, has a history of weak theoretical foundations, a neglect of computational and formal skills, and a hyperempiricist privileging of experimental tasks and testing for effects. Artificial Intelligence, on the other hand, has a history of mistaking artifacts for theories of cognition, or even for minds themselves, and its engineering offspring likes to move fast and break things. Many of our contemporaries now want to combine the worst of these two worlds. What could possibly go wrong? Quite a lot. Does this mean that Psychology and Artificial Intelligence would do best to part ways? Not at all. There are very fruitful ways in which the two disciplines can interact and theoretically inform the interdisciplinary study of cognition. But to reap the fruits one needs to understand how to steer clear of potential traps.
'I am not a robot' isn't what you think.
What went (methodologically) wrong with the ChatGPT in education meta-studies
This continues my post that analyzed the Weng&Fan meta-analysis of the impacts of ChatGPT on "learning performance," "learning perception," and "critical thinking." Weng&Fan and the earlier Deng et al.
There's a tendency to write about technological change as an "all of a sudden" occurrence – even if you try to offer some background, some precursors, some concurrent events, or a longer, broader perspective, people still often read "all of a sudden" into any discussion about the arrival of a new technology. That it even has something like an arrival. A recent oral history, published in Quanta Magazine, of what happened to the field of natural language processing with the release of GPT-3 is...
I invited 31 researchers to test AI research synthesis by running the exact same prompt. They learned LLM analysis is overhyped, but evaluating it is something you can do yourself.
Last month I ran an #AI for #userresearch workshop with Rosenfeld Media. Our first cohort was full of smart, thoughtful researchers (if you participated in the workshop, I hope you’ll tag yourself and weigh in in the comments!).
A major limitation of a lot of AI for UXR “thought leadership” right now is that too much of it is anecdotal: researchers run datasets a few times through a commercial tool and decide whether or not the output is good enough based on only a handful of results.
But for nondeterministic systems like generative AI, repeated testing under controlled conditions is the only way to know how well they actually work. So that’s what we did in the workshop.
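In spirit, that kind of repeated test can be as simple as the sketch below: send the exact same prompt and data to the same model several times and compare what comes back run to run. This is an illustration, not the workshop's actual harness; the file name and prompt text are placeholders.

    import re
    from statistics import median
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = "Identify the key themes in the interview notes below, one theme per numbered line.\n\n"
    notes = open("interview_notes.txt").read()  # placeholder dataset

    theme_counts = []
    for run in range(10):  # same prompt, same data, repeated runs
        resp = client.chat.completions.create(
            model="gpt-4o-2024-11-20",
            messages=[{"role": "user", "content": PROMPT + notes}],
        )
        text = resp.choices[0].message.content
        # Count numbered lines ("1.", "2)", ...) as a rough proxy for the number of themes.
        theme_counts.append(len(re.findall(r"^\s*\d+[.)]", text, flags=re.MULTILINE)))

    print("themes per run:", theme_counts)
    print("min / median / max:", min(theme_counts), median(theme_counts), max(theme_counts))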
Our workshop participants produced a lot of interesting findings about qualitative research synthesis with AI:
1️⃣ LLMs can produce vastly different output even with the exact same prompt and data. The number of themes alone ranged from 5 to 18, with a median of about 10.5.
2️⃣ Our AI-generated themes mapped pretty well to human-generated themes, but there were some notable differences. This led to a discussion of whether mapping to human themes is even the right metric to use to evaluate AI synthesis (how are we evaluating whether the human-generated themes were right in the first place?).
3️⃣ The bigger concern for the researchers in the workshop was the lack of supporting evidence for themes. The supporting quotes the LLM provided looked okay superficially, but on closer investigation *every single participant* found examples of data being misquoted or entirely fabricated. One person commented that validating the output was ultimately more work than performing the analysis themselves.
Now, I want to acknowledge that this is one dataset, one prompt (although a carefully vetted one, written by an industry expert), and one model (GPT-4o, 2024-11-20). Some researchers claim that GPT-4o is worse for research hallucinations, and perhaps it is, but it is still a heavily used model in current off-the-shelf AI research tools (and if you’re using off-the-shelf tools, you won’t always know which models they’re using unless you read a whole lot of fine print).
But the point is: I think this is exactly the level at which we should be scrutinizing the output of *all* LLMs in research.
AI absolutely has its place in the modern researcher’s toolkit. But until we systematically evaluate its strengths and weaknesses, we're rolling the dice every time we use it.
We'll be running a second round of my workshop in June as part of Rosenfeld Media’s Designing with AI conference (ticket prices go up tomorrow; register with code PAINE-DWAI2025 for a discount). Or, to hear about other upcoming workshops and events from me, sign up for my mailing list (links below).
The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis
Humanities and Social Sciences Communications
More than 160 new AI data centers have been built across the US in the past three years in regions already grappling with scarce water resources, a Bloomberg News analysis finds.
I believe that nature has incontrovertibly told us that diversity survives, and monocultures fail. Same with the scientific method: diverse thinking and approaches get us closer to the truth; and…