AI for a planet under pressure
Artificial intelligence (AI) is already driving scientific breakthroughs in a variety of research fields, ranging from the life sciences to mathematics.
🥇 This is gold! A new study just dropped from Carnegie Mellon University! It’s one of the most honest looks yet at how “autonomous” agents actually perform in the real world.
👇
The study analyzed AI agents across 50+ occupations, from software engineering to marketing, HR, and design, and compared how they completed human workflows end to end.
What they found is both exciting and humbling:
• Agents “code everything.”
Even in creative or administrative tasks, AI agents defaulted to treating work as a coding problem. Instead of drafting slides or writing strategies, they generated and ran code to produce results, automating processes that humans usually approach through reasoning and iteration.
• They’re faster and cheaper, but not better.
Agents completed tasks 4 – 8× faster and at a fraction of the cost, yet their outputs showed lower quality, weak tool use, and frequent factual errors or hallucinations.
• Human–AI teaming consistently outperformed solo AI.🔥
When humans guided or reviewed the agent’s process, acting more like a “manager” or “co-pilot”, the results improved dramatically.
🧠 My take:
The race toward “fully autonomous AI” is missing the real opportunity: co-intelligence.
Right now, the biggest ROI in enterprises isn’t from replacing humans.
It’s from augmenting them.
✅ Use AI to translate intent into action, not replace decision-making.
✅ Build copilots before colleagues: co-workers who understand your workflow, not just your prompt.
✅ Redesign processes for hybrid intelligence, where AI handles execution and humans handle ambiguity.
The future of work isn’t humans or AI. (for the next 5 years IMO)
It’s humans with AI, working in a shared cognitive space where each amplifies the other’s strengths.
Because autonomy without alignment isn’t intelligence, it’s chaos.
Autonomous AI isn’t replacing human work, it’s redistributing it.
Humans shifted from doing to directing, while agents handled repetitive, programmable layers.
Maybe we are just too quick to shift from the “uncool” Copilot to something more exciting called “Fully Autonomous AI”. WDYT?
This article dropped a few days ago 👉 https://lnkd.in/djktVNKi Main talking points:
💡 Companies are adopting AI like crazy, but they should invest just as much in preparing people to work with AI. Apparently, that doesn't happen nearly as much as it should
💡 The research presented in the article highlights that Gen AI Tutors outperform classroom training by 32% on personalization and 17% on feedback relevance.
💡 Gen AI Tutors create space for self-reflection, which is awesome
💡 Learners finished training 23% faster while achieving the same results
💡 Frontline workers, culture change, and building AI competence were mentioned as applications for Gen AI
My thoughts:
💭 I think one of the hardest decisions we will face is where we should use Gen AI Tutors and where we should keep human interaction as part of learning
💭 The “results” in the research presented were, mostly, imho, still vanity metrics. I'm looking forward to seeing research where the analysis of results is more comprehensive (spanning a longer timeline, with clear leading indicators, etc). Until then, I can't be fully convinced that Gen AI Tutors truly perform better at growing cognitive & behavioral skills
💭 While I find the culture change application interesting, I do hope Gen AI Tutors won't be used to absolve leaders of the responsibility THEY have for building cultures. I can't see a good result coming out of this.
Very curious to hear your thoughts 👀
#learninganddevelopment
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least. This study shows that WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations are the most aligned with ChatGPT.
It is intriguing why some nations, including the Netherlands and Germany, are more GPT-similar than Americans.
The paper uses cognitive tasks such as the “triad task,” which distinguishes between analytic (category-based) and holistic (relationship-based) thinking. GPT tends toward analytic thinking, which aligns with countries like the Netherlands and Sweden that value rationality. This contrasts with the holistic thinking found in many non-WEIRD cultures.
GPT tends to describe the "average human" in terms that are aligned with WEIRD norms.
In short, with the overwhelmingly skewed data used to train AI and the outputs, it's “WEIRD in, WEIRD out”.
The size of the models or training data is not the issue; it's the diversity and representativeness of the data.
All of which underlines the value and importance of sovereign AI, potentially for regions or values-aligned cultures, not just at the national level.
New research finally offers a robust answer to the question, "Does using AI make our Instructional Designs BETTER, or just faster?"
👉 In a controlled test, 27 Instructional Design postgrads at Carnegie Mellon created designs both with & without GPT-4 assistance.
👉 Every design was blind-scored on quality by expert instructors.
👉 The result: Designing with AI was not just faster; it produced better-quality designs in 100% of the cases.
But the detail is where it gets interesting...👇
The research also revealed a "capability frontier"—a clear boundary between where AI helps Instructional Design quality most, and where it might actually compromise it.
TLDR:
🚀 USE AI FOR: Designs which use well-established design methodologies, step-by-step processes & widely-discussed topics.
❌ BE MORE CAUTIOUS WHEN USING AI FOR: Designs on niche, novel & complex topics which use less well-established design methodologies.
💡Bonus insight: In line with broader research on the impact of AI on knowledge work, the research also suggests that novice Instructional Designers benefit *most* from AI design assistance (but only when we are strict on what sorts of tasks they use it for).
To learn more about the research & what it tells us about how to work with AI in our day to day work, check out my latest blog post (link in comments).
Happy innovating!
Phil 👋
AI raises the floor and reshuffles the cards in the labor market! An unexpected opportunity for experienced professionals.
In my blog post “KI hebt den Boden an” (“AI raises the floor”), I wrote about how artificial intelligence (AI) dramatically eases the entry into learning. It is a “floor raiser” that gets us to a productive level faster.
But the new “Canaries in the Coal Mine” study from the Stanford Digital Economy Lab now shows that this raised “floor” is noticeably changing the labor market for entry-level workers, while at the same time opening new doors for more experienced professionals.
The study reveals that entry-level workers aged 22-25 are experiencing a significant decline in employment in highly AI-exposed occupations. A striking example: “According to ADP data, employment of software developers aged 22 to 25 has declined by almost 20% since its peak in late 2022.”
Why does it hit the youngest hardest? The researchers explain that AI is especially effective at replacing “codified knowledge”, the kind of book knowledge that comes fresh from university. Because young workers typically bring more codified than “tacit knowledge” (experience), they are more vulnerable to having their tasks displaced by AI.
Here is the decisive twist for everyone with professional experience:
“In contrast, employment trends for more experienced workers in the same occupations [...] have remained stable or continued to grow.”
The study shows that the decline in entry-level employment occurs where AI is used to automate work, but not where AI augments (complements) it.
Experienced professionals hold the tacit knowledge: the invaluable tips, tricks, and judgment that only accumulate through years of practice, and that AI does not replace but ideally complements (augments). Using AI for augmentation even leads to robust employment growth.
Takeaway for leadership and career development: While AI raises the “floor” of baseline skills, it may make entry harder for those who operate only at that raised level. For experienced professionals, however, this is an enormous opportunity: their accumulated experience and their ability to use AI as a powerful augmentation tool make them indispensable shapers of the future world of work.
Reflection: How can experienced professionals seize this opportunity and use AI deliberately to increase the value of their expertise? How can tacit knowledge be actively combined with AI?
Link to the study: https://lnkd.in/dEArWX58
#KI #Arbeitsmarkt #Führung #Lernen #GenerativeAI #FloorRaiser #Erfahrung #ZukunftderArbeit #Skills #Karriere
Is AI coaching really coaching? I’m not sure it matters. Hiding behind semantics won’t shelter our profession from the coming tidal wave.
Fidji Simo, OpenAI's CEO of Applications, recently shared her vision for the future of AI, including transforming personalized coaching from a “privilege reserved for the few” into an everyday service for everyone. Her dream, inspired by her own transformative relationship with her human coach Katia, poses fascinating questions we're actively exploring at the @Hudson Institute of Coaching. How are we—coaches, leaders, learning professionals, growth-minded individuals—to think of it?
While Prof. Nicky Terblanche (PhD) and other researchers explore the rapidly expanding frontier of AI coaching’s developmental potential, Tatiana Bachkirova and Robert Kemp have brilliantly articulated the unique value of human coaching in transforming individuals and organizations alike.
My latest for Forbes examines the tension between democratization and depth in the age of AI coaching.
Academic research offers a number of valuable insights:
☑️ AI can match human coaches in terms of structured goal-tracking and maintaining momentum.
🔥 The deepest transformation emerges through "heat experiences"—moments of productive discomfort that require genuine human witness and relational risk that an AI cannot replicate.
👥 Professional coaching comprises six essential elements that current AI cannot fully embody: joint inquiry, meaning-for-action, values navigation, contextual understanding, relational attunement, and fostering client autonomy.
I believe the future isn't about choosing sides. Instead, it's about thoughtful integration that preserves what makes human-to-human coaching transformative while exploring technology’s potential to expand access to meaningful development.
The path forward requires care to distinguish what technology can replicate from what only emerges when one human commits to another's growth.
https://lnkd.in/eUV89Vcc
How are you thinking about AI's role in human development? Can we preserve the irreducible power of human presence while making meaningful growth more accessible?
The AI Hype is a Dead Man Walking. The Math Finally Proves It.
For the past two years, the AI industry has been operating on a single, seductive promise: that if we just keep scaling our current models, we'll eventually arrive at AGI. A wave of new research, brilliantly summarized in a recent video analysis, has finally provided the mathematical proof that this promise is a lie.
This isn't just another opinion; it's a brutal, two-pronged assault on the very foundations of the current AI paradigm:
1. The Wall of Physics:
The first paper reveals a terrifying reality about the economics of reliability. To reduce the error rate of today's LLMs by even a few orders of magnitude—to make them truly trustworthy for enterprise use—would require 10^20 times more computing power. This isn't just a challenge; it's a physical impossibility. We have hit a hard wall where the cost of squeezing out the last few percentage points of reliability is computationally insane. The era of brute-force scaling is over.
2. The Wall of Reason:
The second paper is even more damning. It proves that "Chain-of-Thought," the supposed evidence of emergent reasoning in LLMs, is a "brittle mirage". The models aren't reasoning; they are performing a sophisticated pattern-match against their training data. The moment a problem deviates even slightly from that data, the "reasoning" collapses entirely. This confirms what skeptics have been saying all along: we have built a world-class "statistical parrot," not a thinking machine.
This is the end of the "Blueprint Battle." The LLM-only blueprint has failed. The path forward is not to build a bigger parrot, but to invest in the hard, foundational research for a new architecture. The future belongs to "world models," like those being pursued by Yann LeCun and others—systems that learn from interacting with a real or virtual world, not just from a library of text.
The “disappointing” GPT-5 launch wasn't a stumble; it was the first, visible tremor of this entire architectural paradigm hitting a dead end. The hype is over. Now the real, foundational work of inventing the next paradigm begins.
Microsoft 𝗷𝘂𝘀𝘁 𝗮𝗻𝗮𝗹𝘆𝘇𝗲𝗱 𝟮𝟬𝟬,𝟬𝟬𝟬 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝗔𝗜 𝗰𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻𝘀 — 𝗮𝗻𝗱 𝗿𝗮𝗻𝗸𝗲𝗱 𝗵𝗼𝘄 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗮𝗯𝗹𝗲 𝘆𝗼𝘂𝗿 𝗷𝗼𝗯 𝗿𝗲𝗮𝗹𝗹𝘆 𝗶𝘀. ⬇️
MS Research studied how people actually use Microsoft Copilot — and what kinds of tasks AI performs best. Then they mapped that usage onto real job data across the occupation classifications.
𝗧𝗵𝗲 𝗿𝗲𝘀𝘂𝗹𝘁?
A first-of-its-kind AI applicability score across 800+ occupations. And some surprising findings. But what does “AI-applicable” even mean? Microsoft used a 3-part score:
→ Coverage – How often AI touches a job’s tasks
→ Completion – How well AI helps with those tasks
→ Scope – How much of the job AI can actually handle
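For intuition only, here is a minimal sketch of how a composite applicability score could be combined from those three signals; the per-task inputs and the aggregation below are my own assumptions for illustration, not the formula from Microsoft's paper:

```python
from dataclasses import dataclass

@dataclass
class TaskSignal:
    coverage: float    # how often AI conversations touch this task (0-1)
    completion: float  # how well AI helps complete it (0-1)
    scope: float       # how much of the task AI can handle (0-1)

def applicability_score(tasks: list[TaskSignal]) -> float:
    """Hypothetical composite: average the product of the three signals
    across a job's tasks. The study's actual aggregation may differ."""
    if not tasks:
        return 0.0
    return sum(t.coverage * t.completion * t.scope for t in tasks) / len(tasks)

# Example: a job with one highly AI-applicable task and one that is not
job = [TaskSignal(0.8, 0.7, 0.6), TaskSignal(0.1, 0.3, 0.2)]
print(round(applicability_score(job), 3))  # 0.171
```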
𝗠𝗼𝘀𝘁 𝗔𝗜-𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝗯𝗹𝗲 𝗷𝗼𝗯𝘀?
→ Interpreters, Writers, Historians, Sales Reps, Customer Service, Journalists
𝗟𝗲𝗮𝘀𝘁 𝗔𝗜-𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝗯𝗹𝗲 𝗷𝗼𝗯𝘀?
→ Phlebotomists, Roofers, Ship Engineers, Dishwashers, Tractor Operators
𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 𝘁𝗵𝗲 𝟲 𝗸𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀: ⬇️
1. AI is not doing your job — it’s helping you do it better
→ In 40% of conversations, the AI task and the user’s goal were completely different. People ask AI for help gathering, editing, summarizing. The AI responds by teaching and explaining. This is augmentation at scale.
2. Information work is the real frontier
→ The most common user goals? “Get information” and “Write content.” The most common AI actions? “Provide information,” “Teach others,” and “Advise.”
3. Jobs most affected are not just high-tech — they’re high-communication
→ Interpreters, historians, journalists, teachers, and customer service roles all scored high. Why? Because they involve information, communication, and explanation — all things LLMs are good at.
4. AI can’t replace physical work — and probably won’t
→ The bottom of the list? Roofers, dishwashers, tractor operators. Manual jobs remain least impacted — not because AI can’t help, but because it can’t reach.
5. Wage isn’t a strong predictor of AI exposure
→ Surprising: there’s only a weak correlation (r=0.07) between average salary and AI applicability. In other words: this wave of AI cuts across income levels. It’s not just a C-suite story.
6. Bachelor’s degree jobs are most exposed — but not most replaced
→ Occupations requiring a degree show more AI overlap. But that doesn't mean these jobs disappear — it means they change. AI is refactoring knowledge work, not deleting it.
This transformation is moving faster than most realize. The question isn’t whether AI will change how we work — it already is.
Study in comments. ⬇️
P.S. I recently launched a newsletter where I write about exactly these shifts every week (AI agents, emerging workflows, and how to stay ahead): https://www.humanintheloop.online/subscribe
I was recently looking at how Gen AI shapes and potentially impacts cognitive skills - a topic that matters for education and for work.
Here are a few resources I reviewed.
1️⃣ Your Brain on ChatGPT - What Really Happens When Students Use AI
MIT released a study on AI and learning. Findings indicate that students who used ChatGPT for essays showed weaker brain activity, couldn't remember what they'd written, and got worse at thinking over time
https://shorturl.at/qaLie
2️⃣ Cognitive Debt when using AI - Your brain on Chat GPT
There is a cognitive cost to using an LLM vs. a search engine vs. our brain alone when, for example, writing an essay. The study indicates a likely decrease in learning skills the more we use technology as a substantial replacement for our cognitive skills.
https://lnkd.in/drVa_YNg
3️⃣ Teachers warn AI is impacting students' critical thinking
One of many articles about the importance of using Gen AI smartly, in Education but also at work.
https://lnkd.in/dSbGjusu
4️⃣ The Impact of Gen AI on critical thinking
Another interesting study on the same topic.
https://shorturl.at/74OO6
5️⃣ Doctored photographs create false memories
In psychology, research indicated a long time ago that our memory - our recollection of past events - is susceptible to errors and biases; it can be fragmentary, contain incorrect details, and, oftentimes, be entirely fictional. Memories are a reconstruction of our past that responds to our need for coherence in life.
A rigorous 2023 study shows that doctored photographs – think Photoshop or, today, AI – create false memories. Why does it matter? Memory is essential for learning and for recall of episodic and factual happenings, and it's a basis for the integrity of sources of truth in organizations.
https://shorturl.at/hdgtN
6️⃣ The decline of our thinking skills
Another great article on AI and critical thinking from IE University.
https://shorturl.at/rGl99
7️⃣ Context Engineering
Ethan Mollick recently wrote a blog post on “context engineering” - how we give AI the data and information it needs to generate relevant output. The comments on the post were even more interesting than the post itself. Personally, I think that a good part of context engineering is not in an organization's documents or processes; it is in people's ability to think critically and understand the relevant parameters of their environment to nurture AI/Gen AI. Gotta follow up on this one ;-)
https://shorturl.at/sfnuV
#GenAI #CriticalThinking #AICognition #AIHuman #ContextEngineering
I spent my long weekend exploring the 2025 AI-in-Education report - two graphs showed a major disconnect!
We might think we have an AI adoption story, but the reality is different: we still have a huge AI understanding gap!
Here are some key stats from the report that honestly made me do a double-take:
▪️99% of education leaders, 87% of educators worldwide & 93% of US students have already used generative-AI for school at least once or twice!
▪️Yet only 44% of those educators worldwide & 41% of those US students say they “know a lot about AI.”
‼️this means our usage is far outpacing our understanding & that’s a significant gap!
When such powerful tools are used without real fluency, we're likely to see:
▪️complicated implementation with no shared strategy (sounds familiar?)!
▪️anxious students who’d fear being accused of cheating (I've heard this from so many students!)
▪️overwhelmed teachers who feel alone, unsupported & unprepared (this one is a common concern among some of my teacher friends)!
The takeaway that jumped out at me:
▪️the schools that win won't be the ones that adopt AI the fastest, but the ones that adopt it most wisely!
So here's what I’d think we should consider:
✅building a "learning-first" culture across institutions & understanding when AI supports our learning vs. when it gets in the way!
▪️more like, we need to swap the question "Are we using AI?" for "Can we show any learning gains?"
⚠️so, what shifts does this report data point us to? Here is my takeaway:
✅Building real AI fluency:
▪️moving beyond simple "prompting hacks" to true literacy that includes understanding ethics, biases & pedagogical purposes,
▪️this may need an AI Council of faculty, IT, learners & others working together to develop institution-wide policies on when AI helps or harms our learning,
▪️it's about building shared wisdom, not just industry-ready skills
✅Creating collaborative infrastructure:
▪️the "every teacher for themselves" approach seems to be failing,
▪️shared guidelines, inclusive AI Councils & a culture of open conversation are now needed to bridge this huge gap!
✅Shifting focus from "using AI tools" to "achieving learning outcomes":
▪️this one really resonated with me because unlike other tech rollouts we've witnessed, AI directly affects how our students think & learn,
▪️our institutions need coordinated assessments tracking whether AI use makes our learners better thinkers or just faster task completers!
The goal that keeps coming back to us
▪️isn't to get every student using AI!
▪️but to make sure every learner & teacher really understands it!
⁉️I’m curious, where is your institution on this journey?
1️⃣ individual use: everyone is figuring it out on their own (been there!)
2️⃣ shared guidelines: we have policies, but they're not yet deeply integrated (getting closer!)
3️⃣ fully integrated strategy: we have a unified approach with a learning-first, outcome-tracked focus (this is the goal!)
Scientists just published something in Nature that will scare every marketer, leader, and anyone else who thinks they understand human choice.
Researchers created an AI called "Centaur" that can predict human behavior across ANY psychological experiment with disturbing accuracy. Not just one narrow task. Any decision-making scenario you throw at it.
Here's the deal: They trained this AI on 10 million human choices from 160 different psychology experiments. Then they tested it against the best psychological theories we have.
The AI won. In 31 out of 32 tests.
But here's the part that really got me...
Centaur wasn't an algorithm built to study human behavior. It was a language model that learned to read us. The researchers fed it tons of behavioral data, and suddenly it could predict choices better than decades of psychological research.
This means our decision patterns aren't as unique as we think. The AI found the rules governing choices we believe are spontaneous.
Even more unsettling? When they tested it on brain imaging data, the AI's internal representations became more aligned with human neural activity after learning our behavioral patterns. It's not just predicting what you'll choose, it's learning to think more like you do.
The researchers even demonstrated something called "scientific regret minimization"—using the AI to identify gaps in our understanding of human behavior, then developing better psychological models.
Can a model based on Centaur be tuned for how customers behave? Companies will know your next purchasing decision before you make it. They'll design products you'll want, craft messages you'll respond to, and predict your reactions with amazing accuracy.
Understanding human predictability is a competitive advantage today. Until now, that knowledge came from experts in behavioral science and consumer behavior. Now, there's Centaur.
Here's my question: If AI can decode the patterns behind human choice with this level of accuracy, what does that mean for authentic decision-making in business? Will companies serve us better with perfectly tailored offerings, or will this level of understanding lead to dystopian manipulation?
What's your take on predictable humans versus authentic choice?
#AI #Psychology #BusinessStrategy #HumanBehavior
Distinguishing performance gains from learning when using generative AI - published in Nature Reviews Psychology!
Excited to share our latest commentary just published in Nature Reviews Psychology! ✨
Generative AI tools such as ChatGPT are reshaping education, promising improvements in learner performance and reduced cognitive load. 🤖
🤔But here's the catch: Do these immediate gains translate into deep and lasting learning?
Reflecting on recent viral systematic reviews and meta-analyses on #ChatGPT and #Learning, we argue that educators and researchers need to clearly differentiate short-term performance benefits from genuine, durable learning outcomes. 💡
📌 Key takeaways:
✅ Immediate boosts with generative AI tools don't necessarily equal durable learning
✅ While generative AI can ease cognitive load, excessive reliance might negatively impact critical thinking, metacognition, and learner autonomy
✅ Long-term, meaningful skill development demands going beyond immediate performance metrics
🔖 Recommendations for future research and practice:
1️⃣ Shift toward assessing retention, transfer, and deep cognitive processing
2️⃣ Promote active learner engagement, critical evaluation, and metacognitive reflection
3️⃣ Implement longitudinal studies exploring the relationship between generative AI assistance and prior learner knowledge
Special thanks 🙏 to my amazing collaborators and mentors, Samuel Greiff, Jason M. Lodge, and Dragan Gasevic, for their invaluable contributions, guidance, and encouragement. A big shout-out to Dr. Teresa Schubert for her insightful comments and wonderful support throughout the editorial process! 🌟
👉 Full article here: https://lnkd.in/g3YDQUrH
👉 Full-text Access (view-only version): https://rdcu.be/erwIt
#GenerativeAI #ChatGPT #AIinEducation #LearningScience #Metacognition #Cognition #EdTech #EducationalResearch #BJETspecialIssue #NatureReviewsPsychology #FutureOfEducation #OpenScience
In a now viral study, researchers examined how using ChatGPT for essay writing affects our brains and cognitive abilities. They divided participants into three groups: one using ChatGPT, one using search engines, and one using just their brains. Through EEG monitoring, interviews, and analysis of the essays, they discovered some unsurprising results about how AI use impacts learning and cognitive engagement.
There were five key takeaways for me (although this is not an exhaustive list), within the context of this particular study:
1. The Cognitive Debt Issue
The study indicates that participants who used ChatGPT exhibited the weakest neural connectivity patterns when compared to those relying on search engines or unaided cognition. This suggests that defaulting to generative AI may function as an intellectual shortcut, diminishing rather than strengthening cognitive engagement.
Researchers are increasingly describing the tradeoff between short-term ease and productivity and long-term erosion of independent thinking and critical skills as “cognitive debt.” This parallels the concept of technical debt, when developers prioritise quick solutions over robust design, leading to hidden costs, inefficiencies, and increased complexity downstream.
2. The Memory Problem
Strikingly, users of ChatGPT had difficulty recalling or quoting from essays they had composed only minutes earlier. This undermines the notion of augmentation; rather than supporting cognitive function, the tool appears to offload essential processes, impairing retention and deep processing of information.
3. The Ownership Gap
Participants who used ChatGPT reported a reduced sense of ownership over their work. If we normalise over-reliance on AI tools, we risk cultivating passive knowledge consumers rather than active knowledge creators.
4. The Homogenisation Effect
Analysis showed that essays from the LLM group were highly uniform, with repeated phrases and limited variation, suggesting reduced cognitive and expressive diversity. In contrast, the Brain-only group produced more varied and original responses. The Search group fell in between.
5. The Potential for Constructive Re-engagement 🧠 🤖 🤖 🤖
There is, however, promising evidence for meaningful integration of AI when used in conjunction with prior unaided effort:
“Those who had previously written without tools (Brain-only group), the so-called Brain-to-LLM group, exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic. This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control.”
This points to the potential for AI to enhance cognitive function when it is used as a complement to, rather than a substitute for, initial human effort.
At over 200 pages, expect multiple paper submissions out of this extensive body of work.
https://lnkd.in/gzicDHp2
𝐍𝐨, 𝐲𝐨𝐮𝐫 𝐛𝐫𝐚𝐢𝐧 𝐝𝐨𝐞𝐬 𝐧𝐨𝐭 𝐩𝐞𝐫𝐟𝐨𝐫𝐦 𝐛𝐞𝐭𝐭𝐞𝐫 𝐚𝐟𝐭𝐞𝐫 𝐋𝐋𝐌 𝐨𝐫 𝐝𝐮𝐫𝐢𝐧𝐠 𝐋𝐋𝐌 𝐮𝐬𝐞.
See our paper for more results: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (link in the comments).
For 4 months, 54 students were divided into three groups: ChatGPT, Google -ai, and Brain-only. Across 3 sessions, each wrote essays on SAT prompts. In an optional 4th session, participants switched: LLM users used no tools (LLM-to-Brain), and Brain-only group used ChatGPT (Brain-to-LLM).
👇
𝐈. 𝐍𝐋𝐏 𝐚𝐧𝐝 𝐄𝐬𝐬𝐚𝐲 𝐂𝐨𝐧𝐭𝐞𝐧𝐭
- LLM Group: Essays were highly homogeneous within each topic, showing little variation. Participants often relied on the same expressions or ideas.
- Brain-only Group: Diverse and varied approaches across participants and topics.
- Search Engine Group: Essays were shaped by search engine-optimized content; their ontology overlapped with the LLM group but not with the Brain-only group.
𝐈𝐈. 𝐄𝐬𝐬𝐚𝐲 𝐒𝐜𝐨𝐫𝐢𝐧𝐠 (𝐓𝐞𝐚𝐜𝐡𝐞𝐫𝐬 𝐯𝐬. 𝐀𝐈 𝐉𝐮𝐝𝐠𝐞)
- Teachers detected patterns typical of AI-generated content and scored LLM essays lower for originality and structure.
- AI Judge gave consistently higher scores to LLM essays, missing human-recognized stylistic traits.
𝐈𝐈𝐈: 𝐄𝐄𝐆 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬
Connectivity: Brain-only group showed the highest neural connectivity, especially in alpha, theta, and delta bands. LLM users had the weakest connectivity, up to 55% lower in low-frequency networks. Search Engine group showed high visual cortex engagement, aligned with web-based information gathering.
𝑺𝒆𝒔𝒔𝒊𝒐𝒏 4 𝑹𝒆𝒔𝒖𝒍𝒕𝒔:
- LLM-to-Brain (🤖🤖🤖🧠) participants underperformed cognitively with reduced alpha/beta activity and poor content recall.
- Brain-to-LLM (🧠🧠🧠🤖) participants showed strong re-engagement, better memory recall, and efficient tool use.
LLM-to-Brain participants had potential limitations in achieving robust neural synchronization essential for complex cognitive tasks.
Results for Brain-to-LLM participants suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration.
𝐈𝐕. 𝐁𝐞𝐡𝐚𝐯𝐢𝐨𝐫𝐚𝐥 𝐚𝐧𝐝 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐄𝐧𝐠𝐚𝐠𝐞𝐦𝐞𝐧𝐭
- Quoting Ability: LLM users failed to quote accurately, while Brain-only participants showed robust recall and quoting skills.
- Ownership: Brain-only group claimed full ownership of their work; LLM users expressed either no ownership or partial ownership.
- Critical Thinking: Brain-only participants cared more about 𝘸𝘩𝘢𝘵 and 𝘸𝘩𝘺 they wrote; LLM users focused on 𝘩𝘰𝘸.
- Cognitive Debt: Repeated LLM use led to shallow content repetition and reduced critical engagement. This suggests a buildup of "cognitive debt", deferring mental effort at the cost of long-term cognitive depth.
Support and share! ❤️
#MIT #AI #Brain #Neuroscience #CognitiveDebt
The Alan Turing Institute 𝗮𝗻𝗱 the LEGO Group 𝗱𝗿𝗼𝗽𝗽𝗲𝗱 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗰𝗵𝗶𝗹𝗱-𝗰𝗲𝗻𝘁𝗿𝗶𝗰 𝗔𝗜 𝘀𝘁𝘂𝗱𝘆! ⬇️
(𝘈 𝘮𝘶𝘴𝘵-𝘳𝘦𝘢𝘥 — 𝘦𝘴𝘱𝘦𝘤𝘪𝘢𝘭𝘭𝘺 𝘪𝘧 𝘺𝘰𝘶 𝘩𝘢𝘷𝘦 𝘤𝘩𝘪𝘭𝘥𝘳𝘦𝘯.)
While most AI debates and studies focus on models, chips, and jobs — this one zooms in on something far more personal: 𝗪𝗵𝗮𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝘀 𝘄𝗵𝗲𝗻 𝗰𝗵𝗶𝗹𝗱𝗿𝗲𝗻 𝗴𝗿𝗼𝘄 𝘂𝗽 𝘄𝗶𝘁𝗵 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜?
They surveyed 1,700+ kids, parents, and teachers across the UK — and what they found is both powerful and concerning.
𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 9 𝘁𝗵𝗶𝗻𝗴𝘀 𝘁𝗵𝗮𝘁 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁 𝘁𝗼 𝗺𝗲 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗿𝗲𝗽𝗼𝗿𝘁: ⬇️
1. 1 𝗶𝗻 4 𝗸𝗶𝗱𝘀 (8–12 𝘆𝗿𝘀) 𝗮𝗹𝗿𝗲𝗮𝗱𝘆 𝘂𝘀𝗲 𝗚𝗲𝗻𝗔𝗜 — 𝗺𝗼𝘀𝘁 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝘀𝗮𝗳𝗲𝗴𝘂𝗮𝗿𝗱𝘀
→ ChatGPT, Gemini, and even MyAI on Snapchat are now part of daily digital play.
2. 𝗔𝗜 𝗶𝘀 𝗵𝗲𝗹𝗽𝗶𝗻𝗴 𝗸𝗶𝗱𝘀 𝗲𝘅𝗽𝗿𝗲𝘀𝘀 𝘁𝗵𝗲𝗺𝘀𝗲𝗹𝘃𝗲𝘀 — 𝗲𝘀𝗽𝗲𝗰𝗶𝗮𝗹𝗹𝘆 𝘁𝗵𝗼𝘀𝗲 𝘄𝗶𝘁𝗵 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗻𝗲𝗲𝗱𝘀
→ 78% of neurodiverse kids use ChatGPT to communicate ideas they struggle to express otherwise.
3. 𝗖𝗿𝗲𝗮𝘁𝗶𝘃𝗶𝘁𝘆 𝗶𝘀 𝘀𝗵𝗶𝗳𝘁𝗶𝗻𝗴 — 𝗯𝘂𝘁 𝗻𝗼𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗶𝗻𝗴
→ Kids still prefer offline tools (arts, crafts, games), even when they enjoy AI-assisted play. Digital is not (yet) the default.
4. 𝗔𝗜 𝗮𝗰𝗰𝗲𝘀𝘀 𝗶𝘀 𝗵𝗶𝗴𝗵𝗹𝘆 𝘂𝗻𝗲𝗾𝘂𝗮𝗹
→ 52% of private school students use GenAI, compared to only 18% in public schools. The next digital divide is already here.
5. 𝗖𝗵𝗶𝗹𝗱𝗿𝗲𝗻 𝗮𝗿𝗲 𝘄𝗼𝗿𝗿𝗶𝗲𝗱 𝗮𝗯𝗼𝘂𝘁 𝗔𝗜’𝘀 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝗮𝗹 𝗶𝗺𝗽𝗮𝗰𝘁
→ Some kids refused to use GenAI after learning about water and energy costs. Let that sink in.
6. 𝗣𝗮𝗿𝗲𝗻𝘁𝘀 𝗮𝗿𝗲 𝗼𝗽𝘁𝗶𝗺𝗶𝘀𝘁𝗶𝗰 — 𝗯𝘂𝘁 𝗱𝗲𝗲𝗽𝗹𝘆 𝘄𝗼𝗿𝗿𝗶𝗲𝗱
→ 76% support AI use, but 82% are scared of inappropriate content and misinformation. Only 41% fear cheating.
7. 𝗧𝗲𝗮𝗰𝗵𝗲𝗿𝘀 𝗮𝗿𝗲 𝘂𝘀𝗶𝗻𝗴 𝗔𝗜 — 𝗮𝗻𝗱 𝗹𝗼𝘃𝗶𝗻𝗴 𝗶𝘁
→ 85% say GenAI boosts their productivity, 88% feel confident using it. They’re ahead of the curve.
8. 𝗖𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗶𝘀 𝘂𝗻𝗱𝗲𝗿 𝘁𝗵𝗿𝗲𝗮𝘁
→ 76% of parents and 72% of teachers fear kids are becoming too trusting of GenAI outputs.
9. 𝗕𝗶𝗮𝘀 𝗮𝗻𝗱 𝗶𝗱𝗲𝗻𝘁𝗶𝘁𝘆 𝗿𝗲𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗶𝘀 𝘀𝘁𝗶𝗹𝗹 𝗮 𝗯𝗹𝗶𝗻𝗱𝘀𝗽𝗼𝘁
→ Children of color felt less seen and less motivated to use tools that didn’t reflect them. Representation matters.
The next generation isn’t just using AI. They’re being shaped by it. That’s why we need a more focused, intentional approach: Teaching them not just how to use these tools — but how to question them. To navigate the benefits, the risks, and the blindspots.
𝗪𝗮𝗻𝘁 𝗺𝗼𝗿𝗲 𝗯𝗿𝗲𝗮𝗸𝗱𝗼𝘄𝗻𝘀 𝗹𝗶𝗸𝗲 𝘁𝗵𝗶𝘀?
Subscribe to Human in the Loop — my new weekly deep dive on AI agents, real-world tools, and strategic insights: https://lnkd.in/dbf74Y9E
In a new paper, British philosopher Andy Clark (author of the 2003 book Natural Born Cyborgs, see comment below) offers a rebuttal to the pervasive anxiety surrounding new technologies, particularly generative AI, by reframing the nature of human cognition. He begins by acknowledging familiar concerns: that GPS erodes our spatial memory, search engines inflate our sense of knowledge, and tools like ChatGPT might diminish creativity or encourage intellectual laziness. These fears, Clark observes, mirror ancient worries, like Plato’s warning that writing would weaken memory, and stem from a deeply ingrained but flawed assumption: the idea that the mind is confined to the biological brain.
Clark challenges this perspective with his extended mind thesis, arguing that humans have always been cognitive hybrids, seamlessly integrating external tools into our thinking processes. From the gestures we use to offload mental effort to the scribbled notes that help us untangle complex problems, our cognition has never been limited to what happens inside our skulls. This perspective transforms the debate about AI from a zero-sum game, where technology is seen as replacing human abilities, into a discussion about how we distribute cognitive labour across a network of biological and technological resources.
Recent advances in neuroscience lend weight to this view. Theories like predictive processing suggest that the brain is fundamentally geared toward minimising uncertainty by engaging with the world around it. Whether probing a river’s depth with a stick or querying ChatGPT to clarify an idea, the brain doesn’t distinguish between internal and external problem-solving—it simply seeks the most efficient path to resolution. This fluid interplay between mind and tool has shaped human history, from the invention of stone tools to the design of modern cities, each innovation redistributing cognitive tasks and expanding what we can achieve.
Generative AI, in Clark’s view, is the latest chapter in this story. While critics warn that it might stifle originality or turn us into passive curators of machine-generated content, evidence suggests a more nuanced reality. The key, Clark argues, lies in how we integrate these technologies into our cognitive ecosystems.
https://lnkd.in/gUmxE57w
Recent research showed that every 7 months, AI doubles the length (in human time taken) of the tasks it can solve. AI researcher Toby Ord has built on the original study to show that AI success probability declines exponentially with task length, defining model capabilities with a ‘half-life.’
One of the most interesting things about the original research is that it provides a clear metric for measuring AI performance improvement that is not tied to benchmarks that keep being superseded and needing replacement.
We can now rank AI models and agents by their half-life: the human task length at which they achieve a 50% success rate.
Of course, we are usually more interested in models that can achieve 99+% success rates, depending on the task, but the relative consistency of the half-life decay means the T50 threshold predicts whatever success rate we aim for, both today and at future dates if the original trend holds.
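To make the half-life framing concrete, here is a minimal sketch of the exponential decay model (my own illustration, not code from the research), where T50 is the task length at which an agent succeeds 50% of the time:

```python
import math

def success_probability(task_minutes: float, t50_minutes: float) -> float:
    """Half-life decay model: p(t) = 0.5 ** (t / T50), so p(T50) = 50%."""
    return 0.5 ** (task_minutes / t50_minutes)

def max_task_length(target_rate: float, t50_minutes: float) -> float:
    """Invert the model: the longest task for which the agent still
    reaches the target success rate."""
    return t50_minutes * math.log(target_rate) / math.log(0.5)

# Example with a hypothetical agent whose half-life is 60 minutes
print(round(success_probability(30, 60), 2))  # 0.71 on a 30-minute task
print(round(max_task_length(0.99, 60), 2))    # ~0.87 minutes for 99% reliability
```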
Generally the decay is due to cumulative errors or going off course. But the decay is not always consistent, as there can be subtasks of uneven difficulty, or agents can recover from early mistakes.
Interestingly, humans don't follow pure exponential decay curves. Our success rate falls off more slowly over very long tasks, suggesting we have broader context, allowing us to recover from early mistakes.
The research was applied to tasks in research or software engineering. The dynamics of this performance evolution may or may not apply to other domains.
Certainly, this reframing of how we assess the development of AI capabilities and compare them to human work is a very useful advance over the benchmarking approach.
Research on over 3,500 workers points to two outcomes from the use of GenAI: immediate performance boosts, and a decrease in motivation and increase in boredom when switching to non-augmented tasks.
It is definitely interesting research, but I am very cautious about the conclusions reached by the authors, partly since they are to a degree contradictory, and also not necessarily generalizable.
The authors implicitly criticize AI for removing the “most cognitively demanding parts” of work, implying that this reduces fulfillment. But the outputs and productivity are clearly improved. Are they suggesting workers create inferior output for the sake of engagement?
It is worth noting that other recent research points to improved emotions and engagement with GenAI collaboration. The emotional impact of GenAI collaboration will vary substantially across use cases, especially with the nature of the task, and certainly with the cultural context. It appears the use case here was performance reviews, which is not representative of many other types of cognitive work.
The authors also say that AI-assisted tasks reduce users’ sense of control, thus lowering motivation. But they say this sense of control is restored during subsequent solo tasks, even though those are when boredom and disengagement rise.
Having said that, for some tasks and work design the issues they raise could be real and substantial. These are the sound remedies they suggest:
➡️Blend AI and Human Contributions:
Use gen AI as a foundation for tasks while encouraging humans to personalize, expand, and refine outputs to retain creativity and ownership.
➡️Design Engaging Solo Tasks:
Follow AI-supported work with autonomous, creative tasks to help employees stay motivated and exercise their own skills.
➡️Make AI Collaboration Transparent:
Clearly communicate AI’s supporting role to preserve employees’ sense of control and fulfillment in their contributions.
➡️Rotate Between Tasks:
Alternate between independent and AI-assisted tasks to maintain engagement and productivity throughout the workday.
➡️Train Employees to Use AI Mindfully:
Provide training that helps employees critically and strategically integrate AI, strengthening their autonomy and judgment.
An empirical study shows that executives with hands-on experience rate the economic benefit of generative AI far more positively than those without it. While 64 percent of experienced leaders expect a fast return on investment, only 35 percent of the inexperienced do. Profitability depends heavily on the operating model, the depth of usage, and company-specific conditions. Those who deploy GenAI in a targeted way increase productivity, innovative capacity, and employer attractiveness, a real business advantage, writes Peter Buxmann in his guest article for F.A.Z. PRO Digitalwirtschaft.
Read more: ▶︎ https://lnkd.in/e3faARTd
The text comes from our Digitalwirtschaft newsletter on the digital economy. The newsletter is sent every Wednesday at 8 a.m. to 230,000 subscribers and explains the relevant digital topics of the week, divided into the areas of artificial intelligence, the future of work, digital transformation, platforms, and digital mobility. Interested readers can try the newsletter free of charge for two months. ▶️ https://lnkd.in/eY_4zwbr
AI vs. human coaches: Examining the working alliance
New Research: AI vs. Human Coaches - Building Effective Working Relationships
This study explores a fascinating question: Can AI coaches of the future build effective working relationships with clients comparable to human coaches?
Surprisingly, the answer is yes.
Part of my dissertation research study at Teachers College, Columbia University was recently published in an Advancing Coaching Scholarship special issue alongside other prominent scholars.
With AI increasingly entering human-centered spaces like coaching, this research offers early insight into its impact.
Through a randomized controlled experiment, I found that people could establish strong connections with both simulated autonomous AI and human coaches in just a single hour-long session. The data showed comparable relationship quality metrics across both conditions, with individuals specifically valuing the collaborative, goal-oriented conversation regardless of coach type.
Read the full study here to explore what this means for the future of coaching.
#AICoaching
https://lnkd.in/g4W7i8dx
Friends, this is the MOST IMPORTANT study on AI in 2025. The brilliant Ethan Mollick and team studied how AI impacts individuals and teams across Procter & Gamble - the results are stunning. Here’s what you need to know:
The “Cybernetic Teammate” study was conducted in Summer 2024 by a research team from Harvard and Wharton, in partnership with Procter & Gamble.
++++++++++++++++++++
WHO WAS TESTED:
The study involved 776 P&G professionals and replicated P&G's product development process across four business units.
The experiment featured four distinct conditions:
- Individuals working alone without AI
- Individuals working alone with AI
- Teams of two specialists (one commercial expert, one technical R&D expert) working without AI
- Teams of two specialists working with AI
++++++++++++++++++++
KEY FINDINGS:
INDIVIDUAL PERFORMANCE:
AI improved individual performance by 37%
TEAM PERFORMANCE:
AI improved team performance by 39%
BREAKTHROUGH SOLUTIONS:
Teams using AI were 3x more likely to produce solutions in the top 10% for quality
EFFICIENCY GAINS
Individuals using AI completed tasks 16.4% faster than those without
Teams with AI finished 12.7% faster than teams without AI
OUTPUT QUALITY
Despite working faster, AI-enabled groups produced substantially longer and more detailed solutions
EXPERTISE AND COLLABORATION EFFECTS
Breaking Down Silos!!
Without AI:
Clear professional silos existed — R&D specialists created technical solutions while Commercial specialists developed market-focused ideas
With AI:
Distinctions virtually disappeared — both types of specialists produced balanced solutions integrating technical and commercial perspectives
EXPERIENCE LEVELING:
Less experienced employees using AI performed at levels comparable to teams with experienced members
EMOTIONAL EXPERIENCE
Positive Emotions: AI users reported significantly higher levels of excitement, energy, and enthusiasm
Negative Emotions: AI users experienced less anxiety and frustration during work
Individual Experience: People working alone with AI reported emotional experiences comparable to or better than those in human teams
TEAM DYNAMICS
Solution Types:
Teams without AI showed a bimodal distribution (either technically or commercially oriented solutions)
Balanced Input:
AI appeared to reduce dominance effects, allowing more equal contribution from team members
Consistency:
Teams with AI showed more uniform, high-quality outputs compared to the variable results of standard teams
We'll be talking about this study for a while.
+++++++++++++++++++++++++++++
UPSKILL YOUR ORGANIZATION:
When your company is ready, we are ready to upskill your workforce at scale. Our Generative AI for Professionals course is tailored to enterprise and highly effective in driving AI adoption through a unique, proven behavioral transformation. Check out our website or shoot me a DM.
A very exciting study from Harvard Business School (03/2025) showing how the use of generative AI (GenAI) affects the central aspects of teamwork…
The AI system “The AI Scientist-v2” from Sakana AI has produced a scientific publication that passed the peer-review process at a workshop of the major…
An article in Nature Human Behaviour examines the cognitive and emotional reasons behind people’s resistance to AI tools, even when these tools could be beneficial. The authors structure their analysis around five key psychological barriers:
1. Opacity – Many AI systems function as “black boxes,” meaning their decision-making processes are difficult to interpret. This lack of transparency fosters distrust, as users struggle to understand or predict AI behaviour. To address this, some AI-powered products now prioritise explainability. One example is Netflix recommendations, which provides explanations such as “We suggest this movie because you watched Don’t Look Up.”
2. Emotionlessness – AI lacks human emotions, making interactions with it feel impersonal and detached. People often prefer human decision-makers because they perceive them as capable of empathy, care, and moral reasoning.
3. Rigidity – AI operates based on predefined rules and patterns, which can make it appear inflexible or incapable of handling nuanced, context-dependent situations in the way humans can.
4. Autonomy – The idea that AI acts independently can create discomfort, as it raises concerns about control, agency, and the unpredictability of automated systems. This becomes particularly important for activities through which we express our identity. People are more trusting of AI in situations where they don't seek agency.
5. Group Membership – Humans have a natural tendency to trust other humans over non-human agents. AI is often perceived as an “outsider,” which can lead to resistance, particularly in domains where social interaction or human judgment is highly valued.
The article discusses how these psychological barriers are deeply rooted in human cognition and biases, drawing on empirical studies that show both correlational and causal links between these factors and AI resistance. The authors also separate the barriers into two categories:
- AI-related factors (e.g., a system’s lack of transparency or inability to convey emotions)
- User-related factors (e.g., cognitive biases, emotional responses, and cultural influences shaping AI perception)
This distinction is important for designing interventions that promote the adoption of beneficial AI tools. However, the authors warn that efforts to overcome AI resistance, for example by including anthropomorphic features, could have unintended consequences.