Talent hoarder or talent developer? The skills revolution with DHL and Bayer - GOOD MORNING L&D
"Not who you are, but what you can do, decides your role." – A paradigm shift that is in the process of turning our working world upside down.
𝗧𝗵𝗲 United Nations 𝗱𝗿𝗼𝗽𝗽𝗲𝗱 𝗮 𝗻𝗲𝘄 𝗿𝗲𝗽𝗼𝗿𝘁 𝗼𝗻 𝗔𝗜 𝗮𝗻𝗱 𝗵𝘂𝗺𝗮𝗻 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁: ⬇️ While the world chases the next frontier model or AGI milestone, the UN cuts deeper: Human development has flatlined (especially in the global South). Progress stalled. Inequality is rising. Trust crumbling. No real bounce-back since Covid. And right in the middle of that — AI shows up.
AI could drive a new era. Or it could deepen the cracks. It all comes down to: How societies choose to use AI to empower people — or fail to.
𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 14 𝗸𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀 𝘁𝗵𝗮𝘁 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁 𝘁𝗼 𝗺𝗲: ⬇️
1. Most AI systems today are designed in cultures that don’t reflect the majority world.
→ ChatGPT answers are most aligned with very high HDI countries. That’s a problem.
2. The real risk isn’t AI superintelligence. It’s “so-so AI.”
→ Tools that destroy jobs without improving productivity are quietly eroding economies from the inside.
3. Every person is becoming an AI decision-maker.
→ The future isn’t shaped by OpenAI or Google alone. It’s shaped by how we all choose to use this tech, every day.
4. AI hype is costing us agency.
→ The more we believe it will solve everything, the less we act ourselves.
5. People expect augmentation, not replacement.
→ 61% believe AI will "enhance" their jobs. But only if policy and incentives align.
6. The age of automation skipped the global south. The age of augmentation must not.
→ Otherwise, we widen the digital divide into a chasm.
7. Augmentation helps the least experienced workers the most.
→ From call centers to consulting, AI boosts performance fastest at the entry-level.
9. Narratives matter.
→ If all we talk about is risk and control, we miss the transformative potential to reimagine development.
10. Wellbeing among young people is collapsing.
→ And yes, digital tools (including AI) are a key driver. Especially in high HDI countries.
11. Human connections are becoming more valuable. Not less.
→ As machines get better at faking it, the real thing becomes rarer — and more needed.
12. Assistive AI is quietly revolutionizing inclusion.
→ Tools like sign language translation and live captioning are expanding access — but only if they’re accessible.
13. AI benchmarks must change.
→ We need to measure "how AI advances human development", not just how well it performs on tests.
14. The new divide is not just about access. It’s about how countries "use" AI.
→ Complement vs. compete. Empower vs. automate.
According to the UN: The old question was: “What can AI do?” The better question is: “What will we "choose" to do with it?”
More in the comments and report below.
Enjoy.
𝗜 𝗲𝘅𝗽𝗹𝗼𝗿𝗲 𝘁𝗵𝗲𝘀𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁𝘀 — 𝗮𝗻𝗱 𝘄𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗺𝗲𝗮𝗻 𝗳𝗼𝗿 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀 — 𝗶𝗻 𝗺𝘆 𝘄𝗲𝗲𝗸𝗹𝘆 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿. 𝗬𝗼𝘂 𝗰𝗮𝗻 𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝗵𝗲𝗿𝗲 𝗳𝗼𝗿 𝗳𝗿𝗲𝗲: https://lnkd.in/dbf74Y9E
Andrej Karpathy's keynote on June 17, 2025 at AI Startup School in San Francisco. Slides provided by Andrej: https://drive.google.com/file/d/1a0h1mkwfmV2Plek...
Today's L&D is more than just content. Or at least it should be.
When we think about AI in L&D, we often think about AI in learning design. Yet, to meet the needs of the business, L&D leaders need to orchestrate design, data, decisions and dialogue. Incidentally, these are all things that AI can help with.
In 𝐥𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐝𝐞𝐬𝐢𝐠𝐧, we already extensively use AI not just for content production, but also for user research, as a sparring partner and a sounding board (that was one of the top write-in use cases in the AI in L&D survey Donald and I ran last year).
In 𝐥𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐬𝐭𝐫𝐚𝐭𝐞𝐠𝐲, AI can help make sense of business, people and skills data (featured use case: asking AI to find gaps in learning or performance support provision in your organisation), or work as a thought partner to help you bridge learning and business strategy. Crucially, it can also help you engage stakeholders by preparing you for conversations and tailoring your communications to different audiences.
In terms of 𝐩𝐞𝐫𝐬𝐨𝐧𝐚𝐥𝐢𝐬𝐞𝐝 𝐬𝐮𝐩𝐩𝐨𝐫𝐭, AI interacts directly with employees to help them do their jobs: practise tricky conversations through role-plays and personalised feedback, prioritise and contextualise learning content to their needs, and, lately, retrieve exactly the information they need from almost anywhere in the company’s knowledge base.
Finally, in 𝐥𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐨𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧𝐬, AI can help do more than just draft emails and reports. Working together with humans, AI can help select the right vendors for the learning ecosystem, streamline employee help desk operations, analyse, make sense of and action on different kinds of data generated in L&D, and, of course, help L&D communicate with the rest of the business.
Researcher, producer, thought partner, communicator — if your organisation only uses AI to write scripts, you’re leaving three quarters of the L&D value chain on the table.
I like a good table, and I hope this one will help you think about how to get more value out of your AI use.
---
P.S. I spent quite a lot of time arguing with myself about the dots on the table. Feel free to disagree and suggest AI roles or use cases that I have missed!
Nodes #GenAI #Learning #Talent #FutureOfWork #AIAdoption
Distinguishing performance gains from learning when using generative AI - published in Nature Reviews Psychology!
Excited to share our latest commentary just published in Nature Reviews Psychology! ✨
""
Generative AI tools such as ChatGPT are reshaping education, promising improvements in learner performance and reduced cognitive load. 🤖
🤔But here's the catch: Do these immediate gains translate into deep and lasting learning?
Reflecting on recent viral systematic reviews and meta-analyses on #ChatGPT and #Learning, we argue that educators and researchers need to clearly differentiate short-term performance benefits from genuine, durable learning outcomes. 💡
📌 Key takeaways:
✅ Immediate boosts with generative AI tools don't necessarily equal durable learning
✅ While generative AI can ease cognitive load, excessive reliance might negatively impact critical thinking, metacognition, and learner autonomy
✅ Long-term, meaningful skill development demands going beyond immediate performance metrics
🔖 Recommendations for future research and practice:
1️⃣ Shift toward assessing retention, transfer, and deep cognitive processing
2️⃣ Promote active learner engagement, critical evaluation, and metacognitive reflection
3️⃣ Implement longitudinal studies exploring the relationship between generative AI assistance and prior learner knowledge
Special thanks 🙏 to my amazing collaborators and mentors, Samuel Greiff, Jason M. Lodge, and Dragan Gasevic, for their invaluable contributions, guidance, and encouragement. A big shout-out to Dr. Teresa Schubert for her insightful comments and wonderful support throughout the editorial process! 🌟
👉 Full article here: https://lnkd.in/g3YDQUrH
👉 Full-text Access (view-only version): https://rdcu.be/erwIt
#GenerativeAI #ChatGPT #AIinEducation #LearningScience #Metacognition #Cognition #EdTech #EducationalResearch #BJETspecialIssue #NatureReviewsPsychology #FutureOfEducation #OpenScience
In a now viral study, researchers examined how using ChatGPT for essay writing affects our brains and cognitive abilities.
The researchers divided participants into three groups: one using ChatGPT, one using search engines, and one using just their brains. Through EEG monitoring, interviews, and analysis of the essays, they discovered some unsurprising results about how AI use impacts learning and cognitive engagement.
There were five key takeaways for me (although this is not an exhaustive list), within the context of this particular study:
1. The Cognitive Debt Issue
The study indicates that participants who used ChatGPT exhibited the weakest neural connectivity patterns when compared to those relying on search engines or unaided cognition. This suggests that defaulting to generative AI may function as an intellectual shortcut, diminishing rather than strengthening cognitive engagement.
Researchers are increasingly describing the tradeoff between short-term ease and productivity and the long-term erosion of independent thinking and critical skills as "cognitive debt". This parallels the concept of technical debt, where developers prioritise quick solutions over robust design, leading to hidden costs, inefficiencies, and increased complexity downstream.
2. The Memory Problem
Strikingly, users of ChatGPT had difficulty recalling or quoting from essays they had composed only minutes earlier. This undermines the notion of augmentation; rather than supporting cognitive function, the tool appears to offload essential processes, impairing retention and deep processing of information.
3. The Ownership Gap
Participants who used ChatGPT reported a reduced sense of ownership over their work. If we normalise over-reliance on AI tools, we risk cultivating passive knowledge consumers rather than active knowledge creators.
4. The Homogenisation Effect
Analysis showed that essays from the LLM group were highly uniform, with repeated phrases and limited variation, suggesting reduced cognitive and expressive diversity. In contrast, the Brain-only group produced more varied and original responses. The Search group fell in between.
5. The Potential for Constructive Re-engagement 🧠 🤖 🤖 🤖
There is, however, promising evidence for meaningful integration of AI when used in conjunction with prior unaided effort:
“Those who had previously written without tools (Brain-only group), the so-called Brain-to-LLM group, exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic. This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control.”
This points to the potential for AI to enhance cognitive function when it is used as a complement to, rather than a substitute for, initial human effort.
At over 200 pages, expect multiple paper submissions out of this extensive body of work.
https://lnkd.in/gzicDHp2
𝐍𝐨, 𝐲𝐨𝐮𝐫 𝐛𝐫𝐚𝐢𝐧 𝐝𝐨𝐞𝐬 𝐧𝐨𝐭 𝐩𝐞𝐫𝐟𝐨𝐫𝐦 𝐛𝐞𝐭𝐭𝐞𝐫 𝐚𝐟𝐭𝐞𝐫 𝐋𝐋𝐌 𝐨𝐫 𝐝𝐮𝐫𝐢𝐧𝐠 𝐋𝐋𝐌 𝐮𝐬𝐞.
See our paper for more results: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (link in the comments).
For 4 months, 54 students were divided into three groups: ChatGPT, Search Engine (Google, with AI answers excluded), and Brain-only. Across 3 sessions, each wrote essays on SAT prompts. In an optional 4th session, participants switched: LLM users used no tools (LLM-to-Brain), and the Brain-only group used ChatGPT (Brain-to-LLM).
👇
𝐈. 𝐍𝐋𝐏 𝐚𝐧𝐝 𝐄𝐬𝐬𝐚𝐲 𝐂𝐨𝐧𝐭𝐞𝐧𝐭
- LLM Group: Essays were highly homogeneous within each topic, showing little variation. Participants often relied on the same expressions or ideas.
- Brain-only Group: Diverse and varied approaches across participants and topics.
- Search Engine Group: Essays were shaped by search engine-optimized content; their ontology overlapped with the LLM group but not with the Brain-only group.
𝐈𝐈. 𝐄𝐬𝐬𝐚𝐲 𝐒𝐜𝐨𝐫𝐢𝐧𝐠 (𝐓𝐞𝐚𝐜𝐡𝐞𝐫𝐬 𝐯𝐬. 𝐀𝐈 𝐉𝐮𝐝𝐠𝐞)
- Teachers detected patterns typical of AI-generated content and scored LLM essays lower for originality and structure.
- AI Judge gave consistently higher scores to LLM essays, missing human-recognized stylistic traits.
𝐈𝐈𝐈: 𝐄𝐄𝐆 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬
Connectivity: Brain-only group showed the highest neural connectivity, especially in alpha, theta, and delta bands. LLM users had the weakest connectivity, up to 55% lower in low-frequency networks. Search Engine group showed high visual cortex engagement, aligned with web-based information gathering.
𝑺𝒆𝒔𝒔𝒊𝒐𝒏 4 𝑹𝒆𝒔𝒖𝒍𝒕𝒔:
- LLM-to-Brain (🤖🤖🤖🧠) participants underperformed cognitively with reduced alpha/beta activity and poor content recall.
- Brain-to-LLM (🧠🧠🧠🤖) participants showed strong re-engagement, better memory recall, and efficient tool use.
LLM-to-Brain participants had potential limitations in achieving robust neural synchronization essential for complex cognitive tasks.
Results for Brain-to-LLM participants suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration.
𝐈𝐕. 𝐁𝐞𝐡𝐚𝐯𝐢𝐨𝐫𝐚𝐥 𝐚𝐧𝐝 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐄𝐧𝐠𝐚𝐠𝐞𝐦𝐞𝐧𝐭
- Quoting Ability: LLM users failed to quote accurately, while Brain-only participants showed robust recall and quoting skills.
- Ownership: Brain-only group claimed full ownership of their work; LLM users expressed either no ownership or partial ownership.
- Critical Thinking: Brain-only participants cared more about 𝘸𝘩𝘢𝘵 and 𝘸𝘩𝘺 they wrote; LLM users focused on 𝘩𝘰𝘸.
- Cognitive Debt: Repeated LLM use led to shallow content repetition and reduced critical engagement. This suggests a buildup of "cognitive debt", deferring mental effort at the cost of long-term cognitive depth.
Support and share! ❤️
#MIT #AI #Brain #Neuroscience #CognitiveDebt
You can now choose the right model for your CustomGPT.
Finally!
CustomGPTs are, for me, the best feature in ChatGPT, and they have been badly neglected over the past 12 months.
With model selection, the first good upgrade has now arrived.
Mini-guide to model selection:
o3 -> complex problems and data analysis
4.5 -> creative tasks and copywriting
4o -> image processing
4.1 -> coding
In my opinion, you don't need any of the other models.
My strategy advisor, for example, gets o3 (better planning ability for complex tasks), whereas the hook writer gets GPT-4.5 (better writing style).
If you want to use the CustomGPTs yourself:
80+ templates freely available in our assistant database 👇
P.S. What do you think of the update?
BOOM! Microsoft just dropped a FREE 18-episode series on Generative AI.
Ideal for people who are new to AI & wanna start learning.
Here are 5 episodes that stood out
𝗜𝘁 𝘄𝗶𝗹𝗹 𝘁𝗮𝗸𝗲 𝘆𝗼𝘂 𝗹𝗲𝘀𝘀 𝘁𝗵𝗮𝗻 𝟭.𝟱 𝗵𝗼𝘂𝗿𝘀 𝘁𝗼 𝘄𝗮𝘁𝗰𝗵 𝗮𝗹𝗹 𝘁𝗵𝗲𝘀𝗲:
👉 Introduction to Generative AI and LLMs
https://lnkd.in/dxds5CXY
👉 Exploring and Comparing Different LLMs
https://lnkd.in/dnu5sP68
👉 Understanding Prompt Engineering Fundamentals
https://lnkd.in/d8t56acG
👉 Building Low-Code AI Applications
https://lnkd.in/dKVXmdeK
👉 AI Agents – Introduces AI Agents, where LLMs can take actions via tools or frameworks.
https://lnkd.in/d8VKw7Ve
More resources are in the comments.
Repost this post to help others in your network.
99% 𝗼𝗳 𝗽𝗲𝗼𝗽𝗹𝗲 𝗴𝗲𝘁 𝘁𝗵𝗶𝘀 𝘄𝗿𝗼𝗻𝗴: 𝗧𝗵𝗲𝘆 𝘂𝘀𝗲 𝘁𝗵𝗲 𝘁𝗲𝗿𝗺𝘀 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗮𝗻𝗱 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜 𝗶𝗻𝘁𝗲𝗿𝗰𝗵𝗮𝗻𝗴𝗲𝗮𝗯𝗹𝘆 — 𝗯𝘂𝘁 𝘁𝗵𝗲𝘆 𝗱𝗲𝘀𝗰𝗿𝗶𝗯𝗲 𝘁𝘄𝗼 𝗳𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹𝗹𝘆 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲𝘀! ⬇️
Let’s clarify it once and for all: ⬇️
1. 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀: 𝗧𝗼𝗼𝗹𝘀 𝘄𝗶𝘁𝗵 𝗔𝘂𝘁𝗼𝗻𝗼𝗺𝘆, 𝗪𝗶𝘁𝗵𝗶𝗻 𝗟𝗶𝗺𝗶𝘁𝘀
➜ AI agents are modular, goal-directed systems that operate within clearly defined boundaries. They’re built to:
* Use tools (APIs, browsers, databases)
* Execute specific, task-oriented workflows
* React to prompts or real-time inputs
* Plan short sequences and return actionable outputs
𝘛𝘩𝘦𝘺’𝘳𝘦 𝘦𝘹𝘤𝘦𝘭𝘭𝘦𝘯𝘵 𝘧𝘰𝘳 𝘵𝘢𝘳𝘨𝘦𝘵𝘦𝘥 𝘢𝘶𝘵𝘰𝘮𝘢𝘵𝘪𝘰𝘯, 𝘭𝘪𝘬𝘦: 𝘊𝘶𝘴𝘵𝘰𝘮𝘦𝘳 𝘴𝘶𝘱𝘱𝘰𝘳𝘵 𝘣𝘰𝘵𝘴, 𝘐𝘯𝘵𝘦𝘳𝘯𝘢𝘭 𝘬𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦 𝘴𝘦𝘢𝘳𝘤𝘩, 𝘌𝘮𝘢𝘪𝘭 𝘵𝘳𝘪𝘢𝘨𝘦, 𝘔𝘦𝘦𝘵𝘪𝘯𝘨 𝘴𝘤𝘩𝘦𝘥𝘶𝘭𝘪𝘯𝘨, 𝘊𝘰𝘥𝘦 𝘴𝘶𝘨𝘨𝘦𝘴𝘵𝘪𝘰𝘯𝘴
But even the most advanced are limited by scope. They don’t initiate. They don’t collaborate. They execute what we ask!
2. 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜: 𝗔 𝗦𝘆𝘀𝘁𝗲𝗺 𝗼𝗳 𝗦𝘆𝘀𝘁𝗲𝗺𝘀
➜ Agentic AI is an architectural leap. It’s not just one smarter agent — it’s multiple specialized agents working together toward shared goals. These systems exhibit:
* Multi-agent collaboration
* Goal decomposition and role assignment
* Inter-agent communication via memory or messaging
* Persistent context across time and tasks
* Recursive planning and error recovery
* Distributed orchestration and adaptive feedback
Agentic AI systems don’t just follow instructions. They coordinate. They adapt. They manage complexity.
𝘌𝘹𝘢𝘮𝘱𝘭𝘦𝘴 𝘪𝘯𝘤𝘭𝘶𝘥𝘦: 𝘳𝘦𝘴𝘦𝘢𝘳𝘤𝘩 𝘵𝘦𝘢𝘮𝘴 𝘱𝘰𝘸𝘦𝘳𝘦𝘥 𝘣𝘺 𝘢𝘨𝘦𝘯𝘵𝘴, 𝘴𝘮𝘢𝘳𝘵 𝘩𝘰𝘮𝘦 𝘦𝘤𝘰𝘴𝘺𝘴𝘵𝘦𝘮𝘴 𝘰𝘱𝘵𝘪𝘮𝘪𝘻𝘪𝘯𝘨 𝘦𝘯𝘦𝘳𝘨𝘺/𝘴𝘦𝘤𝘶𝘳𝘪𝘵𝘺, 𝘴𝘸𝘢𝘳𝘮𝘴 𝘰𝘧 𝘳𝘰𝘣𝘰𝘵𝘴 𝘪𝘯 𝘭𝘰𝘨𝘪𝘴𝘵𝘪𝘤𝘴 𝘰𝘳 𝘢𝘨𝘳𝘪𝘤𝘶𝘭𝘵𝘶𝘳𝘦 𝘮𝘢𝘯𝘢𝘨𝘪𝘯𝘨 𝘳𝘦𝘢𝘭-𝘵𝘪𝘮𝘦 𝘶𝘯𝘤𝘦𝘳𝘵𝘢𝘪𝘯𝘵𝘺
𝗧𝗵𝗲 𝗖𝗼𝗿𝗲 𝗗𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲?
AI Agents = autonomous tools for single-task execution
Agentic AI = orchestrated ecosystems for workflow-level intelligence
𝗡𝗼𝘄 𝗹𝗼𝗼𝗸 𝗮𝘁 𝘁𝗵𝗲 𝗽𝗶𝗰𝘁𝘂𝗿𝗲: ⬇️
𝗢𝗻 𝘁𝗵𝗲 𝗹𝗲𝗳𝘁: a smart thermostat, which can be an AI Agent. It keeps your room at 21°C. Maybe it learns your schedule. But it’s working alone.
𝗢𝗻 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁: Agentic AI. A full smart home ecosystem — weather-aware, energy-optimized, schedule-sensitive. Agents talk to each other. They share data. They make coordinated decisions to optimize your comfort, cost, and security in real time.
That’s the shift: from pure task automation to goal-driven orchestration, from single-agent logic to collaborative intelligence. This is what’s coming. This is Agentic AI. And if we confuse “agent” with “agentic,” we risk underbuilding for what AI is truly capable of.
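To make the contrast concrete, here is a minimal sketch in Python, assuming toy logic and hypothetical names throughout: a lone thermostat agent with one fixed goal, versus an agentic setup where shared context from other agents (schedule, energy price) reshapes that goal.

```python
# Illustrative sketch only: a single AI Agent vs. an agentic system.

def thermostat(room_temp: float, target: float = 21.0) -> str:
    # Single agent: one task, no awareness of anything else.
    return "heat" if room_temp < target else "idle"

def smart_home(room_temp: float, context: dict) -> dict:
    # Agentic system: agents share context and adjust each other's goals.
    target = 21.0
    if context["away"]:
        target = 17.0            # schedule agent lowers the goal
    if context["price_per_kwh"] > 0.40:
        target -= 1.0            # energy agent trades comfort for cost
    return {"heating": thermostat(room_temp, target), "target": target}

print(thermostat(19.5))                                        # 'heat'
print(smart_home(19.5, {"away": True, "price_per_kwh": 0.45}))  # coordinated decision
```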
The Cornell University paper in the comments on this topic is excellent! ⬇️
In fact, after guiding many organisations on this journey over the past few years, I've noticed two consistent drivers of AI adoption:
• A culture that encourages experimentation
• A strategic mandate from leadership that unlocks time, resources, and the infrastructure needed to make AI work at scale
Without both, even the most powerful tools are used at a fraction of their potential, leaving the promise of AI unrealised and considerable investments wasted.
➡️ If you have a conservative organisational culture, one that disincentivises taking risks and change, and there's no clear mandate to use AI, you'll have 𝗶𝗱𝗹𝗲 𝗽𝗼𝘁𝗲𝗻𝘁𝗶𝗮𝗹. Try as you might, AI training will hardly translate into people using AI in their work. The knowledge might be there but the impact isn't.
➡️ If you have an innovation culture, one where experimentation is encouraged, but where people are unsure if they're allowed to use AI, you'll have 𝗰𝗮𝘀𝘂𝗮𝗹 𝗲𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁𝘀 - some people tinkering on their own, finding useful use cases and workarounds, but with no way to accumulate, build on, and spread this knowledge. That's where a lot of organisations find themselves in 2025 - the majority of employees are using AI in some form, yet their efforts are siloed and scattered.
➡️ If you have both an innovation culture *and* an active mandate, you're 𝗽𝗶𝗼𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗶𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻, and few companies are at your level yet. That's an exciting place to be! That's also where a lot of organisations imagine they would get to as soon as they teach people to use AI, often without first doing the culture and mandate work.
➡️ If your organisation encourages the use of AI but your conservative culture keeps hitting the brakes, you'll likely end up with 𝗿𝗲𝗹𝘂𝗰𝘁𝗮𝗻𝘁 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲. That's also where a considerable number of organisations are right now: driven by expectations of benefits from AI adoption but burdened by processes that are incompatible with grassroots innovation.
There is a difference between individual and organisational AI adoption. Organisational adoption is frustratingly complex — it requires coordination across departments and leaders, alignment with business priorities, and systems that enable change, not just enthusiasm.
Curiosity gets people started. Supportive systems turn momentum into scale.
Nodes #GenAI #AIAdoption #FutureOfWork #Talent
Saw this move from Google this morning—thanks to Marc Steven Ramos (a very fine creator and curator of thought-provoking content). This statement towards the end stood out for me: many of the platform’s “courses were unused,” and “not relevant to the work we do today.”
But Google is not representative of most companies. Not even of tech companies. They can (and should!) be AI-first in every respect—yesterday. Virtually all other companies will take a slower approach, maintaining their learning content and systems, for now.
So don’t think you need to drop everything immediately. Instead, work out what a more measured approach looks like for your organisation. Think about how you’re preparing your data, metadata, internal and external content—and your people—for this not-so-distant future when agents are doing more and more of the work, multiplying productivity. Help your company lead the way—don’t await instructions!
That said, I think most three-year horizons will include the other big pull quote from this piece:
Google will “focus on teaching employees how to use modern artificial intelligence tools in their daily work routines.”
That, I believe, is where the most worthy—and therefore sustainable—L&D efforts lie: not in creating courses and force-feeding them to people, but in enabling people to work better with AI.
♻️ Please REPOST if people you’re connected to may like this.
➕ Follow Marc Zao-Sanders for more of this kind of thing.
#AI #learning #filtered.com #acelo.ai
https://lnkd.in/ehA2pB_R
ps: I'm working fractionally for both acelo.ai (sales x AI) and filtered.com (learning content x AI). If you're interested in talking about either, DM me.
For the longest time we've had two main options to help people perform: upskilling or performance support. Just-in-case vs just-in-time. Push vs pull. With AI, we now have a third - enablement.
It's different from what we've had before:
𝐔𝐩𝐬𝐤𝐢𝐥𝐥𝐢𝐧𝐠 ("teach me") - commonly done through hands-on learning with feedback and reflection, such as scenario simulations, in-person role-plays, facilitated discussions, building and problem-solving. None of that has become less relevant, but AI has enabled scale through AI-enabled role-plays, coaching, and other avenues for personalised feedback.
𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 𝐬𝐮𝐩𝐩𝐨𝐫𝐭 ("help me") - support in the flow of work, previously often in the format of short how-to resources located in convenient places. AI has elevated that in at least two ways: through knowledge management, which helps retrieve the necessary, contextualised information in the workflow; and general & specialised copilots that enhance the speed and, arguably, the expertise of the employee.
Yet, 𝐞𝐧𝐚𝐛𝐥𝐞𝐦𝐞𝐧𝐭 (‘do it for me’) is different – it takes the task off your plate entirely. We’ve seen hints of it with automations, but the text and analysis capabilities of genAI mean that increasingly 'skilled' tasks are now up for grabs.
Case in point: where written communication was once a skill to be learned, email and report writing are now increasingly being handed off to AI. No skill required (for better or worse) – AI does it for you.
But here's a plot twist: a lot of that enablement happens outside of L&D tech. It may happen in sales or design software, or even your general-purpose enterprise AI.
All of which points to a bigger shift: roles, tasks, and ways of working are changing – and L&D must tune into how work is being reimagined to adapt alongside it.
Nodes #GenAI #Learning #Talent #FutureOfWork #AIAdoption
The Alan Turing Institute 𝗮𝗻𝗱 the LEGO Group 𝗱𝗿𝗼𝗽𝗽𝗲𝗱 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗰𝗵𝗶𝗹𝗱-𝗰𝗲𝗻𝘁𝗿𝗶𝗰 𝗔𝗜 𝘀𝘁𝘂𝗱𝘆! ⬇️
(𝘈 𝘮𝘶𝘴𝘵-𝘳𝘦𝘢𝘥 — 𝘦𝘴𝘱𝘦𝘤𝘪𝘢𝘭𝘭𝘺 𝘪𝘧 𝘺𝘰𝘶 𝘩𝘢𝘷𝘦 𝘤𝘩𝘪𝘭𝘥𝘳𝘦𝘯.)
While most AI debates and studies focus on models, chips, and jobs — this one zooms in on something far more personal: 𝗪𝗵𝗮𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝘀 𝘄𝗵𝗲𝗻 𝗰𝗵𝗶𝗹𝗱𝗿𝗲𝗻 𝗴𝗿𝗼𝘄 𝘂𝗽 𝘄𝗶𝘁𝗵 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜?
They surveyed 1,700+ kids, parents, and teachers across the UK — and what they found is both powerful and concerning.
𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 9 𝘁𝗵𝗶𝗻𝗴𝘀 𝘁𝗵𝗮𝘁 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁 𝘁𝗼 𝗺𝗲 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗿𝗲𝗽𝗼𝗿𝘁: ⬇️
1. 1 𝗶𝗻 4 𝗸𝗶𝗱𝘀 (8–12 𝘆𝗿𝘀) 𝗮𝗹𝗿𝗲𝗮𝗱𝘆 𝘂𝘀𝗲 𝗚𝗲𝗻𝗔𝗜 — 𝗺𝗼𝘀𝘁 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝘀𝗮𝗳𝗲𝗴𝘂𝗮𝗿𝗱𝘀
→ ChatGPT, Gemini, and even MyAI on Snapchat are now part of daily digital play.
2. 𝗔𝗜 𝗶𝘀 𝗵𝗲𝗹𝗽𝗶𝗻𝗴 𝗸𝗶𝗱𝘀 𝗲𝘅𝗽𝗿𝗲𝘀𝘀 𝘁𝗵𝗲𝗺𝘀𝗲𝗹𝘃𝗲𝘀 — 𝗲𝘀𝗽𝗲𝗰𝗶𝗮𝗹𝗹𝘆 𝘁𝗵𝗼𝘀𝗲 𝘄𝗶𝘁𝗵 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗻𝗲𝗲𝗱𝘀
→ 78% of neurodiverse kids use ChatGPT to communicate ideas they struggle to express otherwise.
3. 𝗖𝗿𝗲𝗮𝘁𝗶𝘃𝗶𝘁𝘆 𝗶𝘀 𝘀𝗵𝗶𝗳𝘁𝗶𝗻𝗴 — 𝗯𝘂𝘁 𝗻𝗼𝘁 𝗿𝗲𝗽𝗹𝗮𝗰𝗶𝗻𝗴
→ Kids still prefer offline tools (arts, crafts, games), even when they enjoy AI-assisted play. Digital is not (yet) the default.
4. 𝗔𝗜 𝗮𝗰𝗰𝗲𝘀𝘀 𝗶𝘀 𝗵𝗶𝗴𝗵𝗹𝘆 𝘂𝗻𝗲𝗾𝘂𝗮𝗹
→ 52% of private school students use GenAI, compared to only 18% in state schools. The next digital divide is already here.
5. 𝗖𝗵𝗶𝗹𝗱𝗿𝗲𝗻 𝗮𝗿𝗲 𝘄𝗼𝗿𝗿𝗶𝗲𝗱 𝗮𝗯𝗼𝘂𝘁 𝗔𝗜’𝘀 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝗮𝗹 𝗶𝗺𝗽𝗮𝗰𝘁
→ Some kids refused to use GenAI after learning about water and energy costs. Let that sink in.
6. 𝗣𝗮𝗿𝗲𝗻𝘁𝘀 𝗮𝗿𝗲 𝗼𝗽𝘁𝗶𝗺𝗶𝘀𝘁𝗶𝗰 — 𝗯𝘂𝘁 𝗱𝗲𝗲𝗽𝗹𝘆 𝘄𝗼𝗿𝗿𝗶𝗲𝗱
→ 76% support AI use, but 82% are scared of inappropriate content and misinformation. Only 41% fear cheating.
7. 𝗧𝗲𝗮𝗰𝗵𝗲𝗿𝘀 𝗮𝗿𝗲 𝘂𝘀𝗶𝗻𝗴 𝗔𝗜 — 𝗮𝗻𝗱 𝗹𝗼𝘃𝗶𝗻𝗴 𝗶𝘁
→ 85% say GenAI boosts their productivity, 88% feel confident using it. They’re ahead of the curve.
8. 𝗖𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗶𝘀 𝘂𝗻𝗱𝗲𝗿 𝘁𝗵𝗿𝗲𝗮𝘁
→ 76% of parents and 72% of teachers fear kids are becoming too trusting of GenAI outputs.
9. 𝗕𝗶𝗮𝘀 𝗮𝗻𝗱 𝗶𝗱𝗲𝗻𝘁𝗶𝘁𝘆 𝗿𝗲𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗶𝘀 𝘀𝘁𝗶𝗹𝗹 𝗮 𝗯𝗹𝗶𝗻𝗱𝘀𝗽𝗼𝘁
→ Children of color felt less seen and less motivated to use tools that didn’t reflect them. Representation matters.
The next generation isn’t just using AI. They’re being shaped by it. That’s why we need a more focused, intentional approach: Teaching them not just how to use these tools — but how to question them. To navigate the benefits, the risks, and the blindspots.
𝗪𝗮𝗻𝘁 𝗺𝗼𝗿𝗲 𝗯𝗿𝗲𝗮𝗸𝗱𝗼𝘄𝗻𝘀 𝗹𝗶𝗸𝗲 𝘁𝗵𝗶𝘀?
Subscribe to Human in the Loop — my new weekly deep dive on AI agents, real-world tools, and strategic insights: https://lnkd.in/dbf74Y9E
Understanding LLMs, RAG, AI Agents, and Agentic AI
I frequently see conversations where terms like LLMs, RAG, AI Agents, and Agentic AI are used interchangeably, even though they represent fundamentally different layers of capability.
This visual guide explains how these four layers relate—not as competing technologies, but as an evolving intelligence architecture.
Here’s a deeper look:
1. 𝗟𝗟𝗠 (𝗟𝗮𝗿𝗴𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹)
This is the foundation. Models like GPT, Claude, and Gemini are trained on vast corpora of text to perform a wide array of tasks:
– Text generation
– Instruction following
– Chain-of-thought reasoning
– Few-shot/zero-shot learning
– Embedding and token generation
However, LLMs are inherently limited to the knowledge encoded during training and struggle with grounding, real-time updates, or long-term memory.
2. 𝗥𝗔𝗚 (𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹-𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻)
RAG bridges the gap between static model knowledge and dynamic external information.
By integrating techniques such as:
– Vector search
– Embedding-based similarity scoring
– Document chunking
– Hybrid retrieval (dense + sparse)
– Source attribution
– Context injection
…RAG enhances the quality and factuality of responses. It enables models to “recall” information they were never trained on, and grounds answers in external sources—critical for enterprise-grade applications.
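To ground this, here is a minimal sketch of that retrieve-then-inject loop, assuming a toy corpus and a stand-in embedding function (a real system would use an embedding model and a vector store):

```python
import numpy as np

# Toy corpus standing in for an enterprise knowledge base.
documents = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday to Friday, 9am-5pm.",
    "Premium plans include priority onboarding.",
]

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: hashes characters into a unit vector.
    # In practice, this would be a real embedding model.
    v = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        v[(i + ord(ch)) % 64] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

doc_vecs = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Embedding-based similarity scoring: dot product of unit vectors.
    sims = doc_vecs @ embed(query)
    return [documents[i] for i in np.argsort(sims)[-k:]]

def build_prompt(query: str) -> str:
    # Context injection: ground the model in text it was never trained on.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"

# The resulting prompt is what gets sent to the LLM.
print(build_prompt("When can I return a product?"))
```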
3. 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁
RAG is still a passive architecture—it retrieves and generates. AI Agents go a step further: they act.
Agents perform tasks, execute code, call APIs, manage state, and iterate via feedback loops. They introduce key capabilities such as:
– Planning and task decomposition
– Execution pipelines
– Long- and short-term memory integration
– File access and API interaction
– Use of frameworks like ReAct, LangChain Agents, AutoGen, and CrewAI
This is where LLMs become active participants in workflows rather than just passive responders.
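As an illustration of that act-and-iterate loop (not any specific framework's API), here is a toy agent with one tool and a scripted planner standing in for the LLM:

```python
# Minimal agent loop sketch: plan -> act (call a tool) -> observe -> repeat.

def calculator(expression: str) -> str:
    # Tool: the one action this toy agent can take.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def plan(goal: str, observations: list[str]) -> tuple[str, str] | None:
    # Stand-in for the LLM planner: decide the next tool call, or stop.
    # A real agent would prompt the model in a ReAct-style loop here.
    if not observations:
        return ("calculator", "19 * 21")
    return None  # goal satisfied, stop

def run_agent(goal: str) -> list[str]:
    observations: list[str] = []
    while (step := plan(goal, observations)) is not None:
        tool, arg = step
        result = TOOLS[tool](arg)    # act
        observations.append(result)  # observe, feed back into planning
    return observations

print(run_agent("What is 19 * 21?"))  # ['399']
```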
4. 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜
This is the most advanced layer—where we go beyond a single autonomous agent to multi-agent systems with role-specific behavior, memory sharing, and inter-agent communication.
Core concepts include:
– Multi-agent collaboration and task delegation
– Modular role assignment and hierarchy
– Goal-directed planning and lifecycle management
– Protocols like MCP (Anthropic’s Model Context Protocol) and A2A (Google’s Agent-to-Agent)
– Long-term memory synchronization and feedback-based evolution
Agentic AI is what enables truly autonomous, adaptive, and collaborative intelligence across distributed systems.
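A compact sketch of that pattern, with hypothetical roles and a plain dict standing in for shared memory (the point is the orchestration shape, not the implementation):

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    # Inter-agent communication channel: agents read and write notes here.
    notes: dict = field(default_factory=dict)

def researcher(task: str, memory: SharedMemory) -> None:
    # Role agent: gathers raw material for the task.
    memory.notes["research"] = f"facts collected for: {task}"

def writer(task: str, memory: SharedMemory) -> None:
    # Role agent: consumes another agent's output via shared memory.
    memory.notes["draft"] = f"summary of ({memory.notes['research']})"

def orchestrator(goal: str) -> str:
    memory = SharedMemory()
    # Goal decomposition and role assignment, executed in order.
    for agent in (researcher, writer):
        agent(goal, memory)
    return memory.notes["draft"]

print(orchestrator("state of GenAI adoption in L&D"))
```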
Whether you’re building enterprise copilots, AI-powered ETL systems, or autonomous task orchestration tools, knowing what each layer offers—and where it falls short—will determine whether your AI system scales or breaks.
If you found this helpful, share it with your team or network.
If there’s something important you think I missed, feel free to comment or message me—I’d be happy to include it in the next iteration.
BREAKING: Anthropic launches Claude for Education. Free learning is now much faster with AI:
1. Set clear learning goals
↳ Knowing what you want to learn makes it easier.
↳ Claude helps you define your path.
2. Provide context for your knowledge
↳ Understanding the bigger picture is key.
↳ Claude connects new ideas to what you already know.
3. Request detailed explanations
↳ Sometimes, you need more than a quick answer.
↳ Claude can dive deep into complex topics.
4. Get real-world examples
↳ Learning is better with practical applications.
↳ Claude shows how concepts work in the real world.
5. Practice writing and receive feedback
↳ Writing helps solidify your knowledge.
↳ Claude gives instant feedback to improve your skills.
6. Role-play for languages or coding
↳ Learning by doing is effective.
↳ Claude can simulate conversations or coding scenarios.
7. Fact-check surprising claims
↳ Misinformation is everywhere.
↳ Claude helps you verify facts and claims.
8. Take breaks and reflect on learning
↳ Reflection is vital for understanding.
↳ Claude reminds you to pause and think.
9. Keep a learning journal
↳ Tracking your progress is important.
↳ Claude can help you log your journey.
10. Iterate and refine understanding
↳ Learning is a process.
↳ Claude encourages you to improve your knowledge.
New research shows that your learners aren’t using AI to cheat - they’re using it to redesign your courses...
Despite our obsession with AI's impact on "academic integrity," two recent analyses show that rather than asking AI for answers, learners are much more likely to use AI to redesign the learning experience in an attempt to learn more.
Common strategies include asking AI to apply the protégé effect, using AI to apply the Pareto principle and enhancing levels of emotional metacognition within a learning experience, in the process redesigning the experience sometimes beyond recognition.
The uncomfortable truth? Learners are effectively running a real-time audit of our design decisions, processes & practices—and as instructional designers, we don't come out too well.
In this week's blog post, I explore what learner + AI behaviour reveals about our profession and how we might turn this into an opportunity for innovation in instructional design practices and principles.
Check out the full post using the link in comments.
Happy innovating!
Phil 👋
An Introduction to Training from the Back of the Room (TBR), where we use brain-based accelerated learning to create engaging, fun learning experiences...
Your required Sunday reading: Mary Meeker has dropped one of her legendary reports... After the annual "Internet Trends", her investment firm Bond Capital has now published a 340-page tome entirely on AI.
Superb food for thought on, among other things:
1. User growth and adoption
• ChatGPT reached 800 million weekly users in just 17 months
• Adoption outside North America stands at 90 percent after only 3 years
• For comparison: the internet took 23 years to get there
• AI applications scale globally almost simultaneously
2. Investment and infrastructure
• Big Tech (Apple, Microsoft, Google, Amazon, Meta, Nvidia) invests over 212 billion dollars in CapEx per year
• AI is becoming the new infrastructure, comparable to electricity or the internet
• Data centers are turning into productive "AI factories"
3. Developer ecosystems are exploding
• Google Gemini: 7 million active developers, +500 percent in 12 months
• NVIDIA ecosystem: 6 million developers, 6x growth in seven years
• Open source is increasingly playing a key role, including in China
4. Technological progress is accelerating exponentially
• 260 percent growth per year in training data volumes
• 360 percent growth per year in compute for model training
• Better algorithms yield 200 percent efficiency gains per year
• Advances in supercomputers enable +150 percent performance growth annually
5. Monetisation is real, but expensive
• OpenAI shows strong user growth but continues to post billions in losses
• Compute costs are rising, while inference costs per token are falling
• Monetising at scale remains challenging and competitive
6. Work and society are changing visibly
• AI-related IT jobs in the US: +448 percent since 2018
• Non-AI IT jobs: -9 percent
• The first autonomous taxis are taking market share in cities like San Francisco
• AI scribes in medicine are massively reducing administrative workload
7. Knowledge and communication are entering a new age
• After the printing press and the internet comes the era of generative knowledge distribution
• Generative AI is changing how we create, distribute, and use knowledge
• Applications like ElevenLabs or Spotify translate voices in real time, at global scale
8. Geopolitics is becoming AI strategy
• The US and China are investing aggressively in sovereign AI models
• Whoever dominates AI infrastructure redefines economic and political power
• Leading CTOs openly speak of a new "space race"
9. The opportunities and risks are enormous
• AI can boost medical research, education, and creativity
• At the same time, loss of control, misuse, job displacement, and ethical dilemmas loom
Thoughts? Evangelos Papathanassiou Christian Herold Thorsten Muehl Christoph Deutschmann Constance Stein Rebecca Schalber Sandy Brueckner Dirk Hofmann Henning Tomforde Dr. Paul Elvers Katharina Neubert Laura Seiffe Ekaterina Schneider
re:publica 25: Bob Blume - "404: Bildung not found" - How learning can move us again
School is supposed to prepare us for life. Some call for more knowledge about taxes and finances; AI delivers...
Corporate learning has traditionally started from the following premises:
• Knowledge and qualification goals are the same for everyone and set out in a curriculum.
• These goals are "delivered" in externally organised teaching arrangements.
• Building the ability to act in practice (competencies) is secured through transfer tasks.
We are currently experiencing a paradigm shift, reinforced by artificial intelligence, that turns this corporate didactics on its head:
• Formal, curriculum-based offerings are gradually being replaced by "flipped curricula" (cf. Sabine Seufert 2024). In this model, values and competencies (soft skills) become the goals of corporate learning. Knowledge and qualification remain necessary, of course, but they are no longer the goal of learning; they are its prerequisite. The required knowledge can then also be provided, for example, curated by AI.
• The most important place of learning becomes the work process, because values and competencies can only be built in a self-organised way while mastering real challenges.
This results in the following planning rhythm.
1. The starting point is the question of which practical challenge the targeted soft skills can be built in. In coordination with their managers, employees agree personalised learning paths in the work process, based on their skills diagnostics.
2. The second step is to clarify what support the employees' self-organised learning processes need. Social learning plays a central role here. These processes are accompanied by advice and guidance from learning facilitators and experts.
3. Only in the third step are these learning measures supplemented with training where needed. Methods training, e.g. on SCRUM, is a good fit if the selected practical tasks are to follow agile principles.
4. Supporting continuing-education measures can build basic knowledge and foundational qualifications or provide impulses for the self-organised learning processes.
Learning takes place in practice from the very beginning, as working and learning grow together. This largely does away with the need for dedicated learning-transfer concepts.
Responsibility for learning thus shifts to the employees, who are supported in this by people development and their managers.
-- Task criticality and risk are central considerations in performance support design. When there's high consequence for error (safety risks, costly damage, or life-or-death stakes) the skill guide design needs to be highly intentional, context-aware, and tightly integrated into the environment of use.
-- A skill guide can work well even in high-risk situations (we were in an airline context): a pre-flight checklist is a great example for trained pilots, supporting memory recall of the essential steps in a high-stakes task.
-- In a context such as de-icing a plane, a diagram-based skill guide is great to illustrate the basic controls of the machine. This helps build mental models.
-- In flight simulation training, skill guides can walk a novice through tasks like starting the engine, adjusting trim, or responding to a warning light. These guides scaffold learning and reduce cognitive load in a controlled environment.
-- Of course, skill guides can't always replace training. Real-time control of a plane requires deeply embodied skill: fine motor control, situational awareness, and rapid decision-making. You can't guide someone through that just in time with a single page or even a tablet-based tool.
-- In life-critical systems, there’s a threshold beyond which skill guides must give way to rigorous training, simulation, and certification. Performance support becomes a supplement, not a substitute in these contexts.
Bob and Con have had immeasurable impact on my career and perspective when it comes to human performance. I even asked Bob to write the foreword of my most recent book. Their 5 Moments of Need framework enables direct alignment to real-time needs of workers. The moments of need are:
1. New (When learning something for the first time)
2. More (When there's a need to deepen or expand knowledge or skills)
3. Apply (When performing a task or applying knowledge in real situations)
4. Solve (When encountering a problem or unexpected challenge)
5. Change (When adapting to change such as a new process, tool, or an organizational shift)
When learning is designed against these moments of need, job performance not only becomes more effective, but the worker gets more done quicker and with minimal disruption and frustration. By addressing these moments effectively, you can optimize learning outcomes and drive tangible results.
In 2025, the AI landscape has evolved far beyond just large language models. Knowing which model to use for your specific use case — and how — is becoming a strategic advantage.
Let’s break down the 8 most important model types and what they’re actually built to do: ⬇️
1. 𝗟𝗟𝗠 – 𝗟𝗮𝗿𝗴𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹
→ Your ChatGPT-style model.
Handles text, predicts the next token, and powers 90% of GenAI hype.
🛠 Use case: content, code, convos.
2. 𝗟𝗖𝗠 – 𝗟𝗮𝘁𝗲𝗻𝘁 𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆 𝗠𝗼𝗱𝗲𝗹
→ Lightweight, diffusion-style models.
Fast, quantized, and efficient — perfect for real-time or edge deployment.
🛠 Use case: image generation, optimized inference.
3. 𝗟𝗔𝗠 – 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗔𝗰𝘁𝗶𝗼𝗻 𝗠𝗼𝗱𝗲𝗹
→ Where LLM meets planning.
Adds memory, task breakdown, and intent recognition.
🛠 Use case: AI agents, tool use, step-by-step execution.
4. 𝗠𝗼𝗘 – 𝗠𝗶𝘅𝘁𝘂𝗿𝗲 𝗼𝗳 𝗘𝘅𝗽𝗲𝗿𝘁𝘀
→ One model, many minds.
Routes input to the right “expert” model slice — dynamic, scalable, efficient.
🛠 Use case: high-performance model serving at low compute cost (see the routing sketch below).
5. 𝗩𝗟𝗠 – 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹
→ Multimodal beast.
Combines image + text understanding via shared embeddings.
🛠 Use case: Gemini, GPT-4o, search, robotics, assistive tech.
6. 𝗦𝗟𝗠 – 𝗦𝗺𝗮𝗹𝗹 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹
→ Tiny but mighty.
Designed for edge use, fast inference, low latency, efficient memory.
🛠 Use case: on-device AI, chatbots, privacy-first GenAI.
7. 𝗠𝗟𝗠 – 𝗠𝗮𝘀𝗸𝗲𝗱 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹
→ The OG foundation model.
Predicts masked tokens using bidirectional context.
🛠 Use case: search, classification, embeddings, pretraining.
8. 𝗦𝗔𝗠 – 𝗦𝗲𝗴𝗺𝗲𝗻𝘁 𝗔𝗻𝘆𝘁𝗵𝗶𝗻𝗴 𝗠𝗼𝗱𝗲𝗹
→ Vision model for pixel-level understanding.
Highlights, segments, and understands *everything* in an image.
🛠 Use case: medical imaging, AR, robotics, visual agents.
Understanding these distinctions is essential for selecting the right model architecture for specific applications, enabling more effective, scalable, and contextually appropriate AI interactions.
While these are some of the most prominent specialized AI models, there are many more emerging across language, vision, speech, and robotics — each optimized for specific tasks and domains.
LLM, VLM, MoE, SLM, LCM → GenAI
LAM, MLM, SAM → Not classic GenAI, but critical building blocks for AI agents, reasoning, and multimodal systems
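Because MoE routing (item 4) is the least intuitive of these architectures, here is a minimal sketch of top-k gating with toy linear "experts"; all sizes and weights are illustrative:

```python
# Mixture-of-Experts sketch: a gating network scores each expert,
# and only the top-k experts actually run for a given input.
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, D_IN, D_OUT, TOP_K = 4, 8, 8, 2

# Toy "experts": plain linear layers with random weights.
experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]
# Gating network: one linear layer producing a score per expert.
gate_w = rng.normal(size=(D_IN, N_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x):
    scores = softmax(x @ gate_w)        # how relevant is each expert?
    top = np.argsort(scores)[-TOP_K:]   # route to the top-k experts only
    # Weighted sum of the selected experts' outputs; the rest never run,
    # which is where the compute savings come from.
    return sum(scores[i] * (x @ experts[i]) for i in top)

y = moe_forward(rng.normal(size=D_IN))
print(y.shape)  # (8,)
```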
𝗜 𝗲𝘅𝗽𝗹𝗼𝗿𝗲 𝘁𝗵𝗲𝘀𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁𝘀 — 𝗮𝗻𝗱 𝘄𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗺𝗲𝗮𝗻 𝗳𝗼𝗿 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀 — 𝗶𝗻 𝗺𝘆 𝘄𝗲𝗲𝗸𝗹𝘆 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿. 𝗬𝗼𝘂 𝗰𝗮𝗻 𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝗵𝗲𝗿𝗲 𝗳𝗼𝗿 𝗳𝗿𝗲𝗲: https://lnkd.in/dbf74Y9E
Kudos for the graphic goes to Generative AI!
Which interventions can effectively change human behaviour?
The meta-analysis by Albarracín et al. (2024) is a meta-analysis of meta-analyses: 147 meta-analyses were synthesised. Remarkable.
The paper is 106 pages long and a real tome. The authors examined, across many different domains (e.g. health and organisational behaviour), what can change people's behaviour.
As I read them, the results show:
- There is no intervention domain with strong effects. We have to stay humble. Influencing human behaviour remains hard. Perhaps that is a good thing.
- Both structural and individual interventions can succeed. Focusing only on "structure first" or only on "people first" leaves a lot of potential on the table.
- Supplying people with knowledge and hoping they will change their behaviour through insight achieves next to nothing.
- Sanctions likewise show negligible effects.
- Mindset interventions (beliefs) also have almost no behavioural effect on average.
- Providing access to resources that matter for the target behaviour shows medium-sized effects.
- Interventions that target habits are also effective, i.e. those that establish or change behavioural routines.
- At least small effects come from social support, social norms, behavioural training, and working on and with emotions.
These are of course "only" averages of averages, but the underlying body of studies is really impressive.
What could this mean for behaviour change in organisations?
Give people access to resources and support them through change. Try to work on habits, and train behaviour instead of knowledge.
Since many things work a little, you probably need many different approaches to achieve larger effects. The perennial question of "person or organisation" misses the point: structures and people need to be considered and worked on together.
The study is freely available. If you're interested or have questions, take a look.
Albarracín, D., Fayaz-Farkhad, B., & Granados Samayoa, J. A. (2024). Determinants of behaviour and their efficacy as targets of behavioural change interventions. Nature Reviews Psychology, 3(6), 377-392.
#Verhalten #Macht #Transformation #Entwicklung
The 3 levels of AI integration: AI adoption, AI adaptation & AI transformation
🎥 Adoption, adaptation, transformation: how AI is changing our world of work! 🤖✨🧠 In this video we dive into the world of AI integration and shed light on...
A reach knockout for many websites. First, search queries migrated from Google to ChatGPT; now Google answers them directly in its AI Overviews.
🚨 Studies already show steep traffic declines.
What can editors and publishers do?
👊 That's why I stepped into the podcast ring with Matthäus Michalik:
We recorded 2 episodes: GEO instead of SEO & How do we get placed in AI Overviews?
As a teaser for you:
4️⃣ instant tips for GEO (Generative Engine Optimization)
1. Demonstrate authority & trust
🔸 Explicitly name sources, quotes, and expert references.
🔸 Result: up to +40% higher probability of being cited in AI answers.
2. Let the numbers speak
🔸 Build in statistics, study data, and your own benchmarks.
🔸 AI models weight quantitative information more heavily → +30% relevance boost.
3. Clear structure, simple language
🔸 Short paragraphs, bullet points, FAQs, descriptive subheadings.
🔸 Makes parsing easier for LLMs and increases the chance of being quoted directly.
4. Targeted use of terminology
🔸 Deliberately weave in relevant terminology and industry jargon.
🔸 Signals expertise and improves matching for specific user queries.
‼️ Short formula: authority + data + clarity + terminology = visibility in chat answers.
Visibility in AI Overviews: what you need to get right
🔸 Baseline requirement: your site must be in the Google index and already enjoy a certain level of trust. Then:
🔸 High-quality, fact-based content: precise, well researched, up to date.
🔸 Clear structure: H-headings, lists, tables → easier parsing.
🔸 Structured data (Schema.org): tells the AI what means what (see the sketch after this list).
🔸 UX & performance: fast load times, clean mobile design.
🔸 Maintain E-E-A-T: continuously demonstrate expertise, experience, authoritativeness, and trust (author profiles, sources, backlinks).
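For the structured-data point, here is a minimal sketch of Schema.org markup emitted as JSON-LD; the schema.org Article type and its properties are real, while all field values are placeholders:

```python
import json

# Placeholder values; in practice these would come from your CMS.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "GEO instead of SEO: staying visible in AI Overviews",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-06-22",
    "citation": ["https://example.com/underlying-study"],
}

# Embed the result in the page <head> so crawlers and LLM-based
# systems can read the page's semantics directly:
snippet = f'<script type="application/ld+json">\n{json.dumps(article, indent=2)}\n</script>'
print(snippet)
```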
8 practical tips for the post-SEO era
✔️ Quality over quantity: fewer, deeper pieces with clear expertise.
✔️ Structure first: H-tags, bullet points, FAQ blocks, schema.
✔️ Optimise the user experience: speed, navigation, mobile UX.
✔️ Add value beyond the AI: your own data, cases, opinions.
✔️ Diversify traffic sources: social, email, communities, partnerships.
✔️ Monitor & adapt: watch which pages land in AI Overviews, and iterate.
✔️ Think multimedia: videos, podcasts, and infographics complement text.
✔️ Keep strengthening E-E-A-T: expert authors, references, reviews, backlinks.
Short formula: quality + structure + added value + trust + channel mix = long-term visibility, including in AI search.
❓ How are you approaching the battle for visibility and traffic? Let's discuss. 👇
Hugging Face 𝗷𝘂𝘀𝘁 𝗱𝗿𝗼𝗽𝗽𝗲𝗱 9 𝗙𝗥𝗘𝗘 𝗔𝗜 𝗰𝗼𝘂𝗿𝘀𝗲𝘀!
If you’re trying to level up or pivot into AI — this is pure gold.
𝗔𝗹𝗹 OPEN. 𝗔𝗹𝗹 FREE. 𝗔𝗹𝗹 expert-taught.
Here’s what’s inside (with links): ⬇️
1. 𝗟𝗟𝗠 𝗖𝗼𝘂𝗿𝘀𝗲
Master large language models fast.
Train, fine-tune, deploy with Transformers.
→ https://lnkd.in/dcCMCs96
2. 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗖𝗼𝘂𝗿𝘀𝗲
Build multi-step reasoning agents with LangChain + HuggingFace.
→ https://lnkd.in/dJD3QRuT
3. 𝗗𝗲𝗲𝗽 𝗥𝗟 𝗖𝗼𝘂𝗿𝘀𝗲
Teach AI to learn like a human.
Reward-based decision-making in real environments.
→ https://lnkd.in/d8JuRvn8
4. 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗖𝗼𝘂𝗿𝘀𝗲
Image classification, segmentation, object detection — with HF models.
→ https://lnkd.in/dEH8Tx-v
5. 𝗔𝘂𝗱𝗶𝗼 𝗖𝗼𝘂𝗿𝘀𝗲
Turn sound into signal.
Voice recognition, music tagging, audio generation.
→ https://lnkd.in/dZtkA3sw
6. 𝗠𝗟 𝗳𝗼𝗿 𝗚𝗮𝗺𝗲𝘀 𝗖𝗼𝘂𝗿𝘀𝗲
AI-powered game design: NPCs, logic, procedural generation.
→ https://lnkd.in/d4RhU6pz
7. 𝗠𝗟 𝗳𝗼𝗿 3𝗗 𝗖𝗼𝘂𝗿𝘀𝗲
Work with point clouds, meshes, and 3D data in ML.
→ https://lnkd.in/dU8T8BPw
8. 𝗗𝗶𝗳𝗳𝘂𝘀𝗶𝗼𝗻 𝗠𝗼𝗱𝗲𝗹𝘀 𝗖𝗼𝘂𝗿𝘀𝗲
The tech behind DALL·E and Stable Diffusion.
Generate visuals from noise — step by step.
→ https://lnkd.in/dFwN_idt
9. 𝗢𝗽𝗲𝗻-𝗦𝗼𝘂𝗿𝗰𝗲 𝗔𝗜 𝗖𝗼𝗼𝗸𝗯𝗼𝗼𝗸
Not a course — a growing library of real-world AI notebooks.
Copy, remix, and build.
→ https://lnkd.in/dQ5BXvSz
There’s no excuse left. Save this. Study it. Build.
Share this with your network to help them level up! ♻️
Which one will you start with?
If you could start again from zero, how would you build a data and AI organisation?
That's exactly what I wanted to know from Claudia Pohlink, who has built an impressive career in the data and AI world at Telekom, Deutsche Bahn, and FIEGE.
So, what does the anti-hype blueprint look like?
1. Define and structure master data
Start by defining your core domains and master data. Determine the leading system for each data domain before you choose tools. This creates a stable foundation for all AI activities.
2. Write a first success story
Identify a first use case, for example with the controlling department, where data affinity already exists. Show quick wins to gain management support.
3. Implement the 3-houses model
• House of Data: foundations, governance, architecture
• House of AI: use cases, data scientists, engineers
• House of 3C: change, communication, community
These three areas should be built up in equal measure; none of them can deliver sustainable data and AI implementation without the others. The leads should be developed internally from the start, while operational capacity can be bought in externally.
4. Find the balance between central and decentralised
Establish central standards and coordination, but at the same time empower decentralised teams through multiplier ideas such as AI awards, training sessions, and hackathons. According to Claudia, this balance is one of the biggest challenges in execution.
5. Plan pragmatically instead of theorising
Create 6-to-12-month plans instead of long-term strategies. Document experiences systematically, including failures, and adjust your plans regularly.
I know how many mid-sized companies face the major task of building data and AI competencies and structures in their organisation.
Claudia's experience is a real treasure chest.
Entirely without buzzwords, hype, or self-promotion.
Claudia, a thousand thanks for your openness and for letting us share in your experience!
What do you say to the blueprint?
With its AI Mode and its agent Mariner, Google is pulling a platform layer over the open web. Google is transforming itself from a classic search engine into a central marketplace, assistant, and payment provider. Users will soon be able to find, compare, buy, and pay for products directly in Google Search, without leaving the platform.
This development has far-reaching consequences for the entire internet ecosystem. The impact hits not only classic online retailers but also marketplace giants like Amazon, publishers, translation services like DeepL, reservation providers like OpenTable, booking sites like Ticketmaster, and language schools like Duolingo.
Anyone who wants to stay visible and relevant has to adapt to the new rules of the game: be present in AI Overviews and shopping graphs, and optimise content for AI systems. OpenAI is building something similar, and Amazon is moving in the same direction. The battle of the platforms has well and truly arrived in the AI era.
Read more at F.A.Z. PRO Digitalwirtschaft (€) ▶︎ https://lnkd.in/e-r8k7up
Frankfurter Allgemeine Zeitung
LEGO threw out external trainers and consultants and trained its leaders to be coaches on three levels who create a sustainable learning culture (#Lernkultur).
Lessons from the current MIT Sloan Management Review (available at station bookshops).
"Go slowly when you are in a hurry."
It was this insight that led two world-famous Danish companies, LEGO and VELUX (roof windows), to rethink how they deal with change. Amid digital upheaval and growing complexity, both hit the limits of their previous formula for success: what once counted as efficient suddenly proved too rigid and too superficial.
Workshops are often flashes in the pan. External consultants came and went. Hence the insight: change has to come from within, through leadership.
LEGO and VELUX did something unusual: they trained their leaders not to be better project managers, but to be better askers of questions. They turned them into coaches. Into learning companions for their own employees. Into people who provide orientation not by shining with answers, but with smart questions.
⸻
Element 1: Rethinking problems with A3
Both companies introduced Toyota's A3 method, a structured thinking format that maps a problem onto a single A3 page. Clear. Visual. Everyone works with it.
The accompanying model:
🍍 Finding: discover the right problem.
🍍 Facing: confront it courageously.
🍍 Framing: recognise the real challenge.
🍍 Forming: develop solutions.
These four phases led to a new awareness of problems: don't fight symptoms, understand causes. Don't act immediately, think together first. Teams learned to go slower and more sustainably.
⸻
Element 2: Learning collectively, with group coaching as a microcosm
Individual learning is not enough. So LEGO and VELUX built a space for collective reflection: group coaching.
There, teams of leaders met in fixed roles: a moderator, a case-giver, a coach, and silent observers. In 30 minutes, a real problem was thought through, with smart questions, honest perspectives, and shared insights.
These sessions strengthened not only problem-solving skills; they also created psychological safety. People could show vulnerability, discuss mistakes, test ideas, and grow together.
⸻
Element 3: A coaching hierarchy that anchors learning structurally
To make all of this sustainable, both companies developed a three-level coaching structure:
🍓 First coach: the direct manager accompanies day-to-day learning.
🍓 Second coach: division leaders coach the coaches and sharpen their questioning skills.
🍓 Third coach: top management reflects on the meta level and safeguards strategic alignment.
In this way, innovation became the organisation's DNA rather than a job for externals. Learning was not delegated but embodied. It takes time and patience at first, yet pays off in the long run.