After dozens of conversations with L&D leaders over the past months across industries, org sizes, and wildly different levels of AI maturity, one thing has become painfully clear: There’s a huge gap… | Inna Horvath 🇺🇦
·linkedin.com·
My favorite AI hack at the moment: 1. Take meeting notes with a note-taker during a session in the New Learning Lab.
2. Load the transcript into NotebookLM. 3. Have it generate audio and video from it. 4. Tell NotebookLM's AI during creation: "Create a recap of this session transcript. What happened? What was discussed? What were the key insights? What were the outcomes?" And boom, out comes something that often summarizes the session's content really well on the very first try. But listen for yourself to the recap of our 4th session on Vibe Learning, in which we started thinking about Vibe Learning in different learning contexts. Specifically: Vibe Learning as a training starter and Vibe Learning as the connective tissue of larger upskilling programs. I found this so cool, I had to share it with you before the weekend. 😊
·linkedin.com·
🥇 This is Gold! Just dropped by Carnegie Mellon University: one of the most honest looks yet at how "autonomous" agents actually perform in the real world.
👇 The study analyzed AI agents across 50+ occupations, from software engineering to marketing, HR, and design, and compared how they completed human workflows end to end. What they found is both exciting and humbling:

• Agents "code everything." Even in creative or administrative tasks, AI agents defaulted to treating work as a coding problem. Instead of drafting slides or writing strategies, they generated and ran code to produce results, automating processes that humans usually approach through reasoning and iteration.
• They're faster and cheaper, but not better. Agents completed tasks 4-8× faster and at a fraction of the cost, yet their outputs showed lower quality, weak tool use, and frequent factual errors or hallucinations.
• Human-AI teaming consistently outperformed solo AI. 🔥 When humans guided or reviewed the agent's process, acting more like a "manager" or "co-pilot", the results improved dramatically.

🧠 My take: The race toward "fully autonomous AI" is missing the real opportunity: co-intelligence. Right now, the biggest ROI in enterprises isn't from replacing humans. It's from augmenting them.

✅ Use AI to translate intent into action, not replace decision-making.
✅ Build copilots before colleagues: co-workers who understand your workflow, not just your prompt.
✅ Redesign processes for hybrid intelligence, where AI handles execution and humans handle ambiguity.

The future of work isn't humans or AI (for the next 5 years, IMO). It's humans with AI, working in a shared cognitive space where each amplifies the other's strengths. Because autonomy without alignment isn't intelligence, it's chaos. Autonomous AI isn't replacing human work, it's redistributing it. Humans shifted from doing to directing, while agents handled the repetitive, programmable layers. Maybe we are just too quick to shift from the "uncool" Copilot to something more exciting called "Fully Autonomous AI". WDYT?
·linkedin.com·
McKinsey, State of AI 2025 Report
🚨 Just dropped! McKinsey report on AI in 2025: the hype is loud, the impact is.... Every CEO should read this: almost everyone is "using AI," but only a small slice is wiring it deep enough to move the needle.

• 88% of companies use AI somewhere, yet roughly two-thirds are still stuck in experiments/pilots, not scale.
• Agents are real but early: 62% are experimenting; only 23% are scaling in at least one function (and typically just one or two).
• Only 39% report any impact from AI at the enterprise level. The rest have scattered wins, not system change.
• High performers (≈6%) think bigger: they aim for transformation, not just cost cuts, and are ~3× more likely to redesign workflows around AI.
• Leadership matters: where the CEO and senior team own AI, adoption scales and budgets follow (many leaders spend 20% of digital budget on AI).
• Value shows up fastest in software engineering, IT, and manufacturing (cost ↓) and in marketing/sales, strategy/finance, and product (revenue ↑).
• Risk is real and showing up: inaccuracy and explainability issues top the list; mature orgs pair ambition with stronger guardrails and human-in-the-loop.

My take: Most firms bought tools; the few winners rebuilt work. Agent pilots are cool, but without workflow redesign, data plumbing, and clear governance, you're funding demos, not outcomes. The org that rewires will beat the org that "rolls out."

• Leaders should set the bar higher than "efficiency." Tie AI to growth, new offerings, and customer experience, then go after costs.
• Redesign 3-5 critical workflows end-to-end (not feature by feature). Ship, measure, harden, repeat.
• Put ownership at the top. If the CEO isn't accountable for AI governance and ROI, it will stall.
• Invest in the platform: data products, evaluation, CI/CD for models/agents, human-in-the-loop checkpoints, risk controls.
• Skill the workforce for agents: task decomposition, prompt/context ops, verification, and change management, at scale.

AI ROI doesn't come from the model. It comes from the company willing to change its operating system. WDYT?
·linkedin.com·
A few hours ago, Google published a white paper laying out their vision for the Future of Learning. Here's the TLDR:
The Headline:
👉 Global learning is at a crossroads: learner outcomes have dropped sharply worldwide, and UNESCO projects a shortage of 44 million teachers by 2030.
👉 AI is positioned as *the* tool to save us from an impending education crisis BUT...
👉 The real "secret weapon" for improving education isn't the tech: it's the learning science we build into it.

According to Google, the four biggest opportunities offered by AI in education are:
🔥 Learning Science at Scale – Embed evidence-based methods (retrieval practice, spaced repetition, active feedback) directly into everyday tools.
🔥 Making Anything Learnable – Adjust explanations, examples and complexity to meet each learner where they are.
🔥 Universal Access – Break down language, literacy and disability barriers through AI-powered translation and transformation.
🔥 Empowering Educators – Free up teacher time through AI-assisted lesson planning, resource creation and differentiation.

Overall, Google's latest white paper signals an evolving ed-tech culture which centres on a more substantive partnership between ed & tech:
👉 Co-Creation: Google commits to investing in evidence-based approaches to learning design and development and to rigorous evaluation, pilot studies and educator-led research to test and demo impact.
👉 Collaborative Development: Google commits to working with schools, NGOs, researchers and learning scientists to co-design tools for learning.

You can read the white paper in full using the link in comments. Happy innovating! Phil 👋
·linkedin.com·
What happens when learners meet AI?
What happens when learners meet AI? Think of skill development as a road from beginner to expert. You normally start with basic practice, work through tough problems, reflect on what's working, and eventually reach the point where you can handle anything that comes up.

Now AI has entered this picture. Depending on how we use it, we end up on completely different roads.

Use AI too early and you risk never-skilling. You skip the fundamentals and never develop real capability.
Hand over too much and you risk de-skilling. Abilities you once had start to fade.
Copy AI outputs without thinking and you risk mis-skilling. You learn the wrong lessons and build on faulty foundations.

But there's another path. Use AI while staying critical. Question its outputs. Think through the logic. Verify the answers. This is AI-enhanced adaptive practice. AI becomes a sparring partner that helps you learn faster without replacing your own reasoning.

The difference comes down to one thing: who's in control. The people who'll succeed with AI aren't avoiding it or surrendering to it completely. They're the ones who keep thinking while using AI to compress learning cycles and test ideas faster.

AI shouldn't replace your thinking. It should make your thinking better. The question isn't whether to use AI when learning. It's whether you're driving or just sitting in the passenger seat.

How are you seeing this play out in your work?

✍ Raja-Elie Abdulnour, Brian Gin, Christy Boscardin. Educational Strategies for Clinical Supervision of Artificial Intelligence Use. N Engl J Med. 2025;393(8):786-797. DOI: 10.1056/NEJMra2503232
·linkedin.com·
In 2014 Michael Staton detailed how the university experience is a bundle of many things rolled up together (see image below), and suggested that many components could be disaggregated, or… | Jonathan Boymal
In 2014 Michael Staton detailed how the university experience is a bundle of many things rolled up together (see image below), and suggested that many components could be disaggregated, or "unbundled", and provided in alternative ways. He drew on this framework in his "The Degree is Doomed" piece in the Harvard Business Review in 2014. https://lnkd.in/gYdkhCb

In 2025, Shannon McKeen, writing in Forbes, considers where we are now https://lnkd.in/gYir-XDe:

"Most college students now use AI tools for academic work, yet employers consistently report that new graduates lack the critical thinking and decision-making skills needed in an AI-augmented workplace. This disconnect signals the beginning of higher education's great unbundling.

For decades, universities have operated on a bundled model: combining information delivery, skill development, credentialing, and social networking into a premium package. AI is now attacking the most profitable part of that bundle, information transfer, while employers increasingly value what machines cannot replicate: human judgment under uncertainty.

Higher education represents a massive market built largely on controlling access to specialized knowledge. Students pay premium prices for information that AI now delivers instantly and for free. A business student can ask ChatGPT to explain supply chain optimization or generate market analysis in seconds. The traditional lecture-and-test model faces its Blockbuster moment.

This is classic disruption theory in action. The incumbent model optimized for information scarcity while a new technology makes that core offering abundant. Universities that continue competing with AI on content delivery are fighting the wrong battle. The real value is migrating from information transfer to judgment development, from transactional learning to transformational learning.

In an AI-saturated world, premium skills are distinctly human: verification of sources, contextual decision-making, ethical reasoning under ambiguity, and accountability for real-world outcomes. This shift mirrors what happened to other information-based industries. When Google made basic research free, management consulting pivoted to implementation and change management. When smartphones made maps ubiquitous, GPS companies focused on real-time optimization and personalized routing. Higher education must make the same transition…

The great unbundling of higher education is underway. Information delivery is becoming commoditized while judgment development becomes premium. Institutions that recognize this shift early will capture disproportionate value in the new market structure."

H/t Sinclair Davidson
·linkedin.com·
AI isn’t just transforming Learning & Development.
AI isn’t just transforming Learning & Development. It’s revealing it.

For years, we’ve talked about being strategic partners - about impact, performance, and business alignment - but much of L&D has still operated as a content-production function. We’ve equated “learning” with “stuff we make”. Now AI has arrived, and it’s showing us what’s really been going on.

- If your value comes from creating courses and content, AI will replace you.
- If your value comes from solving real problems for the business, AI will amplify you.

That’s the pivot point we’re in. The new report, The Race for Impact, written by Egle Vinauskaite and Donald H Taylor, captures this moment perfectly. Within it, they describe the “Implementation Inflexion” - the shift from experimenting with AI to actually using it - and reveal what leading L&D teams are doing.

The “Transformation Triangle” lays out three models that go beyond content:
- Skills Authority: owning data and insight around workforce capability
- Enablement Partner: orchestrating systems that help others solve problems
- Adaptation Engine: continuously learning with the business to stay relevant

Each one moves L&D closer to the business and further from being an internal production house. This isn’t about tech. It’s about identity. And the teams that figure that out now will define what L&D means in the age of AI.

Hear more from Egle about this, and what it all means in practice, in the latest episode of The Learning & Development Podcast. A link to the episode is in the comments.
·linkedin.com·
TeachLM: six key findings from a study of the latest AI model fine-tuned for teaching & learning.
TeachLM: six key findings from a study of the latest AI model fine-tuned for teaching & learning. LLMs are great assistants but ineffective instructional designers and teachers. This week, researchers at Polygence + Stanford University published a paper on a new model, TeachLM, which was built to address exactly this gap. In my latest blog post, I share the key findings from the study, including observations on what it tells us about AI’s instructional design skills. Here’s the TLDR:

🔥 TeachLM outperformed generic LLMs on six key education metrics, including improved question quality & increased personalisation
🔥 TeachLM also outperformed “Educational LLMs” - e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode and Google’s Guided Learning - which fail to deliver the productive struggle, open exploration and specialised dialogue required for substantive learning
🔥 TeachLM excelled at developing some teaching skills (e.g. being succinct in its comms) but struggled with others (e.g. asking enough of the right sorts of probing questions)
🔥 Training TeachLM on real educational interactions rather than relying on prompts or synthetic data led to improved model performance
🔥 TeachLM was trained primarily for delivery, leaving significant gaps in its ability to “design the right experience”, e.g. by failing to define learners’ start points and goals
🔥 Overall, human educators still outperform all LLMs, including TeachLM, on both learning design and delivery

Learn more & access the full paper in my latest blog post (link in comments). Phil 👋
·linkedin.com·
AI Adoption Self-Assessment
Determine the maturity level of your AI transformation - a free self-assessment based on the Learning Ecosystem Framework
·aitransformationassessment.lovable.app·
Tell me what you click and I’ll tell you what you learn - eLearning Journal Online
An impulse piece on individual learning paths through artificial intelligence in L&D. “Customers who bought this bedding also bought …” You surely know this principle from e-commerce platforms. Your behavior is captured, compared, and translated into recommendations. The goal: ideally, you buy more than that one item. Applied to learning platforms, this means: clicks, quiz results, and search queries allow […]
·elearning-journal.com·
🚨 OpenAI just announced their own Agent Builder.
🚨 OpenAI just announced their own Agent Builder. [And no, it didn’t kill 99% of startups overnight.]

It’s called AgentKit - a no-code, full-stack platform to build, deploy, and optimize AI agents. The UI looks surprisingly clean, but let’s be clear: this doesn’t instantly replace Zapier, Make, n8n, or Lindy. AgentKit is impressive, yes, but it’s still early, still developer-focused, and far from being a plug-and-play automation killer.

Here’s what AgentKit includes: ⬇️

1. Agent Builder - a visual interface to design and connect multiple AI agents
→ You can drag and drop steps, test them instantly, and track versions
→ Comes with built-in safety checks (guardrails)
→ It’s in beta - I haven’t tested it yet, but the interface looks quite polished

2. Connector Registry - a control center for all your data connections
→ Makes it possible to manage integrations with MCP
→ Adds content and tools to keep everything organized, secure, and compliant for enterprise use

3. ChatKit - provides an interface to add chat to your product
→ Turns agents into a chat interface that looks native
→ Handles threads, live responses, and context automatically

4. Evals 2.0 - a system to test and improve your agents
→ Lets you run evaluations using datasets and automated grading
→ According to OpenAI, companies that used it saw up to 30% higher accuracy

None of the announced capabilities are truly new, and I doubt that building agents with OpenAI will offer a better experience than platforms like n8n or Zapier. The output still generates code, and the whole setup clearly targets developers (for now), which explains why it was introduced at DevDay rather than rolled out to the broader user base. And for enterprise-ready AI agents, you still need solid frameworks like LangChain or CrewAI, not another drag-and-drop automation layer. AgentKit is a strong step, but there’s still a way to go before it becomes a production-grade enterprise solution and kills “99% of all other tools”.

P.S. I recently launched a newsletter where I share the best weekly drops on AI agents, emerging workflows, and how to stay ahead while others watch from the sidelines. It’s free, and already read by 20,000+ people. https://lnkd.in/dbf74Y9E
·linkedin.com·
ChatGPT's biggest update got leaked. Tomorrow, they will announce automation: ✦ It seems to run via the API (not ChatGPT). ✦ I'm guessing it's a mix of Zapier, n8n, or Make. ✦ You can read "Agent Builder" or "Workflow".
PS: once it's live, I'll make a full guide on how-to-ai.guide. It's my newsletter, read by 132,000 people.

Here is all of the (trusted) information I gathered:

— The no-code AI era starts now —
✓ Drag-and-drop visual canvas for building agents.
✓ Templates for customer support, data, etc.
✓ Native OpenAI model access, including GPT-5.
✓ Full integration with external tools and services.

But there was always a wall: coding. Now, anyone can build advanced AI agents. No code. No friction.

Here’s how it (seems to) work. Say you want to automate customer support:
1. Pick a template for a support bot. But you need it to pull info from your database.
2. Drag in an MCP connector. Link your data. You want human approval for refunds.
3. Add a user approval step. Set the rules. You want to check documents for fraud.
4. Drop in a file search and comparison node.

Test it. Preview it. Deploy it. All in one place.

OpenAI is more than just an API company. It is building the backbone for the no-code AI economy. Now, anyone can create agents that work across systems, talk to users, and make decisions. The age of visual AI automation is here.
·linkedin.com·
Want to know what L&D is really doing with AI? Well, now you can. Today, Egle Vinauskaite and I publish our third annual report on AI in L&D. We’ve listened to more than 600 people in 53 countries, and there’s plenty to share.
To learn more, download the 54-page report AI in L&D 2025: The Race For Impact (link in comments and in my bio). Inside, you’ll find:
· 10 ‘snapshot’ mini case studies
· 12 pages of detailed analysis of how L&D is using AI
· 12 pages of quantitative analysis
· 14 pages of in-depth case studies from Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group
· 1 framework: the Transformation Triangle

As AI makes it easier and faster to generate content, we explore the profound implications for L&D. And all of this is illustrated with ample quotes from the people out there doing the work. This isn’t an armchair exercise. We’ve gone through countless interviews and around 20,000 words of text that our respondents generated in the survey describing their work. This is a vivid illustration of what’s happening with AI in L&D today. We hope it will provide insight, information and inspiration.

My key takeaway: we’ve passed an inflexion point. For the first time, over half our respondents said they weren’t experimenting with AI, but actually using it. That’s a significant shift from last year. AI has moved from being a novelty to being part of L&D’s regular toolkit.

And look at how they are using it. Sure, content creation dominates. But look at the table of how things have changed since last year. Again, content dominates the top four places, but just beneath, there’s one extraordinary change. Qualitative data analysis has leapt from 8th last year to 5th this year: the single biggest change from year to year. This single point illustrates something we see across all our analysis, and in all of our case studies: a shift towards more sophisticated use, and an increased focus on data, analysis and research. The featured case studies illustrate some of these inventive new uses perfectly.

To learn more, download the report now. Our thanks to our report sponsors, OpenSesame, Speexx and The Regis Company for making this report possible. To download, click the link in my profile, or go to the first comment.
·linkedin.com·
What's happening with AI in L&D? Well, here it is — the 2025 edition.
Today, Donald H Taylor and I are releasing our third annual report on AI in L&D: The Race for Impact. If you’ve been wondering whether you’re behind, which AI uses you haven’t yet tried, or how to take your work further, we’ve put this report together to give you answers and ideas.

Inside you'll find:
➡️ Fresh data on the most popular AI uses in L&D, how patterns are shifting, and what barriers teams still face
➡️ 12 pages detailing AI uses across learning design and content development, internal L&D ops, strategy and insight, and workforce enablement to inform and inspire your practice
➡️ 14 pages of in-depth AI in L&D case studies by Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group
➡️ A framework - the Transformation Triangle - exploring what AI’s move into “traditional” L&D work means for the function’s future role

600+ respondents. 53 countries. 20,000+ words in write-in responses. Days of interviews. Countless hours of deliberations and coffees trying to make sense of how the industry has evolved over the past 3 years and what it means for the road ahead.

These are extraordinary numbers and they wouldn’t exist without the community behind them. Thank you to everyone who took the time to complete the survey and share thoughtful answers. Thank you to our case study contributors, who gave hours of their own time to document their practice for the benefit of the wider industry. Thank you to our sponsors OpenSesame, The Regis Company and Speexx who made this work possible. And thank you to Don: what started as a coffee conversation has grown into a three-year collaboration that keeps pushing both of us (and hopefully the field) forward.

The full report is free to download (link in the comments).

P.S. Below is a snapshot of the most common AI use cases we mapped this year. It gives a sense of where the field is and might spark a few new ideas 🙌

♻️ Share this post so more teams can find these insights and build on each other’s work.
·linkedin.com·
This article dropped a few days ago 👉 https://lnkd.in/djktVNKi Main talking points:
💡 Companies are adopting AI like crazy, but they should invest just as much in preparing people to work with AI. Apparently, that doesn't happen nearly as often as it should.
💡 The research presented in the article highlights that Gen AI Tutors outperform classroom training by 32% on personalization and 17% on feedback relevance.
💡 Gen AI Tutors create space for self-reflection, which is awesome.
💡 Learners finished training 23% faster while achieving the same results.
💡 Frontline workers, culture change, and building AI competence were mentioned as applications for Gen AI.

My thoughts:
💭 I think one of the hardest decisions we will face is where we should use Gen AI Tutors and where we should keep human interaction as part of learning.
💭 The "results" in the research presented were mostly, imho, still vanity metrics. I'm looking forward to seeing research where the analysis of results is more comprehensive (spanning a longer timeline, with clear leading indicators, etc). Until then, I can't fully be convinced that Gen AI Tutors truly perform better at growing cognitive & behavioral skills.
💭 While I find the culture change application interesting, I do hope Gen AI Tutors won't be used to absolve leaders of the responsibility THEY have for building cultures. I can't see a good result coming out of this.

Very curious to hear your thoughts 👀 #learninganddevelopment
·linkedin.com·
SAP and OpenAI partner to launch sovereign ‘OpenAI for Germany’ | Anja C. Wagner
Finally some real progress: OpenAI for Germany, an interesting step toward digital sovereignty. Today SAP and OpenAI officially presented the “OpenAI for Germany” partnership. The goal: to provide AI technologies for the public sector in Germany, with a focus on data protection, data sovereignty, and legal compliance. The solution is based on SAP’s Delos Cloud (running on Microsoft Azure technology), operated locally in Germany.

What’s behind it? Millions of employees in administrations, public agencies, and research institutions are to be given AI-powered tools to make processes more efficient. Plans include automating procedures, data analysis, and workflow integration. At launch, 4,000 GPUs of AI compute are to be available, with the ambition to scale further. It fits Germany’s AI ambitions: the state sees AI as a key value driver by 2030, with the potential to contribute up to 10% of GDP.

Why I think this matters: It’s a clear signal that AI solutions don’t have to mean “everything abroad” or “everything purely national.” Local, hybrid infrastructures can be part of the solution. Especially in the public sector, AI can create real value, provided security, law, and trust are in place. And it’s a good example of how partnerships (tech + industry + state) can work together.

Launch is planned for 2026. Bravo, I say 🙏 But it won’t please everyone ...  ⛓️‍💥 https://lnkd.in/dE3q9Jys #digital #souveraen #verwaltung #ki
·linkedin.com·
AI will find its way into schools whether we like it or not. The danger lies in ignoring it; that’s how ‘workslop’ takes root.
‘We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.’

So begins a great piece in the Harvard Business Review which has coined a new term for the poor AI practices now developing: employees are producing sloppy work with AI and actually creating more work down the line for the person they pass the ‘workslop’ onto.

The article offers some clear pointers on how organisations can move on to better AI practice, summed up in the conclusion:

‘Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.’

These lessons are just as applicable to schools as to businesses. The key difference is that we not only need leaders to model best practice, but teachers to help students understand what this looks like. It’s vital we take active steps now to shape habits: AI can be a force for innovation and amplify what’s best in our schools, or it can drive ‘workslop’ in staff and students. Surely the choice is a no-brainer?

(Link to piece in comments via post on this from David Monis-Weston)
·linkedin.com·
AI will find its way into schools whether we like it or not. The danger lies in ignoring it; that’s how ‘workslop’ takes root.
The development team behind the Model Context Protocol (MCP) has introduced the MCP Registry
The development team behind the Model Context Protocol (MCP) has introduced the MCP Registry
– an open catalog and API to discover and use publicly available MCP servers. Finally, MCP servers can be discovered through a central catalog. Think of it as an App Store for scanning and searching your MCPs: → An open catalog + API for discovering MCP servers → One-click install in VS Code → Servers from npm, PyPI, DockerHub → Sub-registries possible for security and curation → Works across Copilot, Claude, Perplexity, Figma, Terraform, Dynatrace etc. Although it is still in preview and under active development, it clearly addresses a major problem. GitHub link: https://lnkd.in/df-qTnYe
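As a rough illustration of what "an open catalog + API" means in practice, here is a minimal Python sketch of querying such a registry and parsing its response. The base URL, the `/v0/servers` path, the `search`/`limit` parameter names, and the response shape are assumptions based on the registry preview, not confirmed API details.

```python
import json
from urllib.parse import urlencode

# Assumed base URL of the MCP Registry preview API (not confirmed).
REGISTRY_BASE = "https://registry.modelcontextprotocol.io/v0"

def build_search_url(query: str, limit: int = 10) -> str:
    """Build a registry search URL; parameter names are assumptions."""
    return f"{REGISTRY_BASE}/servers?" + urlencode({"search": query, "limit": limit})

def extract_names(payload: str) -> list[str]:
    """Pull server names out of a registry-style JSON response."""
    data = json.loads(payload)
    return [entry["name"] for entry in data.get("servers", [])]

# Illustrative response shape, not real registry data:
sample = '{"servers": [{"name": "io.github.example/filesystem"}]}'
print(extract_names(sample))  # → ['io.github.example/filesystem']
```

A sub-registry for security or curation would expose the same interface under a different base URL, so client code like this would not need to change.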
·linkedin.com·
The development team behind the Model Context Protocol (MCP) has introduced the MCP Registry
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least.
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least.
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least. This study shows that WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations are the most aligned with ChatGPT. It is intriguing that some nations, including the Netherlands and Germany, are more GPT-similar than Americans. The paper uses cognitive tasks such as the “triad task,” which distinguishes between analytic (category-based) and holistic (relationship-based) thinking. GPT tends toward analytic thinking, which aligns with countries like the Netherlands and Sweden that value rationality. This contrasts with the holistic thinking found in many non-WEIRD cultures. GPT also tends to describe the "average human" in terms aligned with WEIRD norms. In short, with the overwhelmingly skewed data used to train AI, it's “WEIRD in, WEIRD out”. The size of the models or training data is not the issue; it's the diversity and representativeness of the data. All of which underlines the value and importance of sovereign AI, potentially for regions or values-aligned cultures, not just at the national level.
·linkedin.com·
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least.
“This is the last year we deliver PowerPoints as a product.
“This is the last year we deliver PowerPoints as a product.
“This is the last year we deliver PowerPoints as a product. Going forward, we deliver bots.” The old order of knowledge work is starting to slide. At the OpenAI Forum, Aaron "Ronnie" Chatterji, chief economist of OpenAI, and Joseph Fuller of Harvard Business School discussed how artificial intelligence is reshaping tasks, organizations, and careers. Their diagnosis: the technology is sprinting ahead, while companies remain stuck in processes built for the analog world of the classic slide deck. Fuller distills the turning point into a formula that makes the consulting industry sit up: “This is the last year we deliver PowerPoints as a product. Going forward, we deliver bots.” Behind the quip lies a sober economic analysis: rule-based tasks can be scaled. Clients demand less information arbitrage and more actionable systems. Many consulting models lived off the ability to lift information across silos, structure it, and interpret it. Now decision platforms are moving to the fore, integrating live data from finance, HR, and sales systems and automatically answering the question "What now?". The result is an overlay layer of decision intelligence, and PowerPoint slides become a footnote. Instead, the two experts expect, bots and workflows become the product. Read more at F.A.Z. PRO Digitalwirtschaft (FAZ+) ▶︎ https://lnkd.in/eFjyCYGw Frankfurter Allgemeine Zeitung ___________ My new online course: Generative AI for Executives. The course explains, in compact form, the key strategic and economic effects of generative AI for companies. It focuses on productivity effects, the impact on business models and corporate competitiveness, and the relationship between AI and work. It closes with a look at the state of AI in Germany.
The course is aimed at executives interested in the economic implications of generative AI for companies, the economy, and competition. It reflects the current state of business and research. ▪️ Duration: 80 minutes ▪️ Content: 9 videos / 66 slides Book here ▶︎ https://lnkd.in/ezsB-KDg
·linkedin.com·
“This is the last year we deliver PowerPoints as a product.
This interesting Deloitte report is framed around AI for HR, but the lessons are applicable across organizations, and support the broader issue of transformation to a Humans + AI organization.
This interesting Deloitte report is framed around AI for HR, but the lessons are applicable across organizations, and support the broader issue of transformation to a Humans + AI organization.
The report is definitely worth a look, perhaps especially the Appendix. Below are a few distilled highlights. 🔄 Multi-agent systems (MAS) are the next-gen operating model. In the next 12–18 months, expect a shift from siloed APIs to MAS that can reason, plan, and act across business units—enabling autonomous execution with governance and “human in the loop” oversight. 📈 Human–AI collaboration boosts decision-making capacity. AI can instantly synthesize vast datasets into contextual, role-specific insights, allowing executives and managers to make better, faster, and more informed decisions across the enterprise. 💡 Workforce roles are redesigned, not just replaced. Agentic AI shifts roles across the board—from purely executional to more analytical, creative, and relationship-focused work—impacting job design in marketing, operations, R&D, and beyond. 📊 AI standardizes excellence across the enterprise. By codifying best practices into AI systems, organizations can eliminate “pockets of excellence” and ensure consistent quality across all teams and regions—not just in HR but in sales, operations, and service delivery. 🔍 Predictive intervention beats reactive problem-solving. AI can detect signals—like turnover risk, performance decline, or customer churn—before they become problems. This enables leaders to act early with targeted, data-backed interventions. 🛠 Orchestration of multi-step, cross-functional workflows. Agentic AI can coordinate tasks across multiple business functions without manual handoffs—e.g., onboarding a new employee touches HR, IT, facilities, and finance, yet AI can plan, execute, and monitor the entire process end-to-end. 🗺 AI’s biggest impact areas are mapped. A “heatmap” of hundreds of HR processes pinpoints where AI should be fully powered (e.g., data analysis, reporting, inquiries), augmented (e.g., recruiting, performance reviews), or assistive—helping leaders invest for maximum ROI. 🚀 80%+ of admin work can be automated.
In future HR operations, AI will handle over 80% of administrative and operational tasks, freeing HR teams to focus on strategy, workforce planning, and proactive talent interventions.
·linkedin.com·
This interesting Deloitte report is framed around AI for HR, but the lessons are applicable across organizations, and support the broader issue of transformation to a Humans + AI organization.