AI News

618 bookmarks
The AI opportunity in L&D isn't one thing - it's five different capability unlocks. The "What's the biggest opportunity with AI in L&D?" conversation usually takes one of two levels: we either zoom…
The AI opportunity in L&D isn't one thing - it's five different capability unlocks.
·linkedin.com·
If you thought working in #LearningandDevelopment this year was tough then you might want to hide under the duvet for what's coming in #2026.
There is no way to candy-coat it. From L&D challenges and strategy, to investment and budgets, to the emerging adoption of AI, along with new L&D tech decisions and the evolution of the learning experience - it's not getting easier, just yet!
Expectations for #AI in L&D are becoming turbo-charged, with the numbers doubling this year for how much AI is already influencing L&D headcount and resourcing plans. Half of L&D teams think that by 2030 half of what they do could be replaced by AI and still be effective. So, what will L&D teams be doing with the time? In the absence of a good value proposition, downsizing beckons!
But equally, not all supplier AI roadmaps are real; there is a lot of vapourware and a long list of promises. Who can you trust to be telling the truth about what AI they can really deliver for you? Is AI going to come soon enough to save L&D anyway? It looks like some of the cavalry may not arrive for some time. Most L&D professionals don't think their learning systems are fit for the modern workforce. Whilst ChatGPT has Study Mode and Google has NotebookLM, most LMSs and LXPs are still stuck in catalogues and AI-enhanced catalogue searches. Nothing exists to support learning cycles at a time when skills and blended learning are becoming more and more important.
BUT, as 2026 looms large, it really isn't time to hide under the duvet! It's time to take a deep breath, put your big pants on and start building the capabilities in your team that will let you move from being a transactional function to one that uses its deep understanding of learning, learning motivation, feedback culture, skills, task ontologies, organisational change, and people and work intelligence to build a route into the high-performance organisation of tomorrow and the new realities of the future of work. It's time to be deeply aligned to business transformation, upskilling and business improvement.
It's time to be an #IntelligenceLed and #ValueCentred L&D team, because you'll need to be able to evidence your value-add if you want a smooth ride. It's also time to find the learning solutions that are fit for the future workforce, because as sure as night follows day and day follows night, having the right solutions in the right connected ecosystem might be the thing that turns #2026 into a great year.
Why do I say all this? Because that's what you told us in this year's Fosway Group Digital Learning Realities Research. Naturally, if you want help navigating this maelstrom, come and speak to our experts Myles Runham and Fiona Leteney. No one knows more about the opportunities than they do!
·linkedin.com·
After dozens of conversations with L&D leaders over the past months across industries, org sizes, and wildly different levels of AI maturity, one thing has become painfully clear: There’s a huge gap… | Inna Horvath 🇺🇦
After dozens of conversations with L&D leaders over the past months across industries, org sizes, and wildly different levels of AI maturity, one thing has become painfully clear:
·linkedin.com·
My favourite AI hack at the moment: 1. Make meeting notes with a note-taker during a session in the New Learning Lab.
2. Load the transcript into NotebookLM. 3. Have it create audio and video from it. 4. Tell NotebookLM's AI during creation: "Create a recap of this session transcript. What happened? What was discussed? What were the key insights? What were the results?" And boom, out comes something that often summarizes the session's content really well on the very first attempt. But listen for yourselves to the recap of our 4th session on Vibe Learning, in which we started thinking about Vibe Learning in different learning contexts. Specifically: Vibe Learning as a training starter and Vibe Learning as the connective tissue of larger upskilling programs. I found this so cool, I had to share it with you before the weekend. 😊
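The recap prompt in step 4 can also be assembled programmatically when the note-taker transcript is available as plain text. A minimal sketch; the recap questions mirror the post, but the function name and truncation cut-off are illustrative, not a NotebookLM API:

```python
def build_recap_prompt(transcript: str, max_chars: int = 8000) -> str:
    """Wrap a session transcript in the recap questions from the post.

    Long transcripts are truncated here as an illustrative safeguard;
    NotebookLM-style tools handle their own chunking.
    """
    questions = [
        "What happened?",
        "What was discussed?",
        "What were the key insights?",
        "What were the results?",
    ]
    header = "Create a recap of this session transcript. " + " ".join(questions)
    return f"{header}\n\n---\n{transcript[:max_chars]}"

prompt = build_recap_prompt("Session 4: Vibe Learning in different learning contexts ...")
print(prompt.splitlines()[0])
```

The same prompt string could then be pasted into NotebookLM or any chat assistant alongside the transcript.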
·linkedin.com·
🥇This is Gold! just dropped by Carnegie Mellon University! It’s one of the most honest looks yet at how “autonomous” agents actually perform in the real world.
👇 The study analyzed AI agents across 50+ occupations, from software engineering to marketing, HR, and design, and compared how they completed human workflows end to end. What they found is both exciting and humbling:
• Agents "code everything." Even in creative or administrative tasks, AI agents defaulted to treating work as a coding problem. Instead of drafting slides or writing strategies, they generated and ran code to produce results, automating processes that humans usually approach through reasoning and iteration.
• They're faster and cheaper, but not better. Agents completed tasks 4-8x faster and at a fraction of the cost, yet their outputs showed lower quality, weak tool use, and frequent factual errors or hallucinations.
• Human-AI teaming consistently outperformed solo AI. 🔥 When humans guided or reviewed the agent's process, acting more like a "manager" or "co-pilot", the results improved dramatically.
🧠 My take: The race toward "fully autonomous AI" is missing the real opportunity: co-intelligence. Right now, the biggest ROI in enterprises isn't from replacing humans. It's from augmenting them.
✅ Use AI to translate intent into action, not replace decision-making.
✅ Build copilots before colleagues: co-workers who understand your workflow, not just your prompt.
✅ Redesign processes for hybrid intelligence, where AI handles execution and humans handle ambiguity.
The future of work isn't humans or AI (for the next 5 years, IMO). It's humans with AI, working in a shared cognitive space where each amplifies the other's strengths. Because autonomy without alignment isn't intelligence; it's chaos. Autonomous AI isn't replacing human work, it's redistributing it. Humans shifted from doing to directing, while agents handled repetitive, programmable layers. Maybe we are just too fast to shift from "uncool" Copilot to something more exciting called "Fully Autonomous AI". WDYT?
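The "human as manager" pattern the study points to can be sketched as a review gate that every agent proposal passes through before anything ships. All names here are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """One piece of agent output awaiting human review."""
    task: str
    output: str
    approved: bool = False

class HumanInTheLoop:
    """Route every agent proposal through a human review callback."""

    def __init__(self, reviewer):
        self.reviewer = reviewer  # callable: Proposal -> bool
        self.log = []             # audit trail of all proposals

    def submit(self, task: str, output: str):
        """Return the output if the reviewer approves it, else None."""
        p = Proposal(task, output)
        p.approved = self.reviewer(p)
        self.log.append(p)
        return p.output if p.approved else None

# Toy reviewer: reject anything the agent has marked as unverified
gate = HumanInTheLoop(lambda p: "unverified" not in p.output)
print(gate.submit("draft slide", "Q3 summary slide"))       # approved: passes through
print(gate.submit("write stat", "unverified: 87% growth"))  # rejected: returns None
```

The point of the design is the audit log plus the veto: the agent still does the fast execution, but a human decides what counts as done.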
·linkedin.com·
McKinsey, State of AI 2025 Report
🚨 Just dropped! McKinsey report on AI in 2025: the hype is loud, the impact is.... Every CEO must read this: almost everyone is "using AI," but only a small slice is wiring it deep enough to move the needle.
• 88% of companies use AI somewhere, yet ~⅔ are still stuck in experiments/pilots, not scale.
• Agents are real but early: 62% are experimenting; only 23% are scaling in at least one function (and typically just one or two).
• Only 39% report any impact from AI at the enterprise level. The rest have scattered wins, not system change.
• High performers (~6%) think bigger: they aim for transformation, not just cost cuts, and are ~3x more likely to redesign workflows around AI.
• Leadership matters: where the CEO and senior team own AI, adoption scales and budgets follow (many leaders spend 20% of digital on AI).
• Value shows up fastest in software engineering, IT and manufacturing (cost down) and in marketing/sales, strategy/finance and product (revenue up).
• Risk is real and showing up: inaccuracy and explainability issues top the list; mature orgs pair ambition with stronger guardrails and human-in-the-loop.
My take: Most firms bought tools; the few winners rebuilt work. Agent pilots are cool, but without workflow redesign, data plumbing, and clear governance, you're funding demos, not outcomes. The org that rewires will beat the org that "rolls out."
• Leaders should set the bar higher than "efficiency." Tie AI to growth, new offerings, and customer experience, then go after costs.
• Redesign 3-5 critical workflows end-to-end (not feature by feature). Ship, measure, harden, repeat.
• Put ownership at the top. If the CEO isn't accountable for AI governance and ROI, it will stall.
• Invest in the platform: data products, evaluation, CI/CD for models/agents, human-in-the-loop checkpoints, risk controls.
• Skill the workforce for agents: task decomposition, prompt/context ops, verification, and change management, at scale.
AI ROI doesn't come from the model. It comes from the company willing to change its operating system. WDYT?
·linkedin.com·
AI, in 1 hour - Resources List
Archive ‘How to AI’ (most recent to oldest) Delve, and the many words to ban on ChatGPT. (soon) From Youtube to your own AI. (the last one) Your ChatGPT prompt is too long. Remove em-dashes (and more). How to stop receiving the same ChatGPT answer. The new ChatGPT Atlas is live. Is it any good...
·docs.google.com·
A few hours ago, Google published a white paper laying out their vision for the Future of Learning. Here's the TLDR:
The Headline:
👉 Global learning is at a crossroads: learner outcomes have dropped sharply worldwide, and UNESCO projects a shortage of 44 million teachers by 2030.
👉 AI is positioned as *the* tool to save us from an impending education crisis BUT...
👉 The real "secret weapon" for improving education isn't the tech: it's the learning science we build into it.
According to Google, the four biggest opportunities offered by AI in education are:
🔥 Learning Science at Scale: embed evidence-based methods (retrieval practice, spaced repetition, active feedback) directly into everyday tools.
🔥 Making Anything Learnable: adjust explanations, examples and complexity to meet each learner where they are.
🔥 Universal Access: break down language, literacy and disability barriers through AI-powered translation and transformation.
🔥 Empowering Educators: free up teacher time through AI-assisted lesson planning, resource creation and differentiation.
Overall, Google's latest white paper signals an evolving ed-tech culture which centres on a more substantive partnership between ed & tech:
👉 Co-Creation: Google commits to investing in evidence-based approaches to learning design and development and to rigorous evaluation, pilot studies and educator-led research to test and demo impact.
👉 Collaborative Development: Google commits to working with schools, NGOs, researchers and learning scientists to co-design tools for learning.
You can read the white paper in full using the link in comments. Happy innovating! Phil 👋
·linkedin.com·
What happens when learners meet AI?
What happens when learners meet AI? Think of skill development as a road from beginner to expert. You normally start with basic practice, work through tough problems, reflect on what's working, and eventually reach the point where you can handle anything that comes up. Now AI has entered this picture. Depending on how we use it, we end up on completely different roads.
Use AI too early and you risk never-skilling: you skip the fundamentals and never develop real capability. Hand over too much and you risk de-skilling: abilities you once had start to fade. Copy AI outputs without thinking and you risk mis-skilling: you learn the wrong lessons and build on faulty foundations.
But there's another path. Use AI while staying critical. Question its outputs. Think through the logic. Verify the answers. This is AI-enhanced adaptive practice: AI becomes a sparring partner that helps you learn faster without replacing your own reasoning.
The difference comes down to one thing: who's in control. The people who'll succeed with AI aren't avoiding it or surrendering to it completely. They're the ones who keep thinking while using AI to compress learning cycles and test ideas faster. AI shouldn't replace your thinking. It should make your thinking better. The question isn't whether to use AI when learning. It's whether you're driving or just sitting in the passenger seat.
How are you seeing this play out in your work?
✍ Raja-Elie Abdulnour, Brian Gin, Christy Boscardin. Educational Strategies for Clinical Supervision of Artificial Intelligence Use. N Engl J Med. 2025;393(8):786-797. DOI: 10.1056/NEJMra2503232
·linkedin.com·
Embracing Transformation in a Disrupted World | Dr. Christoph Spöck
Embracing Transformation in a Disrupted World: why people are the most important factor in transformation
In a world full of uncertainty, technological change and geopolitical tension, transformation is no longer a project; it is a permanent state. The current Arthur D. Little study, "Embracing Transformation in a Disrupted World" (2025), shows impressively how deeply change is already anchored in companies, and where it fails.
➡️ 65% of companies are currently in comprehensive transformation processes.
➡️ 95% of executives believe in their success.
➡️ Yet only 7% manage to live truly continuous transformation.
➡️ The biggest stumbling block? Not technology, but people.
The study shows: without the genuine involvement of employees, strategies, structures and systems remain empty shells. Transformation does not succeed around people; it succeeds only with them.
What this means for HR and leadership:
➡️ Upskilling and re-skilling are no longer a nice-to-have, but a prerequisite for future viability.
➡️ Culture work must be part of the transformation architecture, not a side programme.
➡️ Leaders are the decisive translators between strategy and emotion. They create meaning, trust and energy.
➡️ Iterative transformation instead of big-project thinking: change becomes sustainable when organisations learn to continuously develop themselves.
📊 Particularly striking: only 5% of companies rate their learning culture as "very effective". That shows how great the need for action is, especially in HR. This is where it is decided whether transformation is carried or slowed down.
🎯 My conclusion: Technology may be the catalyst, but people are the engine of every successful transformation. Those who unleash their potential shape not just change, but the future.
Source: Arthur D. Little (2025): "Embracing Transformation in a Disrupted World". Authors: Francesco Marsella, Wilhelm Lerner, Ben van der Schaaf, Marten Zieris, Alexander Buirski, Francesco Cotrone, Alexis Ost Duchateau.
·linkedin.com·
A lot of firms - virtually all firms now - are shaping their AI strategy. Or, better, they’re adapting their strategy in light of the new capabilities we have and will have, thanks to AI.
But people have reacted to generative AI very differently. Some have embraced it with gusto. Many have shrunk away from it. The vast majority of AI experimentation and usage still happens outside of work (ChatGPT has 800m weekly, mostly consumer, users now). Most firms don't have a very good idea of where the individuals and teams that make up their workforce are.
Well, a 2x2 matrix almost always helps - so simple, so illuminating. It's my favourite mental model. In this situation, adoption and capability are two pertinent axes for thinking about this. The matrix gives a sense of where there's overconfidence, underconfidence and appropriate confidence, and what actions you might take for populations in each of the quadrants. This enables you to better serve your people, and be better served by them.
If you're interested in a 30-question survey which generates the data behind each axis and forms part of, and builds on, my AI in the Wild use case research, send me a message. ♻️ Please REPOST if people you're connected to may like to be updated on how AI is being used, out in the Wild. #aiinthewild
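The adoption x capability 2x2 can be sketched as a simple classifier over survey scores. The thresholds and quadrant labels below are illustrative (the post names the confidence states but not the cut-offs):

```python
def quadrant(adoption: float, capability: float, threshold: float = 0.5) -> str:
    """Place a person or team in the 2x2: adoption vs. capability.

    Scores are assumed normalised to 0..1; the 0.5 cut is arbitrary and
    would in practice come from the survey's own scoring scheme.
    """
    high_a = adoption >= threshold
    high_c = capability >= threshold
    if high_a and high_c:
        return "appropriate confidence (high adoption, high capability)"
    if high_a:
        return "overconfidence (high adoption, low capability)"
    if high_c:
        return "underconfidence (low adoption, high capability)"
    return "disengaged (low adoption, low capability)"

print(quadrant(0.8, 0.3))  # lands in the overconfidence quadrant
```

Each quadrant then maps to a different intervention: guardrails for the overconfident, encouragement and safe sandboxes for the underconfident, and so on.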
·linkedin.com·
AI isn’t just transforming Learning & Development.
AI isn’t just transforming Learning & Development. It’s revealing it.
For years, we’ve talked about being strategic partners - about impact, performance, and business alignment - but much of L&D has still operated as a content-production function. We’ve equated “learning” with “stuff we make”. Now AI has arrived, and it’s showing us what’s really been going on.
- If your value comes from creating courses and content, AI will replace you.
- If your value comes from solving real problems for the business, AI will amplify you.
That’s the pivot point we’re in. The new report, The Race for Impact, written by Egle Vinauskaite and Donald H Taylor, captures this moment perfectly. Within it, they describe the “Implementation Inflexion” - the shift from experimenting with AI to actually using it - and reveal what L&D teams are doing as they lead the way.
The “Transformation Triangle” lays out three models that go beyond content:
- Skills Authority: owning data and insight around workforce capability
- Enablement Partner: orchestrating systems that help others solve problems
- Adaptation Engine: continuously learning with the business to stay relevant
Each one moves L&D closer to the business and further from being an internal production house. This isn’t about tech. It’s about identity. And the teams that figure that out now will define what L&D means in the age of AI.
Hear more from Egle about this, and what it all means in practice, in the latest episode of The Learning & Development Podcast. A link to this episode is in the comments.
·linkedin.com·
TeachLM: six key findings from a study of the latest AI model fine-tuned for teaching & learning.
TeachLM: six key findings from a study of the latest AI model fine-tuned for teaching and learning. LLMs are great assistants but ineffective instructional designers and teachers. This week, researchers at Polygence + Stanford University published a paper on a new model, TeachLM, which was built to address exactly this gap. In my latest blog post, I share the key findings from the study, including observations on what it tells us about AI’s instructional design skills. Here’s the TLDR:
🔥 TeachLM outperformed generic LLMs on six key education metrics, including improved question quality and increased personalisation
🔥 TeachLM also outperformed “Educational LLMs” - e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode and Google’s Guided Learning - which fail to deliver the productive struggle, open exploration and specialised dialogue required for substantive learning
🔥 TeachLM excelled at some teaching skills (e.g. being succinct in its comms) but struggled with others (e.g. asking enough of the right sorts of probing questions)
🔥 Training TeachLM on real educational interactions rather than relying on prompts or synthetic data led to improved model performance
🔥 TeachLM was trained primarily for delivery, leaving significant gaps in its ability to “design the right experience”, e.g. by failing to define learners’ start points and goals
🔥 Overall, human educators still outperform all LLMs, including TeachLM, on both learning design and delivery
Learn more and access the full paper in my latest blog post (link in comments). Phil 👋
·linkedin.com·
AI Adoption Self-Assessment
Determine the maturity level of your AI transformation - a free self-assessment based on the Learning Ecosystem Framework
·aitransformationassessment.lovable.app·
Tell me what you click, and I'll tell you what you learn - eLearning Journal Online
An impulse piece on individual learning paths through artificial intelligence in L&D. "Customers who bought this bed linen also bought..." You surely know this principle from e-commerce platforms. Your behaviour is recorded, compared and translated into recommendations. The goal: ideally, you buy more than just that one item. Applied to learning platforms, this means: clicks, quiz results and search queries allow [...]
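The "customers who bought X also bought Y" principle the article transfers to learning paths is, at its simplest, co-occurrence counting over user histories. A minimal sketch; the course names and data are invented for illustration:

```python
from collections import Counter

def recommend(histories, item, top_n=2):
    """Rank items by how often they co-occur with `item` across histories."""
    co = Counter()
    for history in histories:
        if item in history:
            co.update(i for i in history if i != item)
    return [i for i, _ in co.most_common(top_n)]

# Toy learning histories: sets of courses completed per learner
histories = [
    {"excel-basics", "pivot-tables", "dashboards"},
    {"excel-basics", "pivot-tables"},
    {"excel-basics", "python-intro"},
]
print(recommend(histories, "excel-basics"))  # "pivot-tables" ranks first (2 co-occurrences)
```

Real learning platforms would weight clicks, quiz results and search queries rather than raw completions, but the recommendation mechanic is the same.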
·elearning-journal.com·
🚨 OpenAI just announced their own Agent Builder.
🚨 OpenAI just announced their own Agent Builder. [And no, it didn't kill 99% of startups overnight.] It's called AgentKit: a no-code, full-stack platform to build, deploy, and optimize AI agents. The UI looks surprisingly clean, but let's be clear: this doesn't instantly replace Zapier, Make, n8n, or Lindy. AgentKit is impressive, yes, but it's still early, still developer-focused, and far from being a plug-and-play automation killer.
Here's what AgentKit includes: ⬇️
1. Agent Builder: a visual interface to design and connect multiple AI agents
→ You can drag and drop steps, test them instantly, and track versions
→ Comes with built-in safety checks (guardrails)
→ It's in beta; I haven't tested it yet, but the interface looks quite polished
2. Connector Registry: a control center for all your data connections
→ Possible to manage integrations with MCP
→ Adds content and tools to keep it organized, secure, and compliant for enterprise use
3. ChatKit: provides an interface to add chat to your product
→ Turns agents into a chat interface that looks native
→ Handles threads, live responses, and context automatically
4. Evals 2.0: a system to test and improve your agents
→ Lets you run evaluations using datasets and automated grading
→ According to OpenAI, companies that used it saw up to 30% higher accuracy
None of the announced capabilities are truly new, and I doubt that building agents with OpenAI will offer a better experience than platforms like n8n or Zapier. The output still generates code, and the whole setup clearly targets developers (for now), which explains why it was introduced at DevDay rather than rolled out to the broader user base. And for enterprise-ready AI agents, you still need solid frameworks like LangChain or CrewAI, not another drag-and-drop automation layer. AgentKit is a strong step, but there's still a way to go before it becomes a production-grade enterprise solution and kills "99% of all other tools".
P.S. I recently launched a newsletter, where I share the best weekly drops on AI agents, emerging workflows, and how to stay ahead while others watch from the sidelines. It's free, and already read by 20,000+ people. https://lnkd.in/dbf74Y9E
·linkedin.com·
ChatGPT's biggest update got leaked. Tomorrow, they will announce automation:
✦ It seems to run using the API (not ChatGPT).
✦ I'm guessing it's a mix of Zapier, n8n, or Make.
✦ You can read "Agent Builder" or "Workflow".
PS: once it's live, I'll make a full guide on how-to-ai.guide. It's my newsletter, read by 132,000 people.
Here is all of the (trusted) information I gathered:
- The no-code AI era starts now -
✓ Drag-and-drop visual canvas for building agents.
✓ Templates for customer support, data, etc.
✓ Native OpenAI model access, including GPT-5.
✓ Full integration with external tools and services.
But there was always a wall: coding. Now, anyone can build advanced AI agents. No code. No friction.
Here's how it (seems to) work. Say you want to automate customer support:
1. Pick a template for a support bot. But you need it to pull info from your database, so...
2. Drag in an MCP connector. Link your data. You want human approval for refunds, so...
3. Add a user approval step. Set the rules. You want to check documents for fraud, so...
4. Drop in a file search and comparison node.
Test it. Preview it. Deploy it. All in one place.
OpenAI is more than just an API company. It is building the backbone for the no-code AI economy. Now, anyone can create agents that work across systems, talk to users, and make decisions. The age of visual AI automation is here.
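The leaked step sequence reads like a linear pipeline of nodes, some of which can halt the run (the approval step). A rough sketch of that concept in plain Python; the node names and structure are my guess at the idea, not OpenAI's actual Agent Builder format:

```python
from typing import Callable

# A workflow is an ordered list of steps; each step transforms a shared
# context dict and may halt the run (e.g. a human-approval gate).
Step = Callable[[dict], dict]

def support_bot(ctx: dict) -> dict:
    """Illustrative agent node: drafts a reply for the ticket."""
    ctx["reply"] = f"Looking into ticket {ctx['ticket_id']}"
    return ctx

def approval_gate(ctx: dict) -> dict:
    """Illustrative approval node: refunds wait for a human sign-off."""
    if ctx.get("needs_refund") and not ctx.get("human_approved"):
        ctx["halted"] = True
    return ctx

def run_workflow(steps: list[Step], ctx: dict) -> dict:
    """Execute steps in order, stopping as soon as one halts the run."""
    for step in steps:
        ctx = step(ctx)
        if ctx.get("halted"):
            break
    return ctx

result = run_workflow([support_bot, approval_gate],
                      {"ticket_id": 42, "needs_refund": True})
print(result["halted"])  # True: the refund waits for human approval
```

A visual builder is essentially an editor for this kind of step list, with connectors and guardrails as prebuilt node types.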
·linkedin.com·
Want to know what L&D is really doing with AI? Well, now you can. Today, Egle Vinauskaite and I publish our third annual report on AI in L&D. We’ve listened to more than 600 people in 53 countries, and there’s plenty to share.
To learn more, download the 54-page report AI in L&D 2025: The Race For Impact (link in comments and in my bio). Inside, you’ll find:
· 10 ‘snapshot’ mini case studies
· 12 pages of detailed analysis of how L&D is using AI
· 12 pages of quantitative analysis
· 14 pages of in-depth case studies from Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group
· 1 framework: the Transformation Triangle
As AI makes it easier and faster to generate content, we explore the profound implications for L&D. And all of this is illustrated with ample quotes from the people out there doing the work. This isn’t an armchair exercise. We’ve gone through countless interviews and around 20,000 words of text that our respondents generated in the survey describing their work. This is a vivid illustration of what’s happening with AI in L&D today. We hope it will provide insight, information and inspiration.
My key takeaway: we’ve passed an inflexion point. For the first time, over half our respondents said they weren’t experimenting with AI, but actually using it. That’s a significant shift from last year. AI has moved from being a novelty to being part of L&D’s regular toolkit.
And look at how they are using it. Sure, content creation dominates. But look at the table of how things have changed since last year. Again, content dominates the top four places, but just beneath, there’s one extraordinary change: qualitative data analysis has leapt from 8th last year to 5th this year, the single biggest change from year to year. This single point illustrates something we see across all our analysis, and in all of our case studies: a shift towards more sophisticated use, with an increased focus on data, analysis and research.
The featured case studies illustrate some of these inventive new uses perfectly. To learn more, download the report now. Our thanks to our report sponsors, OpenSesame, Speexx and The Regis Company for making this report possible.
To download, click the link in my profile, or go to the first comment.
·linkedin.com·
What's happening with AI in L&D? Well, here it is — the 2025 edition.
Today, Donald H Taylor and I are releasing our third annual report on AI in L&D: The Race for Impact. If you’ve been wondering whether you’re behind, which AI uses you haven’t yet tried, or how to take your work further, we’ve put this report together to give you answers and ideas. Inside you'll find:
➡️ Fresh data on the most popular AI uses in L&D, how patterns are shifting, and what barriers teams still face
➡️ 12 pages detailing AI uses across learning design and content development, internal L&D ops, strategy and insight, and workforce enablement to inform and inspire your practice
➡️ 14 pages of in-depth AI in L&D case studies by Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group
➡️ A framework, the Transformation Triangle, exploring what AI’s move into “traditional” L&D work means for the function’s future role
600+ respondents. 53 countries. 20,000+ words in write-in responses. Days of interviews. Countless hours of deliberations and coffees trying to make sense of how the industry has evolved over the past 3 years and what it means for the road ahead.
These are extraordinary numbers and they wouldn’t exist without the community behind them. Thank you to everyone who took the time to complete the survey and share thoughtful answers. Thank you to our case study contributors, who gave hours of their own time to document their practice for the benefit of the wider industry. Thank you to our sponsors OpenSesame, The Regis Company and Speexx who made this work possible. And thank you to Don: what started as a coffee conversation has grown into a three-year collaboration that keeps pushing both of us (and hopefully the field) forward.
The full report is free to download (link in the comments).
P.S. Below is a snapshot of the most common AI use cases we mapped this year. It gives a sense of where the field is and might spark a few new ideas 🙌
♻️ Share this post so more teams can find these insights and build on each other’s work.
·linkedin.com·
This article dropped a few days ago 👉 https://lnkd.in/djktVNKi Main talking points:
💡 Companies are adopting AI like crazy, but they should invest just as much in preparing people to work with AI. Apparently, that doesn't happen nearly as much as it should
💡 The research presented in the article highlights that Gen AI Tutors outperform classroom training by 32% on personalization and 17% on feedback relevance
💡 Gen AI Tutors create space for self-reflection, which is awesome
💡 Learners finished training 23% faster while achieving the same results
💡 Frontline workers, culture change, and building AI competence were mentioned as applications for Gen AI
My thoughts:
💭 I think one of the hardest decisions we will face is where we should use Gen AI Tutors and where we should keep human interaction as part of learning
💭 The "results" in the research presented were mostly, imho, still vanity metrics. I'm looking forward to seeing research where the analysis of results is more comprehensive (spanning a longer timeline, with clear leading indicators, etc). Until then, I can't fully be convinced that Gen AI Tutors truly perform better at growing cognitive and behavioral skills
💭 While I find the culture change application interesting, I do hope Gen AI Tutors won't be used to absolve leaders of the responsibility THEY have for building cultures. I can't see a good result coming out of this.
Very curious to hear your thoughts 👀 #learninganddevelopment
·linkedin.com·