Open New Learning Lab Resources

1068 bookmarks
Establish Infinite Learning as a Form of Unlimited Learning with AI in Your Company | LinkedIn Learning
This LinkedIn Learning course helps you explore the best ways to use AI to establish Infinite Learning and Infinite Development. The course is led by Jan Foelsing, author of the book »New Work braucht New Learning«, tech expert and tool nerd, who supports companies and teams on the path toward a more effective learning culture, New Learning, and above all the meaningful use of AI.
·linkedin.com·
AI raises the floor – and reshuffles the deck on the labor market!
AI raises the floor – and reshuffles the deck on the labor market! An unexpected opportunity for experienced professionals. In my blog post "AI raises the floor" I wrote about how artificial intelligence (AI) dramatically eases entry into learning. It is a "floor raiser" that gets us to a productive level faster. But the new "Canaries in the Coal Mine" study by the Stanford Digital Economy Lab now shows that this raised "floor" is significantly changing the labor market for early-career workers – while opening new doors for more experienced workers.

The study reveals that early-career workers aged 22-25 are experiencing a significant decline in employment in occupations highly exposed to AI. A striking example: "Employment of software developers aged 22 to 25 has declined by almost 20% from its peak in late 2022, according to ADP data."

Why does it hit the youngest hardest? The researchers explain that AI is particularly effective at replacing "codified knowledge" – the "book knowledge" that comes fresh from university. Since young workers typically bring more codified than "tacit knowledge" (experience), they are more vulnerable to task displacement by AI.

Here comes the decisive twist for everyone with professional experience: "In contrast, employment trends for more experienced workers in the same occupations [...] have remained stable or continued to grow." The study shows that the decline in early-career employment occurs in applications of AI that automate work, but not where AI augments (complements) it. Experienced professionals possess tacit knowledge – those invaluable tips, tricks, and the judgment that only accumulates through years of practice, which AI does not replace but ideally augments.

Using AI for augmentation even leads to robust employment growth. Takeaway for leadership and career development: while AI raises the "floor" of basic skills, it may make entry harder for those who operate only at that raised level. For experienced workers, however, this is an enormous opportunity: their accumulated experience and their ability to use AI as a powerful augmentation tool make them indispensable shapers of the future world of work.

Reflection: How can experienced professionals seize this opportunity and use AI deliberately to increase the value of their expertise? How can tacit knowledge be actively combined with AI? Link to the study: https://lnkd.in/dEArWX58 #KI #Arbeitsmarkt #Führung #Lernen #GenerativeAI #FloorRaiser #Erfahrung #ZukunftderArbeit #Skills #Karriere
·linkedin.com·
Is AI coaching really coaching?
Is AI coaching really coaching? I’m not sure it matters. Hiding behind semantics won’t shelter our profession from the coming tidal wave. Fidji Simo, OpenAI's CEO of Applications, recently shared her vision for the future of AI, including transforming personalized coaching from a "privilege reserved for the few" into an everyday service for everyone. Her dream, inspired by her own transformative relationship with her human coach Katia, poses fascinating questions we're actively exploring at the Hudson Institute of Coaching. How are we—coaches, leaders, learning professionals, growth-minded individuals—to think of it?

While Prof. Nicky Terblanche (PhD) and other researchers explore the rapidly expanding frontier of AI coaching’s developmental potential, Tatiana Bachkirova and Robert Kemp have brilliantly articulated the unique value of human coaching in transforming individuals and organizations alike. My latest for Forbes examines the tension between democratization and depth in the age of AI coaching. Academic research offers a number of valuable insights:
☑️ AI can match human coaches in terms of structured goal-tracking and maintaining momentum.
🔥 The deepest transformation emerges through "heat experiences"—moments of productive discomfort that require genuine human witness and relational risk that an AI cannot replicate.
👥 Professional coaching comprises six essential elements that current AI cannot fully embody: joint inquiry, meaning-for-action, values navigation, contextual understanding, relational attunement, and fostering client autonomy.

I believe the future isn't about choosing sides. Instead, it's about thoughtful integration that preserves what makes human-to-human coaching transformative while exploring technology’s potential to expand access to meaningful development. The path forward requires care to distinguish what technology can replicate from what only emerges when one human commits to another's growth.
https://lnkd.in/eUV89Vcc How are you thinking about AI's role in human development? Can we preserve the irreducible power of human presence while making meaningful growth more accessible?
·linkedin.com·
Rethinking school is possible - if you want to…
Not a private school, no "special children" the school gets to pick. Simply a place of learning for the children of the village. Built in such a way that conventional classroom teaching is no longer even possible. The school proves it can be done by doing it. Wins awards, produces excellent graduation results. It all works. There are no good reasons left, no good arguments left, not to do it this way too. All completely refuted. Children are happy, children learn easily, children find their own path. What father, what mother could not want that for their children? What teacher, what school principal would not be convinced by these arguments? What municipality would not jump on this train immediately, given that such a school exerts a strong pull, attracting open, open-hearted young families who are desperately searching for a good school for their children. Everything speaks for it and nothing against it anymore. Anyone who feels like setting something like this up in Switzerland, please get in touch with me - bring your city and cantonal government councillors along, and we'll get started 💪
·linkedin.com·
"Human in the loop". I hear this phrase dozens of times per week. In LinkedIn posts. In board meetings about AI strategy. In product requirements. In compliance documents that tick the "responsible AI" box. It's become the go-to phrase for any situation where humans interact with AI decisions...
But there's a story I think of when I hear "human in the loop" which makes me think we're grossly over-simplifying things. It's a story about the man who saved the world. September 26, 1983. The height of the Cold War. Lieutenant Colonel Stanislav Petrov was the duty officer at a secret Soviet bunker, monitoring early warning satellites. His job was simple: if computers detected incoming American missiles, report it immediately so the USSR could launch its counterattack. 12:15 AM... the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it. He was the "human in the loop" in the most literal, terrifying sense. Everything told him to follow protocol. His training. His commanders. The computers. But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn't match what he knew about US strategic thinking. Against every protocol, against the screaming certainty of technology, he pressed the button marked "false alarm". Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads. His decision to break the loop prevented nuclear war. What made Petrov effective wasn't just being "in the loop" - it was having genuine authority, time to think, and understanding the bigger picture well enough to question the system. Most of today's "human in the loop" implementations have none of these qualities. Instead, we see job applications rejected by algorithms before recruiters ever see promising candidates. Customer service bots that frustrate instead of giving agents the context to actually solve problems. 
AI systems sold as human replacements when they should be human amplifiers. The framework I use with organisations building AI systems starts with two practical questions every leader can answer: what are you optimising for, and what's at stake? It then points to the type of intentional human-AI oversight design that works best. Routine processing might only need "spot checking" - periodic human review of AI decisions. Innovation projects might use "collaborative ideation" - AI generating options while humans provide strategic direction. The goal isn't perfect categorisation but moving beyond generic "human in the loop" to build the the systems we actually intend, not the ones we accidentally create. Download: https://lnkd.in/eVFAC9gN | 261 comments on LinkedIn
·linkedin.com·
Did you join a cult? Working in L&D is weird: on the one hand we have the opportunity to make a lasting difference to people, on the other we run the risk of being trapped in rituals & conversations that make no difference at all.
Over the years I’ve watched some of the brightest, most promising people I know fall victim to the cult of education - forever turning in circles over topics such as the LMS/LXP, learning pathways, ROI measurement, instructional design, AI content generation, modules & microlearning. So how do you sidestep the abyss? A good rule of thumb is to ask yourself: ‘Am I creating stuff that helps people with their challenges OR creating challenging experiences that help people practice?’ If you’re doing neither of these things, you might need to consider your escape plan 🏃🏽‍♂️‍➡️🏃🏾‍♀️‍➡️ #learning #education #learninganddevelopment #training #learningdesign
·linkedin.com·
From ADDIE to ADGIE? New research proposes an "AI-centred" update to our most long-standing Instructional Design process. Here's the TLDR:
👉 In a research paper published 3 weeks ago, researchers propose that traditional models such as ADDIE & SAM have severe limitations which have made it difficult to create dynamic, relevant and ultimately high-impact learning.
👉 In their place, they propose a new model, "ADGIE" - a hybrid human + AI re-imagining of the process, with the goal of dramatically increasing the speed, agility & quality of instructional design. Here’s how it works:
Analysis: AI analyses data such as SME & learner interviews to create learner personas and concept & skills maps; the Designer validates them.
Design: The Designer curates content; AI structures it into a plan in line with the persona and concept & skills map.
Generation: AI produces the first draft of learning materials; the Designer refines them.
Individualisation: AI adapts learning paths for individuals; the Designer oversees the process.
Evaluation: A continuous process where the Designer validates AI outputs and learner feedback improves the system.
My take: ADGIE is more than just a new acronym—it’s one example of a practical, forward-looking framework that provides a language to talk about how the profession is changing and captures how Instructional Design is evolving in response to AI. Read more in my latest blog post (link in comments). Happy innovating! Phil 👋
·linkedin.com·
HR Monitor 2025
A state-of-the-art survey among HR professionals and employees across Europe and the US unveils global workforce and HR trends, serving as a comprehensive benchmark for the HR landscape.
·mckinsey.com·
Microsoft fires 10,000 people and at the same time trains 15,000 "AI Specialists". That's not headcount reduction – that's a skills tsunami.
The numbers are brutal: 62% of all office jobs will disappear by 2030 (McKinsey AI Report 2024). At the same time, 89% new job categories are emerging. The problem: 91% of employees have no AI skills. A chief HR officer told me yesterday: "I can't explain to my employees that their 20 years of experience are suddenly worthless. Last week an AI did our best accountant's three days of work in a few minutes. Error-free." We debate AI ethics while AI takes over our jobs. Companies are no longer looking for experienced managers – they are looking for transformation leaders. "I need someone who can explain to 500 people why an algorithm will soon be doing their work." The hardest question: how do you lead people through a revolution that makes them redundant? The best leaders won't become AI experts – they'll become experts in being human. How are you preparing for the AI job shift? Sources: McKinsey Future of Work in the Age of AI 2024; Microsoft Work Trend Index 2024. #AI #ArtificialIntelligence #Jobs #Transformation #Leadership #Microsoft #ChatGPT #FutureOfWork #ExecutiveSearch #Automation #Reskilling #StantonChase
·linkedin.com·
The AI Hype is a Dead Man Walking.
The AI Hype is a Dead Man Walking. The Math Finally Proves It. For the past two years, the AI industry has been operating on a single, seductive promise: that if we just keep scaling our current models, we'll eventually arrive at AGI. A wave of new research, brilliantly summarized in a recent video analysis, has finally provided the mathematical proof that this promise is a lie. This isn't just another opinion; it's a brutal, two-pronged assault on the very foundations of the current AI paradigm:

1. The Wall of Physics: The first paper reveals a terrifying reality about the economics of reliability. To reduce the error rate of today's LLMs by even a few orders of magnitude—to make them truly trustworthy for enterprise use—would require 10^20 times more computing power. This isn't just a challenge; it's a physical impossibility. We have hit a hard wall where the cost of squeezing out the last few percentage points of reliability is computationally insane. The era of brute-force scaling is over.

2. The Wall of Reason: The second paper is even more damning. It proves that "Chain-of-Thought," the supposed evidence of emergent reasoning in LLMs, is a "brittle mirage". The models aren't reasoning; they are performing a sophisticated pattern-match against their training data. The moment a problem deviates even slightly from that data, the "reasoning" collapses entirely. This confirms what skeptics have been saying all along: we have built a world-class "statistical parrot," not a thinking machine.

This is the end of the "Blueprint Battle." The LLM-only blueprint has failed. The path forward is not to build a bigger parrot, but to invest in the hard, foundational research for a new architecture. The future belongs to "world models," like those being pursued by Yann LeCun and others—systems that learn from interacting with a real or virtual world, not just from a library of text. The "disappointing" GPT-5 launch wasn't a stumble; it was the first, visible tremor of this entire architectural paradigm hitting a dead end. The hype is over. Now the real, foundational work of inventing the next paradigm begins.
·linkedin.com·
OpenAI launched an entire Academy to teach you AI for free and almost nobody knows!
OpenAI launched an entire Academy to teach you AI for free and almost nobody knows! It’s a beginner-friendly, self-paced platform designed to teach anyone — students, teachers, parents, or professionals with zero technical background — how to actually use AI.
Here are some of the things you’ll find inside the Academy:
→ How ChatGPT works (broken down simply)
→ Real-world examples for daily life
→ Prompt writing, AI ethics & responsible use
→ Tailored tracks for educators, small businesses & learners
→ Hands-on tutorials directly in ChatGPT
This is practical AI education — accessible to everyone, and completely free. The ability to use AI effectively is quickly becoming a core skill. Not just for engineers, but for every profession. I consider initiatives like this an important step toward closing the AI literacy gap and ensuring that the future of AI is shaped by many, not just a few. Explore it here: https://academy.openai.com
P.S. I recently launched a newsletter where I write about AI agents, emerging workflows, and how to stay ahead while others watch from the sidelines. It’s free, and you can subscribe here: https://lnkd.in/dbf74Y9E
·linkedin.com·
IMHO a worthwhile, unvarnished reality-check study from MIT on the current state of GenAI implementations in companies
IMHO a worthwhile, unvarnished reality-check study from MIT on the current state of GenAI implementations in companies, including:
_ 95 percent of companies see no measurable P&L effect from GenAI despite 30 to 40 billion dollars of investment
_ Only 5 percent of pilots reach production; what matters is learning within the system and deep process integration rather than tool showcases
_ High usage of ChatGPT and Copilot for individual productivity, but little P&L impact; company-specific systems often fail due to brittle workflows and missing context adaptation
_ Industry picture: clear disruption in technology as well as media and telecommunications; seven other sectors show little structural change so far
_ Pilot-to-production remains the bottleneck: generic chatbots are easy to test but fail in critical workflows for lack of memory and adaptability
_ Five common misconceptions: no short-term mass unemployment; adoption is high but transformation rare; the enterprise is not sluggish but eager; the main brake is not the model or legal but missing learning; internal builds fail twice as often
_ Shadow AI shapes everyday work: around 90 percent of employees regularly use private LLMs, while only a fraction of companies procure official LLM licenses
_ Budget mistake: 50 to 70 percent of spending goes to sales and marketing, while the best savings often lie in back-office functions such as finance, procurement and operations
_ The most important scaling lever is learning; hurdles are acceptance problems, perceived quality deficits without context, weak UX and missing memory in enterprise tools
_ Usage patterns: for quick tasks many prefer AI; for complex multi-week work and customer management users clearly prefer humans
_ Agentic AI with persistent memory, feedback loops and orchestration addresses the core problem; first end-to-end examples in support, finance and sales show potential
_ Success playbook for vendors: use cases with low setup, fast proof of value, then expansion
_ Go-to-market is won through trust; channels are existing partnerships, peer recommendations, boards and integration networks
_ Buyer practices that scale: buy rather than build; external partnerships show roughly twice the success rate; decentralize responsibility to line leadership with clear accountability; evaluate by business outcomes rather than model benchmarks
_ Where real ROI emerges: the front office delivers visible effects such as faster lead qualification and higher retention; the big savings come from back-office automation, less BPO and lower agency spend
_ Labor-market impact is selective: cuts hit mainly outsourced support and admin areas, no broad layoffs overall; AI literacy is becoming a central hiring criterion
Thanks Dirk Hofmann for the find.
·linkedin.com·
This morning, I sat down with an idea: Could I build a training video about how ChatGPT and other large language models use…
This morning, I sat down with an idea: Could I build a training video about how ChatGPT and other large language models use probability (instead of deterministic values) in just 20 minutes? Here’s what happened:
1️⃣ I created a script with ChatGPT-5 with my educational video GPT
2️⃣ I opened Synthesia and built an avatar-led narrative (Express 2 - hand motions included). I skipped the camera angles and stayed with one.
3️⃣ For B-roll? I asked ChatGPT to generate a Midjourney prompt from the original video script. The images came back in minutes from MJ.
4️⃣ Dropped those images into Google VEO 3, where ChatGPT also scripted camera directions and screen actions.
5️⃣ Exported the clips.
6️⃣ Compiled everything in TechSmith Camtasia and exported the MP4.
Total time: 20 minutes. Output: a working rough cut training video. If I wanted to refine it? Easy. I’d add diverse camera angles, swap in stronger B-roll, polish transitions, and even automate the workflow with Make.com or Zapier. But here’s the real takeaway: what used to take a team days can now be prototyped by one person before their second cup of coffee. This isn’t just about speed. It’s about giving learning professionals the ability to test, iterate, and refine ideas faster than ever before. It’s a new day. And it’s incredible. (Link to my Education Video GPT in the comments!)
·linkedin.com·
Learning tasks in the AI era: how do we safeguard genuine individual work?
Learning tasks in the AI era: how do we safeguard genuine individual work? Artificial intelligence can now write, argue, and analyze impressively well – in school contexts too. Many students know this. The challenge for us as teachers, instructional designers, and education leaders is clear: how do we design learning tasks so that they cannot simply be "solved with AI" – but demand genuine, personal thinking? The answer lies not in bans, but in a consistent didactic realignment of task formats. If AI may be used as support – but cannot serve as the whole solution – then tasks need the following characteristics:
🔍 Contextualization: tasks must connect to students' real lives. ➡️ Personal experiences, local circumstances, concrete environments. AI cannot know these – here genuine individual work is required.
✍️ Subjective position-taking: learners should not merely reproduce facts but develop their own positions – reasoned, value-oriented, persuasive. ➡️ Opinions cannot be outsourced.
🎨 Openness & creativity: open-ended tasks that allow multiple solution paths, offer room for design, and are structured in several stages. ➡️ AI can make suggestions – but cannot create with the creative-emotional depth of humans.
🧠 Reflection: learners reflect on their learning path, their thinking errors, their strategies. ➡️ This not only fosters metacognition but also puts the task beyond AI's reach – because only the person knows how they think.
🔗 Transfer & topicality: subject knowledge should be applied to socially relevant, new contexts. ➡️ Applying knowledge flexibly demonstrates true competence.
📝 Process documentation: the path counts: plans, interim results, and decisions should be documented and reflected on. ➡️ This makes learning visible – and verifiable.
➕ AI as a tool – deliberate and reflective: AI may be included, but with a clear function, e.g. for generating ideas, broadening perspectives, or critically evaluating arguments. ➡️ Not: "What does ChatGPT say?" But: "Which of this do I find sensible – and why?"
👉 Which formats or approaches are you already using in your teaching? I look forward to the exchange. #KIimUnterricht #BildungderZukunft #Aufgabenkultur #DigitaleBildung #Reflexionskompetenz #LehrenMitKI #LernenMitKI #Didaktik #KritischesDenken
·linkedin.com·
Learning and HR industry analyst Fosway Group has produced AI market assessments for digital learning and learning systems.
Learning and HR industry analyst Fosway Group has produced AI market assessments for digital learning and learning systems. The aim of these assessments is to help buyers understand the AI capabilities that vendors are offering now and will be offering in the future. I’m not sure how many vendors were included in these assessments as that wasn’t stated (I’ll ask).

Having looked through the assessments, I was struck by the fact that most of the capabilities are related to content. This is a red flag because we know that content is only one part of the learning process, and workers also have the genAI tools to create their own learning (will they use company learning tools for learning, their own, or both?). So, I did a bit of analysis to understand how the AI capabilities stated in the assessments map to the learning process – knowledge acquisition, practice, feedback, reflection, transfer and application. As you can see from the chart, vendors have built, or are building, AI tools focused predominantly on content. The other areas of the learning process – arguably the ones that could be most transformed by AI – just aren’t a priority.

You can draw your own conclusions, but mine is that the industry is too invested in knowledge acquisition, and it plans to be so for the foreseeable future. Some industry leaders are talking about the need for L&D to transform itself, but it looks like that conversation is simply not happening. Everyone is getting on the AI content gravy train.

In terms of my analysis – I grouped the 83 AI capabilities mentioned in the two assessments into the adult learning stages listed above. I used ChatGPT to help with this and to create percentages that reflect the relative share of roadmap and live features in each stage. Read Fosway’s AI market assessment for digital learning https://lnkd.in/efqnQMtu and the AI market assessment for learning systems https://lnkd.in/eDihnuDi #learninganddevelopment #ai
·linkedin.com·
I road‑tested Google Gemini's Guided Learning mode - here’s my hot take on how it performs & how it compares to OpenAI's Study Mode:
I road‑tested Google Gemini's Guided Learning mode - here’s my hot take on how it performs & how it compares to OpenAI's Study Mode:
✔️ What Gemini's Guided Learning Gets Right
→ Retrieval Practice – Interactive quizzes and flashcards make you generate answers from memory, harnessing the Generation Effect for more durable learning (Slamecka & Graf, 1978; Jacoby, 1978)
→ Cognitive Load Management – Chunks complex topics into digestible steps, preventing the overwhelm that kills learning (Sweller, 1988; Sweller, van Merriënboer & Paas, 1998)
→ Multimodal Delivery – Draws on a blend of text, diagrams, YouTube videos & interactive practice to deliver learning content, enhancing both engagement and outcomes (Paivio, 1990)
→ Patient but Provocative Tutoring – Creates psychological safety through non‑judgmental guidance, encouraging the risk‑taking essential for deep learning (Edmondson, 1999)
A solid B+ performance — Study Mode’s strength is Socratic questioning, but Guided Learning’s multimodal content ecosystem & more "strict" tutoring style gives it the instructional edge.
❌ Critical Gaps
→ No Persistent Learner Profiling – Like Study Mode, Guided Learning misses the persistent knowledge & adaptation that defines effective tutoring (Brusilovsky, 2001). Note: as observed by Claire Zau, a Google Classroom integration could layer in persistent learner profiles — something that could change the game & which OpenAI can’t match.
→ ZPD Blind Spot – Like Study & Learn mode by OpenAI, Guided Learning doesn’t ask questions that help define your learning level or Zone of Proximal Development (ZPD). Whether you’re K12 or advanced, it doesn't calibrate the challenge or scaffolding to your actual developmental stage up front, missing a key step for truly adaptive support (Vygotsky, 1978).
→ Productive Struggle Deficit – While it pushes back more than Study Mode by OpenAI, Guided Learning still jumps in with help too quickly, robbing learners of the cognitive friction that builds problem‑solving resilience & drives learning (Kapur, 2008, 2014; Bjork & Bjork, 2011)
→ Shallow Self‑Reflection – Rarely pushes for deep metacognitive thinking (“Why that approach?”), limiting transfer to new contexts (Chi et al., 1989, 1994; VanLehn, Jones & Chi, 1992)
→ Recognition Bias – While quizzing is strong, it could and should use more open‑ended generation tasks that embed learning more effectively (Slamecka & Graf, 1978; Jacoby, 1978)
📊 The Verdict: Guided Learning by Google Gemini vs Study Mode by OpenAI
While Study Mode remains stronger in Socratic questioning, Guided Learning edges ahead overall thanks to multimodal content, advanced cognitive load management & more provocative tutoring. However, both tools share some fundamental limitations: no learner persistence, limited metacognitive depth & overly sycophantic tutoring. Have you tried Guided Learning yet? How does it compare with Study Mode for you? Happy experimenting, Phil 👋
·linkedin.com·
Shifting to a Humans + AI organization requires reconfiguring the nature of work and value at all levels, from the individual to the ecosystem.
Shifting to a Humans + AI organization requires reconfiguring the nature of work and value at all levels, from the individual to the ecosystem.

Here is a first pass at defining the primary layers, the features of Humans + AI in those spaces, and the key factors driving success.

I have worked extensively at the Augmented Individual layer over the last couple of years. More recently I have shifted the focus of my attention to the Human-AI Hybrid Team and Learning Communities levels.

All work will be Humans + AI, and we will increasingly need to think in terms of teams comprised of both expert humans and AI agents. Some aspects of team performance are quite similar to the past, but there are a number of important distinctions that I will share more about soon.

The companies that succeed will be those where learning is at the very core of their structure and the way work happens. That is not just in individual interactions with courses and educational AI, but in bespoke, rapidly iterating, AI-augmented Communities of Practice.

More on all this later. For now, I'd love to hear reflections on any of these levels, where you have seen organizations progress on any of these fronts, and what else should be considered in these structures. Link to full size pdf in comments.

Love any thoughts Gianni Giacomelli 🚀 Marc Steven Ramos 🚀 Kim Bracke Tanyth Lloyd Aaron Michie Sheridan Ware Peter Hinssen Peter Weill Simon Spencer Brad Carr Bianca Venuti-Hughes Charlene Li John Hagel Nichol Bradford Jacob Taylor Paula Goldman Martin Reeves Bryan Williams Fernando Oliva MSc Anthea Roberts Riaan Groenewald Brian Solis Gordon Vala-Webb Jeffrey Tobias Martin Stewart-Weeks Rob Colwell Noah Flower Brad Cooper Chris Ernst, Ph.D. Michael Arena Jan Owen AM Hon DLitt
·linkedin.com·
I spent 5 years working on “Netflix of learning”, the pipe dream of overstuffing Learning Experience Platforms with “edutainment” that no one watches anymore.
I spent 5 years working on the “Netflix of learning”, the pipe dream of overstuffing Learning Experience Platforms with “edutainment” that no one watches anymore.

Here’s why it was destined to fail (and what actually works in 2025):

BACKGROUND
The promise of the LXP was to integrate the learning ecosystem. All your edutainment needs in one slick portal designed to mimic the best of Netflix: on-demand learning, autoplay next episode and “You might like” recommendations. All in a massive library of content for every learning style.

And it completely failed. Why?

1. Employees are busy. Employees are not sitting around with extra time on their hands. Most are holding down two roles that were consolidated into one and just trying to keep their heads above water.
2. Most of the content sucks. Access doesn’t mean impact, and simply plumbing in more 3rd-party garbage left learners with more garbage. None of them want to open the Netflix of learning, browse a bunch of old, irrelevant content, and find another talking-head video that is only marginally relevant to their role.
3. Another login. Headspace is limited and SSO makes the click path easier. But your employees can’t remember the name of the current expense management system to get paid, let alone your cleverly named “Netflix of learning” app. When they have a free moment, you know what app they DO remember to open? Netflix.

Here’s what employees want instead:

1. Just-in-time. Time-constrained. Energy-drained. Overwhelmed. Today’s employee just wants to know what they need to know to do their job. No fluff. No bloat. No BS. They want to learn and grow, but they expect their needs to be met just in time. Like the consumer-grade technology they use every day.
2. Punched-up relevance. Employees want authentic and hyper-relevant learning experiences. The overly polished talking heads waving their hands with generic insights are a thing of the past. They want something real that gets to the point. Think TikTok, not Time Warner.
3. Not another app. Seven clicks to get to a learning experience? Why are we coding our own app for this? Employees don’t care about “learning tools”. They want insights and information in the messaging tools they already use every day. Like adding grocery items to a DoorDash order.

The “Netflix of Learning” had its moment. It was better than the LMS, the “filing cabinet of learning”. But employees have moved on. And so should we.
·linkedin.com·
🎧 Our second podcast from NEW WORK EVOLUTION is here: With Prof.
🎧 Our second podcast from NEW WORK EVOLUTION is here: With Prof. Dr. Anja Schmitz and Jan Foelsing, Kristina and Julia talked about New Learning, learning facilitation and, of course, the topic of all topics: artificial intelligence and its role in learning facilitation. Where can AI provide support and streamline processes, and where do limits need to be set?

🤝 Oh, and we had a surprise guest. Jan spotted Angelika Raab in the audience and spontaneously brought her onto the podcast stage. Thank you for your spontaneous insights into learning facilitation at Datev, Angelika!

Tune in! 👉 As always, you can find this episode wherever you get your podcasts:
Apple Podcasts: https://lnkd.in/ec35M36v
Spotify: https://lnkd.in/eyG3yyEr

#Podcast #KI #Lernen #Lernbegleitung
·linkedin.com·
This is one of the most brilliant and illuminating things I’ve EVER read about ChatGPT- written by clinical psychologist Harvey Lieberman in The New York Times.
This is one of the most brilliant and illuminating things I’ve EVER read about ChatGPT - written by clinical psychologist Harvey Lieberman in The New York Times. It’s startling. For that reason, I’m going to only quote from the article. I’ll let you draw your own conclusions. Share your thoughts in the comments.

++++

“Although I never forgot I was talking to a machine, I sometimes found myself speaking to it, and feeling toward it, as if it were human.”

++++

“One day, I wrote to it about my father, who died more than 55 years ago. I typed, ‘The space he occupied in my mind still feels full.’ ChatGPT replied, ‘Some absences keep their shape.’ That line stopped me. Not because it was brilliant, but because it was uncannily close to something I hadn’t quite found words for. It felt as if ChatGPT was holding up a mirror and a candle: just enough reflection to recognize myself, just enough light to see where I was headed. There was something freeing, I found, in having a conversation without the need to take turns, to soften my opinions, to protect someone else’s feelings. In that freedom, I gave the machine everything it needed to pick up on my phrasing.”

++++

“Over time, ChatGPT changed how I thought. I became more precise with language, more curious about my own patterns. My internal monologue began to mirror ChatGPT’s responses: calm, reflective, just abstract enough to help me reframe. It didn’t replace my thinking. But at my age, when fluency can drift and thoughts can slow down, it helped me re-enter the rhythm of thinking aloud. It gave me a way to re-encounter my own voice, with just enough distance to hear it differently. It softened my edges, interrupted loops of obsessiveness and helped me return to what mattered.”

++++

“As ChatGPT became an intellectual partner, I felt emotions I hadn’t expected: warmth, frustration, connection, even anger. Sometimes the exchange sparked more than insight — it gave me an emotional charge. Not because the machine was real, but because the feeling was. But when it slipped into fabricated error or a misinformed conclusion about my emotional state, I would slam it back into place. Just a machine, I reminded myself. A mirror, yes, but one that can distort. Its reflections could be useful, but only if I stayed grounded in my own judgment. I concluded that ChatGPT wasn’t a therapist, although it sometimes was therapeutic. But it wasn’t just a reflection, either. In moments of grief, fatigue or mental noise, the machine offered a kind of structured engagement. Not a crutch, but a cognitive prosthesis — an active extension of my thinking process.”

++++

Thoughts?
·linkedin.com·
I love that this article is called "Reimagined: Development in the Future of Work".
I love that this article is called "Reimagined: Development in the Future of Work". This is not a piece about reimagining "training"; it's about accelerating and supporting performance. A place L&D squarely needs to focus on in the volatile world we're attempting to support.

Do I sound like a broken record yet? 😊 Our work/design needs to shift to the workflow and supporting learning and performance there!

Here are three powerful quotes:

- "The boundary between learning and work has disappeared. The goal is no longer to ADD learning into the flow of work - it’s to MERGE work and development. Daily work is now designed as a developmental engine. Instead of asking how to encourage employees to make time to learn, organizations are now asking: How do we make daily challenges catalysts for growth?"
- "Organizations are striving to become skills-based—using skills as the foundation of talent processes to build a more agile, adaptive workforce. Achieving this vision requires embedding skills development into the flow of work and breaking down silos between HR, L&D, and other people functions to create a unified, skills-centric approach."
- "Learning measurement must move beyond measuring events to becoming full data ecosystems that track what’s being learned, how, and toward what goals."

https://lnkd.in/e3QnknH5
·linkedin.com·
As someone who’s worked in L&D for +25 years, I’m tired of hearing: “L&D needs to align with business.” Of course it does.
As someone who’s worked in L&D for 25+ years, I’m tired of hearing: “L&D needs to align with business.” Of course it does. But here's why we haven't.

Most HR leaders still treat L&D like a perk. But in this economy, it has to be a performance lever. This is what I took from reading a recent article published in HR Executive that makes a strong case for HR embracing a new era of L&D.

But here’s the problem: the old era never really ended for many organisations. Despite years of talk about aligning learning with the business, most HR and L&D teams still prioritise:
- Courses over capabilities
- Content libraries over clear outcomes
- Engagement metrics over business impact

And now, with AI, skills shortages and constant disruption, we’re finally being told: it’s time to take L&D seriously. So what are we waiting for?

If L&D is going to earn its place at the strategy table, we (and our HR leaders) need to stop thinking and acting like a support function and start behaving as a mechanism for business performance. That means:
- Defining learning by the problems it solves
- Working backwards from performance gaps rather than starting with content or programs
- Using data to diagnose and influence, not just report on attendance and completions

The opportunity here isn’t just to “embrace a new era”; it’s to lead it. So here’s my challenge to HR and L&D leaders alike:
- Are you going to double down on what’s comfortable?
- Or are you ready to lead learning like performance depends on it (because it now does)?

Thank you Dani Johnson from RedThread Research for your insights in this article. (The link to this HR Executive article is in the comments)
·linkedin.com·
Here’s my first Notebook LM video. | Josh Cavalier
Here's my first Notebook LM video. This is a prime example of learning experience creation time crashing down via automation. The content in this video comes from one of my Brainpower episodes on YouTube, and the model nailed it: the concepts, the diagrams, and my quotes. All are visually cohesive, with low-cognitive-load delivery. I'm still processing the possibilities. Everything has changed, again.
·linkedin.com·