The future of the CLO: Leading in a world of merged work and learning
Explore the mandate for chief learning officers to cultivate environments in which work and learning seamlessly converge, driven by technology, data, and a robust culture of continuous development.
The agentic organization: Contours of the next paradigm for the AI era
Discover how agentic organizations use AI-first workflows, empowered teams, and real-time data to drive innovation, productivity, and competitive advantage.
Want to know what L&D is really doing with AI? Well, now you can. Today, Egle Vinauskaite and I publish our third annual report on AI in L&D. We’ve listened to more than 600 people in 53 countries, and there’s plenty to share.
To learn more, download the 54-page report AI in L&D 2025: The Race For Impact (link in comments and in my bio).
Inside, you’ll find:
· 10 ‘snapshot’ mini case studies
· 12 pages of detailed analysis of how L&D is using AI
· 12 pages of quantitative analysis
· 14 pages of in-depth case studies from Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group.
· 1 framework: the Transformation Triangle. As AI makes it easier and faster to generate content, we explore the profound implications for L&D.
And all of this is illustrated with ample quotes from the people out there doing the work. This isn’t an armchair exercise. We’ve gone through countless interviews and around 20,000 words of text that our respondents generated in the survey describing their work.
This is a vivid illustration of what’s happening with AI in L&D today. We hope it will provide insight, information and inspiration.
My key takeaway: we’ve passed an inflection point.
For the first time, over half our respondents said they weren’t just experimenting with AI, but actually using it. That’s a significant shift from last year.
AI has moved from being a novelty to being part of L&D’s regular toolkit.
And look at how they are using it – sure, content creation dominates. But look at the table of how things have changed since last year. Again, content dominates the top four places, but just beneath, there’s one extraordinary change.
Qualitative data analysis has leapt from eighth place last year to fifth this year – the single biggest change from year to year.
This single point illustrates something we see across all our analysis and in all of our case studies: a shift towards more sophisticated use and an increased focus on data, analysis and research.
The featured case studies illustrate some of these inventive new uses perfectly – to learn more, download the report now.
Our thanks to our report sponsors, OpenSesame, Speexx and The Regis Company for making this report possible.
To download, click the link in my profile, or go to the first comment.
What's happening with AI in L&D? Well, here it is — the 2025 edition.
Today, Donald H Taylor and I are releasing our third annual report on AI in L&D: The Race for Impact. If you’ve been wondering whether you’re behind, which AI uses you haven’t yet tried, or how to take your work further, we’ve put this report together to give you answers and ideas.
Inside you'll find:
➡️ Fresh data on the most popular AI uses in L&D, how patterns are shifting, and what barriers teams still face
➡️ 12 pages detailing AI uses across learning design and content development, internal L&D ops, strategy and insight, and workforce enablement to inform and inspire your practice
➡️ 14 pages of in-depth AI in L&D case studies by Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group
➡️ A framework - the Transformation Triangle - exploring what AI’s move into “traditional” L&D work means for the function’s future role
600+ respondents. 53 countries. 20,000+ words in write-in responses. Days of interviews. Countless hours of deliberations and coffees trying to make sense of how the industry has evolved over the past 3 years and what it means for the road ahead.
These are extraordinary numbers and they wouldn’t exist without the community behind them. Thank you to everyone who took the time to complete the survey and share thoughtful answers. Thank you to our case study contributors, who gave hours of their own time to document their practice for the benefit of the wider industry. Thank you to our sponsors OpenSesame, The Regis Company and Speexx who made this work possible.
And thank you to Don: what started as a coffee conversation has grown into a three-year collaboration that keeps pushing both of us (and hopefully the field) forward.
The full report is free to download (link in the comments).
P.S. Below is a snapshot of the most common AI use cases we mapped this year. It gives a sense of where the field is and might spark a few new ideas 🙌
♻️ Share this post so more teams can find these insights and build on each other’s work.
The friendly AI? How DHL is building an "AI-Powered Workforce" - GOOD MORNING L&D
How can companies use AI to boost efficiency while strengthening the human component instead of replacing it? In this episode of GOOD...
This article dropped a few days ago 👉 https://lnkd.in/djktVNKi Main talking points:
💡 Companies are adopting AI like crazy, but they should invest just as much in preparing people to work with AI. Apparently, that doesn't happen nearly as much as it should
💡 The research presented in the article highlights that Gen AI Tutors outperform classroom training by 32% on personalization and 17% on feedback relevance.
💡 Gen AI Tutors create space for self-reflection, which is awesome
💡 Learners finished training 23% faster while achieving the same results
💡 Frontline workers, culture change, and building AI competence were mentioned as applications for Gen AI
My thoughts:
💭 I think one of the hardest decisions we will face is where we should use Gen AI Tutors and where we should keep human interaction as part of learning
💭 The "results" in the research presented were mostly, imho, still vanity metrics. I'm looking forward to seeing research where the analysis of results is more comprehensive (spanning a longer timeline, with clear leading indicators, etc). Until then, I can't be fully convinced that Gen AI Tutors truly perform better at growing cognitive & behavioral skills
💭 While I find the culture change application interesting, I do hope Gen AI Tutors won't be used to absolve leaders of the responsibility THEY have for building cultures. I can't see a good result coming out of this.
Very curious to hear your thoughts 👀
#learninganddevelopment
SAP and OpenAI partner to launch sovereign ‘OpenAI for Germany’ | Anja C. Wagner
Finally some proper progress: OpenAI for Germany, an interesting step towards digital sovereignty.
Today SAP and OpenAI officially presented the "OpenAI for Germany" partnership.
The goal: to provide AI technologies for the public sector in Germany, with a focus on data protection, data sovereignty and legal compliance.
The solution is based on SAP's Delos Cloud (running on Microsoft Azure technology) and operated locally in Germany.
What's behind it?
Millions of employees in public administrations, agencies and research institutions are to gain access to AI-powered tools to make processes more efficient.
Plans include process automation, data analysis and workflow integration.
At launch, 4,000 GPUs of AI compute are to be available, with the ambition to scale further.
It fits Germany's AI ambitions: the state sees AI as a key value driver by 2030, with the potential to contribute up to 10% of GDP.
Why I think this matters:
It is a clear signal: AI solutions do not have to mean "everything abroad" or "everything strictly national". Local, hybrid infrastructures can be part of the solution.
Especially in the public sector, AI can create real value, provided security, legal compliance and trust are in place.
It is a good example of how partnerships (tech + industry + government) can work together.
Launch is planned for 2026. Bravo, I say 🙏
But it won't please everyone ...
⛓️💥 https://lnkd.in/dE3q9Jys
#digital #souveraen #verwaltung #ki
AI will find its way into schools whether we like it or not. The danger lies in ignoring it; that’s how ‘workslop’ takes root.
‘We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.’
So begins a great piece in the Harvard Business Review, which coins a new term for the poor AI practices now developing: employees produce sloppy work with AI and actually create more work down the line for the person they pass the 'workslop' on to.
The article offers some clear pointers on how organisations can move on to better AI practice, summed up in the conclusion:
‘Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.’
These lessons are just as applicable to schools as to businesses. The key difference is that we not only need leaders to model best practice, but also teachers to help students understand what this looks like. It’s vital we take active steps now to shape habits: AI can be a force for innovation and amplify what’s best in our schools, or it can drive ‘workslop’ in staff and students. Surely the choice is a no-brainer?
(Link to piece in comments via post on this from David Monis-Weston)
The development team behind the Model Context Protocol (MCP) has introduced the MCP Registry
– an open catalog and API to discover and use publicly available MCP servers.
Finally, MCP servers can be discovered through a central catalog. Think of it as an app store for browsing and searching MCP servers:
→ An open catalog + API for discovering MCP servers
→ One-click install in VS Code
→ Servers from npm, PyPI, DockerHub
→ Sub-registries possible for security and curation
→ Works across Copilot, Claude, Perplexity, Figma, Terraform, Dynatrace etc.
Although it is still in preview and under active development, it will definitely solve a major problem.
GitHub link: https://lnkd.in/df-qTnYe
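For readers who want to poke at the registry programmatically, here is a minimal Python sketch. It assumes the preview service exposes a public REST listing endpoint at registry.modelcontextprotocol.io; the exact URL, query parameters and response shape are assumptions, so check the GitHub repo above for the current API.

```python
# Minimal sketch: browsing the MCP Registry from Python.
# Assumptions (not confirmed by the post): the preview API exposes a paginated
# GET /v0/servers endpoint returning JSON with a "servers" list.
import requests

REGISTRY_URL = "https://registry.modelcontextprotocol.io/v0/servers"  # assumed endpoint


def list_mcp_servers(search: str | None = None, limit: int = 20) -> list[dict]:
    """Fetch one page of publicly listed MCP servers, optionally filtered by name."""
    params = {"limit": limit}
    if search:
        params["search"] = search  # assumed query parameter
    resp = requests.get(REGISTRY_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("servers", [])


if __name__ == "__main__":
    # Each entry is expected to carry a name and description, plus package info
    for server in list_mcp_servers(search="github"):
        print(server.get("name"), "-", server.get("description", ""))
```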
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least. This study shows that WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations are the most aligned with ChatGPT.
It is intriguing that some nations, including the Netherlands and Germany, are more GPT-similar than Americans.
The paper uses cognitive tasks such as the “triad task,” which distinguishes between analytic (category-based) and holistic (relationship-based) thinking. In the classic example, given panda, monkey and banana, analytic thinkers group panda with monkey (same category), while holistic thinkers group monkey with banana (a functional relationship). GPT tends towards analytic thinking, which aligns with countries like the Netherlands and Sweden that value rationality, and contrasts with the holistic thinking found in many non-WEIRD cultures.
GPT tends to describe the "average human" in terms that are aligned with WEIRD norms.
In short, with the overwhelmingly skewed data used to train AI, the outputs are “WEIRD in, WEIRD out”.
The size of the models or the volume of training data is not the issue; it’s the diversity and representativeness of the data.
All of which underlines the value and importance of sovereign AI, potentially for regions or values-aligned cultures, not just at the national level.
ALARM!!! The OpenAI certification is just around the corner, and it will change the entire education market. Here are the key facts; all the details are in the linked article.
"This is the last year we deliver PowerPoints as a product. In future, we deliver bots."
The old order of knowledge work is starting to slide. At the OpenAI Forum, Aaron "Ronnie" Chatterji, Chief Economist at OpenAI, and Joseph Fuller of Harvard Business School discussed how artificial intelligence is reshaping tasks, organizations and careers. Their diagnosis: the technology is sprinting ahead, while companies remain stuck in processes built for the analog world of the classic slide deck.
Fuller sums up the turning point in a phrase that will make the consulting industry sit up: "This is the last year we deliver PowerPoints as a product. In future, we deliver bots." Behind the bon mot lies a sober economic analysis: rule-based work can be scaled, and clients are demanding less information arbitrage and more actionable systems.
Many consulting models lived off the ability to lift information across silos, structure it and interpret it. Now decision platforms are moving to the fore, integrating live data from finance, HR and sales systems and automatically answering the question "What now?". The result is an overlay layer of decision intelligence, and PowerPoint slides become a footnote. Instead, bots and workflows become the product, the two experts expect.
Read more at F.A.Z. PRO Digitalwirtschaft (FAZ+) ▶︎ https://lnkd.in/eFjyCYGw
Frankfurter Allgemeine Zeitung
___________
My new online course: Generative KI für Führungskräfte (Generative AI for Leaders)
The course explains, in compact form, the key strategic and economic effects of generative AI for companies. It focuses on productivity effects, the impact on business models and corporate competitiveness, and the relationship between AI and work. It closes with a look at the state of AI in Germany.
The course is aimed at leaders interested in the economic implications of generative AI for companies, the economy and competition.
The course reflects the current state of business practice and research.
▪️ Duration: 80 minutes
▪️ Content: 9 videos / 66 slides
Book here ▶︎ https://lnkd.in/ezsB-KDg
This interesting Deloitte report is framed around AI for HR, but the lessons are applicable across organizations, and support the broader issue of transformation to a Humans + AI organization.
The report is definitely worth a look, perhaps especially the Appendix. Below sharing a few of the distilled highlights.
🔄 Multi-agent systems (MAS) are the next-gen operating model.
In the next 12–18 months, expect a shift from siloed APIs to MAS that can reason, plan, and act across business units—enabling autonomous execution with governance and “human in the loop” oversight.
📈 Human–AI collaboration boosts decision-making capacity.
AI can instantly synthesize vast datasets into contextual, role-specific insights, allowing executives and managers to make better, faster, and more informed decisions across the enterprise.
💡 Workforce roles are redesigned, not just replaced.
Agentic AI shifts roles across the board—from purely executional to more analytical, creative, and relationship-focused work—impacting job design in marketing, operations, R&D, and beyond.
📊 AI standardizes excellence across the enterprise.
By codifying best practices into AI systems, organizations can eliminate “pockets of excellence” and ensure consistent quality across all teams and regions—not just in HR but in sales, operations, and service delivery.
🔍 Predictive intervention beats reactive problem-solving.
AI can detect signals—like turnover risk, performance decline, or customer churn—before they become problems. This enables leaders to act early with targeted, data-backed interventions.
🛠 Orchestration of multi-step, cross-functional workflows.
Agentic AI can coordinate tasks across multiple business functions without manual handoffs. For example, onboarding a new employee touches HR, IT, facilities, and finance, yet AI can plan, execute, and monitor the entire process end-to-end (see the sketch after this list).
🗺 AI’s biggest impact areas are mapped.
A “heatmap” of hundreds of HR processes pinpoints where AI should be fully powered (e.g., data analysis, reporting, inquiries), augmented (e.g., recruiting, performance reviews), or assistive—helping leaders invest for maximum ROI.
🚀 80%+ of admin work can be automated.
In future HR operations, AI will handle over 80% of administrative and operational tasks, freeing HR teams to focus on strategy, workforce planning, and proactive talent interventions.
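To make the onboarding example above concrete, here is a deliberately simplified Python sketch of the orchestration pattern: a planner produces cross-functional tasks and a human-in-the-loop checkpoint gates the sensitive ones. It is purely illustrative, not Deloitte's architecture or any particular product; all names and functions are hypothetical.

```python
# Illustrative toy orchestrator for a cross-functional onboarding workflow.
# A real multi-agent system would generate and execute these steps with LLM agents;
# this sketch only shows planning, execution, and a human-approval checkpoint.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    owner: str                 # business function responsible (HR, IT, facilities, finance)
    description: str
    needs_approval: bool = False  # sensitive steps require human sign-off


def plan_onboarding(employee: str) -> list[Task]:
    """A planner agent would normally generate this plan; here it is hard-coded."""
    return [
        Task("HR", f"Create personnel record for {employee}"),
        Task("IT", f"Provision laptop and accounts for {employee}"),
        Task("Facilities", f"Assign desk and badge for {employee}"),
        Task("Finance", f"Set up payroll for {employee}", needs_approval=True),
    ]


def run_workflow(tasks: list[Task], approve: Callable[[Task], bool]) -> None:
    """Execute tasks in order, holding any step a human has not approved."""
    for task in tasks:
        if task.needs_approval and not approve(task):
            print(f"[HOLD] {task.owner}: {task.description} (awaiting human approval)")
            continue
        print(f"[DONE] {task.owner}: {task.description}")


if __name__ == "__main__":
    # Human-in-the-loop oversight: here the approver declines, so payroll is held.
    run_workflow(plan_onboarding("Jane Doe"), approve=lambda task: False)
```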
AI is transforming education in every dimension, with accelerating urgency, new pathways for learning, and educators now using AI extensively.
This study from Anthropic on how higher education professors use AI uncovers very interesting insights.
The chart here shows far more augmentation than automation, as you would hope, with the more automated tasks in particular focused on administration.
The most augmented tasks are university teaching and classroom instruction, including creating educational materials and practice problems.
Many of those interviewed describe AI as a "thought partner" in helping them create better, more effective learning experiences.
But there are deep dilemmas for educators, beyond just choosing where to augment themselves and where to automate for efficiency: it is about balancing efficiency against the integrity of the teaching process.
Overall, professors said they thought AI was least effective at grading and assessment, and some refuse to use it for that. However, half of grading tasks were automated. That's not in the slightest surprising, but it is certainly concerning.
The most common AI creations by the educators were:
🎮 Interactive educational games
📝 Assessment and evaluation tools
📊 Data visualization
📚 Subject-specific learning tools
📅 Academic calendars and scheduling tools
💰 Budget planning and analysis tools
📄 Academic documents
Some of these can be very effective learning tools, others can take away administrative burden.
This is all amidst a redefinition of the learning system, with deep shifts in learners, the learning process, educational institutions, educators, and those using academic credentialing.
We are early in the transformation of education.
My Ethical AI Principles (July 25, 2025). Image: sign posted in the common kitchen at the campground in Selfoss, Iceland, where the article was written. "I understand. Artificial Intelligence (AI) is a technology unlike any we've seen before."
New research finally offers a robust answer to the question, "Does using AI make our Instructional Designs BETTER, or just faster?"
👉 In a controlled test, 27 Instructional Design postgrads at Carnegie Mellon created designs both with & without GPT-4 assistance.
👉 Every design was blind-scored on quality by expert instructors.
👉 The result: designing with AI was not just faster, it produced better-quality designs in 100% of cases.
But the detail is where it gets interesting...👇
The research also revealed a "capability frontier"—a clear boundary between where AI helps Instructional Design quality most, and where it might actually compromise it.
TLDR:
🚀 USE AI FOR: Designs which use well-established design methodologies, step-by-step processes & widely-discussed topics.
❌ BE MORE CAUTIOUS WHEN USING AI FOR: Designs on niche, novel & complex topics which use less well-established design methodologies.
💡Bonus insight: In line with broader research on the impact of AI on knowledge work, the research also suggests that novice Instructional Designers benefit *most* from AI design assistance (but only when we are strict on what sorts of tasks they use it for).
To learn more about the research & what it tells us about how to work with AI in our day to day work, check out my latest blog post (link in comments).
Happy innovating!
Phil 👋
ByteBot OS: First-Ever AI Operating System IS INSANE! (Opensource)
Meet ByteBot OS – the first-ever open-source & self-hosted AI Operating System!
There are massive disparities in how people view AI, in their degree of nervousness, excitement, trust in systems, and personal impact. This updated Ipsos AI Monitor 2025 shares many fascinating insights.
English-speaking countries remain the most nervous and least excited, while Asian nations dominate the most positive end of the scale.
The second chart I've shared here is interesting, in that while people are relatively positive about the impact of AI on their job and also the economy, they are considerably less positive about the impact of AI on the job market.
Not surprisingly, those nations that believe AI will benefit the economy are most likely to be excited.
The global average for believing AI will profoundly change their life in the next 3-5 years is 67%, ranging from 52% in Britain to 84% in Indonesia. So most people expect profound change.
Of course, if people believe AI will profoundly change their lives there is likely cause for at least some nervousness and hopefully also excitement.
Where the balance lies in a nation, and within any specific organization, must shape governance and change initiatives to maximize good cause for excitement and minimize cause for nervousness.
Because it is a wild ride.
The top priority for most leaders is integrating AI into the business. And AI itself is transforming leadership and leadership development.
Some great insights in this report from Harvard Business Review. It is particularly interesting to me, not only because much of my client work is in leadership development, but also because effective leadership will be critical as we navigate the challenging path to prosperous Humans + AI organizations.
Not surprisingly, 55% of survey respondents said incorporating GenAI into business practices is their #1 priority this year.
To support that, the top human capital project - at 53% - is adopting or expanding AI-based talent management.
The real headline is that over 80% of HR leaders expect that every level of leader will spend more time on leadership development this year, in many cases significantly more. The question is: how to design the programs and the time spent to result in true expansion of leadership capabilities.
"Speed to skill is the metric in focus", which requires a very different, intrinsically Humans + AI approach:
"In a two-way information exchange, AI is fed an organization’s domain-specific knowledge and humans access AI-generated learning resources based on that knowledge. AI systems learn from human inputs, improving over time, while humans gain insights from AI-generated data. Properly done, these efforts can build the collective intelligence of humans and machines, enhancing the organization’s ability to solve complex problems and adapt to changing environments."
A lot more in the report.
What is clear is that in an accelerating world leadership development is more important than ever, both to address AI-driven change, and supported by AI.
Introduction to AI Safety, Ethics, and Society | Peter Slattery, PhD
📢 Free Book: "Introduction to AI Safety, Ethics, and Society is a free online textbook by Center for AI Safety Executive Director Dan Hendrycks. It is designed to be accessible for a non-technical audience and integrates insights from a range of disciplines to cover how modern AI systems work, technical and societal challenges we face in ensuring that these systems are developed safely, and strategies for effectively managing risks from AI while capturing its benefits.
This book has been endorsed by leading AI researchers, including Yoshua Bengio and Boaz Barak, and has already been used to teach over 500 students through our virtual course. It is available at no cost in downloadable text and audiobook formats, as well as in print from Taylor & Francis. We also offer lecture slides and other supplementary resources for educators on our website."
Thanks to Connor Smith for sharing this with me. Due to file limit issues, I have only attached the first 17 pages of the much larger textbook. See link in comments.
In the video below, AI filmmaker Alex Patrascu shows how entire scenes are created from paintings and then stitched together.
The images were AI-generated, then animated into videos with transitions, and finally an audio track was laid over the top. Done: a finished short movie. It almost feels like a sitcom starring the Mona Lisa, Van Gogh & Co.
To the LinkedIn post:
https://lnkd.in/enY82ifv
Apple’s latest announcement is worth paying attention to. They’ve just introduced an AI model that doesn’t need the cloud – it runs straight in your browser.
The specs are impressive:
Up to 85x faster
3.4x smaller footprint
Real-time performance directly in-browser
Capable of live video captioning – fully local
No external infrastructure. No latency. No exposure of sensitive data.
Simply secure, on-device AI.
Yes, the technical benchmarks will be debated. But the bigger story is Apple’s positioning. This is about more than numbers – it’s about shaping a narrative where AI is personal, private, and seamlessly integrated.
At Copenhagen Institute for Futures Studies, we’ve been tracking the rise of small-scale, locally running AI models for some time. We believe this shift has the potential to redefine how organizations and individuals interact with intelligent systems – moving AI from “out there” in the cloud to right here, at the edge.
Apertus: A fully open, transparent and multilingual language model
EPFL, ETH Zurich and the Swiss National Supercomputing Centre CSCS today released Apertus, the first large-scale, open, multilingual language model from Switzerland, setting a milestone for transparent and diverse generative AI.
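For those who want to try the model, here is a minimal Python sketch using the Hugging Face transformers library. The repository id "swiss-ai/Apertus-8B-Instruct" is an assumption for illustration only; check the official release for the actual checkpoint names and hardware requirements.

```python
# Minimal sketch: generating text with an open checkpoint via Hugging Face transformers.
# The model id below is an assumption; substitute the id from the official Apertus release.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "swiss-ai/Apertus-8B-Instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarize the goals of an open, multilingual language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```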
The full report is being presented from 2–4 September at UNESCO’s Digital Learning Week 2025 in Paris. It’s a must-read for anyone interested in learning, technology, and the future of education — packed with insights and practical perspectives.
𝗧𝗵𝗲 𝗿𝗲𝗽𝗼𝗿𝘁 𝗰𝗼𝘃𝗲𝗿𝘀 𝘁𝗵𝗲 𝗳𝗼𝗹𝗹𝗼𝘄𝗶𝗻𝗴 𝗮𝗿𝗲𝗮𝘀: ⬇️
𝟭. 𝗔𝗜 𝗳𝘂𝘁𝘂𝗿𝗲𝘀 𝗶𝗻 𝗲𝗱𝘂𝗰𝗮𝘁𝗶𝗼𝗻: 𝗣𝗵𝗶𝗹𝗼𝘀𝗼𝗽𝗵𝗶𝗰𝗮𝗹 𝗽𝗿𝗼𝘃𝗼𝗰𝗮𝘁𝗶𝗼𝗻𝘀
→ AI futures aren’t just about intelligence scores — they push us to rethink what “knowing” really means. And the whole debate isn’t only technical but philosophical: how do we define learning, progress, and human identity in an AI age?
𝟮. 𝗗𝗲𝗯𝗮𝘁𝗶𝗻𝗴 𝘁𝗵𝗲 𝗽𝗼𝘄𝗲𝗿𝘀 𝗮𝗻𝗱 𝗽𝗲𝗿𝗶𝗹𝘀 𝗼𝗳 𝗔𝗜
→ AI in schools and universities is not inevitable — education systems have choices, agency, and the power to shape direction. The core tension here: opportunity for reinvention vs. risks of over-automation and cultural bias.
𝟯. 𝗔𝗜 𝗽𝗲𝗱𝗮𝗴𝗼𝗴𝗶𝗲𝘀, 𝗮𝘀𝘀𝗲𝘀𝘀𝗺𝗲𝗻𝘁 𝗮𝗻𝗱 𝗲𝗺𝗲𝗿𝗴𝗶𝗻𝗴 𝗲𝗱𝘂𝗰𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗳𝘂𝘁𝘂𝗿𝗲𝘀
→ Classrooms can’t be reduced to data points — AI must respect the incomputable nature of learning. And hyper-personalization risks turning education into an isolated bubble rather than a social dialogue.
𝟰. 𝗥𝗲𝘃𝗮𝗹𝘂𝗶𝗻𝗴 𝗮𝗻𝗱 𝗿𝗲𝗰𝗲𝗻𝘁𝗲𝗿𝗶𝗻𝗴 𝗵𝘂𝗺𝗮𝗻 𝘁𝗲𝗮𝗰𝗵𝗲𝗿𝘀
→ Teachers remain the backbone of education — AI should amplify their work, not sideline it. Building AI “with” educators, not “for” them, is the only path to trust and adoption.
𝟱. 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗮𝗻𝗱 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗶𝗺𝗽𝗲𝗿𝗮𝘁𝗶𝘃𝗲𝘀 𝗳𝗼𝗿 𝗔𝗜 𝗳𝘂𝘁𝘂𝗿𝗲𝘀 𝗶𝗻 𝗲𝗱𝘂𝗰𝗮𝘁𝗶𝗼𝗻
→ AI in schools demands an ethics of care — transparent, fair, and accountable by design. Governance can’t be outsourced to tech — it requires democratic oversight and public participation.
𝟲. 𝗖𝗼𝗻𝗳𝗿𝗼𝗻𝘁𝗶𝗻𝗴 𝗰𝗼𝗱𝗲𝗱 𝗶𝗻𝗲𝗾𝘂𝗮𝗹𝗶𝘁𝗶𝗲𝘀 𝗶𝗻 𝗲𝗱𝘂𝗰𝗮𝘁𝗶𝗼𝗻
→ AI can close divides — but only if it is localized, contextualized, and designed for inclusion. Without clarity, bias will persist: marginalized groups risk being left behind.
𝟳. 𝗥𝗲𝗶𝗺𝗮𝗴𝗶𝗻𝗶𝗻𝗴 𝗔𝗜 𝗶𝗻 𝗲𝗱𝘂𝗰𝗮𝘁𝗶𝗼𝗻 𝗽𝗼𝗹𝗶𝗰𝘆: 𝗘𝘃𝗶𝗱𝗲𝗻𝗰𝗲 𝗮𝗻𝗱 𝗴𝗲𝗼𝗽𝗼𝗹𝗶𝘁𝗶𝗰𝗮𝗹 𝗿𝗲𝗮𝗹𝗶𝘁𝗶𝗲𝘀
→ Policy must keep pace with fast-moving AI capabilities, balancing human and machine intelligence.
AI will shape every industry — but in education, it will shape society itself.
Download: https://lnkd.in/dbc6ZJi4
Enjoy reading! And please share your views: ⬇️
𝗣.𝗦. 𝗜𝗳 𝘆𝗼𝘂 𝗹𝗶𝗸𝗲 𝘁𝗵𝗶𝘀, 𝘆𝗼𝘂’𝗹𝗹 𝗹𝗼𝘃𝗲 𝗺𝘆 𝗻𝗲𝘄 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿. 𝗜𝘁’𝘀 𝗳𝗿𝗲𝗲 𝗮𝗻𝗱 𝗿𝗲𝗮𝗱 𝗯𝘆 𝟮𝟬,𝟬𝟬𝟬+ 𝗽𝗲𝗼𝗽𝗹𝗲: https://lnkd.in/dbf74Y9E
Does AI make subject-matter knowledge obsolete? 🤔 In her article, Gabi Reinmann radically challenges this widespread assumption.
Her compelling argument shows why the AI era demands more subject-matter expertise, not less. Her analysis also exposes problematic fallacies in the current education debate.
Key arguments of the article:
📚 A narrowed concept of knowledge: Knowledge is currently often misunderstood as nothing more than memorized facts. Real subject-matter expertise encompasses far more, ranging from procedural knowledge to embodied, intuitive understanding.
🧠 Critical thinking is domain-specific: Expertise research clearly shows that critical thinking does not work as a generic "future skill" but needs deep domain-specific knowledge as its foundation.
⚠️ The paradox of AI integration: The more we use AI, the more important human expertise becomes, because only those with solid subject-matter knowledge can critically evaluate and validate AI-generated content.
🚨 The risk of deskilling: Excessive reliance on AI may provide cognitive relief, but it also reduces our willingness to think critically.
🎯 My addition: I am convinced we need subject-matter expertise not only to check AI outputs. Deep domain knowledge is indispensable for orchestrating different AIs and holding a substantive expert dialogue with them. The quality of AI output depends on the quality of the input, because AI adjusts to the level of the person using it. Only with solid expertise can we ask precise questions, explain complex relationships and obtain high-quality, nuanced results.
What does this mean for us?
Instead of reducing knowledge transfer, we need to strengthen it. Building deep, interconnected subject-matter knowledge is crucial to enabling learners to engage with AI critically and successfully. It is not an either-or: the much-demanded "future skills" grow only on the fertile ground of solid domain expertise.
The AI Transformation Compass as a guide through the AI jungle
🌟 Keeping your bearings in the world of AI: The AI Transformation Compass 🤖✨ Introducing AI into companies is one of the most exciting but also most comp...