Open New Learning Lab Resources


1119 bookmarks
A lot of firms - virtually all firms now - are shaping their AI strategy. Or, better, they’re adapting their strategy in light of the new capabilities we have and will have, thanks to AI.
But people have reacted to generative AI so differently. Some have embraced it with gusto. Many have shrunk away from it. The vast majority of AI experimentation and usage still happens outside of work (ChatGPT has 800m weekly mostly-consumer users now). Most firms don’t have a very good idea of where the individuals and teams that make up their workforce are. Well, a 2x2 matrix almost always helps - so simple, so illuminating. It’s my favourite mental model. In this situation, adoption and capability are two pertinent axes. They give a sense of where there’s overconfidence, underconfidence and appropriate confidence - and what actions you might take for populations in each of the quadrants. This enables you to better serve your people, and be better served by them. If you’re interested in a 30-question survey which generates the data behind each axis and forms part of and builds on my AI in the Wild use case research, send me a message. ♻️Please REPOST if people you’re connected to may like to be updated on how AI is being used, out in the Wild. #aiinthewild
·linkedin.com·
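A rough illustration of the adoption-capability 2x2 described above, as a minimal sketch only: the 0-10 scales, the midpoint threshold and the quadrant labels are my own assumptions for illustration, not the author's survey or scoring method.

```python
# Minimal sketch: mapping adoption and capability scores onto a 2x2 matrix.
# The 0-10 scale, the threshold and the quadrant labels are illustrative
# assumptions, not the survey or scoring method mentioned in the post.
from dataclasses import dataclass


@dataclass
class Respondent:
    name: str
    adoption: float    # e.g. self-reported frequency of AI use, 0-10
    capability: float  # e.g. assessed skill in using AI well, 0-10


def quadrant(r: Respondent, threshold: float = 5.0) -> str:
    """Return a quadrant label for one respondent."""
    high_adoption = r.adoption >= threshold
    high_capability = r.capability >= threshold
    if high_adoption and high_capability:
        return "high adoption, high capability (appropriate confidence)"
    if high_adoption and not high_capability:
        return "high adoption, low capability (overconfidence risk)"
    if not high_adoption and high_capability:
        return "low adoption, high capability (underconfidence)"
    return "low adoption, low capability (needs awareness first)"


if __name__ == "__main__":
    team = [
        Respondent("A", adoption=8, capability=3),
        Respondent("B", adoption=2, capability=7),
        Respondent("C", adoption=9, capability=8),
    ]
    for person in team:
        print(person.name, "->", quadrant(person))
```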
In 2014 Michael Staton detailed how the university experience is a bundle of many things rolled up together (see image below), and suggested that many components could be disaggregated, or… | Jonathan Boymal
In 2014 Michael Staton detailed how the university experience is a bundle of many things rolled up together (see image below), and suggested that many components could be disaggregated, or “unbundled” and provided in alternative ways. He drew on this framework in his “The Degree is Doomed” piece in the Harvard Business Review in 2014. https://lnkd.in/gYdkhCb In 2025, Shannon McKeen, writing in Forbes, considers where we are now https://lnkd.in/gYir-XDe: “Most college students now use AI tools for academic work, yet employers consistently report that new graduates lack the critical thinking and decision-making skills needed in an AI-augmented workplace. This disconnect signals the beginning of higher education's great unbundling. For decades, universities have operated on a bundled model: combining information delivery, skill development, credentialing, and social networking into a premium package. AI is now attacking the most profitable part of that bundle—information transfer—while employers increasingly value what machines cannot replicate: human judgment under uncertainty. Higher education represents a massive market built largely on controlling access to specialized knowledge. Students pay premium prices for information that AI now delivers instantly and for free. A business student can ask ChatGPT to explain supply chain optimization or generate market analysis in seconds. The traditional lecture-and-test model faces its Blockbuster moment. This is classic disruption theory in action. The incumbent model optimized for information scarcity while a new technology makes that core offering abundant. Universities that continue competing with AI on content delivery are fighting the wrong battle. The real value is migrating from information transfer to judgment development, from transactional learning to transformational learning. In an AI saturated world, premium skills are distinctly human: verification of sources, contextual decision-making, ethical reasoning under ambiguity, and accountability for real-world outcomes. This shift mirrors what happened to other information-based industries. When Google made basic research free, management consulting pivoted to implementation and change management. When smartphones made maps ubiquitous, GPS companies focused on real-time optimization and personalized routing. Higher education must make the same transition… The great unbundling of higher education is underway. Information delivery is becoming commoditized while judgment development becomes premium. Institutions that recognize this shift early will capture disproportionate value in the new market structure.” H/t Sinclair Davidson
·linkedin.com·
I learned AI Agents for absolutely free, you can do it too!
I learned AI Agents for absolutely free, you can do it too! AND... the best part is I got to learn from industry experts. DeepLearning.AI has done a great job in making these courses. 1. Event-Driven Agentic Document Workflows with LlamaIndex - https://lnkd.in/d7vJEH4H 2. Long-Term Agentic Memory with LangGraph (LangChain) - https://lnkd.in/dKJ-B3ks 3. Build Apps with Windsurf's AI Coding Agents (Codeium) - https://lnkd.in/dTqjjt4Q 4. Building AI Applications with Haystack (by deepset) - https://lnkd.in/d7WnTvTr 5. Improving Accuracy of LLM Applications (Lamini) - https://lnkd.in/dcJvY6kg 6. Evaluating AI Agents (Arize AI) - https://lnkd.in/dvTNKSaq ♻️ Repost it to help others. If you like this, and want more AI resources, images, tutorials, and tools, join Superhuman, my daily AI newsletter with 1M+ subs now: https://lnkd.in/dXQ9-B9A
·linkedin.com·
AI isn’t just transforming Learning & Development.
AI isn’t just transforming Learning & Development. It’s revealing it. For years, we’ve talked about being strategic partners - about impact, performance, and business alignment - but much of L&D has still operated as a content-production function. We’ve equated “learning” with “stuff we make”. Now AI has arrived, and it’s showing us what’s really been going on. - If your value comes from creating courses and content, AI will replace you. - If your value comes from solving real problems for the business, AI will amplify you. That’s the pivot point we’re in. The new report, The Race for Impact, written by Egle Vinauskaite and Donald H Taylor, captures this moment perfectly. In it, they describe the “Implementation Inflexion” - the shift from experimenting with AI to actually using it - and reveal what L&D teams are doing as they lead the way. The “Transformation Triangle” lays out three models that go beyond content: Skills Authority - owning data and insight around workforce capability; Enablement Partner - orchestrating systems that help others solve problems; Adaptation Engine - continuously learning with the business to stay relevant. Each one moves L&D closer to the business and further from being an internal production house. This isn’t about tech. It’s about identity. And the teams that figure that out now will define what L&D means in the age of AI. Hear more about this from Egle in the latest episode of The Learning & Development Podcast and what this all means in practice. A link to this episode is in the comments.
·linkedin.com·
TeachLM: six key findings from a study of the latest AI model fine-tuned for teaching & learning.
TeachLM: six key findings from a study of the latest AI model fine-tuned for teaching & learning. LLMs are great assistants but ineffective instructional designers and teachers. This week, researchers at Polygence + Stanford University published a paper on a new model — TeachLM — which was built to address exactly this gap. In my latest blog post, I share the key findings from the study, including observations on what it tells us about AI’s instructional design skills. Here’s the TLDR: 🔥 TeachLM outperformed generic LLMs on six key education metrics, including improved question quality & increased personalisation 🔥 TeachLM also outperformed “Educational LLMs” - e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode and Google’s Guided Learning - which fail to deliver the productive struggle, open exploration and specialised dialogue required for substantive learning 🔥 TeachLM excelled at some teaching skills (e.g. being succinct in its comms) but struggled with others (e.g. asking enough of the right sorts of probing questions) 🔥 Training TeachLM on real educational interactions rather than relying on prompts or synthetic data led to improved model performance 🔥 TeachLM was trained primarily for delivery, leaving significant gaps in its ability to “design the right experience”, e.g. by failing to define learners’ starting points and goals 🔥 Overall, human educators still outperform all LLMs, including TeachLM, on both learning design and delivery. Learn more & access the full paper in my latest blog post (link in comments). Phil 👋
·linkedin.com·
AI Adoption Self-Assessment (KI-Adoption Selbsteinstufung)
Determine the maturity level of your AI transformation - a free self-assessment based on the Learning Ecosystem Framework
·aitransformationassessment.lovable.app·
Tell me what you click and I’ll tell you what you learn (Sag mir, was du klickst und ich sage dir, was du lernst) - eLearning Journal Online
An impulse for individual learning paths through artificial intelligence in L&D. “Customers who bought this bedding also bought …” You probably know this principle from e-commerce platforms. Your behaviour is captured, compared and translated into recommendations. The goal: ideally, you buy more than just this one item. Transferred to learning platforms, this means: clicks, quiz results and search queries allow […]
·elearning-journal.com·
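The “customers who bought this also bought …” principle in the teaser above is item-based collaborative filtering. Below is a minimal sketch of that idea applied to learning-platform click data; the course names, the tiny interaction matrix and the cosine-similarity choice are illustrative assumptions, not the article's implementation.

```python
# Minimal sketch: "learners who completed X also completed Y" via
# item-based collaborative filtering on a tiny learner x course matrix.
# Course names and interactions are illustrative placeholders.
import numpy as np

courses = ["Excel Basics", "Data Viz", "Python Intro", "Feedback Skills"]

# Rows = learners, columns = courses; 1 = completed/clicked, 0 = not.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
])


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def recommend(course: str, top_n: int = 2) -> list[str]:
    """Rank other courses by similarity of their learner columns."""
    idx = courses.index(course)
    target = interactions[:, idx]
    scores = [
        (courses[j], cosine_similarity(target, interactions[:, j]))
        for j in range(len(courses)) if j != idx
    ]
    scores.sort(key=lambda item: item[1], reverse=True)
    return [name for name, _ in scores[:top_n]]


print(recommend("Excel Basics"))  # e.g. ['Data Viz', 'Python Intro']
```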
🚨 OpenAI just announced their own Agent Builder.
🚨 OpenAI just announced their own Agent Builder. [And no — it didn’t kill 99% of startups overnight.] It’s called AgentKit — a no-code, full-stack platform to build, deploy, and optimize AI agents. The UI looks surprisingly clean, but let’s be clear: this doesn’t instantly replace Zapier, Make, n8n, or Lindy. AgentKit is impressive, yes — but it’s still early, still developer-focused, and far from being a plug-and-play automation killer. Here’s what AgentKit includes: ⬇️ 1. Agent Builder – a visual interface to design and connect multiple AI agents → You can drag and drop steps, test them instantly, and track versions → Comes with built-in safety checks (guardrails) → It’s in beta — I haven’t tested it yet, but the interface looks quite polished 2. Connector Registry – a control center for all your data connections → Lets you manage integrations via MCP → Adds content and tools to keep it organized, secure, and compliant for enterprise use 3. ChatKit – provides an interface to add chat to your product → Turns agents into a chat interface that looks native → Handles threads, live responses, and context automatically 4. Evals 2.0 – a system to test and improve your agents → Lets you run evaluations using datasets and automated grading → According to OpenAI, companies that used it saw up to 30% higher accuracy None of the announced capabilities are truly new, and I doubt that building agents with OpenAI will offer a better experience than platforms like n8n or Zapier. The output still generates code, and the whole setup clearly targets developers (for now) — which explains why it was introduced at DevDay rather than rolled out to the broader user base. And for enterprise-ready AI agents, you still need solid frameworks like LangChain or CrewAI, not another drag-and-drop automation layer. AgentKit is a strong step, but there’s still a way to go before it becomes a production-grade enterprise solution and kills "99% of all other tools". P.S. I recently launched a newsletter — where I share the best weekly drops on AI agents, emerging workflows, and how to stay ahead while others watch from the sidelines. It’s free — and already read by 20,000+ people. https://lnkd.in/dbf74Y9E
·linkedin.com·
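AgentKit itself is a visual, no-code surface, so there is no code to quote from the announcement. As a rough feel for the code-level building blocks it sits on top of, here is a minimal sketch using what I understand to be OpenAI's Agents SDK; the package name, import path, decorator and Runner interface are assumptions from memory and may differ from the current SDK.

```python
# Minimal sketch of an agent with one tool, assuming OpenAI's Agents SDK
# (pip install openai-agents). Import path, decorator and Runner API are
# assumptions and may not match the current SDK exactly.
from agents import Agent, Runner, function_tool


@function_tool
def lookup_order(order_id: str) -> str:
    """Toy stand-in for a real data connector."""
    return f"Order {order_id}: shipped, arriving Friday."


support_agent = Agent(
    name="Support bot",
    instructions="Answer order questions. Use lookup_order when an ID is given.",
    tools=[lookup_order],
)

if __name__ == "__main__":
    result = Runner.run_sync(support_agent, "Where is order 1234?")
    print(result.final_output)
```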
ChatGPT's biggest update got leaked. Tomorrow, they will announce automation: ✦ It seems to be built on the API (not ChatGPT). ✦ I'm guessing it's a mix of Zapier, n8n, or Make. ✦ You can read "Agent Builder" or "Workflow".
PS: once it's live, I'll make a full guide on how-to-ai.guide. It's my newsletter, read by 132,000 people. Here is all of the (trusted) information I gathered: — The no-code AI era starts now — ✓ Drag-and-drop visual canvas for building agents. ✓ Templates for customer support, data, etc. ✓ Native OpenAI model access, including GPT-5. ✓ Full integration with external tools and services. But there was always a wall: coding. Now, anyone can build advanced AI agents. No code. No friction. Here’s how it (seems to) work: You want to automate customer support: 1. Pick a template for a support bot. But you need it to pull info from your database. 2. Drag in an MCP connector. Link your data. You want human approval for refunds. 3. Add a user approval step. Set the rules. You want to check documents for fraud. 4. Drop in a file search and comparison node. Test it. Preview it. Deploy it. All in one place. OpenAI is more than just an API company. It is building the backbone for the no-code AI economy. Now, anyone can create agents that work across systems, talk to users, and make decisions. The age of visual AI automation is here.
·linkedin.com·
I rarely use Bloom’s learning taxonomy. I much prefer L. Dee Fink’s (2003). It’s non-hierarchical. It doesn’t separate cognitive tasks from affective and psychomotor ones.
It frontloads skills like learning about learning and adaptability, which seem very hard to arrive at with Bloom’s taxonomy. In Fink’s model, there are 6 dimensions (which are interconnected). 1️⃣ Learning about learning 2️⃣ Foundational knowledge 3️⃣ Application 4️⃣ Integration 5️⃣ Human Dimension 6️⃣ Caring My personal opinion is that Fink’s model is going to be much more useful than Bloom’s when it comes to understanding how AI is changing learning. I made this case when talking to Tina Austin and Michelle Kassorla, Ph.D., when we talked about Bloom’s taxonomy in The Age of AI. (More on that soon!) ——— Image: a screenshot from Fink’s book “Creating Significant Learning Experiences” (2003).
·linkedin.com·
What is the role of instructional design? One of the best things about getting together is the conversations you have with the people you meet. ...
Yesterday I enjoyed a conversation about the role of instructional design: my perspective is that you can picture instructional design as one of three pillars, with experience design and performance support either side. But why? Let’s start with the easy one: Performance support: if I send you to the supermarket with a shopping list, this is clearly not instructional design. This is clearly performance support. The point of the list is not to help you to memorise the items - it’s there so you don’t have to. It eliminates learning. Much of the value we can offer organisations as L&D people lies in learning elimination (AI guidance, for example). Experience design: if I organise a get-together for new starters, where they meet, chat and make friends, then this is experience design. If I arrange a romantic evening - including lighting, music, the food that I cook and the conversation over dinner - this is clearly experience design, not instructional design. This is learning. That last part probably caught you by surprise - how is experience design learning? The definition I offered of learning in How People Learn is ‘a change in behaviour as a result of memory’. Does the new starter get-together change their behaviour? Yes. They are demonstrably more likely to stay, more likely to say ‘I feel like I belong’, for example. What do they remember? Most likely that they enjoyed the event, and the friends that they made. We are not used to thinking of this as learning. Which brings us to the instructional design column: instructional design largely describes techniques for improving the memorisation of facts - something we do often in education and which we have come to think of as learning. Technically, education does accomplish learning - but only in a very weird, narrow sense: it improves the likelihood that you will pass an exam. Most instructional design research tests people’s ability to recall facts. But passing exams is a very recent development in human history, and even today passing exams is not a significant part of most people’s lives - whilst learning is. Animals learn, but don’t pass exams. So instructional design is helpful to the extent that your organisation requires test-passing activity. But for performance and learning, you will need to look to performance support and experience design. A central problem we face today is that we have spent a lot of time looking into techniques for memorising facts, and barely begun thinking about experience design, and learning. #learning #education #instruction #performance #training
·linkedin.com·
Want to know what L&D is really doing with AI? Well, now you can. Today, Egle Vinauskaite and I publish our third annual report on AI in L&D. We’ve listened to more than 600 people in 53 countries, and there’s plenty to share.
To learn more, download the 54-page report AI in L&D 2025: The Race For Impact (link in comments and in my bio). Inside, you’ll find: · 10 ‘snapshot’ mini case studies · 12 pages of detailed analysis of how L&D is using AI · 12 pages of quantitative analysis · 14 pages of in-depth case studies from Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group · 1 framework: the Transformation Triangle. As AI makes it easier and faster to generate content, we explore the profound implications for L&D. And all of this is illustrated with ample quotes from the people out there doing the work. This isn’t an armchair exercise. We’ve gone through countless interviews and around 20,000 words of text that our respondents generated in the survey describing their work. This is a vivid illustration of what’s happening with AI in L&D today. We hope it will provide insight, information and inspiration. My key takeaway: we’ve passed an inflexion point. For the first time, over half our respondents said they were no longer just experimenting with AI, but actually using it. That’s a significant shift from last year. AI has moved from being a novelty to being part of L&D’s regular toolkit. And look at how they are using it – sure, content creation dominates. But look at the table of how things have changed since last year. Again, content dominates the top four places, but just beneath, there’s one extraordinary change. Qualitative data analysis has leapt from eighth place last year to fifth this year. The single biggest change from year to year. This single point illustrates something we see across all our analysis, and in all of our case studies: a shift towards more sophisticated use, an increased focus on data, analysis and research. The featured case studies illustrate some of these inventive new uses perfectly – to learn more, download the report now. Our thanks to our report sponsors, OpenSesame, Speexx and The Regis Company for making this report possible. To download, click the link in my profile, or go to the first comment.
·linkedin.com·
What's happening with AI in L&D? Well, here it is — the 2025 edition.
Today, Donald H Taylor and I are releasing our third annual report on AI in L&D: The Race for Impact. If you’ve been wondering whether you’re behind, which AI uses you haven’t yet tried, or how to take your work further, we’ve put this report together to give you answers and ideas. Inside you'll find: ➡️ Fresh data on the most popular AI uses in L&D, how patterns are shifting, and what barriers teams still face ➡️ 12 pages detailing AI uses across learning design and content development, internal L&D ops, strategy and insight, and workforce enablement to inform and inspire your practice ➡️ 14 pages of in-depth AI in L&D case studies by Microsoft, ServiceNow, TTEC, KPMG UK, Leyton and mci group ➡️ A framework - the Transformation Triangle - exploring what AI’s move into “traditional” L&D work means for the function’s future role 600+ respondents. 53 countries. 20,000+ words in write-in responses. Days of interviews. Countless hours of deliberations and coffees trying to make sense of how the industry has evolved over the past 3 years and what it means for the road ahead. These are extraordinary numbers and they wouldn’t exist without the community behind them. Thank you to everyone who took the time to complete the survey and share thoughtful answers. Thank you to our case study contributors, who gave hours of their own time to document their practice for the benefit of the wider industry. Thank you to our sponsors OpenSesame, The Regis Company and Speexx who made this work possible. And thank you to Don: what started as a coffee conversation has grown into a three-year collaboration that keeps pushing both of us (and hopefully the field) forward. The full report is free to download (link in the comments). P.S. Below is a snapshot of the most common AI use cases we mapped this year. It gives a sense of where the field is and might spark a few new ideas 🙌 ♻️ Share this post so more teams can find these insights and build on each other’s work.
·linkedin.com·
This article dropped a few days ago 👉 https://lnkd.in/djktVNKi Main talking points:
💡 Companies are adopting AI like crazy, but they should invest in preparing people to work with AI just as much. Apparently, that doesn't happen nearly as often as it should 💡 The research presented in the article highlights that Gen AI Tutors outperform classroom training by 32% on personalization and 17% on feedback relevance. 💡 Gen AI Tutors create space for self-reflection, which is awesome 💡 Learners finished training 23% faster while achieving the same results 💡 Frontline workers, culture change, and building AI competence were mentioned as applications for Gen AI My thoughts: 💭 I think one of the hardest decisions we will face is where we should use Gen AI Tutors and where we should keep human interaction as part of learning 💭 The "results" in the research presented were, mostly, imho, still vanity metrics. I'm looking forward to seeing research where the analysis of results is more comprehensive (spanning a longer timeline, with clear leading indicators, etc). Until then, I can't fully be convinced that Gen AI Tutors truly perform better at growing cognitive & behavioral skills 💭 While I find the culture change application interesting, I do hope Gen AI Tutors won't be used to absolve leaders of the responsibility THEY have for building cultures. I can't see a good result coming out of this. Very curious to hear your thoughts 👀 #learninganddevelopment
·linkedin.com·
SAP and OpenAI partner to launch sovereign ‘OpenAI for Germany’ | Anja C. Wagner
Finally, something is properly moving forward: OpenAI for Germany - an interesting step towards digital sovereignty. Today SAP and OpenAI officially presented the “OpenAI for Germany” partnership. The goal: to provide AI technologies for the public sector in Germany, with a focus on data protection, data sovereignty and legal compliance. The solution is based on SAP's Delos Cloud (running on Microsoft Azure technology), operated locally in Germany. What's behind it? Millions of employees in administrations, public authorities and research institutions are to get AI-supported tools to make processes more efficient. Plans include automating procedures, data analysis and workflow integration. At launch, 4,000 GPUs of AI compute are to be available, with the ambition to scale further. It fits Germany's AI ambitions: the state sees AI as an important value driver by 2030, with the potential to contribute up to 10% of GDP. Why I think this matters: it is a clear signal that AI solutions do not have to mean either “everything abroad” or “everything purely national”. Local, hybrid infrastructures can be part of the solution. Especially in the public sector, AI can create real added value - provided security, law and trust are in place. And it is a good example of how partnerships (tech + industry + state) can work together. Launch is planned for 2026. Bravo, I think 🙏 But it won't please everyone ... ⛓️‍💥 https://lnkd.in/dE3q9Jys #digital #souveraen #verwaltung #ki
·linkedin.com·
AI will find its way into schools whether we like it or not. The danger lies in ignoring it; that’s how ‘workslop’ takes root.
‘We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.’ So begins a great piece in the Harvard Business Review which has coined a new term for the poor AI practices now developing: employees are producing sloppy work with AI and actually creating more work down the line for the person they pass the ‘workslop’ onto. The article offers some clear pointers on how organisations can move on to better AI practice, summed up in the conclusion: ‘Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.’ These lessons are just as applicable to schools as to businesses. The key difference is that we not only need leaders to model best practice, but teachers to help students understand what this looks like. It’s vital we take active steps now to shape habits: AI can be a force for innovation and amplify what’s best in our schools, or it can drive ‘workslop’ in staff and students. Surely the choice is a no-brainer? (Link to piece in comments via post on this from David Monis-Weston)
·linkedin.com·
Landing an L&D leadership role should be the moment to shape the future of learning in your organisation. Yet, for many, the reality is frustratingly different but can be easily avoided. When strategies stall, sponsorship fails and momentum is lost it’s rarely because of a lack of ambition, ideas or intent. It’s usually because they skip the hard but essential steps that make a strategy stick. Here’s what I see happen most often:
·linkedin.com·
The development team behind the Model Context Protocol (MCP) has introduced the MCP Registry
– an open catalog and API to discover and use publicly available MCP servers. Finally, MCP servers can be discovered through a central catalog. Think of it as an App Store for scanning and searching your MCP servers: → An open catalog + API for discovering MCP servers → One-click install in VS Code → Servers from npm, PyPI, DockerHub → Sub-registries possible for security and curation → Works across Copilot, Claude, Perplexity, Figma, Terraform, Dynatrace etc. Although it is still in preview and being worked on, it will definitely solve a major problem. GitHub link: https://lnkd.in/df-qTnYe
·linkedin.com·
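For a sense of what “an open catalog and API” means in practice, here is a minimal sketch that lists entries from the public registry. The base URL, the /v0/servers path and the response field names are assumptions based on the preview documentation and may change while the registry is still in preview.

```python
# Minimal sketch: list publicly registered MCP servers from the registry API.
# Base URL, endpoint path and response field names are assumptions taken from
# the preview docs and may change; the registry is still in preview.
import requests

REGISTRY_URL = "https://registry.modelcontextprotocol.io/v0/servers"


def list_mcp_servers(limit: int = 10) -> list[dict]:
    """Fetch one page of server entries from the MCP Registry."""
    response = requests.get(REGISTRY_URL, params={"limit": limit}, timeout=10)
    response.raise_for_status()
    payload = response.json()
    # Field names are assumed; fall back gracefully if the schema differs.
    return payload.get("servers", payload.get("data", []))


if __name__ == "__main__":
    for server in list_mcp_servers():
        name = server.get("name", "<unnamed>")
        description = server.get("description", "")
        print(f"{name}: {description}")
```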
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least.
So, New Zealanders and Australians are the most similar to AI, and Ethiopians and Pakistanis the least. This study shows that WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations are the most aligned with ChatGPT. It is intriguing that some nations, including the Netherlands and Germany, are more GPT-similar than Americans. The paper uses cognitive tasks such as the “triad task,” which distinguishes between analytic (category-based) and holistic (relationship-based) thinking. GPT tends towards analytic thinking, which aligns with countries like the Netherlands and Sweden that value rationality. This contrasts with the holistic thinking found in many non-WEIRD cultures. GPT tends to describe the "average human" in terms that are aligned with WEIRD norms. In short, with the overwhelmingly skewed data used to train AI and the outputs it produces, it's “WEIRD in, WEIRD out”. The size of the models or training data is not the issue; it's the diversity and representativeness of the data. All of which underlines the value and importance of sovereign AI, potentially for regions or values-aligned cultures, not just at the national level.
·linkedin.com·
“This is the last year in which we deliver PowerPoints as the product.”
“This is the last year in which we deliver PowerPoints as the product. In future, we will deliver bots.” The old order of knowledge work is starting to shift. At the OpenAI Forum, Aaron "Ronnie" Chatterji, OpenAI's Chief Economist, and Joseph Fuller of Harvard Business School discussed how artificial intelligence is reshaping tasks, organisations and careers. Their diagnosis: the technology is sprinting ahead, while companies remain stuck in processes built for the analogue world of the classic slide deck. Fuller sums up the turning point in a formula that should make the consulting industry sit up: “This is the last year in which we deliver PowerPoints as the product. In future, we will deliver bots.” Behind the quip lies a sober economic analysis: rule-based activities can be scaled. Clients demand less information arbitrage and more actionable systems. Many consulting models lived off the ability to lift information across silos, structure it and interpret it. Now, decision platforms are moving to the fore: platforms that integrate live data from finance, HR and sales systems and answer the question “What now?” automatically. This creates an overlay layer of decision intelligence, and PowerPoint slides become a footnote. Instead, bots and workflows become the product, the two experts expect. Read more at F.A.Z. PRO Digitalwirtschaft (FAZ+) ▶︎ https://lnkd.in/eFjyCYGw Frankfurter Allgemeine Zeitung ___________ My new online course: Generative AI for Executives (Generative KI für Führungskräfte). The course explains, in compact form, the key strategic and economic effects of generative AI for companies. It focuses on productivity effects, the impact on business models and on companies' competitiveness, and the relationship between AI and work. It closes with a look at the status of AI in Germany. The course is aimed at executives interested in the economic implications of generative AI for companies, the economy and competition, and reflects the current state of practice and research. ▪️ Duration: 80 minutes ▪️ Content: 9 videos / 66 slides Book here ▶︎ https://lnkd.in/ezsB-KDg
·linkedin.com·