Your best coach can't be everywhere at once.
But their AI twin can.
Scaling world-class coaching is one of the biggest headaches in L&D. You bring in a top-tier expert for a workshop, and the C-suite loves it; then what?
The knowledge fades, and the cost to retain them for 1-on-1 coaching across the org is astronomical.
Well, the ability to have experts available 24/7 is now a reality.
Google is quietly testing a potential solution in its Labs.
It's called Portraits.
It's more than a chatbot. It's a library of voice-enabled, AI-powered avatars of real-world experts, trained only on their unique ideas and content.
What that means:
→ Minimal AI hallucinations
→ No generic advice
→ Just the expert's authentic perspective, on-demand
Check out this screenshot of Google Portraits. That's an AI version of storytelling expert Matt Dicks. He's coaching me to find the "heart of a story" in a seemingly dull, everyday moment: cutting grass.
It's a very immersive experience as he walks me through finding the "story" in my experience.
Think about the possibilities:
→ Democratize coaching: Assign a storytelling coach or a feedback sparring partner to every new manager.
→ Practice in private: Let employees rehearse difficult conversations in a safe and controlled environment before the real thing.
→ Scalable IP: A new model for licensing and deploying the knowledge of the world's best minds across your entire company.
This is the future of personalized, scalable learning. It's moving from static courses to dynamic, conversational experiences.
The big question for us in L&D:
Is this the scalable future we've been waiting for, or are we losing the essential human element of coaching?
Working with MCP is one of those rare "oh damn, this changes everything" moments!
I've been in tech for years, and MCP (Model Context Protocol) is one of those rare innovations that deserves every bit of the hype. I really can't believe how much smoother everything gets.
If I had to bet on one protocol becoming essential in AI, it's MCP.
MCP sounds complex, but it's really not. Think of it as a guide that helps your AI agents understand:
→ what tools exist
→ how to talk to them
→ and when to use them (a minimal server sketch follows this list)
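To make that concrete, here is a minimal sketch of an MCP server exposing a single tool, written against the Python MCP SDK's FastMCP interface. The tool itself and its stubbed data are invented for illustration:

```python
# Minimal MCP server sketch: one tool, served over stdio so an MCP client
# (e.g. Claude Desktop or Cursor) can discover it, read its schema, and call it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def get_stock_price(ticker: str) -> str:
    """Return the latest (stubbed) price for a ticker symbol."""
    # A real server would call a market-data API here.
    prices = {"AAPL": "189.30", "MSFT": "415.10"}
    return prices.get(ticker.upper(), "unknown ticker")

if __name__ == "__main__":
    mcp.run()  # speak the MCP protocol over stdio
```

The client side works the same way in reverse: it lists the server's tools, passes their schemas to the model, and executes whichever tool the model decides to call.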
Here are 9 fully documented MCP projects explained with visuals & open-source code (to get you started): ⬇️
1. 100% Local MCP Client
→ Build a local MCP client using SQLite + Ollama: no cloud, no tracking.
→ Full docs: https://lnkd.in/gtaEGvFZ
2. MCP-powered Agentic RAG
→ Add fallback logic, vector search, and agents in one clean flow.
→ Full docs: https://lnkd.in/gsV62MDE
3. MCP-powered Financial Analyst
→ Fetch stock data, extract insights, generate summaries.
→ Full docs: https://lnkd.in/g2_EaJ_d
4. MCP-powered Voice Agent
→ Speech-to-text, database queries, and spoken responses, all local.
→ Full docs: https://lnkd.in/gweH8Rxi
5. Unified MCP Server (with MindsDB)
→ Query 200+ data sources via natural language using MindsDB + Cursor.
→ Full docs: https://lnkd.in/gCevVqKK
6. Shared Memory for Claude + Cursor
→ Build cross-app memory for dev workflows and share context seamlessly.
→ Full docs: https://lnkd.in/giDXdtXd
7. RAG Over Complex Docs
→ Tackle PDFs, tables, charts, and messy layouts with structured RAG.
→ Full docs: https://lnkd.in/gMHqHvBR
8. Synthetic Data Generator (SDV)
→ Generate synthetic tabular data locally via MCP + SDV.
→ Full docs: https://lnkd.in/ghyUyByS
9. Multi-Agent Deep Researcher
→ Rebuild ChatGPT's research mode, fully local with writing agents.
→ Full docs: https://lnkd.in/gp3EsrZ2
Kudos to Daily Dose of Data Science!
I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
Have you heard about AI Leap 2025 yet?
AI Leap is a nationwide AI education initiative from #Estland (Estonia) that gives 20,000 students in grades 10 and 11, plus 3,000 teachers, free access to AI-based learning tools and the corresponding training.
I was already fascinated by Estonia's political stance and consistent execution last year, when I had the chance to discuss the future world of work on an IHK Berlin panel together with, among others, the Ambassador of the Republic of Estonia, Marika Linntam.
AI Leap is Estonia's answer to the many challenges in education: it builds, early on, the key competencies that will be essential for the labour market of the future. Estonia has recognised that professional use of AI technologies will be the most important competitive factor of the future.
That was also one of the 4 theses I presented beforehand in a keynote; you can find the full talk here: https://lnkd.in/dTdXMGuA
BUT:
🎯 WHERE DOES GERMANY STAND?
🎯 How can we act quickly and effectively despite Germany's federalised education system?
Exciting questions for our new government, especially with a view to the Federal Ministry for Digital Affairs and State Modernisation under Dr. Karsten Wildberger, which wants to take #Digitalisierung and #KI #KünstlicheIntelligenz in Germany to the next level.
What I like is the spirit of optimism and the #WirMachen ("let's get it done") attitude. I hope we manage to move the needle and involve the relevant stakeholders. I'm happy to pitch in, because there is still A LOT TO DO.
Estonia is leading the way! It is much smaller than Germany, but we can still learn a lot from Estonia (and other countries), especially if we invest in global cooperation and public-private partnership models.
Source: https://lnkd.in/eUzXiSza
#FutureOfWork #FutureSkills #SmartLearning
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
👉 Want to learn more about the changing world of work? Let's connect!
👉 Interested in working together? Feel free to message me!
When I think about the future of learning with AI, I don't imagine it as more content and courses.
A rewiring of what we do and how we do it is happening right now.
While most teams are still stuck on innovations from two years ago, you can get ahead of this.
Yet I still see a lot of talk and not much action, sprinkled with plenty of misinformation and too little real understanding of GenAI's power and limitations.
That creates a problem if the L&D industry wishes to thrive in the new world of work with AI.
That's not to say I have "all the answers", because I don't.
What I do have is a barrel load of real-world experience from working with teams on making AI adoption a success.
In tomorrow's Steal These Thoughts! newsletter, I'm going to share some of that with 5 insights that'll challenge everything you think you know about AI in L&D.
Like the sound of that?
→ Join us by clicking 'subscribe to my newsletter' on this post and my profile.
#education #learninganddevelopment #artificialintelligence
This is the feature I've been waiting for OpenAI to release.
It's not "game-changing", but it's incredibly useful.
Users can now select the model they want to use with a custom GPT, which is perfect for those using my performance consulting coach GPT.
Switch the model to o3 and use it as it was intended in my original design.
Here's a little how-to video with my GPT in action.
Find my GPT: https://lnkd.in/e2pdCKt8
#education #artificialintelligence #learninganddevelopment
I spent my long weekend exploring the 2025 AI-in-Education report - two graphs showed a major disconnect!
We might think we have an AI adoption story, but the reality is different: we still have a huge AI understanding gap!
Here are some key stats from the report that honestly made me do a double-take:
▪️ 99% of education leaders, 87% of educators worldwide & 93% of US students have already used generative AI for school at least once or twice!
▪️ Yet only 44% of those educators worldwide & 41% of those US students say they "know a lot about AI."
‼️ This means our usage is far outpacing our understanding, and that's a significant gap!
When such powerful tools are used without real fluency, we can expect:
▪️ complicated implementation with no shared strategy (sounds familiar?)!
▪️ anxious students who fear being accused of cheating (I've heard this from so many students!)
▪️ overwhelmed teachers who feel alone, unsupported & unprepared (a common concern among my teacher friends)!
The takeaway that jumped out at me:
▪️ the schools that win won't be the ones that adopt AI the fastest, but the ones that adopt it most wisely!
So here's what I think we should consider:
→ building a "learning-first" culture across institutions & understanding when AI supports our learning vs. when it gets in the way!
▪️ more like, we need to swap the question "Are we using AI?" for "Can we show any learning gains?"
⚠️ So, what shifts does this report data point us to? Here is my takeaway:
→ Building real AI fluency:
▪️ moving beyond simple "prompting hacks" to true literacy that includes understanding ethics, biases & pedagogical purposes,
▪️ this may need an AI Council of faculty, IT, learners & others working together to develop institution-wide policies on when AI helps or harms our learning,
▪️ it's about building shared wisdom, not just industry-ready skills
→ Creating collaborative infrastructure:
▪️ the "every teacher for themselves" approach seems to be failing,
▪️ shared guidelines, inclusive AI Councils & a culture of open conversation are now needed to bridge this huge gap!
→ Shifting focus from "using AI tools" to "achieving learning outcomes":
▪️ this one really resonated with me because, unlike other tech rollouts we've witnessed, AI directly affects how our students think & learn,
▪️ our institutions need coordinated assessments tracking whether AI use makes our learners better thinkers or just faster task completers!
The goal that keeps coming back to us:
▪️ it isn't to get every student using AI!
▪️ but to make sure every learner & teacher really understands it!
I'm curious: where is your institution on this journey?
1️⃣ individual use: everyone is figuring it out on their own (been there!)
2️⃣ shared guidelines: we have policies, but they're not yet deeply integrated (getting closer!)
3️⃣ fully integrated strategy: we have a unified approach with a learning-first, outcome-tracked focus (this is the goal!)
This is hands down one of the BEST visualizations of how LLMs actually work. ⬇️
Let's break it down:
Tokenization & Embeddings:
- Input text is broken into tokens (smaller chunks).
- Each token is mapped to a vector in high-dimensional space, where words with similar meanings cluster together.
The Attention Mechanism (Self-Attention):
- Words influence each other based on context, ensuring "bank" in riverbank isn't confused with a financial bank.
- The Attention Block weighs relationships between words, refining their representations dynamically.
Feed-Forward Layers (Deep Neural Network Processing):
- After attention, tokens pass through multiple feed-forward layers that refine meaning.
- Each layer learns deeper semantic relationships, improving predictions.
Iteration & Deep Learning:
- This process repeats through dozens or even hundreds of layers, adjusting token meanings iteratively.
- This is where the "deep" in deep learning comes in: layers upon layers of matrix multiplications and optimizations.
Prediction & Sampling:
- The final vector representation is used to predict the next word as a probability distribution.
- The model samples from this distribution, generating text word by word (see the toy sketch below).
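To make the pipeline tangible, here is a toy NumPy sketch of the steps above: embeddings, single-head self-attention, a feed-forward layer, and sampling the next token. All weights are random and the dimensions are tiny; this illustrates the mechanics, not a real model.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 50, 16                      # vocabulary size, embedding dimension
E = rng.normal(size=(vocab, d))        # embedding table (token ids -> vectors)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))
Wout = rng.normal(size=(d, vocab))     # projection back to vocabulary logits

tokens = np.array([3, 17, 42])         # token ids after "tokenization"
x = E[tokens]                          # embeddings: (seq_len, d)

# Self-attention: every token looks at every other token and re-weights itself.
Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)   # softmax
x = x + weights @ V                    # residual update

# Feed-forward layer refines each token representation independently.
x = x + np.maximum(x @ W1, 0) @ W2

# Prediction & sampling: softmax over the vocabulary, then sample the next token.
logits = x[-1] @ Wout
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("sampled next token id:", rng.choice(vocab, p=probs))
```

A real LLM repeats the attention and feed-forward steps across dozens or hundreds of layers and many attention heads, which is exactly the iteration described above.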
These mechanics are at the core of all LLMs (e.g. ChatGPT). It is crucial to have a solid understanding of how they work if you want to build scalable, responsible AI solutions.
Here is the full video from 3Blue1Brown with the explanation. I highly recommend reading, watching, and bookmarking this for a further deep dive: https://lnkd.in/dAviqK_6
I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
Scientists just published something in Nature that will scare every marketer, leader, and anyone else who thinks they understand human choice.
Researchers created an AI called "Centaur" that can predict human behavior across ANY psychological experiment with disturbing accuracy. Not just one narrow task. Any decision-making scenario you throw at it.
Here's the deal: They trained this AI on 10 million human choices from 160 different psychology experiments. Then they tested it against the best psychological theories we have.
The AI won. In 31 out of 32 tests.
But here's the part that really got me...
Centaur wasn't an algorithm built to study human behavior. It was a language model that learned to read us. The researchers fed it tons of behavioral data, and suddenly it could predict choices better than decades of psychological research.
This means our decision patterns aren't as unique as we think. The AI found the rules governing choices we believe are spontaneous.
Even more unsettling? When they tested it on brain imaging data, the AI's internal representations became more aligned with human neural activity after learning our behavioral patterns. It's not just predicting what you'll choose, it's learning to think more like you do.
The researchers even demonstrated something called "scientific regret minimization": using the AI to identify gaps in our understanding of human behavior, then developing better psychological models.
Can a model based on Centaur be tuned for how customers behave? Companies will know your next purchasing decision before you make it. They'll design products you'll want, craft messages you'll respond to, and predict your reactions with amazing accuracy.
Understanding human predictability is a competitive advantage today. Until now, that knowledge came from experts in behavioral science and consumer behavior. Now, there's Centaur.
Here's my question: If AI can decode the patterns behind human choice with this level of accuracy, what does that mean for authentic decision-making in business? Will companies serve us better with perfectly tailored offerings, or will this level of understanding lead to dystopian manipulation?
What's your take on predictable humans versus authentic choice?
#AI #Psychology #BusinessStrategy #HumanBehavior
There is perhaps no industry more fundamentally disrupted by AI than professional services. Here are some of the top insights from the excellent new Thomson Reuters Future of Professionals Report, which draws on a survey of over 2,000 professionals globally.
The industry is built on professionals, so individual capability development, as shown in the image, is fundamental. However, it is also about organizational transformation, and most firms are far behind where they need to be. The report shows:
Strategy-first adopters dominate ROI.
Having a visible AI roadmap makes all the difference: firms with a clear strategy are 3.5× more likely to enjoy at least one concrete benefit from AI, and almost twice as likely to see revenue growth compared with ad-hoc adopters.
AI is freeing up 240 hours a year.
Professionals expect generative AI to claw back about five hours a week (240 hours annually), worth roughly US$19k per head and a US-wide impact of US$32 billion for legal and tax-accounting alone.
Expectations outrun execution.
While 80% of respondents foresee AI having a high or transformational impact within five years, only 38% think their own organisation will hit that level this year, and three in ten say their firm is moving too slowly.
Skill depth multiplies payoff.
Employees with good or expert AI knowledge are 2.8× more likely to report organisational gains, regular users are 2.4× more likely, and those with explicit AI adoption goals are 1.8× more likely to see benefits.
Leaders who walk the talk win.
When leaders model new tech adoption, their people are 1.7× likelier to harvest AI benefits; active tech investors double their odds, and firms that added transformation roles see a 1.5× uplift.
Accuracy anxieties set a sky-high bar.
A hefty 91% believe computers must outperform humans on accuracy, and 41% insist on 100% correctness before trusting AI without review, making reliability the top blocker to further investment.
Millennials are sprinting ahead.
Millennials are adopting AI at nearly twice the rate of Baby Boomers, underscoring a generational divide that could widen capability gaps if left unaddressed.
Tech-skill shortages stall teams.
Almost half (46%) of teams report skill gaps, with 31% pointing to deficits in technology and data know-how, outpacing gaps in traditional domain expertise or soft skills.
Service models are already shifting.
Twenty-six percent of firms launched new advisory offerings in the past year, yet only 13% have rolled out AI-powered services; meanwhile, a third are moving away from hourly billing and a quarter of in-house clients reward flexible fee structures.
Goals and strategy are often misaligned.
Two-thirds (65%) of professionals who set personal AI goals don't know of any corporate AI strategy, while 38% of organisations with a strategy give staff no personal targets: fuel for inconsistent, inefficient adoption.
So, it finally happened: I spent a week "vibe coding" an app with an AI app builder.
I learnt a ton from this experience, which I'll be sharing more about in an upcoming premium edition of the Steal These Thoughts! newsletter.
Until then, here's what I built and why.
Just over a year ago (feels like an eternity these days), I shared an article with you on how you can assess the AI readiness of your L&D team in 4 levels.
At the time, I thought, "This might be a good use case for an app experiment", but the AI-powered app builders weren't so great then.
Now, it's a whole new world, and I've spent about 30 hours creating an AI Readiness Assessment tool to live beside this article.
The journey felt simple-ish, but it was not easy, friend.
I now have a newfound respect for devs, because the debugging and constant blockers have been traumatic. While the tool is available to use, it is most certainly a prototype, so expect bugs, glitches and weird things to happen.
For now, I'd love for you to try it out, give me your feedback (worth developing, or should I kill it?) and any other thoughts.
Watch the demo below on how to use the tool.
Link to the tool: https://lnkd.in/efJaPJF5
Share your feedback at support@stealthesethoughts.com
#education #artificialintelligence
ChatGPT-4o System Prompt (June 2025)
The system prompt for ChatGPT-4o has been leaked.
Anyone who thinks a language model like ChatGPT-4o is simply a well-trained neural network is thinking too narrowly.
What makes the interaction precise, professional, and reliable does not happen in the model alone, but in its systemic control: the system prompt.
It is the invisible script that dictates how the model thinks, feels (figuratively speaking), researches, and interacts with you.
1. Structure: modular, rule-based, deliberately orchestrated
The system prompt consists of cleanly separated functional blocks:
• Role control: e.g. factual, honest, no small talk
• Tool integration: access to analysis, image, web, and file tools
• Logic modules: controlling freshness, source, time frame, and file type
Each module is formulated declaratively and deterministically; the answer logic follows fixed paths.
The result: transparency and repeatability, even for complex requests.
⸻
2. Control mechanisms: quality through targeted restriction
To ensure relevance, several filters apply:
• QDF (Query Deserves Freshness): ensures temporally appropriate results, from "timeless" to "same-day".
• Time-frame filter: only active for explicit time references, never arbitrarily.
• Source filter: determines whether, for example, Slack, Google Drive, or the web is queried.
• File-type filter: focuses on specific file formats (e.g. spreadsheets, presentations).
These filters prevent information overload: they narrow the search field and raise the quality of the results.
⸻
3. Answer architecture: not prose, but usable results
Answers follow strict rules:
• Always structured in Markdown format
• Factual, compact, fact-based
• No duplication, no stylistic games, no rhetorical noise
The goal: clarity without post-editing. The output is ready to use, not merely informative.
⸻
4. Prompt engineering: room for professionals
The prompt itself cannot be edited, but it can be played. Anyone who understands its mechanics can deliberately:
• Activate tools via semantic triggers ("Slack", "current", "PDF")
• Enforce format requirements in their prompts
• Model complex interactions as sequential prompt chains
• Develop domain-specific prompt libraries
Bottom line: prompt engineers who understand the system don't build texts, they build control logic.
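As a purely hypothetical illustration (not the leaked prompt itself), the modular structure described above could be sketched like this: separate declarative blocks for role, tools, retrieval logic, and output format, composed into one system prompt. All block texts and names here are invented.

```python
# Hypothetical sketch of a modular system prompt; none of these block texts
# are from the actual ChatGPT-4o prompt.
ROLE_BLOCK = "You answer factually and honestly. No small talk."
TOOL_BLOCK = (
    "Available tools: python_analysis, image_gen, web_search, file_search. "
    "Only call a tool when the user's request requires it."
)
LOGIC_BLOCK = (
    "Freshness (QDF): prefer recent sources for time-sensitive questions. "
    "Source filter: only search the connectors the user names. "
    "File-type filter: honour requested formats such as spreadsheets or slides."
)
FORMAT_BLOCK = "Always answer in structured Markdown. Be compact and fact-based."

def build_system_prompt(*blocks: str) -> str:
    """Join the declarative blocks into one deterministic system prompt."""
    return "\n\n".join(blocks)

if __name__ == "__main__":
    print(build_system_prompt(ROLE_BLOCK, TOOL_BLOCK, LOGIC_BLOCK, FORMAT_BLOCK))
```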
⸻
What can we learn from this?
1. Precision is not an accident; it is architecture.
2. Good answers do not start with model performance, they start with context management.
3. Whoever builds prompts builds systems, with rules, triggers, and interaction logic.
4. AI becomes productive when structure meets intelligence.
Whether in consulting, development, or knowledge work, the system prompt shows:
The clearer the rules in the background, the stronger the effect in the foreground.
The United Nations dropped a new report on AI and human development: ⬇️ While the world chases the next frontier model or AGI milestone, the UN cuts deeper: human development has flatlined (especially in the global South). Progress stalled. Inequality is rising. Trust is crumbling. There has been no real bounce-back since Covid. And right in the middle of that, AI shows up.
AI could drive a new era. Or it could deepen the cracks. It all comes down to how societies choose to use AI to empower people, or fail to.
Here are 14 key takeaways that stood out to me: ⬇️
1. Most AI systems today are designed in cultures that don't reflect the majority world.
→ ChatGPT answers are most aligned with very high HDI countries. That's a problem.
2. The real risk isn't AI superintelligence. It's "so-so AI."
→ Tools that destroy jobs without improving productivity are quietly eroding economies from the inside.
3. Every person is becoming an AI decision-maker.
→ The future isn't shaped by OpenAI or Google alone. It's shaped by how we all choose to use this tech, every day.
4. AI hype is costing us agency.
→ The more we believe it will solve everything, the less we act ourselves.
5. People expect augmentation, not replacement.
→ 61% believe AI will "enhance" their jobs. But only if policy and incentives align.
6. The age of automation skipped the global south. The age of augmentation must not.
→ Otherwise, we widen the digital divide into a chasm.
7. Augmentation helps the least experienced workers the most.
→ From call centers to consulting, AI boosts performance fastest at the entry level.
9. Narratives matter.
→ If all we talk about is risk and control, we miss the transformative potential to reimagine development.
10. Wellbeing among young people is collapsing.
→ And yes, digital tools (including AI) are a key driver. Especially in high HDI countries.
11. Human connections are becoming more valuable. Not less.
→ As machines get better at faking it, the real thing becomes rarer, and more needed.
12. Assistive AI is quietly revolutionizing inclusion.
→ Tools like sign language translation and live captioning are expanding access, but only if they're accessible.
13. AI benchmarks must change.
→ We need to measure "how AI advances human development", not just how well it performs on tests.
14. The new divide is not just about access. It's about how countries "use" AI.
→ Complement vs. compete. Empower vs. automate.
According to the UN: The old question was "What can AI do?" The better question is "What will we choose to do with it?"
More in the comments and report below.
Enjoy.
I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
Andrej Karpathy's keynote on June 17, 2025 at AI Startup School in San Francisco. Slides provided by Andrej: https://drive.google.com/file/d/1a0h1mkwfmV2Plek...
Today's L&D is more than just content. Or at least it should be.
When we think about AI in L&D, we often think about AI in learning design. Yet, to meet the needs of the business, L&D leaders need to orchestrate design, data, decisions and dialogue; incidentally, these are all things that AI can help with.
In learning design, we already extensively use AI not just for content production, but also for user research, as a sparring partner and a sounding board (that was one of the top write-in use cases in my and Donald's AI in L&D survey last year).
In learning strategy, AI can help make sense of business, people and skills data (featured use case: asking AI to find gaps in learning or performance support provision in your organisation), or work as a thought partner to help you bridge learning and business strategy. Crucially, it can also help you engage stakeholders by preparing you for conversations and tailoring your communications to different audiences.
In terms of personalised support, AI interacts directly with employees to help them do their jobs: practise tricky conversations through role-plays and personalised feedback, prioritise and contextualise learning content to their needs, and, lately, retrieve exactly the information they need from almost anywhere in the company's knowledge base.
Finally, in learning operations, AI can help do more than just draft emails and reports. Working together with humans, AI can help select the right vendors for the learning ecosystem, streamline employee help desk operations, analyse, make sense of, and act on different kinds of data generated in L&D, and, of course, help L&D communicate with the rest of the business.
Researcher, producer, thought partner, communicator: if your organisation only uses AI to write scripts, you're leaving three quarters of the L&D value chain on the table.
I like a good table, and I hope this one will help you think about how to get more value out of your AI use.
---
P.S. I spent quite a lot of time arguing with myself about the dots on the table. Feel free to disagree and suggest AI roles or use cases that I have missed!
Nodes #GenAI #Learning #Talent #FutureOfWork #AIAdoption
Distinguishing performance gains from learning when using generative AI - published in Nature Reviews Psychology!
Excited to share our latest commentary, just published in Nature Reviews Psychology! ✨
""ย
ย
Generative AI tools such as ChatGPT are reshaping education, promising improvements in learner performance and reduced cognitive load.
But here's the catch: do these immediate gains translate into deep and lasting learning?
Reflecting on recent viral systematic reviews and meta-analyses on #ChatGPT and #Learning, we argue that educators and researchers need to clearly differentiate short-term performance benefits from genuine, durable learning outcomes.
Key takeaways:
→ Immediate boosts with generative AI tools don't necessarily equal durable learning
→ While generative AI can ease cognitive load, excessive reliance might negatively impact critical thinking, metacognition, and learner autonomy
→ Long-term, meaningful skill development demands going beyond immediate performance metrics
Recommendations for future research and practice:
1️⃣ Shift toward assessing retention, transfer, and deep cognitive processing
2️⃣ Promote active learner engagement, critical evaluation, and metacognitive reflection
3️⃣ Implement longitudinal studies exploring the relationship between generative AI assistance and prior learner knowledge
Special thanks to my amazing collaborators and mentors, Samuel Greiff, Jason M. Lodge, and Dragan Gasevic, for their invaluable contributions, guidance, and encouragement. A big shout-out to Dr. Teresa Schubert for her insightful comments and wonderful support throughout the editorial process!
Full article here: https://lnkd.in/g3YDQUrH
Full-text access (view-only version): https://rdcu.be/erwIt
#GenerativeAI #ChatGPT #AIinEducation #LearningScience #Metacognition #Cognition #EdTech #EducationalResearch #BJETspecialIssue #NatureReviewsPsychology #FutureOfEducation #OpenScience
In a now-viral study, researchers examined how using ChatGPT for essay writing affects our brains and cognitive abilities. They divided participants into three groups: one using ChatGPT, one using search engines, and one using just their brains. Through EEG monitoring, interviews, and analysis of the essays, they discovered some unsurprising results about how AI use impacts learning and cognitive engagement.
There were five key takeaways for me (although this is not an exhaustive list), within the context of this particular study:
1. The Cognitive Debt Issue
The study indicates that participants who used ChatGPT exhibited the weakest neural connectivity patterns when compared to those relying on search engines or unaided cognition. This suggests that defaulting to generative AI may function as an intellectual shortcut, diminishing rather than strengthening cognitive engagement.
Researchers are increasingly describing the tradeoff between short-term ease and productivity and long-term erosion of independent thinking and critical skills as โcognitive debt.โ This parallels the concept of technical debt, when developers prioritise quick solutions over robust design, leading to hidden costs, inefficiencies, and increased complexity downstream.
2. The Memory Problem
Strikingly, users of ChatGPT had difficulty recalling or quoting from essays they had composed only minutes earlier. This undermines the notion of augmentation; rather than supporting cognitive function, the tool appears to offload essential processes, impairing retention and deep processing of information.
3. The Ownership Gap
Participants who used ChatGPT reported a reduced sense of ownership over their work. If we normalise over-reliance on AI tools, we risk cultivating passive knowledge consumers rather than active knowledge creators.
4. The Homogenisation Effect
Analysis showed that essays from the LLM group were highly uniform, with repeated phrases and limited variation, suggesting reduced cognitive and expressive diversity. In contrast, the Brain-only group produced more varied and original responses. The Search group fell in between.
5. The Potential for Constructive Re-engagement
There is, however, promising evidence for meaningful integration of AI when used in conjunction with prior unaided effort:
"Those who had previously written without tools (Brain-only group), the so-called Brain-to-LLM group, exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic. This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control."
This points to the potential for AI to enhance cognitive function when it is used as a complement to, rather than a substitute for, initial human effort.
At over 200 pages, expect multiple paper submissions out of this extensive body of work.
https://lnkd.in/gzicDHp2
No, your brain does not perform better after LLM or during LLM use.
See our paper for more results: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (link in the comments).
For 4 months, 54 students were divided into three groups: ChatGPT, Google (search only, with AI answers excluded), and Brain-only. Across 3 sessions, each wrote essays on SAT prompts. In an optional 4th session, participants switched: LLM users used no tools (LLM-to-Brain), and the Brain-only group used ChatGPT (Brain-to-LLM).
I. NLP and Essay Content
- LLM Group: Essays were highly homogeneous within each topic, showing little variation. Participants often relied on the same expressions or ideas.
- Brain-only Group: Diverse and varied approaches across participants and topics.
- Search Engine Group: Essays were shaped by search engine-optimized content; their ontology overlapped with the LLM group but not with the Brain-only group.
II. Essay Scoring (Teachers vs. AI Judge)
- Teachers detected patterns typical of AI-generated content and scored LLM essays lower for originality and structure.
- The AI judge gave consistently higher scores to LLM essays, missing human-recognized stylistic traits.
III. EEG Analysis
Connectivity: Brain-only group showed the highest neural connectivity, especially in alpha, theta, and delta bands. LLM users had the weakest connectivity, up to 55% lower in low-frequency networks. Search Engine group showed high visual cortex engagement, aligned with web-based information gathering.
Session 4 Results:
- LLM-to-Brain (🤖🤖🤖🧠) participants underperformed cognitively, with reduced alpha/beta activity and poor content recall.
- Brain-to-LLM (🧠🧠🧠🤖) participants showed strong re-engagement, better memory recall, and efficient tool use.
LLM-to-Brain participants had potential limitations in achieving robust neural synchronization essential for complex cognitive tasks.
Results for Brain-to-LLM participants suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration.
IV. Behavioral and Cognitive Engagement
- Quoting Ability: LLM users failed to quote accurately, while Brain-only participants showed robust recall and quoting skills.
- Ownership: Brain-only group claimed full ownership of their work; LLM users expressed either no ownership or partial ownership.
- Critical Thinking: Brain-only participants cared more about what and why they wrote; LLM users focused on how.
- Cognitive Debt: Repeated LLM use led to shallow content repetition and reduced critical engagement. This suggests a buildup of "cognitive debt", deferring mental effort at the cost of long-term cognitive depth.
Support and share! ❤️
#MIT #AI #Brain #Neuroscience #CognitiveDebt
Finally!
You can now choose the right model for your custom GPT.
Custom GPTs are, for me, the best feature in ChatGPT, and they have been badly neglected over the last 12 months.
With model selection, the first good upgrade has now arrived.
Mini guide to model selection:
o3 -> complex problems and data analysis
4.5 -> creative tasks and copywriting
4o -> image processing
4.1 -> coding
In my opinion, you don't need any of the other models.
My strategy advisor, for example, gets o3 (better planning ability for complex tasks), whereas the hook writer gets GPT-4.5 (better writing style).
If you want to use the custom GPTs yourself:
80+ templates are freely available in our assistant database 👇
P.S. What do you think of the update?
99% of people get this wrong: they use the terms AI Agent and Agentic AI interchangeably, but they describe two fundamentally different architectures! ⬇️
Let's clarify it once and for all: ⬇️
1. AI Agents: Tools with Autonomy, Within Limits
→ AI agents are modular, goal-directed systems that operate within clearly defined boundaries. They're built to:
* Use tools (APIs, browsers, databases)
* Execute specific, task-oriented workflows
* React to prompts or real-time inputs
* Plan short sequences and return actionable outputs
They're excellent for targeted automation, like: customer support bots, internal knowledge search, email triage, meeting scheduling, code suggestions.
But even the most advanced are limited by scope. They don't initiate. They don't collaborate. They execute what we ask!
2. Agentic AI: A System of Systems
→ Agentic AI is an architectural leap. It's not just one smarter agent; it's multiple specialized agents working together toward shared goals. These systems exhibit:
* Multi-agent collaboration
* Goal decomposition and role assignment
* Inter-agent communication via memory or messaging
* Persistent context across time and tasks
* Recursive planning and error recovery
* Distributed orchestration and adaptive feedback
Agentic AI systems don't just follow instructions. They coordinate. They adapt. They manage complexity.
Examples include: research teams powered by agents, smart home ecosystems optimizing energy and security, swarms of robots in logistics or agriculture managing real-time uncertainty.
The Core Difference?
AI Agents = autonomous tools for single-task execution
Agentic AI = orchestrated ecosystems for workflow-level intelligence
Now look at the picture: ⬇️
On the left: a smart thermostat, which can be an AI Agent. It keeps your room at 21°C. Maybe it learns your schedule. But it's working alone.
On the right: Agentic AI. A full smart home ecosystem: weather-aware, energy-optimized, schedule-sensitive. Agents talk to each other. They share data. They make coordinated decisions to optimize your comfort, cost, and security in real time.
That's the shift: from pure task automation to goal-driven orchestration. From single-agent logic to collaborative intelligence. This is what's coming. This is Agentic AI. And if we confuse "agent" with "agentic," we risk underbuilding for what AI is truly capable of.
The Cornell University paper in the comments on this topic is excellent! ⬇️
In fact, after guiding many organisations on this journey over the past few years, I've noticed two consistent drivers of AI adoption:
• A culture that encourages experimentation
• A strategic mandate from leadership that unlocks time, resources, and the infrastructure needed to make AI work at scale
Without both, even the most powerful tools are used at a fraction of their potential, leaving the promise of AI unrealised and considerable investments wasted.
➡️ If you have a conservative organisational culture, one that disincentivises risk-taking and change, and there's no clear mandate to use AI, you'll have idle potential. Try as you might, AI training will hardly translate into people using AI in their work. The knowledge might be there, but the impact isn't.
➡️ If you have an innovation culture, one where experimentation is encouraged, but where people are unsure if they're allowed to use AI, you'll have casual experiments: some people tinkering on their own, finding useful use cases and workarounds, but with no way to accumulate, build on, and spread this knowledge. That's where a lot of organisations find themselves in 2025: the majority of employees are using AI in some form, yet their efforts are siloed and scattered.
➡️ If you have both an innovation culture *and* an active mandate, you're pioneering innovation, and there are still few companies at your level. That's an exciting place to be! That's also where a lot of organisations imagine they would get to as soon as they teach people to use AI, often without first doing the culture and mandate work.
➡️ If your organisation encourages the use of AI but your conservative culture keeps hitting the brakes, you'll likely end up with reluctant compliance. That's also where a considerable number of organisations are right now: driven by expectations of benefits from AI adoption but burdened by processes that are incompatible with grassroots innovation.
There is a difference between individual and organisational AI adoption. Organisational adoption is frustratingly complex: it requires coordination across departments and leaders, alignment with business priorities, and systems that enable change, not just enthusiasm.
Curiosity gets people started. Supportive systems turn momentum into scale.
Nodes #GenAI #AIAdoption #FutureOfWork #Talent
For the longest time, we've had two main options to help people perform: upskilling or performance support. Just-in-case vs just-in-time. Push vs pull. With AI, we now have a third: enablement.
It's different from what we've had before:
Upskilling ("teach me") - commonly done through hands-on learning with feedback and reflection, such as scenario simulations, in-person role-plays, facilitated discussions, building and problem-solving. None of that has become less relevant, but AI has enabled scale through AI-enabled role-plays, coaching, and other avenues for personalised feedback.
Performance support ("help me") - support in the flow of work, previously often in the format of short how-to resources located in convenient places. AI has elevated that in at least two ways: through knowledge management, which helps retrieve the necessary, contextualised information in the workflow; and through general and specialised copilots that enhance the speed and, arguably, the expertise of the employee.
Yet enablement ("do it for me") is different: it takes the task off your plate entirely. We've seen hints of it with automations, but the text and analysis capabilities of genAI mean that increasingly 'skilled' tasks are now up for grabs.
Case in point: where written communication was once a skill to be learned, email and report writing are now increasingly being handed off to AI. No skill required (for better or worse): AI does it for you.
But here's a plot twist: a lot of that enablement happens outside of L&D tech. It may happen in sales or design software, or even your general-purpose enterprise AI.
All of which points to a bigger shift: roles, tasks, and ways of working are changing, and L&D must tune into how work is being reimagined to adapt alongside it.
Nodes #GenAI #Learning #Talent #FutureOfWork #AIAdoption
The Alan Turing Institute and the LEGO Group dropped the first child-centric AI study! ⬇️
(A must-read, especially if you have children.)
While most AI debates and studies focus on models, chips, and jobs, this one zooms in on something far more personal: what happens when children grow up with generative AI?
They surveyed 1,700+ kids, parents, and teachers across the UK, and what they found is both powerful and concerning.
Here are 9 things that stood out to me from the report: ⬇️
1. 1 in 4 kids (8-12 yrs) already use GenAI, most without safeguards
→ ChatGPT, Gemini, and even MyAI on Snapchat are now part of daily digital play.
2. AI is helping kids express themselves, especially those with learning needs
→ 78% of neurodiverse kids use ChatGPT to communicate ideas they struggle to express otherwise.
3. Creativity is shifting, but not being replaced
→ Kids still prefer offline tools (arts, crafts, games), even when they enjoy AI-assisted play. Digital is not (yet) the default.
4. AI access is highly unequal
→ 52% of private school students use GenAI, compared to only 18% in public schools. The next digital divide is already here.
5. Children are worried about AI's environmental impact
→ Some kids refused to use GenAI after learning about water and energy costs. Let that sink in.
6. Parents are optimistic, but deeply worried
→ 76% support AI use, but 82% are scared of inappropriate content and misinformation. Only 41% fear cheating.
7. Teachers are using AI, and loving it
→ 85% say GenAI boosts their productivity, 88% feel confident using it. They're ahead of the curve.
8. Critical thinking is under threat
→ 76% of parents and 72% of teachers fear kids are becoming too trusting of GenAI outputs.
9. Bias and identity representation is still a blindspot
→ Children of color felt less seen and less motivated to use tools that didn't reflect them. Representation matters.
The next generation isn't just using AI. They're being shaped by it. That's why we need a more focused, intentional approach: teaching them not just how to use these tools, but how to question them. To navigate the benefits, the risks, and the blindspots.
Want more breakdowns like this?
Subscribe to Human in the Loop, my new weekly deep dive on AI agents, real-world tools, and strategic insights: https://lnkd.in/dbf74Y9E
Understanding LLMs, RAG, AI Agents, and Agentic AI
I frequently see conversations where terms like LLMs, RAG, AI Agents, and Agentic AI are used interchangeably, even though they represent fundamentally different layers of capability.
This visual guide explains how these four layers relate: not as competing technologies, but as an evolving intelligence architecture.
Here's a deeper look:
1. LLM (Large Language Model)
This is the foundation. Models like GPT, Claude, and Gemini are trained on vast corpora of text to perform a wide array of tasks:
→ Text generation
→ Instruction following
→ Chain-of-thought reasoning
→ Few-shot/zero-shot learning
→ Embedding and token generation
However, LLMs are inherently limited to the knowledge encoded during training and struggle with grounding, real-time updates, or long-term memory.
2. RAG (Retrieval-Augmented Generation)
RAG bridges the gap between static model knowledge and dynamic external information.
By integrating techniques such as:
→ Vector search
→ Embedding-based similarity scoring
→ Document chunking
→ Hybrid retrieval (dense + sparse)
→ Source attribution
→ Context injection
…RAG enhances the quality and factuality of responses. It enables models to "recall" information they were never trained on, and grounds answers in external sources, which is critical for enterprise-grade applications.
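A minimal sketch of that retrieve-then-generate flow, with toy hashed bag-of-words vectors standing in for a real embedding model (the documents, query, and helper names are invented for illustration):

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size vector, then normalize."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

documents = [
    "MCP lets agents discover and call external tools.",
    "RAG grounds model answers in retrieved documents.",
    "Feed-forward layers refine token representations.",
]
index = [(doc, embed(doc)) for doc in documents]   # chunking + embedding the corpus

def retrieve(query: str, k: int = 2) -> list[str]:
    """Vector search: rank chunks by similarity to the query embedding."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: -float(q @ pair[1]))
    return [doc for doc, _ in ranked[:k]]

query = "How do models stay grounded in external sources?"
context = "\n".join(retrieve(query))
# Context injection: retrieved chunks are prepended to the prompt sent to the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```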
3. AI Agents
RAG is still a passive architecture: it retrieves and generates. AI Agents go a step further: they act.
Agents perform tasks, execute code, call APIs, manage state, and iterate via feedback loops. They introduce key capabilities such as:
→ Planning and task decomposition
→ Execution pipelines
→ Long- and short-term memory integration
→ File access and API interaction
→ Use of frameworks like ReAct, LangChain Agents, AutoGen, and CrewAI
This is where LLMs become active participants in workflows rather than just passive responders.
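For illustration, here is a stripped-down version of that plan / act / observe loop. The planner is a stub with hard-coded rules; in a real agent an LLM would choose the tool (for example via ReAct-style prompting), and both tool names below are invented:

```python
from typing import Callable

# Toy tool registry; a real agent would expose APIs, code execution, file access, etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
    "lookup": lambda key: {"capital of france": "Paris"}.get(key.lower(), "not found"),
}

def plan(task: str) -> tuple[str, str]:
    """Stub planner: pick a tool and its input (an LLM would do this in practice)."""
    if any(ch.isdigit() for ch in task):
        return "calculator", task
    return "lookup", task

def run_agent(task: str, max_steps: int = 3) -> str:
    observation = task
    for _ in range(max_steps):
        tool, tool_input = plan(observation)
        observation = TOOLS[tool](tool_input)   # act: execute the chosen tool
        if observation != "not found":
            return observation                  # observe: goal reached, stop
    return observation

print(run_agent("capital of France"))  # -> Paris
print(run_agent("2 + 2 * 10"))         # -> 22
```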
4. Agentic AI
This is the most advanced layer, where we go beyond a single autonomous agent to multi-agent systems with role-specific behavior, memory sharing, and inter-agent communication.
Core concepts include:
→ Multi-agent collaboration and task delegation
→ Modular role assignment and hierarchy
→ Goal-directed planning and lifecycle management
→ Protocols like MCP (Anthropic's Model Context Protocol) and A2A (Google's Agent-to-Agent)
→ Long-term memory synchronization and feedback-based evolution
Agentic AI is what enables truly autonomous, adaptive, and collaborative intelligence across distributed systems.
Whether you're building enterprise copilots, AI-powered ETL systems, or autonomous task orchestration tools, knowing what each layer offers, and where it falls short, will determine whether your AI system scales or breaks.
If you found this helpful, share it with your team or network.
If there's something important you think I missed, feel free to comment or message me; I'd be happy to include it in the next iteration.
BREAKING: Claude launches Education. Free learning is now much faster with AI:
1. Set clear learning goals
↳ Knowing what you want to learn makes it easier.
↳ Claude helps you define your path.
2. Provide context for your knowledge
↳ Understanding the bigger picture is key.
↳ Claude connects new ideas to what you already know.
3. Request detailed explanations
↳ Sometimes, you need more than a quick answer.
↳ Claude can dive deep into complex topics.
4. Get real-world examples
↳ Learning is better with practical applications.
↳ Claude shows how concepts work in the real world.
5. Practice writing and receive feedback
↳ Writing helps solidify your knowledge.
↳ Claude gives instant feedback to improve your skills.
6. Role-play for languages or coding
↳ Learning by doing is effective.
↳ Claude can simulate conversations or coding scenarios.
7. Fact-check surprising claims
↳ Misinformation is everywhere.
↳ Claude helps you verify facts and claims.
8. Take breaks and reflect on learning
↳ Reflection is vital for understanding.
↳ Claude reminds you to pause and think.
9. Keep a learning journal
↳ Tracking your progress is important.
↳ Claude can help you log your journey.
10. Iterate and refine understanding
↳ Learning is a process.
↳ Claude encourages you to improve your knowledge.