AI News

545 bookmarks
If We Want To Understand The Future Of AI, Just Watch Star Trek: The Next Generation And I am dead serious.
If We Want To Understand The Future Of AI, Just Watch Star Trek: The Next Generation. And I am dead serious.

For those unfamiliar, Star Trek: TNG ran from 1987 to 1994. It didn't just predict technology, it reimagined our relationship to it. And it got something right we're still getting wrong.

We've misunderstood what AI actually is, and since GPT we've been distracted by a shiny and seductive object. We keep calling it an intern, an assistant, a tool, a shortcut. For many it's a potential threat, or a get-rich-quick scheme. Silicon Valley loves those metaphors because they're cheap. But they're not just misleading, they're limiting.

The real problem? Strategy. Because the people shaping AI strategy for the enterprise are management consultants, and this technology is as new to them as it is to anyone, but they pretend to know what they're doing, and they don't. Here's the formula they sell to CEOs and CFOs:
We'll implement this tech to reduce costs.
We'll treat your office like a factory.
We'll measure tasks, optimize bottlenecks, and speed up cycle times.
We'll replace humans wherever possible.
You'll save money, signal to the market, and boost your share price.

Sounds smart. But it's junior-high thinking. Because this entire logic assumes AI's greatest value is in efficiency. It views humans as bottlenecks, not assets. It assumes replacing judgment with pattern-matching is strategic progress. It's rear-view-mirror thinking dressed up as innovation.

And it's not working. Error rates remain high.¹ Hallucinations persist.² Most GenAI pilots fail to scale.³ And internal backlash is growing.⁴

Why? Because AI isn't about automation. It's about augmentation. And that means imagining new ways of thinking, creating, and deciding, not just faster ways to do what we already do. Star Trek: TNG showed us what that could look like. The ship's computer wasn't a task engine, it was a thinking partner. Data wasn't a replacement for the crew, he was part of the crew. AI didn't strip humanity, it deepened it. Oh, and was Data sentient? Who cares; for me he was.

The trick with technology is to avoid spreading darkness at the speed of light.

Sign up: Curiouser.AI is the force behind The Rogue Entrepreneur, a masterclass series for builders, misfits, and dreamers. For those of us who still realize we need to work hard to be successful and that there are no magic shortcuts. Inspired by The Unreasonable Path, a belief that progress belongs to those with the imagination and courage to simply be themselves. To learn more, DM or email stephen@curiouser.ai (LINK IN COMMENTS)

Sources:
¹ [MIT Sloan] 85% of GenAI projects fail to deliver ROI
² [Stanford/Princeton 2024] Hallucination rates range 3%–27% depending on task
³ [McKinsey, 2023] Most enterprise AI pilots fail to scale
⁴ [Korn Ferry, 2024] 54% of knowledge workers report productivity declines from GenAI tools
·linkedin.com·
Microsoft just analyzed 200,000 real-world AI conversations, and ranked how automatable your job really is.
Microsoft just analyzed 200,000 real-world AI conversations, and ranked how automatable your job really is.

MS Research studied how people actually use Microsoft Copilot, and what kinds of tasks AI performs best. Then they mapped that usage onto real job data across occupation classifications. The result? A first-of-its-kind AI applicability score across 800+ occupations. And some surprising findings.

But what does "AI-applicable" even mean? Microsoft used a 3-part score:
→ Coverage: how often AI touches a job's tasks
→ Completion: how well AI helps with those tasks
→ Scope: how much of the job AI can actually handle

Most AI-applicable jobs? Interpreters, Writers, Historians, Sales Reps, Customer Service, Journalists.
Least AI-applicable jobs? Phlebotomists, Roofers, Ship Engineers, Dishwashers, Tractor Operators.

Here are the 6 key takeaways:

1. AI is not doing your job, it's helping you do it better. In 40% of conversations, the AI task and the user's goal were completely different. People ask AI for help gathering, editing, summarizing. The AI responds by teaching and explaining. This is augmentation at scale.

2. Information work is the real frontier. The most common user goals? "Get information" and "Write content." The most common AI actions? "Provide information," "Teach others," and "Advise."

3. The jobs most affected are not just high-tech, they're high-communication. Interpreters, historians, journalists, teachers, and customer service roles all scored high. Why? Because they involve information, communication, and explanation, all things LLMs are good at.

4. AI can't replace physical work, and probably won't. The bottom of the list? Roofers, dishwashers, tractor operators. Manual jobs remain least impacted, not because AI can't help, but because it can't reach.

5. Wage isn't a strong predictor of AI exposure. Surprising: there's only a weak correlation (r=0.07) between average salary and AI applicability. In other words, this wave of AI cuts across income levels. It's not just a C-suite story.

6. Bachelor's-degree jobs are most exposed, but not most replaced. Occupations requiring a degree show more AI overlap. But that doesn't mean these jobs disappear; it means they change. AI is refactoring knowledge work, not deleting it.

This transformation is moving faster than most realize. The question isn't whether AI will change how we work; it already is. Study in comments.

P.S. I recently launched a newsletter where I write about exactly these shifts every week: AI agents, emerging workflows, and how to stay ahead. Subscribe at https://www.humanintheloop.online/subscribe
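To make the three-part score concrete, here is a minimal Python sketch of how coverage, completion, and scope could be combined into a single applicability number. The equal weighting, the [0, 1] scaling, and the example figures are assumptions for illustration; the report does not publish its exact formula.

```python
# Illustrative sketch only: the report does not publish its exact formula.
# Coverage, completion, and scope are assumed to be normalized to [0, 1],
# and the aggregate is a simple weighted mean (an assumption, not Microsoft's method).
from dataclasses import dataclass

@dataclass
class OccupationUsage:
    name: str
    coverage: float    # share of the job's task categories that show up in Copilot logs
    completion: float  # how successfully AI assisted with those tasks (0..1)
    scope: float       # fraction of the overall job those tasks represent (0..1)

def applicability_score(o: OccupationUsage,
                        weights=(1/3, 1/3, 1/3)) -> float:
    """Toy AI-applicability score: weighted mean of the three components."""
    w_cov, w_comp, w_scope = weights
    return w_cov * o.coverage + w_comp * o.completion + w_scope * o.scope

if __name__ == "__main__":
    jobs = [
        OccupationUsage("Interpreter", coverage=0.85, completion=0.75, scope=0.70),
        OccupationUsage("Roofer",      coverage=0.10, completion=0.40, scope=0.05),
    ]
    for job in sorted(jobs, key=applicability_score, reverse=True):
        print(f"{job.name:12s} {applicability_score(job):.2f}")
```

The point of the decomposition is that a job can rank high on one axis and low on another: an occupation whose tasks AI touches often (high coverage) can still score low overall if those tasks are a small slice of the job (low scope).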
·linkedin.com·
Hot off the press from the White House: the US AI Action Plan. "WINNING THE RACE" is the message. I'm very curious to see the European response. My GPT says, quite neutrally:
Hot off the press from the White House: the US AI Action Plan. "WINNING THE RACE" is the message. I'm very curious to see the European response. My GPT says, quite neutrally:

"Will 2025 become the year of the global AI doctrine? With the 28-page 'America's AI Action Plan', the Trump administration presents an uncompromisingly ambitious strategy paper: a geopolitical manifesto for technological supremacy that radically rethinks innovation, infrastructure, and diplomacy. The goal: global AI dominance. No 'could', no 'should', but a 'will', from an administration that sees AI as the key to America's economic, military, and cultural future. The document proclaims a new industrial revolution, an information revolution, and a digital renaissance all at once.

The plan includes:
• deregulation and prioritization of open-source models
• billions in investment in semiconductors, cloud infrastructure, energy, and AI research
• state-funded AI sandboxes for healthcare, education, defense, and industry
• national real-world test labs, skills initiatives, and accelerated adoption in the public sector
• an export offensive for an "American AI Stack" of hardware, models, and standards
• strict export controls and diplomatic isolation of China in governance bodies
• cyber- and biosecurity measures against misuse of frontier models
• legal adjustments to combat deepfakes and synthetic evidence

What stands out is the openly geopolitical tone: the US once again sees itself as the architect of a new world order, with AI as the lever. Whoever wins the race writes the rules. For Europe, the question is more urgent than ever: do we only want to regulate, or do we also want to shape?"

Source: https://lnkd.in/eXwTUGzv
·linkedin.com·
I was recently looking at how Gen AI shapes and potentially impacts cognitive skills - a topic that matters for education and for work. Here are a few resources I reviewed.
I was recently looking at how Gen AI shapes and potentially impacts cognitive skills - a topic that matters for education and for work. Here are a few resources I reviewed.

1️⃣ Your Brain on ChatGPT - What Really Happens When Students Use AI
MIT released a study on AI and learning. Findings indicate that students who used ChatGPT for essays showed weaker brain activity, couldn't remember what they'd written, and got worse at thinking over time. https://shorturl.at/qaLie

2️⃣ Cognitive Debt when using AI - Your Brain on ChatGPT
There is a cognitive cost to using an LLM vs. a search engine vs. our own brain when, for example, writing an essay. The study indicates a likely decrease in learning skills the more we use technology as a substantial replacement for our cognitive skills. https://lnkd.in/drVa_YNg

3️⃣ Teachers warn AI is impacting students' critical thinking
One of many articles about the importance of using Gen AI smartly, in education but also at work. https://lnkd.in/dSbGjusu

4️⃣ The Impact of Gen AI on critical thinking
Another interesting study on the same topic. https://shorturl.at/74OO6

5️⃣ Doctored photographs create false memories
In psychology, research indicated long ago that our memory - our recollection of past events - is susceptible to errors and biases; it can be fragmentary, contain incorrect details, and oftentimes be entirely fictional. Memories are a reconstruction of our past that responds to our need for coherence in life. A rigorous 2023 study shows that doctored photographs (think Photoshop or, today, AI) create false memories. Why does it matter? Memory is essential for learning and for recall of episodic and factual happenings, and it's a basis for the integrity of sources of truth in organizations. https://shorturl.at/hdgtN

6️⃣ The decline of our thinking skills
Another great article on AI and critical thinking from IE University. https://shorturl.at/rGl99

7️⃣ Context Engineering
Ethan Mollick recently wrote a blog post on "context engineering": how we give AI the data and information it needs to generate relevant output. The comments on the post were even more interesting than the post itself. Personally, I think a good part of context engineering is not in an organization's documents or processes; it is in people's ability to think critically and understand the relevant parameters of their environment to nurture AI/Gen AI. Gotta follow up on this one ;-) https://shorturl.at/sfnuV

#GenAI #CriticalThinking #AICognition #AIHuman #ContextEngineering
·linkedin.com·
On Ethical AI Principles
I have commented in my newsletter that what people have been describing as 'ethical AI principles' actually represents a specific political agenda, and not an ethical agenda at all. In this post, I'll outline some ethical principles and work my way through them to make my point.
·linkedin.com·
Choosing a career in the age of learning machines – an open letter to my niece (Abitur class of 2025)
Choosing a career in the age of learning machines – an open letter to my niece (Abitur class of 2025)

Dear Anna,

when you asked me whether "computer science, media studies or politics" is still future-proof, I realized how full of holes the old map of work has become. Code is completed by AI, diagnoses are supported by algorithms, routine contracts are reviewed by bots. According to the World Economic Forum, almost every second skill will be rewritten by 2030. So what should you study?

My recommendation: three fields that depend less on the job title than on the skill mix. Why? Because they offer qualities that AI can hardly copy: direct human contact, interdisciplinary thinking, and hands-on experience with materials. Together they form a "human moat", a protective wall against pure automation.

1 | HEALTH & HUMAN SERVICES – PROFESSIONS WITH AN EMPATHY FACTOR
This one is expected: nursing, social work, therapy and education will remain in short supply, because demographics and crises demand resilience.
Typical roles: advanced practice nurse, physician assistant, mental-health tele-coach.
Key skills: evidence-based care, intercultural communication, basics of medical law and data protection.

2 | TWIN-TRANSITION CAREERS – CLIMATE × TECHNOLOGY
Smart mash-ups: companies need talent that combines CO₂ reduction with data competence.
Typical roles: sustainability data analyst, circular-economy engineer, AI policy analyst, energy-systems modeler.
Key skills: life-cycle assessment, Python/R, EU regulation (CSRD, AI Act), systems thinking.

3 | CRAFT & EXPERIENCE DESIGN – THE VALUE OF THE UNIQUE
The more perfectly mass-produced goods are AI-optimized, the higher the value of what cannot be scaled.
Typical roles: product designer for bio-materials, restorer, carpenter with CNC know-how, UX designer for phygital experiences.
Key skills: materials science, CAD/CAM & 3D printing, storytelling, customer-journey mapping.

This is only a snapshot, of course. But I think the patterns behind it are clear enough for you to think it through yourself.

WHAT WILL DISAPPEAR? Everything that is purely repetitive: standard reporting, simple software tests, page-long contract reviews. The machine does it faster and cheaper, yet someone has to design the systems, feed them with data and supervise them ethically.

MY ADVICE IN THREE SENTENCES
>> Don't look for a job label; look for a problem that electrifies you. <<
Combine basic digital fitness, empathetic communication and a moral compass. Then you won't be working against machines but with them, and you can reinvent your profession at any time.

Maybe you'll start as a nursing informatics specialist, later become an AI ethicist, and at some point open a bakery where robots knead the dough while you feed the sourdough and advise customers. Future-proofing doesn't come from a degree program; it comes from a lifelong appetite for learning. The world will remain turbulent, but whoever steers toward meaning always has a tailwind.

Your uncle Stefan
·linkedin.com·
Your best coach can't be everywhere at once.
Your best coach can't be everywhere at once. But their AI twin can.

Scaling world-class coaching is one of the biggest headaches in L&D. You bring in a top-tier expert for a workshop, and the C-suite loves it; then what? The knowledge fades, and the cost to retain them for 1-on-1 coaching across the org is astronomical. Well, the ability to have experts available 24/7 is now a reality.

Google is quietly testing a potential solution in its Labs. It's called Portraits. It's more than a chatbot. It's a library of voice-enabled, AI-powered avatars of real-world experts, trained only on their unique ideas and content.

What that means:
→ Minimal AI hallucinations
→ No generic advice
→ Just the expert's authentic perspective, on-demand

Check out this screenshot of Google Portraits. That's an AI version of storytelling expert Matt Dicks. He's coaching me to find the "heart of a story" in a seemingly dull, everyday moment: cutting grass. It's a very immersive experience as he walks me through finding the "story" in my experience.

Think about the possibilities:
→ Democratize coaching: assign a storytelling coach or a feedback sparring partner to every new manager.
→ Practice in private: let employees rehearse difficult conversations in a safe and controlled environment before the real thing.
→ Scalable IP: a new model for licensing and deploying the knowledge of the world's best minds across your entire company.

This is the future of personalized, scalable learning. It's moving from static courses to dynamic, conversational experiences. The big question for us in L&D: is this the scalable future we've been waiting for, or are we losing the essential human element of coaching?
·linkedin.com·
Working with MCP is one of those rare "oh damn, this changes everything" moments! I've been in tech for years, and MCP (Model Context Protocol) is one of those rare innovations that deserves every bit of the hype. I really can't believe how much smoother everything gets.
Working with MCP is one of those rare "oh damn, this changes everything" moments! I've been in tech for years, and MCP (Model Context Protocol) is one of those rare innovations that deserves every bit of the hype. I really can't believe how much smoother everything gets.

If I had to bet on one protocol becoming essential in AI, it's MCP. MCP sounds complex, but it's really not. Think of it as a guide that helps your AI agents understand:
→ what tools exist
→ how to talk to them
→ and when to use them

Here are 9 fully documented MCP projects explained with visuals & open-source code (to get you started):

1. 100% Local MCP Client
→ Build a local MCP client using SQLite + Ollama: no cloud, no tracking.
→ Full docs: https://lnkd.in/gtaEGvFZ

2. MCP-powered Agentic RAG
→ Add fallback logic, vector search, and agents in one clean flow.
→ Full docs: https://lnkd.in/gsV62MDE

3. MCP-powered Financial Analyst
→ Fetch stock data, extract insights, generate summaries.
→ Full docs: https://lnkd.in/g2_EaJ_d

4. MCP-powered Voice Agent
→ Speech-to-text, database queries, and spoken responses, all local.
→ Full docs: https://lnkd.in/gweH8Rxi

5. Unified MCP Server (with MindsDB)
→ Query 200+ data sources via natural language using MindsDB + Cursor.
→ Full docs: https://lnkd.in/gCevVqKK

6. Shared Memory for Claude + Cursor
→ Build cross-app memory for dev workflows and share context seamlessly.
→ Full docs: https://lnkd.in/giDXdtXd

7. RAG Over Complex Docs
→ Tackle PDFs, tables, charts, and messy layouts with structured RAG.
→ Full docs: https://lnkd.in/gMHqHvBR

8. Synthetic Data Generator (SDV)
→ Generate synthetic tabular data locally via MCP + SDV.
→ Full docs: https://lnkd.in/ghyUyByS

9. Multi-Agent Deep Researcher
→ Rebuild ChatGPT's research mode, fully local with writing agents.
→ Full docs: https://lnkd.in/gp3EsrZ2

Kudos to Daily Dose of Data Science!

I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
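For readers who want to see what "telling an agent which tools exist" looks like in code, below is a minimal MCP server sketch in Python following the FastMCP quickstart pattern from the official `mcp` SDK. Treat the exact import path and method names as assumptions that can differ between SDK versions; the tools themselves are invented examples.

```python
# Minimal MCP server sketch (assumes the official "mcp" Python SDK is installed:
# pip install "mcp[cli]"). Import path and decorators follow the FastMCP
# quickstart and may differ slightly between SDK versions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")  # server name shown to connecting clients

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers. The docstring becomes the tool description the agent sees."""
    return a + b

@mcp.tool()
def read_note(name: str) -> str:
    """Return a stored note by name (hypothetical in-memory example)."""
    notes = {"welcome": "Hello from the MCP server!"}
    return notes.get(name, "note not found")

if __name__ == "__main__":
    # Serve over stdio so clients like Claude Desktop or Cursor can launch
    # this script as a local tool provider.
    mcp.run(transport="stdio")
```

A client such as Claude Desktop or Cursor launches this script, lists the available tools over the protocol, and decides when to call `add` or `read_note` based on their descriptions: that is the "what exists, how to talk to it, when to use it" loop in practice.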
·linkedin.com·
Have you heard about AI Leap 2025 yet? AI Leap is a nationwide AI education initiative from #Estland (Estonia) that gives 20,000 10th- and 11th-grade students and 3,000 teachers free access to AI-based learning tools and corresponding training.
Have you heard about AI Leap 2025 yet? AI Leap is a nationwide AI education initiative from #Estland (Estonia) that gives 20,000 10th- and 11th-grade students and 3,000 teachers free access to AI-based learning tools and corresponding training.

Already last year I was fascinated by Estonia's political stance and consistent execution, when I discussed the future world of work on the IHK Berlin panel together with, among others, the Ambassador of the Republic of Estonia, Marika Linntam.

AI Leap is Estonia's answer to the many challenges in education, and it fosters, early on, the key competencies that will be essential for the labor market of the future. Estonia has recognized that professional use of AI technologies will be the most important competitive factor of the future. That was also one of my four theses that I presented beforehand in a keynote; you can find the full talk here: https://lnkd.in/dTdXMGuA

BUT:
🎯 Where does Germany stand?
🎯 How can we act quickly and effectively despite federalism in education?

Exciting questions for our new government, especially with a view to the Federal Ministry for Digital Affairs and State Modernization under Dr. Karsten Wildberger, which wants to take #Digitalisierung and #KI #KünstlicheIntelligenz (digitalization and AI) in Germany to the next level. What I like is the sense of a new beginning and a #WirMachen ("let's do it") attitude. I hope we manage to get things moving and involve the relevant stakeholders. I'm happy to be part of it, because there is still A LOT TO DO.

Estonia is leading the way! It is much smaller than Germany, but we can still learn a lot from Estonia (and other countries), especially if we invest in global cooperation and public-private partnership models.

Source: https://lnkd.in/eUzXiSza

#FutureOfWork #FutureSkills #SmartLearning

🔔 Want to learn more about the changing world of work? Let's connect!
💌 Interested in working together? Feel free to message me!
·linkedin.com·
When I think about the future of learning with AI, I don’t imagine it as more content and courses. A rewiring of what we do and how we do it is happening right now.
When I think about the future of learning with AI, I don't imagine it as more content and courses. A rewiring of what we do and how we do it is happening right now. While most teams are stuck at the point of innovations from 2 years back, you can be ahead of this.

Yet... I still see a lot of talk and not so much action, sprinkled with plenty of misinformation and little actual understanding of Gen AI's power and limitations. That creates a problem if the L&D industry wishes to thrive in the new world of work with AI.

That's not to say I have "all the answers", because I don't. What I do have is a barrel load of real-world experience working with teams on making AI adoption a success.

In tomorrow's Steal These Thoughts! newsletter I'm going to share some of that, with 5 insights that'll challenge everything you think you know about AI in L&D.

Like the sound of that? → Join us by clicking 'subscribe to my newsletter' on this post and my profile.

#education #learninganddevelopment #artificialintelligence
·linkedin.com·
Users can now select the model they want to use with a custom GPT, which is perfect for those using my performance consulting coach GPT.
This is the feature I've been waiting for OpenAI to release. It's not "game-changing", but it's incredibly useful. Users can now select the model they want to use with a custom GPT, which is perfect for those using my performance consulting coach GPT. Switch the model to o3 and use it as it was intended in my original design.

Here's a little how-to video with my GPT in action. Find my GPT: https://lnkd.in/e2pdCKt8

#education #artificialintelligence #learninganddevelopment
·linkedin.com·
I spent my long weekend exploring the 2025 AI-in-Education report - two graphs showed a major disconnect!
We might think we have an AI adoption story, but the reality is different: we still have a huge AI understanding gap!

Here are some key stats from the report that honestly made me do a double-take:
▪️ 99% of education leaders, 87% of educators worldwide & 93% of US students have already used generative AI for school at least once or twice!
▪️ Yet only 44% of those educators worldwide & 41% of those US students say they "know a lot about AI."
‼️ This means our usage is far outpacing our understanding, and that's a significant gap!

When such powerful tools are used without real fluency, we see:
▪️ complicated implementation with no shared strategy (sounds familiar?)
▪️ anxious students who fear being accused of cheating (I've heard this from so many students!)
▪️ overwhelmed teachers who feel alone, unsupported & unprepared (a common concern among some of my teacher friends)!

The takeaway that jumped out at me:
▪️ the schools that win won't be the ones that adopt AI the fastest, but the ones that adopt it the most wisely!

So here's what I think we should consider:
✅ building a "learning-first" culture across institutions & understanding when AI supports our learning vs. when it gets in the way
▪️ in other words, we need to swap the question "Are we using AI?" for "Can we show any learning gains?"

⚠️ So, what shifts does this report data point us to? Here is my takeaway:

✅ Building real AI fluency:
▪️ moving beyond simple "prompting hacks" to true literacy that includes understanding ethics, biases & pedagogical purposes,
▪️ this may need an AI Council of faculty, IT, learners & others working together to develop institution-wide policies on when AI helps or harms our learning,
▪️ it's about building shared wisdom, not just industry-ready skills

✅ Creating collaborative infrastructure:
▪️ the "every teacher for themselves" approach seems to be failing,
▪️ shared guidelines, inclusive AI Councils & a culture of open conversation are now needed to bridge this huge gap!

✅ Shifting focus from "using AI tools" to "achieving learning outcomes":
▪️ this one really resonated with me because, unlike other tech rollouts we've witnessed, AI directly affects how our students think & learn,
▪️ our institutions need coordinated assessments tracking whether AI use makes our learners better thinkers or just faster task completers!

The goal that keeps coming back to us
▪️ isn't to get every student using AI!
▪️ but to make sure every learner & teacher really understands it!

⁉️ I'm curious, where is your institution on this journey?
1️⃣ Individual use: everyone is figuring it out on their own (been there!)
2️⃣ Shared guidelines: we have policies, but they're not yet deeply integrated (getting closer!)
3️⃣ Fully integrated strategy: we have a unified approach with a learning-first, outcome-tracked focus (this is the goal!)
·linkedin.com·
This is hands down one of the best visualizations of how LLMs actually work.
This is hands down one of the best visualizations of how LLMs actually work. Let's break it down:

Tokenization & Embeddings:
- Input text is broken into tokens (smaller chunks).
- Each token is mapped to a vector in high-dimensional space, where words with similar meanings cluster together.

The Attention Mechanism (Self-Attention):
- Words influence each other based on context, ensuring "bank" in riverbank isn't confused with a financial bank.
- The attention block weighs relationships between words, refining their representations dynamically.

Feed-Forward Layers (Deep Neural Network Processing):
- After attention, tokens pass through multiple feed-forward layers that refine meaning.
- Each layer learns deeper semantic relationships, improving predictions.

Iteration & Deep Learning:
- This process repeats through dozens or even hundreds of layers, adjusting token meanings iteratively.
- This is where the "deep" in deep learning comes in: layers upon layers of matrix multiplications and optimizations.

Prediction & Sampling:
- The final vector representation is used to predict the next word as a probability distribution.
- The model samples from this distribution, generating text word by word.

These mechanics are at the core of all LLMs (e.g. ChatGPT). It is crucial to have a solid understanding of how they work if you want to build scalable, responsible AI solutions.

Here is the full video from 3Blue1Brown with the explanation. I highly recommend reading, watching and bookmarking this for a further deep dive: https://lnkd.in/dAviqK_6

I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
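To make the attention and prediction steps concrete, here is a small, self-contained NumPy sketch of single-head scaled dot-product self-attention followed by a next-token probability distribution. It uses random weights instead of a trained model, so the output probabilities are meaningless; only the mechanics mirror the description above.

```python
# Toy single-head self-attention + next-token scoring, for illustration only.
# Random weights stand in for a trained model; shapes mirror the description above.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "bank", "river", "money", "flows"]
V, d = len(vocab), 8                      # vocab size, embedding dimension

E = rng.normal(size=(V, d))               # token embedding table
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W_out = rng.normal(size=(d, V))           # maps final vectors back to vocab logits

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product attention: every token attends to every other token."""
    Q, K, Vv = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)         # pairwise relevance between tokens
    return softmax(scores) @ Vv           # context-mixed token representations

tokens = ["the", "river", "bank"]
X = E[[vocab.index(t) for t in tokens]]   # (3, d) embeddings for the sequence
H = self_attention(X)                     # contextualized vectors
logits = H[-1] @ W_out                    # score the next token from the last position
probs = softmax(logits)
print(dict(zip(vocab, probs.round(3))))   # a probability distribution over the vocab
```

A real transformer stacks this attention step with feed-forward layers dozens of times and samples from the final distribution instead of just printing it, but the data flow (embed, attend, transform, predict) is the same.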
·linkedin.com·
Scientists just published something in Nature that will scare every marketer, leader, and anyone else who thinks they understand human choice.
Scientists just published something in Nature that will scare every marketer, leader, and anyone else who thinks they understand human choice.

Researchers created an AI called "Centaur" that can predict human behavior across ANY psychological experiment with disturbing accuracy. Not just one narrow task. Any decision-making scenario you throw at it.

Here's the deal: they trained this AI on 10 million human choices from 160 different psychology experiments. Then they tested it against the best psychological theories we have. The AI won. In 31 out of 32 tests.

But here's the part that really got me... Centaur wasn't an algorithm built to study human behavior. It was a language model that learned to read us. The researchers fed it tons of behavioral data, and suddenly it could predict choices better than decades of psychological research.

This means our decision patterns aren't as unique as we think. The AI found the rules governing choices we believe are spontaneous.

Even more unsettling? When they tested it on brain imaging data, the AI's internal representations became more aligned with human neural activity after learning our behavioral patterns. It's not just predicting what you'll choose, it's learning to think more like you do.

The researchers even demonstrated something called "scientific regret minimization": using the AI to identify gaps in our understanding of human behavior, then developing better psychological models.

Can a model based on Centaur be tuned for how customers behave? Companies will know your next purchasing decision before you make it. They'll design products you'll want, craft messages you'll respond to, and predict your reactions with amazing accuracy.

Understanding human predictability is a competitive advantage today. Until now, that knowledge came from experts in behavioral science and consumer behavior. Now, there's Centaur.

Here's my question: if AI can decode the patterns behind human choice with this level of accuracy, what does that mean for authentic decision-making in business? Will companies serve us better with perfectly tailored offerings, or will this level of understanding lead to dystopian manipulation?

What's your take on predictable humans versus authentic choice?

#AI #Psychology #BusinessStrategy #HumanBehavior
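The head-to-head test described above (model vs. classic theory) boils down to asking which candidate assigns higher probability to the choices people actually made. Here is a hypothetical sketch of that evaluation logic; the data, probabilities, and function names are invented and have nothing to do with the actual Centaur code.

```python
# Hypothetical sketch of "which model explains human choices better?"
# Both predictors return P(choose option A); the data and numbers are invented.
import math

# Observed binary choices from a toy gambling task: 1 = chose option A.
choices = [1, 1, 0, 1, 0, 0, 1, 1]

def theory_prob(trial: int) -> float:
    """Stand-in for a classic psychological model's prediction."""
    return 0.6  # e.g., a fixed bias toward option A

def llm_prob(trial: int) -> float:
    """Stand-in for a behavior-tuned language model's prediction (varies by trial)."""
    return [0.8, 0.7, 0.3, 0.9, 0.2, 0.4, 0.7, 0.8][trial]

def neg_log_likelihood(predict, data) -> float:
    """Lower is better: how surprised the model is by the observed choices."""
    nll = 0.0
    for t, y in enumerate(data):
        p = predict(t)
        nll -= math.log(p if y == 1 else 1 - p)
    return nll

print("theory NLL:", round(neg_log_likelihood(theory_prob, choices), 3))
print("model  NLL:", round(neg_log_likelihood(llm_prob, choices), 3))
# The candidate with the lower NLL predicts the held-out human choices better.
```

"Winning 31 out of 32 tests" means the behavior-tuned model came out ahead on this kind of held-out predictive comparison across almost all of the experiment types.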
·linkedin.com·
There is perhaps no industry more fundamentally disrupted by AI than professional services.
There is perhaps no industry more fundamentally disrupted by AI than professional services. Here are some of the top insights from the excellent new ThomsonReuters Future of Professionals Report, drawing on a survey of over 2,000 professionals globally.

The industry is built on professionals, so individual capability development - as shown in the image - is fundamental. However, it is also about organizational transformation, with most firms far behind where they need to be. The report shows:

📊 Strategy-first adopters dominate ROI. Having a visible AI roadmap makes all the difference: firms with a clear strategy are 3.5× more likely to enjoy at least one concrete benefit from AI, and almost twice as likely to see revenue growth compared with ad-hoc adopters.

⏱️ AI is freeing up 240 hours a year. Professionals expect generative AI to claw back about five hours a week (240 hours annually), worth roughly US $19k per head and a US-wide impact of US $32 billion for legal and tax-accounting alone.

🚦 Expectations outrun execution. While 80% of respondents foresee AI having a high or transformational impact within five years, only 38% think their own organisation will hit that level this year, and three in ten say their firm is moving too slowly.

🧠 Skill depth multiplies payoff. Employees with good or expert AI knowledge are 2.8× more likely to report organisational gains, regular users are 2.4× more likely, and those with explicit AI adoption goals are 1.8× more likely to see benefits.

🏅 Leaders who walk the talk win. When leaders model new tech adoption, their people are 1.7× likelier to harvest AI benefits; active tech investors double their odds, and firms that added transformation roles see a 1.5× uplift.

🎯 Accuracy anxieties set a sky-high bar. A hefty 91% believe computers must outperform humans for accuracy, and 41% insist on 100% correctness before trusting AI without review, making reliability the top blocker to further investment.

🌱 Millennials are sprinting ahead. Millennials are adopting AI at nearly twice the rate of Baby Boomers, underscoring a generational divide that could widen capability gaps if left unaddressed.

🛠️ Tech-skill shortages stall teams. Almost half (46%) of teams report skill gaps, with 31% pointing to deficits in technology and data know-how, outpacing gaps in traditional domain expertise or soft skills.

🔄 Service models are already shifting. Twenty-six percent of firms launched new advisory offerings in the past year, yet only 13% have rolled out AI-powered services; meanwhile, a third are moving away from hourly billing and a quarter of in-house clients reward flexible fee structures.

🔗 Goals and strategy are often misaligned. Two-thirds (65%) of professionals who set personal AI goals don't know of any corporate AI strategy, while 38% of organisations with a strategy give staff no personal targets: fuel for inconsistent, inefficient adoption.
·linkedin.com·
How AI-ready is your L&D team?
So, it finally happened: I spent a week 'vibe coding' an app with an AI app builder. I learnt a ton from this experience, which I'll be sharing more on in an upcoming premium edition of the Steal These Thoughts! newsletter. Until then, here's what I built and why.

Just over a year ago (feels like an eternity these days), I shared an article with you on how you can assess the AI readiness of your L&D team in 4 levels. At the time, I thought, "This might be a good use case for an app experiment", but the AI-powered app builders weren't so great then.

Now, it's a whole new world, and I've spent about 30 hours creating an AI Readiness Assessment tool to live beside this article. The journey felt simple-ish, but it was not easy, friend. I now have a newfound respect for devs, because the debugging and constant blockers have been traumatic 😂.

While the tool is available to use, it is most certainly a prototype, so expect bugs, glitches and weird things to happen. For now, I'd love for you to try it out, give me your feedback (worth developing or should I kill it?) and any other thoughts.

Watch the demo on how to use the tool ↓
🔗 Link to the tool: https://lnkd.in/efJaPJF5
📧 Share your feedback: support@stealthesethoughts.com

#education #artificialintelligence
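As a rough illustration of the kind of logic such a readiness tool might use, here is a toy 4-level scoring sketch. The level names, questions, and thresholds are assumptions made up for this example; they are not taken from the actual assessment.

```python
# Hypothetical sketch of a 4-level readiness score; the level names, questions,
# and thresholds are assumptions for illustration, not the actual tool.
QUESTIONS = [
    "We have a documented AI policy for L&D work.",
    "Team members use AI tools in weekly workflows.",
    "We measure the impact of AI-assisted work.",
    "We run regular AI skills training.",
]

LEVELS = ["1: Exploring", "2: Experimenting", "3: Embedding", "4: Leading"]

def readiness_level(answers: list[bool]) -> str:
    """Map yes/no answers to one of four readiness levels by simple count."""
    score = sum(answers)                 # 0..len(QUESTIONS)
    thresholds = [1, 2, 3, 4]            # minimum "yes" count required per level
    level = 0
    for i, t in enumerate(thresholds):
        if score >= t:
            level = i
    return LEVELS[level]

print(readiness_level([True, True, False, False]))  # -> "2: Experimenting"
```

A real assessment would weight questions differently and branch by role, but the core idea (answers in, one of four levels out) is the same.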
·linkedin.com·
ChatGPT 4o System Prompt (June 2025)
ChatGPT 4o System Prompt (June 2025)

The system prompt for ChatGPT 4o has been leaked. Anyone who believes a language model like ChatGPT-4o is simply a well-trained neural network is thinking too narrowly. What makes the interaction precise, professional, and reliable does not happen in the model alone, but in its systemic steering: the system prompt. It is the invisible script that dictates how the model thinks, "feels" (in a figurative sense), researches, and interacts with you.

1. Structure: modular, rule-based, deliberately orchestrated
The system prompt consists of cleanly separated functional blocks:
• Role control: e.g. factual, honest, no small talk
• Tool integration: access to analysis, image, web, and file tools
• Logic modules: to control freshness, source, time frame, and file type
Each module is formulated declaratively and deterministically; the response logic follows fixed paths. The result: transparency and repeatability, even for complex requests.

2. Control mechanisms: quality through targeted restriction
Several filters ensure relevance:
• QDF (Query Deserves Freshness): delivers temporally appropriate results, from "timeless" to "same-day".
• Time-frame filter: only active when an explicit time reference is given, never arbitrarily.
• Source filter: determines whether, for example, Slack, Google Drive, or the web is queried.
• Filetype filter: focuses on specific file formats (e.g. spreadsheets, presentations).
These filters prevent information overload; they sharpen the search space and raise result quality.

3. Response architecture: not prose, but usable results
Responses follow strict rules:
• always structured in Markdown format
• factual, compact, fact-based
• no duplication, no stylistic play, no rhetorical noise
The goal: clarity without post-editing. The output is ready to use, not merely informative.

4. Prompt engineering: room for professionals
The prompt cannot be edited, but it can be played. Anyone who understands its mechanics can deliberately:
• activate tools via semantic triggers ("Slack", "current", "PDF")
• enforce format requirements in prompts
• model complex interactions as sequential prompt chains
• build domain-specific prompt libraries
Bottom line: prompt engineers who understand the system don't build texts; they build control logic.

What can we learn from this?
1. Precision is not an accident; it is architecture.
2. Good answers don't start with model performance; they start with context management.
3. Whoever builds prompts builds systems, with rules, triggers, and interaction logic.
4. AI becomes productive when structure meets intelligence.

Whether in consulting, development, or knowledge work, the system prompt shows: the clearer the rules in the background, the stronger the effect in the foreground.
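As an illustration of the "modular, declarative blocks" idea described above, here is a toy Python sketch that assembles a system prompt from fixed blocks and keyword triggers. The blocks and trigger words are invented for the example; this is not the leaked ChatGPT 4o prompt.

```python
# Illustrative sketch of the "modular system prompt" idea described above.
# These blocks and trigger words are invented for the example; they are not
# the leaked ChatGPT 4o prompt.
ROLE_BLOCK = "You are factual and honest. No small talk. Answer in Markdown."

TOOL_BLOCKS = {
    "web":   "Use the web tool when the user asks about current events.",
    "files": "Use the file tool when the user references an uploaded document.",
}

# Simple declarative trigger rules: keyword -> tool block to include.
TRIGGERS = {
    "today": "web",
    "latest": "web",
    "pdf": "files",
    "spreadsheet": "files",
}

def build_system_prompt(user_message: str) -> str:
    """Assemble the system prompt deterministically from declarative blocks."""
    blocks = [ROLE_BLOCK]
    wanted = {tool for word, tool in TRIGGERS.items() if word in user_message.lower()}
    blocks += [TOOL_BLOCKS[t] for t in sorted(wanted)]
    return "\n\n".join(blocks)

if __name__ == "__main__":
    print(build_system_prompt("Summarize the attached PDF and the latest news"))
    # -> role block + files block + web block, always in the same order
```

Because the assembly is deterministic, the same input always produces the same block set, which is exactly what makes the behavior repeatable and auditable.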
·linkedin.com·
The United Nations dropped a new report on AI and human development. While the world chases the next frontier model or AGI milestone, the UN cuts deeper: human development has flatlined (especially in the global South). Progress stalled. Inequality is rising. Trust crumbling. No real bounce-back since Covid. And right in the middle of that, AI shows up.
AI could drive a new era. Or it could deepen the cracks. It all comes down to: how societies choose to use AI to empower people, or fail to.

Here are 14 key takeaways that stood out to me:

1. Most AI systems today are designed in cultures that don't reflect the majority world. → ChatGPT answers are most aligned with very high HDI countries. That's a problem.

2. The real risk isn't AI superintelligence. It's "so-so AI." → Tools that destroy jobs without improving productivity are quietly eroding economies from the inside.

3. Every person is becoming an AI decision-maker. → The future isn't shaped by OpenAI or Google alone. It's shaped by how we all choose to use this tech, every day.

4. AI hype is costing us agency. → The more we believe it will solve everything, the less we act ourselves.

5. People expect augmentation, not replacement. → 61% believe AI will "enhance" their jobs. But only if policy and incentives align.

6. The age of automation skipped the global south. The age of augmentation must not. → Otherwise, we widen the digital divide into a chasm.

7. Augmentation helps the least experienced workers the most. → From call centers to consulting, AI boosts performance fastest at the entry level.

9. Narratives matter. → If all we talk about is risk and control, we miss the transformative potential to reimagine development.

10. Wellbeing among young people is collapsing. → And yes, digital tools (including AI) are a key driver. Especially in high HDI countries.

11. Human connections are becoming more valuable. Not less. → As machines get better at faking it, the real thing becomes rarer, and more needed.

12. Assistive AI is quietly revolutionizing inclusion. → Tools like sign language translation and live captioning are expanding access, but only if they're accessible.

13. AI benchmarks must change. → We need to measure how AI advances human development, not just how well it performs on tests.

14. The new divide is not just about access. It's about how countries use AI. → Complement vs. compete. Empower vs. automate.

According to the UN, the old question was: "What can AI do?" The better question is: "What will we choose to do with it?"

More in the comments and report below. Enjoy.

I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
·linkedin.com·