At the start of the year, I stuck my neck out with four predictions about Knowledge Graphs in 2025, so let's see how I actually did. 🟢 GraphRAG via Ontologies: I'm claiming this one. GraphRAG… | Tony Seale
Something interesting is happening to natural language. It's moving deeper into the machine. Large language models have shifted where prose sits in the technology stack. The Model Context Protocol… | Tony Seale
TeachLM: six key findings from a study of the latest AI model fine-tuned for teaching & learning.
LLMs are great assistants but ineffective instructional designers and teachers. This week, researchers at Polygence + Stanford University published a paper on a new model, TeachLM, which was built to address exactly this gap.
In my latest blog post, I share the key findings from the study, including observations on what it tells us about AI's instructional design skills.
Here's the TLDR:
🔥 TeachLM outperformed generic LLMs on six key education metrics, including improved question quality & increased personalisation
🔥 TeachLM also outperformed "Educational LLMs" - e.g. Anthropic's Learning Mode, OpenAI's Study Mode and Google's Guided Learning - which fail to deliver the productive struggle, open exploration and specialised dialogue required for substantive learning
🔥 TeachLM flourished at developing some teaching skills (e.g. being succinct in its comms) but struggled with others (e.g. asking enough of the right sorts of probing questions)
🔥 Training TeachLM on real educational interactions rather than relying on prompts or synthetic data led to improved model performance
🔥 TeachLM was trained primarily for delivery, leaving significant gaps in its ability to "design the right experience", e.g. by failing to define learners' start points and goals
🔥 Overall, human educators still outperform all LLMs, including TeachLM, on both learning design and delivery
Learn more & access the full paper in my latest blog post (link in comments).
Phil
Apple's latest announcement is worth paying attention to. They've just introduced an AI model that doesn't need the cloud: it runs straight in your browser.
The specs are impressive:
Up to 85x faster
3.4x smaller footprint
Real-time performance directly in-browser
Capable of live video captioning, fully local
No external infrastructure. No latency. No exposure of sensitive data.
Simply secure, on-device AI.
Yes, the technical benchmarks will be debated. But the bigger story is Apple's positioning. This is about more than numbers: it's about shaping a narrative where AI is personal, private, and seamlessly integrated.
At Copenhagen Institute for Futures Studies, we've been tracking the rise of small-scale, locally running AI models for some time. We believe this shift has the potential to redefine how organizations and individuals interact with intelligent systems, moving AI from "out there" in the cloud to right here, at the edge.
Apertus: a fully open, transparent and multilingual language model
EPFL, ETH Zürich and the Swiss National Supercomputing Centre CSCS today released Apertus: the first large-scale, open and multilingual language model from Switzerland. With it, they set a milestone for transparent and diverse generative AI.
The AI Hype is a Dead Man Walking. The Math Finally Proves It.
For the past two years, the AI industry has been operating on a single, seductive promise: that if we just keep scaling our current models, we'll eventually arrive at AGI. A wave of new research, brilliantly summarized in a recent video analysis, has finally provided the mathematical proof that this promise is a lie.
This isn't just another opinion; it's a brutal, two-pronged assault on the very foundations of the current AI paradigm:
1. The Wall of Physics:
The first paper reveals a terrifying reality about the economics of reliability. To reduce the error rate of today's LLMs by even a few orders of magnitude, to make them truly trustworthy for enterprise use, would require 10^20 times more computing power. This isn't just a challenge; it's a physical impossibility. We have hit a hard wall where the cost of squeezing out the last few percentage points of reliability is computationally insane. The era of brute-force scaling is over.
2. The Wall of Reason:
The second paper is even more damning. It proves that "Chain-of-Thought," the supposed evidence of emergent reasoning in LLMs, is a "brittle mirage". The models aren't reasoning; they are performing a sophisticated pattern-match against their training data. The moment a problem deviates even slightly from that data, the "reasoning" collapses entirely. This confirms what skeptics have been saying all along: we have built a world-class "statistical parrot," not a thinking machine.
This is the end of the "Blueprint Battle." The LLM-only blueprint has failed. The path forward is not to build a bigger parrot, but to invest in the hard, foundational research for a new architecture. The future belongs to "world models," like those being pursued by Yann LeCun and others: systems that learn from interacting with a real or virtual world, not just from a library of text.
The "disappointing" GPT-5 launch wasn't a stumble; it was the first, visible tremor of this entire architectural paradigm hitting a dead end. The hype is over. Now the real, foundational work of inventing the next paradigm begins. | 554 comments on LinkedIn
This is hands down one of the BEST visualisations of how LLMs actually work. ⬇️
Let's break it down:
Tokenization & Embeddings:
- Input text is broken into tokens (smaller chunks).
- Each token is mapped to a vector in high-dimensional space, where words with similar meanings cluster together.
The Attention Mechanism (Self-Attention):
- Words influence each other based on context, ensuring "bank" in riverbank isn't confused with financial bank.
- The Attention Block weighs relationships between words, refining their representations dynamically.
Feed-Forward Layers (Deep Neural Network Processing)
- After attention, tokens pass through multiple feed-forward layers that refine meaning.
- Each layer learns deeper semantic relationships, improving predictions.
Iteration & Deep Learning
- This process repeats through dozens or even hundreds of layers, adjusting token meanings iteratively.
- This is where the "deep" in deep learning comes in: layers upon layers of matrix multiplications and optimizations.
Prediction & Sampling
- The final vector representation is used to predict the next word as a probability distribution.
- The model samples from this distribution, generating text word by word.
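The steps above can be sketched end to end in a few lines of numpy. This is a toy single-block, single-head model with random weights; the vocabulary, dimensions, and matrices are purely illustrative (real models stack many such blocks and mask future tokens during attention):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: tiny vocabulary, random weights (illustrative only).
vocab = ["the", "cat", "sat", "on", "mat"]
d = 8                                     # embedding dimension
E = rng.normal(size=(len(vocab), d))      # token embedding matrix

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# 1. Tokenization & embeddings: map tokens to vectors.
tokens = [vocab.index(w) for w in ["the", "cat", "sat"]]
x = E[tokens]                             # shape (seq_len, d)

# 2. Self-attention: every token is refined by context.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))      # attention weights, rows sum to 1
x = attn @ V                              # context-refined representations

# 3. Feed-forward layer: per-token refinement with a small ReLU MLP.
W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))
x = np.maximum(0, x @ W1) @ W2

# 4/5. Prediction & sampling: project the last token back onto the
# vocabulary and sample the next word from the distribution.
probs = softmax(x[-1] @ E.T)
next_word = vocab[rng.choice(len(vocab), p=probs)]
```

In a real model, step 2 and 3 repeat dozens of times before the final projection, which is exactly the "iteration" stage described above.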
These mechanics are at the core of all LLMs (e.g. ChatGPT). It is crucial to have a solid understanding of how these mechanics work if you want to build scalable, responsible AI solutions.
Here is the full video from 3Blue1Brown with the explanation. I highly recommend watching and bookmarking it for a further deep dive: https://lnkd.in/dAviqK_6
I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
Scientists just published something in Nature that will scare every marketer, leader, and anyone else who thinks they understand human choice.
Researchers created an AI called "Centaur" that can predict human behavior across ANY psychological experiment with disturbing accuracy. Not just one narrow task. Any decision-making scenario you throw at it.
Here's the deal: They trained this AI on 10 million human choices from 160 different psychology experiments. Then they tested it against the best psychological theories we have.
The AI won. In 31 out of 32 tests.
But here's the part that really got me...
Centaur wasn't an algorithm built to study human behavior. It was a language model that learned to read us. The researchers fed it tons of behavioral data, and suddenly it could predict choices better than decades of psychological research.
This means our decision patterns aren't as unique as we think. The AI found the rules governing choices we believe are spontaneous.
Even more unsettling? When they tested it on brain imaging data, the AI's internal representations became more aligned with human neural activity after learning our behavioral patterns. It's not just predicting what you'll choose, it's learning to think more like you do.
The researchers even demonstrated something called "scientific regret minimization": using the AI to identify gaps in our understanding of human behavior, then developing better psychological models.
Can a model based on Centaur be tuned for how customers behave? Companies will know your next purchasing decision before you make it. They'll design products you'll want, craft messages you'll respond to, and predict your reactions with amazing accuracy.
Understanding human predictability is a competitive advantage today. Until now, that knowledge came from experts in behavioral science and consumer behavior. Now, there's Centaur.
Here's my question: If AI can decode the patterns behind human choice with this level of accuracy, what does that mean for authentic decision-making in business? Will companies serve us better with perfectly tailored offerings, or will this level of understanding lead to dystopian manipulation?
What's your take on predictable humans versus authentic choice?
#AI #Psychology #BusinessStrategy #HumanBehavior
ChatGPT 4o System Prompt (June 2025)
The system prompt for ChatGPT-4o has been leaked.
Anyone who thinks a language model like ChatGPT-4o is simply a well-trained neural network is thinking too narrowly.
What makes the interaction precise, professional and reliable happens not in the model alone, but in its systemic steering: the system prompt.
It is the invisible script that dictates how the model thinks, feels (figuratively speaking), researches and interacts with you.
1. Structure: modular, rule-based, deliberately orchestrated
The system prompt consists of cleanly separated functional blocks:
• Role steering: e.g. factual, honest, no small talk
• Tool integration: access to analysis, image, web and file tools
• Logic modules: to control freshness, source, time range, file type
Each module is formulated declaratively and deterministically; the answer logic follows fixed paths.
The result: transparency and repeatability, even for complex requirements.
⸻
2. Control mechanisms: quality through targeted restriction
To ensure relevance, several filters come into play:
• QDF (Query Deserves Freshness): ensures temporally appropriate results, from "timeless" to "up to the minute".
• Time-frame filter: only active for explicit time references, never arbitrarily.
• Source filter: determines whether e.g. Slack, Google Drive or the web is queried.
• Filetype filter: focuses on specific file formats (e.g. spreadsheets, presentations).
These filters prevent information overload; they sharpen the search field and raise the quality of the hits.
⸻
3. Answer architecture: not texts, but usable results
Answers follow strict rules:
• Always structured in Markdown format
• Factual, compact, fact-based
• No duplication, no stylistic games, no rhetorical noise
The goal: clarity without post-editing. The output is ready to use, not merely informative.
⸻
4. Prompt engineering: room for professionals
The prompt is not editable, but it can be played. Anyone who understands its mechanics can deliberately:
• Activate tools via semantic triggers ("Slack", "current", "PDF")
• Enforce format requirements in prompts
• Model complex interactions as sequential prompt chains
• Develop domain-specific prompt libraries
Bottom line: prompt engineers who understand the system don't build texts, they build control logic.
⸻
What can we learn from this?
1. Precision is not an accident, it is architecture.
2. Good answers begin not with model performance, but with context management.
3. Whoever builds prompts builds systems, with rules, triggers and interaction logic.
4. AI becomes productive when structure meets intelligence.
Whether in consulting, development or knowledge work, the system prompt shows:
The clearer the rules in the background, the stronger the effect in the foreground.
Andrej Karpathy's keynote on June 17, 2025 at AI Startup School in San Francisco. Slides provided by Andrej: https://drive.google.com/file/d/1a0h1mkwfmV2Plek...
Best AI Tools for Deep Research (Ranked by a PhD, Not Hype)
Today, I'm diving into the world of deep research tools to find out which platforms are truly the most helpful for academic work. Sign up for my FREE new...
🌳 Clarity in the ChatGPT jungle: which model really suits you?
The big comparison of GPT-4o, o3, o4 mini, 4.1 and more ✨ A personal field report and a deep research, made specially for my network.
Top Generative AI Terms You Should Know - Explained Simply
1. LLM (Large Language Model)
→ Helps computers understand and write human-like text
→ Examples: GPT-4, Claude, Gemini
→ Used in: Chatbots, coding tools, content generation
2. Transformers
→ The tech behind all modern AI models
→ Let models understand meaning, context, and order of words
→ Examples: BERT, GPT
3. Prompt Engineering
→ Writing better instructions to get better AI answers
→ Includes system prompts, step-by-step prompts, and safety rules
4. Fine-Tuning
→ Training an AI model on your data
→ Helps tailor it for specific tasks like legal, medical, or financial use cases
5. Embeddings
→ A way for AI to understand meaning and relationships between words or documents
→ Used in search engines and recommendation systems
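The core idea behind embeddings can be sketched in a few lines: texts become vectors, and similar meanings point in similar directions, compared via cosine similarity. The three-dimensional vectors below are hand-made toys standing in for real model embeddings, which typically have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of direction: near 1.0 = related meaning, near 0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-made toy embeddings, for illustration only.
emb = {
    "dog":   np.array([0.9, 0.8, 0.1]),
    "puppy": np.array([0.85, 0.75, 0.2]),
    "bank":  np.array([0.1, 0.2, 0.9]),
}

# Related words point in similar directions; unrelated words do not.
related = cosine_similarity(emb["dog"], emb["puppy"])   # close to 1
unrelated = cosine_similarity(emb["dog"], emb["bank"])  # much lower
```

A search engine built on this idea embeds the query, then returns the documents whose vectors score highest against it.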
6. RAG (Retrieval-Augmented Generation)
→ Combines AI with a database or document store
→ Helps AI give more accurate, fact-based answers
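A minimal sketch of the retrieve-then-generate pattern. A production system would retrieve with embeddings from a vector store; here simple keyword overlap stands in for retrieval, and the final LLM call is left out entirely. The documents and prompt wording are invented for illustration:

```python
import re

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

def words(text):
    # Lowercased word set, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents):
    # Pick the document sharing the most words with the question.
    return max(documents, key=lambda d: len(words(question) & words(d)))

def build_prompt(question, documents):
    # Ground the model in retrieved context before asking the question.
    context = retrieve(question, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is the refund policy?", docs)
```

The prompt now carries the relevant document, so the model answers from supplied facts rather than from memory alone.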
7. Tokens
→ The chunks of text AI reads and writes
→ Managing them controls cost and performance
8. Hallucination
→ When AI gives wrong or made-up answers
→ Can be reduced with fact-checking and better prompts
9. Zero-Shot Learning
→ When AI can perform a task without being trained on it
→ Saves time on training
10. Chain-of-Thought
→ AI explains its answer step-by-step
→ Helps with complex reasoning tasks
11. Context Window
→ The amount of info AI can see at once
→ Larger windows help with longer documents or conversations
12. Temperature
→ Controls how creative or predictable AI is
→ Lower values = more focused and predictable; higher values = more varied and creative
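Temperature is just a divisor applied to the model's raw scores before the softmax that turns them into probabilities. A small sketch with made-up next-token scores:

```python
import numpy as np

def probs_at_temperature(logits, temperature):
    """Softmax over raw model scores, scaled by temperature first."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()        # subtract max for numerical stability
    p = np.exp(scaled)
    return p / p.sum()

logits = [2.0, 1.0, 0.5]                   # toy next-token scores
cold = probs_at_temperature(logits, 0.2)   # low T: top token dominates
hot = probs_at_temperature(logits, 2.0)    # high T: distribution flattens
```

At low temperature the top-scoring token gets nearly all the probability mass (predictable output); at high temperature the mass spreads across alternatives, so sampling produces more varied text.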
What's Coming Next?
→ Multimodal AI (text, images, audio together)
→ Smaller, faster models
→ Safer, ethical AI (Constitutional AI)
→ Agentic AI (autonomous, task-completing agents)
Knowing the terms is just step one; what really matters is how you use them to build better solutions.
Everything Announced at Google Cloud Next in 12 Minutes
Catch the top moments from the Google Cloud Next keynote presentation, featuring CEO Thomas Kurian on AI breakthroughs, along with key announcements and real...
Free, and the BEST LLM on dozens of benchmarks. How did Google pull this off? And above all, what does it mean for us? Grab Incogni here with my code...
One of my favourite reads from the last six months is Sequoia Capital's report exploring the evolution of generative AI and its implications for the messy…
Thought is a multi-step process, but rarely linear. Early LLMs lacked structured reasoning and often struggled with logic. Chain-of-Thought introduced… | Ross Dawson
2025 is the Year of LCMs and not LLMs. | Manthan Patel
Meta has announced a new architecture for the future of Large Language Models called Large Concept Models.
Building…
BREAKING! ChatGPT 4.5 is here! And it feels a bit magical. My favorite part is the increased EQ - check out the highlight video below. This feels like the future of AI. More Claude-y, which I LOVE.
Empathy is exactly what I've been missing from ChatGPT! It's been fine - but this is another level.
This is how we will actually communicate with AI. It's the thing I love about Claude. Microsoft is leaning into empathy with Mustafa Suleyman after his turn at Inflection.
Okay, let's get into it. BTW - the video is MY OWN EDIT. I just loved the EQ example so much.
HIGHLIGHTS:
EXCLUSIVE ACCESS: Initially available only to $200/month Pro subscribers, coming to Plus users next week
MOST HUMAN-LIKE YET: Features significantly enhanced emotional intelligence and conversational abilities
LARGEST MODEL: OpenAI's biggest model to date, though specific parameters remain undisclosed
FINAL PRE-REASONING MODEL: Last major release before OpenAI introduces chain-of-thought reasoning in GPT-5
>>The Evolution of AI Conversation
What stands out with 4.5 is how much more human the interactions feel. The model demonstrates substantially improved emotional intelligence, with responses that show greater nuance and sensitivity.
This shift toward a more empathetic, Claude-like conversation style suggests OpenAI is recognizing that raw intelligence isn't enough; it matters HOW it talks to you.
>>Key Features That Make It Special
Enhanced Knowledge and Reasoning
- The expanded knowledge base means deeper, more comprehensive answers
- Significantly fewer hallucinations, making it more reliable for critical tasks
- Pattern recognition that borders on intuitive understanding
Reimagined User Experience
- Conversations flow naturally, without the mechanical feel of earlier models
- Context handling that actually remembers what you've been discussing
- Lightning-fast responses despite its massive size
>>The Price of Progress
Access to this cutting-edge technology comes at a premium.
ChatGPT Pro subscribers ($200/month) get first access, with Plus users ($20/month) joining the party the week of March 3. Enterprise and Education users will follow shortly after.
>>Where 4.5 Really Shines
The model particularly excels at:
- Creative writing with genuine emotional depth
- Complex problem-solving that requires nuanced understanding
- Communication tasks where tone and empathy matter
- Multi-step planning and execution, especially for coding workflows
Pretty cool.
++++++++++++++++++++
UPSKILL YOUR ORGANIZATION:
When your company is ready, we are ready to upskill your workforce at scale. Our Generative AI for Professionals course is tailored to enterprise and highly effective in driving AI adoption through a unique, proven behavioral transformation. It's pretty awesome. Check out our website or shoot me a DM.
AI Deep Research models compared - OpenAI, Perplexity and Co-Storm AI - who comes out ahead?
🚀 Deep dive into the world of deep research models from OpenAI, Perplexity and co! 🚀 Which of them makes the best impression? In the dynamic context of A...
AI Model Comparison: Free vs Paid Tiers - AI for Education
To help educators choose the best toolset for them, we compare the key features available in the free vs. paid tiers of some of the most popular Generative AI models.
All eyes on DeepSeek: a revolution has apparently just taken place in the AI field 🤖. Great news for universities. DeepSeek is a…
LLMs and Generative AI are terrific tools for lots of different things. Check out the data viz below capturing #HBR's top 100 uses for Gen AI. I've personally…
LLMs do more than predict the next word; they compress a "world-model" within their complex networks and weights. This is an area of active debate within the…