Found 4863 bookmarks
Graph-based approaches compared to vectors: they are not mutually exclusive – the strongest agent architectures are hybrid.
When the conversation turns to AI agents, even technically savvy people keep asking what's special about graph-based approaches compared to vectors. This has always felt like a strange question because, in fact, they are not mutually exclusive: the strongest agent architectures are hybrid.
·linkedin.com·
Can GPT-5.2 extract specific domain entities inside a 256K token Victorian novel using ONLY an RDF Ontology?
🎯 Can GPT-5.2 extract specific domain entities inside a 256K-token Victorian novel using ONLY an RDF ontology? (Updated GraphRAG repo)

There's a lot of talk about GPT-5.2's context attention and its ability to code. I was curious how this affects its ability to "understand" ontologies. I downloaded the full text of Charles Dickens's "Oliver Twist" from Kaggle (19,000 rows, 975,000 characters, 256K tokens) and scattered my jaguar corpus of animals, cars, and guitars into random places to put it to the ultimate test 🧪:

📖 256K tokens of irrelevant Victorian novel → must ignore
🐆 Wildlife jaguar info scattered throughout → must extract
🚗 Car jaguar mentions mixed in → must ignore
🎸 Guitar jaguar mentions mixed in → must ignore

The only thing I gave the model was an RDF ontology. No instructions. No examples. No "please ignore cars and guitars." Just the ontology.

✨ And it worked. Every wildlife jaguar extracted. Every car and guitar ignored.

🤔 Why are RDF/OWL ontologies better than text descriptions? Certain market-leading LPG vendors will tell you that text descriptions work just as well as ontologies. They have to: they can't store semantic ontologies anyway. Here's why they're wrong:

❌ Text is ambiguous. Ontologies aren't. Prompt: "Extract information about jaguars (the animal, not cars or guitars)." You're trusting the LLM to interpret "animal" correctly. What if your domain is more nuanced? What if "animal" isn't clear enough?

❌ "Converting RDF to natural language" is stupid. I've seen this pattern: "Use an LLM to convert your ontology to natural language, then use that for extraction!" This is backwards:
🔄 You're only proving the LLM can read RDF. If it can convert RDF → NL, it can just use the RDF directly.
⚠️ You're introducing a second error source. If the LLM misinterprets something in step 1, that error multiplies when you use the NL version.
🔥 It burns a lot more tokens.
🗑️ You're throwing away machine-readability. RDF can be stored in databases, validated with SHACL, and reasoned over with OWL. Natural language can't.

✅ Maybe LLMs don't "reason." But they sure simulate it. Can an LLM truly perform OWL reasoning? Debatable. But here's what's NOT debatable:
🧠 LLMs have been trained on massive amounts of RDF, OWL, and SPARQL, just as they've been trained on Python, C++, and so on.
📐 They can predict what valid RDF looks like.
🌳 They can simulate class inheritance.
🔍 They can pattern-match ontological structures.

Is that "reasoning"? I don't know. But when I give GPT-5.2 an RDF ontology, it behaves as if it understands it. And that's enough for me.

💡 How do you think this will affect future RAG systems?

📦 I've updated my open-source repo with the new corpus and model: https://lnkd.in/dmf5HDRm
🔗 If you missed the original Jaguar GraphRAG post that started this: https://lnkd.in/dzag69dH

#GraphRAG #KnowledgeGraphs #SemanticWeb #RDF #SPARQL #AI #LLM #Ontology #OpenSource #neo4j #graphdb #GPT5 #AgenticAI
·linkedin.com·
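The core mechanism the post above describes, keeping only entities whose class is subsumed by a target class in the ontology, can be sketched in a few lines. This is a toy illustration, not the author's actual corpus or RDF ontology (those live in the linked repo): the class names and mentions below are invented, and the subclass walk stands in for OWL class-inheritance reasoning.

```python
# Toy stand-in for an RDF/OWL ontology: (subclass -> superclass) pairs.
# All names here are illustrative, not taken from the author's repo.
SUBCLASS_OF = {
    "Jaguar": "Felidae",
    "Felidae": "Animal",
    "JaguarEType": "Car",       # Jaguar the car brand
    "JaguarGuitar": "Guitar",   # Fender Jaguar, the instrument
}

def is_a(cls, target):
    """Walk the subclass chain to test whether cls is subsumed by target."""
    while cls is not None:
        if cls == target:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

# Candidate mentions an extractor might surface from the novel-plus-noise text.
mentions = [
    ("jaguar stalking a capybara", "Jaguar"),
    ("Jaguar E-Type engine", "JaguarEType"),
    ("Fender Jaguar pickup", "JaguarGuitar"),
]

# Keep only mentions whose class is subsumed by Animal; drop cars and guitars.
wildlife = [text for text, cls in mentions if is_a(cls, "Animal")]
print(wildlife)  # only the wildlife mention survives
```

The point of the sketch: nothing in the filter mentions cars or guitars explicitly. The disambiguation falls out of the class hierarchy alone, which is the behavior the post attributes to handing GPT-5.2 the ontology with no further instructions.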
A semantic digital twin of a business is not built from a context graph alone. And it’s not built from an ontology alone.
Over the last few days, there's been an intense discussion about CONTEXT GRAPHS, sparked by work from people like Jaya Gupta and Animesh Koratana. It's an important conversation, and it points to something bigger.

A semantic digital twin of a business is not built from a context graph alone. And it's not built from an ontology alone.
👉 It is the combination of both.

Ontology defines what the business means. It captures:
• business concepts and relationships
• rules, constraints, and permissions
• metric definitions and accountability
Ontology is NORMATIVE. It defines what is valid, comparable, and allowed. Without ontology, meaning drifts and decisions can't be governed. You may have data, but you don't have authority.

The context graph captures how the business behaves. It records:
• decision traces and trajectories
• observed human and system activity
• experience over time
The context graph is EMPIRICAL. It remembers what happened, without rewriting the rules. Without it, there's no explanation, no learning, and no institutional memory. You may be correct, but you are blind.

Together, they form the semantic digital twin. A business is not just definitions or events. It is meaning plus experience, rules plus history, authority plus memory. The combination of an ontology and a context graph is the semantic digital twin of the business.

This isn't academic. It's the foundation for explainable decisions, safe automation, and enterprise AI that can reason about the organization itself.

Curious how others think about this split between meaning and behavior.

#DigitalTwin #ContextGraph #Ontology #EnterpriseAI
·linkedin.com·
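The normative-versus-empirical split in the post above can be made concrete with a minimal sketch: the ontology says which relations are valid, the context graph records which relations actually occurred, and governance means checking one against the other. The schema and trace names below are invented for illustration, not a real product model.

```python
# Normative layer (ontology): which relations are allowed between which
# concept types. Illustrative triples only.
ONTOLOGY = {
    ("Analyst", "approved", "Loan"),
    ("System", "flagged", "Loan"),
}

# Empirical layer (context graph): observed decision traces over time,
# recorded as (subject, relation, object, timestamp).
context_graph = [
    ("Analyst", "approved", "Loan", "2025-01-03"),
    ("Analyst", "deleted", "Loan", "2025-01-04"),  # not sanctioned above
]

def validate(trace):
    """A trace is governable only if the ontology sanctions its relation."""
    subj, rel, obj, _when = trace
    return (subj, rel, obj) in ONTOLOGY

for trace in context_graph:
    status = "valid" if validate(trace) else "OUT OF POLICY"
    print(trace[1], "->", status)
```

Note the asymmetry the post insists on: the context graph remembers the out-of-policy event rather than rewriting it, while the ontology is what lets you label it out of policy at all.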
How do you build a context graph
Authored by Animesh Koratana, founder and CEO of PlayerZero. We recently wrote about context graphs, the layer that captures decision traces rather than just data. The argument: the next trillion-dollar platforms won't be built by adding AI to existing systems of record, but by capturing the reasoning…
·linkedin.com·
The Art of Taxonomy Workarounds
“With the greatest of ease / I stay on track like the greatest of skis.” – Stress Eater, Czarface, Kool Keith, Rocket Science In all my years working in taxonomy, there have been few times I …
·informationpanopticon.blog·
An Intent Map collects individual feedback loops, measures alignment to an ontology, and ensures valuable metadata flows into the Context Graph.
Kay Iversen has a great post on the combination of ontologies and context graphs for creating semantic digital twins, inspired by posts by Animesh Koratana and Jaya Gupta.
·linkedin.com·
A NotebookLM slide deck created from the "Context Graphs: AI's Trillion-Dollar Opportunity" article
Here is a NotebookLM slide deck created from the "Context Graphs: AI's Trillion-Dollar Opportunity" article (see https://lnkd.in/e8SQm-Zz), which I saw in this post by Anthony Alcaraz (see https://lnkd.in/eFMhMmEG). I completely agree with the hypothesis that graphs are the way to capture all data used to create decisions, which is the only real way to have provenance and explainability. For example, that is what I'm doing at MyFeeds-AI, which you can read about at investor.myfeeds.ai, or see this presentation: https://lnkd.in/eTyAndHg
·linkedin.com·
Semantic Web Market Size, Share, Growth & Forecast [2030]

📈 Semantic Web Market Set for Strong Growth Toward 2030

Recent market research indicates that the global Semantic Web market is expected to grow significantly toward 2030, fueled by increased adoption of knowledge graphs, semantic data integration, and AI-driven data processing.

This growth reflects a broader shift: organizations are moving beyond traditional data pipelines toward architectures that can capture meaning, context, and relationships. Technologies such as RDF, OWL, SPARQL, and SHACL are increasingly used to address challenges around data interoperability, governance, and explainable AI.

As enterprises and public organizations prepare for stricter data-sharing requirements and more advanced AI use cases, semantic technologies are no longer experimental; they are becoming foundational infrastructure.

🔎 The article highlights a clear trend: semantics are moving from the margins into the mainstream of enterprise data strategy.

·marketsandmarkets.com·
Foundation Capital just published "Context Graphs: AI's Trillion-Dollar Opportunity"
Foundation Capital just published "Context Graphs: AI's Trillion-Dollar Opportunity" and it's the most technically coherent thesis I've seen on where enterprise AI infrastructure is heading.
·linkedin.com·
Ontology layering: Upper ontology, Business ontology, Systemic ontology
I’ve been exploring ways to reason more clearly about meaning, responsibility, and criticality in data-heavy organizations, and I’d really value input from others who work with conceptual models or architectures.

One line of thinking I’ve been exploring is whether it helps to separate three concerns that often get mixed:
• A very small upper ontology that defines basic kinds of things (e.g. object, event, measure, context)
• A business domain ontology that describes operational reality: what exists, happens, and can be validated in the business
• A systemic domain ontology that describes how that same reality is interpreted for finance, risk, reporting, or regulatory purposes

The intent is not to add abstraction for its own sake, but to avoid:
• business meaning being overwritten by reporting logic
• regulatory interpretations being treated as operational facts
• debates about “one correct definition”
Instead, business and systemic concepts would be explicitly related, but not collapsed into one.

I’m curious how others see this:
• Does separating business reality and systemic interpretation resonate with your experience?
• Is an explicit upper ontology helpful, or overkill, in this kind of setup?

Genuinely interested in different viewpoints, especially from those who’ve tried (or rejected) similar approaches.

#EnterpriseArchitecture #DataArchitecture #InformationArchitecture #DataGovernance #BCBS239
·linkedin.com·
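The three-layer separation proposed in the post above can be sketched as data plus a consistency check: every concept grounds in the small upper ontology, and business and systemic concepts are linked by an explicit interpretation relation rather than merged. All concept names below are invented examples.

```python
# Upper ontology: a very small set of basic kinds, as in the post.
UPPER = {"Object", "Event", "Measure", "Context"}

# Business ontology: operational reality, each concept typed by an upper kind.
BUSINESS = {"Shipment": "Object", "Delivery": "Event", "TransitTime": "Measure"}

# Systemic ontology: regulatory/reporting interpretations of that reality.
SYSTEMIC = {"ReportableDelivery": "Event", "SLABreach": "Measure"}

# Explicit cross-layer links instead of one collapsed definition.
INTERPRETED_AS = {
    "Delivery": "ReportableDelivery",
    "TransitTime": "SLABreach",
}

def check_layers():
    """Every concept must ground in the upper ontology; links must resolve."""
    for concept, kind in {**BUSINESS, **SYSTEMIC}.items():
        assert kind in UPPER, f"{concept} has no upper-ontology kind"
    for biz, sys_concept in INTERPRETED_AS.items():
        assert biz in BUSINESS and sys_concept in SYSTEMIC

check_layers()
print("layers consistent")
```

The design choice this encodes: reporting logic can never overwrite business meaning, because `SLABreach` is a separate concept that merely interprets `TransitTime`; deleting the link leaves the operational definition intact.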
GraphBench: Next-generation graph learning benchmarking

We present GraphBench, a comprehensive graph learning benchmark across domains and prediction regimes. GraphBench standardizes evaluation with consistent splits, metrics, and out-of-distribution checks, and includes a unified hyperparameter tuning framework. We also provide strong baselines with state-of-the-art message-passing and graph transformer models, plus easy plug-and-play code to get you started.

·linkedin.com·
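One reason standardized splits (as GraphBench advertises) matter: results are only comparable if every paper evaluates on the same train/valid/test assignment. A common way to make splits stable is to derive them from a hash of each example's identifier, so they are identical on every machine and run. This is a generic illustration of that idea, not GraphBench's actual API.

```python
import hashlib

def split_of(graph_id: str, train: float = 0.8, valid: float = 0.1) -> str:
    """Assign a graph to train/valid/test from a stable hash of its id,
    so the split is identical across runs and machines (no RNG state)."""
    h = int(hashlib.sha256(graph_id.encode()).hexdigest(), 16) % 1000 / 1000
    if h < train:
        return "train"
    if h < train + valid:
        return "valid"
    return "test"

# Hypothetical dataset of 1000 graph ids.
ids = [f"graph-{i}" for i in range(1000)]
counts = {"train": 0, "valid": 0, "test": 0}
for gid in ids:
    counts[split_of(gid)] += 1
print(counts)  # roughly an 80/10/10 split, identical on every run
```

Contrast this with `random.shuffle`-based splitting, where forgetting to pin a seed (or using a different library version) silently changes which graphs land in the test set and makes reported metrics incomparable.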