Let's talk ontologies. They are all the rage.
Let's talk ontologies. They are all the rage. I've been drawing what I now know is a 'triple' on whiteboards for years. It's one of the standard ways I know to start to understand a business. A triple is: subject, predicate, object. I cannot overstate how useful this practice has been. Understanding how everything links together is useful, for people and AI.

I'm now stuck on what that gets stored in. I'm reading about triplestores and am unclear on the action needed. Years ago some colleagues and I used Neo4j to do this. I liked the visual interaction of the output, but I'm not sure that is the best path here. Who can help me understand how to move from whiteboarding to something more formal? Where do I actually store all these triples? At what point does it become a 'knowledge graph'? Are there tools or products that help with this? Or is there a new language to learn to store it properly? (I think yes.) #ontology #help
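For readers at the same whiteboard-to-triplestore stage, here is a minimal sketch of what storing and querying triples can look like, assuming Python with rdflib; the people, companies, and URIs are illustrative, not from the post:

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")  # illustrative namespace

g = Graph()

# Each statement is one triple: subject, predicate, object.
g.add((EX.alice, EX.worksFor, EX.acme))
g.add((EX.acme, RDF.type, EX.Company))
g.add((EX.acme, EX.name, Literal("Acme Corp")))

# A triplestore is essentially a database of statements like these,
# and SPARQL is the query language most of them speak.
results = g.query("""
    SELECT ?person WHERE {
        ?person <http://example.org/worksFor> ?org .
        ?org a <http://example.org/Company> .
    }
""")
for row in results:
    print(row.person)  # -> http://example.org/alice
```

Loosely speaking, it starts being called a 'knowledge graph' once triples like these accumulate at scale and are governed by an ontology that says which classes and predicates are allowed.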
·linkedin.com·
Unified Foundational Ontology tutorial
As requested, this is the FIRST set of slides for my Ontobras Tutorial on the Unified Foundational Ontology, i.e., the upcoming ISO/IEC CD 21838-5 (https://lnkd.in/egrMiCvG), as announced here: https://lnkd.in/eeKmVW-5. The Brazilian community is one of the most active and lively communities in ontologies these days, and the event brought together many people from academia, government and industry. The slides for the SECOND part can be found here: https://lnkd.in/eD2xhPKj Thanks again for the invitation, Jose M Parente de Oliveira. #ontology #ontologies #conceptualmodeling #semantics Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·
Why is it so hard to build an ontology-first architecture yet so necessary for the future of organizations?
Why is it so hard to build an ontology-first architecture, yet so necessary for the future of organizations? Because it forces you to slow down before you speed up. It means defining what exists in your organization before building systems to act on it. It requires clarity, discipline, and the ability to model multiple perspectives without losing coherence.

Ontology-first doesn't mean everyone must agree; it means connecting different views through layers: application ontologies for context, domain ontologies for shared objects, mid-level ontologies for reusable patterns, and a top-level ontology for common sense. Without a shared map of what things mean, every new system just adds noise.

Ontology-first architecture isn't about technology; it's about truth, structure, and long-term adaptability. It's the foundation that allows AI to enhance human power and impact without losing context or control. It's hard because it demands that we think, model, and connect before we automate. But that's also why it's the only path toward a world where human ingenuity can truly be enhanced with AI.
·linkedin.com·
Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence
Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence. We love to talk about scaling graphs: billions of nodes, trillions of relationships and distributed clusters. But, in practice, larger graphs often become harder to understand. As Labelled Property Graphs (LPGs) grow, their structure remains sound, but their meaning starts to drift. Queries still run, but the answers become useless. In my latest post, I explore why semantic coherence collapses faster than infrastructure can scale up, what 'cognitive coherence' really means in graph systems, and how the flexibility of LPGs can both empower and endanger knowledge integrity. Full article: 'Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence' https://lnkd.in/epmwGM9u #GraphRAG #KnowledgeGraph #LabeledPropertyGraph #LPG #SemanticAI #AIExplainability #GraphThinking #RDF #AKG #KGL
·linkedin.com·
OpenAI Emerging Semantic Layer | LinkedIn
Following yesterday's announcements from OpenAI, brands start to have real ways to operate inside ChatGPT. At a very high level, this is the map for anyone considering entering (or expanding) into the ChatGPT ecosystem: Conversational Prompts / UX: optimize how ChatGPT "asks" for or surfaces brand se
·linkedin.com·
An infographic that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph
Inspired by the talented Jessica Talisman, here is a new infographic microsim that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph: https://lnkd.in/g66HRBhn You can include this interactive microsim in all of your semantics/ontology and agentic AI courses with just a single line of HTML.
·linkedin.com·
Automatic Ontology Generation Still Falls Short & Why Applied Ontologists Deliver the ROI | LinkedIn
For all the excitement around large language models, the latest research from Simona-Vasilica Oprea and Georgiana Stănescu (Electronics 14:1313, 2025) offers a reality check. Automatic ontology generation, even with novel prompting techniques like Memoryless CQ-by-CQ and Ontogenia, remains a partial
·linkedin.com·
Ever heard of "knowledge engineering"?
Ever heard of "knowledge engineering"? It's what we called AI before AI was cool. I just pulled this out of the deep archives: Stanford University, 1980, Feigenbaum's HPP report. The bones of modern context engineering were already there.

What they did:
➤ Curated knowledge bases, not giant prompts
➤ Rule "evocation" to gate relevance
➤ Certainty factors to track confidence (sketched below)
➤ Shells + blackboards to orchestrate tools
➤ Traceable logic so humans could audit decisions

What we do now:
➤ Trimmed RAG context instead of bloated prompts
➤ Retrieval + reranking + policy checks for gating
➤ Scores, evals, and guardrails to manage uncertainty
➤ Tool calling, MCPs, workflow engines for execution
➤ Logs + decision docs for explainability

The through-line for UX:
➤ Performance comes from shaping context: what to include, when to include it, and how to prove it worked.

If you're building AI agents, you're standing on those shoulders. Start with context, not cleverness. Follow for human-centered AI + UX. Reshare if your team ships with context discipline.
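As a concrete illustration of the "certainty factors" item above: MYCIN-style systems combined evidence for a conclusion with a simple bounded formula rather than probabilities. A minimal sketch in Python; the numbers are invented, and the rough modern analogue would be combining retrieval or eval scores:

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors in [-1, 1], MYCIN-style."""
    if cf1 >= 0 and cf2 >= 0:            # two pieces of supporting evidence
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:              # two pieces of disconfirming evidence
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting evidence

# Two independent rules each support the same conclusion with moderate confidence;
# the combined belief grows but stays bounded below 1.0.
print(combine_cf(0.6, 0.5))  # 0.8
```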
·linkedin.com·
Semantics in use part 5: an interview with Anikó Gerencsér, Team leader - Reference data team @Publications Office of the European Union | LinkedIn
What is your role? I am working in the Publications Office of the European Union as the team leader of the Reference data team. The Publications Office of the European Union is the official provider of publishing services to all EU institutions, bodies and agencies.
·linkedin.com·
Recent Trends and Insights in Semantic Web and Ontology-Driven Knowledge Representation Across Disciplines Using Topic Modeling
This research aims to investigate the roles of ontology and Semantic Web Technologies (SWT) in modern knowledge representation and data management. By analyzing a dataset of 10,037 academic articles from Web of Science (WoS) published in the last 6 years (2019–2024) across several fields, such as computer science, engineering, and telecommunications, our research identifies important trends in the use of ontologies and semantic frameworks. Through bibliometric and semantic analyses, Natural Language Processing (NLP), and topic modeling using Latent Dirichlet Allocation (LDA) and BERT-clustering approach, we map the evolution of semantic technologies, revealing core research themes such as ontology engineering, knowledge graphs, and linked data. Furthermore, we address existing research gaps, including challenges in the semantic web, dynamic ontology updates, and scalability in Big Data environments. By synthesizing insights from the literature, our research provides an overview of the current state of semantic web research and its prospects. With a 0.75 coherence score and perplexity = 48, the topic modeling analysis identifies three distinct thematic clusters: (1) Ontology-Driven Knowledge Representation and Intelligent Systems, which focuses on the use of ontologies for AI integration, machine interpretability, and structured knowledge representation; (2) Bioinformatics, Gene Expression and Biological Data Analysis, highlighting the role of ontologies and semantic frameworks in biomedical research, particularly in gene expression, protein interactions and biological network modeling; and (3) Advanced Bioinformatics, Systems Biology and Ethical-Legal Implications, addressing the intersection of biological data sciences with ethical, legal and regulatory challenges in emerging technologies. The clusters derived from BERT embeddings and clustering show thematic overlap with the LDA-derived topics but with some notable differences in emphasis and granularity. Our contributions extend beyond theoretical discussions, offering practical implications for enhancing data accessibility, semantic search, and automated knowledge discovery.
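For readers unfamiliar with the pipeline behind numbers like the 0.75 coherence score, here is a minimal sketch of the LDA-plus-coherence workflow, assuming Python with gensim; the four-document toy corpus stands in for the 10,037 WoS abstracts and is purely illustrative:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

# Toy stand-in for tokenized abstracts.
texts = [
    ["ontology", "knowledge", "graph", "reasoning"],
    ["gene", "expression", "bioinformatics", "ontology"],
    ["semantic", "web", "linked", "data", "sparql"],
    ["systems", "biology", "ethics", "regulation"],
]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]

# Three topics, mirroring the paper's three thematic clusters.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3,
               passes=20, random_state=42)

# c_v coherence is the usual metric behind scores like the reported 0.75.
coherence = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                           coherence="c_v").get_coherence()
print("coherence:", coherence)
print("per-word log perplexity bound:", lda.log_perplexity(corpus))
```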
·mdpi.com·
Protocols move bits. Semantics move value.
Protocols move bits. Semantics move value.

The reports on agents are starting to sound samey: go vertical not horizontal; redesign workflows end-to-end; clean your data; stop doing pilots that automate inefficiencies; price for outcomes when the agent does the work. All true. All necessary. All needing repetition ad nauseam. So it's refreshing to see a switch-up in Bain's Technology Report 2025: the real leverage now sits with semantics. A shared layer of meaning.

Bain notes that protocols are maturing. MCP and A2A let agents pass tool calls, tokens, and results between layers. Useful plumbing. But there's still no shared vocabulary that says what an invoice, policy, or work order is, how it moves through states, and how it maps to APIs, tables, and approvals. Without that, cross-vendor reliability will keep stalling. They go further: whoever lands a pragmatic semantic layer first gets winner-takes-most network effects. Define the dictionary and you steer the value flow. This isn't just a feature. It's a control point.

Bain frames the stack clearly:
- Systems of record (data, rules, compliance)
- Agent operating systems (orchestration, planning, memory)
- Outcome interfaces (natural language requests, user-facing actions)

The bottleneck is semantics. And there's a pricing twist. If agents do the work, semantics define what "done" means. That unlocks outcome-based pricing, charging for tasks completed or value delivered, not log-ons. Bain is blunt: the open, any-to-any agent utopia will smash against vendor incentives, messy data, IP, and security. Translation: walled gardens lead first. Start where governance is clear and data is good enough, then use that traction to shape the semantics others will later adopt.

This is where I'm seeing convergence. In practice, a knowledge graph can provide that shared meaning: identity, relationships, and policy. One workable pattern (sketched below): the agent plans with an LLM, resolves entities and checks rules in the graph, then acts through typed APIs, writing back as events the graph can audit. That's the missing vocabulary and the enforcement that protocols alone can't cover.

Tony Seale puts it well: "Neural and symbolic systems are not rivals; they are complements… a knowledge graph provides the symbolic backbone… to ground AI in shared semantics and enforce consistency."

To me, this is optimistic, because it moves the conversation from "make the model smarter" to "make the system understandable." Agents don't need perfection if they are predictable, composable, and auditable. Semantics deliver that. It's also how smaller players compete with hyperscalers: you don't need to win the model race to win the meaning race. With semantics, agents become infrastructure. The next few years won't be won by who builds the biggest model. It'll be won by who defines the smallest shared meaning.
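A rough Python sketch of that "workable pattern"; every object and method here (llm, graph, api, audit_log and their calls) is a hypothetical stand-in for illustration, not a real library API:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    verb: str                        # e.g. "approve_invoice" (illustrative)
    entity_iri: str                  # canonical identity resolved in the graph
    payload: dict = field(default_factory=dict)

def handle_request(request: str, llm, graph, api, audit_log) -> Action:
    # 1. Plan: the LLM turns the natural-language request into a proposed action.
    proposal = llm.plan(request)                               # hypothetical call

    # 2. Resolve: map the mentioned entity to its canonical node in the knowledge graph.
    entity_iri = graph.resolve_entity(proposal["entity"])      # hypothetical call

    # 3. Check: enforce the graph's rules and policies before anything happens.
    if not graph.policy_allows(proposal["verb"], entity_iri):  # hypothetical call
        raise PermissionError("Blocked by policy defined in the knowledge graph")

    # 4. Act: go through a typed API rather than free-form text.
    action = Action(proposal["verb"], entity_iri, proposal.get("args", {}))
    api.execute(action)                                        # hypothetical call

    # 5. Audit: write the event back so the graph can account for what was done.
    audit_log.record(action)                                   # hypothetical call
    return action
```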
·linkedin.com·
Announcing the formation of a Data Façades W3C Community Group
I am excited to announce the formation of a Data Façades W3C Community Group. Façade-X, initially introduced at SEMANTICS 2021 and successfully implemented by the SPARQL Anything project, provides a simple yet powerful, homogeneous view over diverse and heterogeneous data sources (e.g., CSV, JSON, XML, and many others). With the recent v1.0.0 release of SPARQL Anything, the time was right to work on the long-term stability and widespread adoption of this approach by developing an open, vendor-neutral technology. The Façade-X concept was born to allow SPARQL users to query data in any structured format in plain SPARQL. Therefore, the choice of a W3C community group to lead efforts on specifications is just natural. Specifications will enhance its reliability, foster innovation, and encourage various vendors and projects, including graph database developers, to provide their own compatible implementations.

The primary goals of the Data Façades Community Group are to:
- Define the core specification of the Façade-X method.
- Define standard mappings: formalize the required mappings and profiles for connecting Façade-X to common data formats.
- Define the specification of the query dialect: provide a reference for the SPARQL dialect, configuration conventions (like SERVICE IRIs), and the functions/magic properties used.
- Establish governance: create a monitored, robust process for adding support for new data formats.
- Foster collaboration: build connections with relevant W3C groups (e.g., RDF & SPARQL, Data Shapes) and encourage involvement from developers, businesses, and adopters.

Join us! With Luigi Asprino, Ivo Velitchkov, Justin Dowdy, Paul Mulholland, Andy Seaborne, Ryan Shaw ... CG: https://lnkd.in/eSxuqsvn Github: https://lnkd.in/dkHGT8N3 SPARQL Anything #RDF #SPARQL #W3C #FX
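Not the Façade-X specification itself (that is what the Community Group will define), but a rough Python illustration of the underlying idea: exposing a non-RDF source such as JSON as plain triples so a SPARQL engine can query it. The facade# namespace and the mapping below are invented for the sketch:

```python
import json
from rdflib import Graph, Namespace, BNode, Literal

FX = Namespace("http://example.org/facade#")                      # invented vocabulary
RDFNS = Namespace("http://www.w3.org/1999/02/22-rdf-syntax-ns#")  # for rdf:_1, rdf:_2, ...

def json_to_triples(doc, g, subject=None):
    """Expose a JSON value as generic RDF triples (a toy facade, not Façade-X)."""
    subject = subject or BNode()
    if isinstance(doc, dict):
        for key, value in doc.items():
            obj = BNode() if isinstance(value, (dict, list)) else Literal(value)
            g.add((subject, FX[key], obj))
            if isinstance(value, (dict, list)):
                json_to_triples(value, g, obj)
    elif isinstance(doc, list):
        for i, item in enumerate(doc, start=1):
            obj = BNode() if isinstance(item, (dict, list)) else Literal(item)
            g.add((subject, RDFNS[f"_{i}"], obj))                 # container membership
            if isinstance(item, (dict, list)):
                json_to_triples(item, g, obj)
    return subject

g = Graph()
json_to_triples(json.loads('{"order": {"id": 42, "items": ["widget", "gadget"]}}'), g)
print(g.serialize(format="turtle"))  # the JSON is now queryable with plain SPARQL via g.query(...)
```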
·linkedin.com·
SHACL Practitioner pre-order
Help!! I just made the pre-order live! 😬 All of you who signed up for more information should have received an e-mail with the pre-order option now. This is the scariest sh*t I've done in a long time. Seeing pre-orders ticking in for something I've created, together with the most amazing guest authors, is a super weird feeling. THANK YOU! ❤️

It's been quite a process. From an idea planted in 2022, to now seeing the light at the end of the tunnel. I've spent hours and hours inside TeXworks, nerding around with LaTeX and TikZ. Numerous moments at pubs, while waiting for someone, to edit, edit and edit. Taking time off work to isolate myself and write (so effective, btw!). Having a gold team of proofreaders providing super valuable feedback. Working with awesome SHACL practitioners to tell great SHACL stories to you! IT HAS BEEN GREAT FUN!

This week, I have been focusing on final touches (thank you Data Treehouse for letting me do this!!). Indexing like a hero. Soon the words, bits and blobs will hit the printing press, and the first copies will ship on November 3rd. If you want to pre-order, head over to https://lnkd.in/dER72USX for more information. All pre-orders will get a tiny SHACL surprise inside their book. 😇

Btw: the final product will probably not look like this; I got good help from our mutual friend ChatGPT, but I know it will be yellow at least. 💛
·linkedin.com·
Semantic Quality Is the Missing Risk Control in Financial AI and GraphRAG | LinkedIn
by Timothy Coleman and J Bittner. Picture this: an AI system confidently delivers a financial report, but it misclassifies $100M in assets as liabilities. Errors of this kind are already appearing in financial AI systems, and the stakes only grow as organizations adopt Retrieval-Augmented Generation
·linkedin.com·
T-Box: The secret sauce of knowledge graphs and AI
T-Box: The secret sauce of knowledge graphs and AI. Ever wondered how knowledge graphs "understand" the world? Meet the T-Box, the part that tells your graph what exists and how it can relate.

Think of it like building a LEGO set:
- T-Box (Terminological Box) = the instruction manual (defines the pieces and how they fit)
- A-Box (Assertional Box) = the LEGO pieces you actually have (your data, your instances)

Why it's important for RDF knowledge graphs:
- Gives your data structure and rules, so your graph doesn't turn into spaghetti
- Enables reasoning, letting the system infer new facts automatically
- Keeps your graph consistent and maintainable, even as it grows

Why it's better than other models:
- Traditional databases just store rows and columns; relationships have no meaning
- RDF + T-Box = data that can explain itself and connect across domains

Why AI loves it:
- AI can reason over knowledge, not just crunch numbers
- Enables smarter recommendations, insights, and predictions based on structured knowledge

Quick analogy:
- T-Box = blueprint/instruction manual (the ontology / what is possible)
- A-Box = the real-world building (the facts / what is true)
- Together = AI-friendly, smart knowledge graph

#KnowledgeGraph #RDF #AI #SemanticWeb #DataScience #GraphData
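A minimal sketch of the T-Box/A-Box split in code, assuming Python with rdflib and the owlrl reasoner; the Employee/Company example is invented for illustration:

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS
import owlrl  # RDFS/OWL-RL reasoner that works on rdflib graphs

EX = Namespace("http://example.org/")
g = Graph()

# T-Box (the "instruction manual"): which kinds of things exist and how they may relate.
g.add((EX.Employee, RDFS.subClassOf, EX.Person))
g.add((EX.worksFor, RDFS.domain, EX.Employee))
g.add((EX.worksFor, RDFS.range, EX.Company))

# A-Box (the "pieces you actually have"): concrete facts about individuals.
g.add((EX.alice, EX.worksFor, EX.acme))
g.add((EX.alice, RDFS.label, Literal("Alice")))

# Reasoning over T-Box + A-Box infers new facts: alice is an Employee
# (domain of worksFor), hence also a Person; acme is a Company (range).
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

print((EX.alice, RDF.type, EX.Person) in g)   # True, inferred rather than stated
print((EX.acme, RDF.type, EX.Company) in g)   # True, inferred rather than stated
```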
·linkedin.com·
Comparing LPG and RDF in Recent Graph RAG Architectures
Comparing LPG and RDF in Recent Graph RAG Architectures. As a follow-up to my previous posts and discussions, I would like to share three papers on arXiv that demonstrate the wide range of design choices in combining LPG and RDF. Here's a brief overview of each:

1. RAGONITE: Iterative Retrieval on Induced Databases and Verbalized RDF (arXiv:2412.17690). This paper builds on RDF knowledge graphs. Rather than relying solely on SPARQL queries, it establishes two retrieval pathways: one from an SQL database generated from the KG, and another from text searches over verbalised RDF facts. A controller decides when to combine or switch between them, with results passed to an LLM (sketched below). The insight: RDF alone is not robust enough for conversational queries, but pairing it with SQL and text dramatically improves coverage and resilience.

2. GraphAr: Efficient Storage for Property Graphs in Data Lakes (arXiv:2312.09577). This article addresses LPGs. It introduces a storage scheme that preserves LPG semantics in formats such as Parquet, while significantly boosting performance. Reported gains are impressive: neighbour retrieval is ~4452× faster, label filtering 14.8× faster, and end-to-end workflows 29.5× faster compared to baseline Parquet methods. Such optimisations are critical for GraphRAG, where low-latency retrieval is essential.

3. CypherBench: Towards Precise Retrieval over Full-scale Modern Knowledge Graphs in the LLM Era (arXiv:2412.18702). This work brings a benchmarking perspective, targeting Cypher queries over large-scale LPGs. It emphasises precision retrieval across full-scale graphs, something crucial when LLMs are expected to interact with enterprise-scale knowledge. By formalising benchmarks, it encourages more rigorous evaluation of GraphRAG retrieval techniques and raises the bar for future architectures.

Takeaway: together, these works highlight the diverse strategies for bridging RDF and LPG in GraphRAG, from hybrid retrieval pipelines to optimised storage and precision benchmarks. They show how research is steadily moving from demos to architectures that balance semantics, performance, and accuracy.
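A rough Python sketch of the RAGONITE-style hybrid retrieval described in item 1 above, purely to illustrate the control flow; sql_db, text_index and llm are hypothetical stand-ins, not the paper's implementation:

```python
from typing import List

def hybrid_retrieve(question: str, sql_db, text_index, llm, max_rounds: int = 3) -> str:
    """Iteratively gather evidence from a KG-induced SQL database and from
    verbalised RDF facts, then let an LLM answer (RAGONITE-style pattern).
    All injected objects are hypothetical stand-ins."""
    evidence: List[str] = []
    for _ in range(max_rounds):
        # Branch 1: structured retrieval (the LLM writes SQL over tables induced from the KG).
        sql_query = llm.write_sql(question, schema=sql_db.schema, so_far=evidence)
        evidence += [str(row) for row in sql_db.run(sql_query)]

        # Branch 2: text retrieval over verbalised triples ("Alice works for Acme."),
        # which tolerates conversational phrasing better than strict SPARQL.
        evidence += text_index.search(question, top_k=5)

        # Controller: stop iterating once the gathered evidence looks sufficient.
        if llm.is_sufficient(question, evidence):
            break

    return llm.answer(question, evidence)
```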
·linkedin.com·