GraphNews

#KnowledgeGraph #semantics
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
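The paper's pipeline operates on real SHACL shape graphs; as a rough illustration of the underlying idea only, here is a minimal Python sketch in which shapes are simplified to plain dicts rather than RDF (all names here are hypothetical, not the paper's implementation) mapping property constraints to HTML form fields:

```python
# Toy sketch: map simplified SHACL property constraints to HTML form fields.
# Real SHACL shapes are RDF graphs; they are plain dicts here for brevity.

XSD_TO_INPUT = {
    "xsd:string": "text",
    "xsd:integer": "number",
    "xsd:date": "date",
}

def shape_to_form(shape: dict) -> str:
    """Render one node shape as an HTML <form> with a field per property shape."""
    fields = []
    for prop in shape["properties"]:
        name = prop["path"]
        input_type = XSD_TO_INPUT.get(prop.get("datatype", "xsd:string"), "text")
        # sh:minCount >= 1 translates naturally to a required form field
        required = ' required' if prop.get("minCount", 0) >= 1 else ''
        fields.append(
            f'<label>{name}<input type="{input_type}" name="{name}"{required}></label>'
        )
    return f'<form id="{shape["targetClass"]}">' + "".join(fields) + "</form>"

person_shape = {
    "targetClass": "Person",
    "properties": [
        {"path": "name", "datatype": "xsd:string", "minCount": 1},
        {"path": "birthDate", "datatype": "xsd:date"},
    ],
}

print(shape_to_form(person_shape))
```

A full implementation would parse the shape graph with an RDF library and honour many more constraint components (sh:pattern, sh:in, sh:node, and so on), which is where the business-process modeling described in the paper comes in.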
·mdpi.com·
Your agents NEED a semantic layer
Your agents NEED a semantic layer 🫵 Traditional RAG systems embed documents, retrieve similar chunks, and feed them to LLMs. This works for simple Q&A. It fails catastrophically for agents that need to reason across systems.

Why? Because semantic similarity doesn't capture relationships. Your vector database can tell you that two documents are "about bonds." It can't tell you that Document A contains the official pricing methodology, Document B is a customer complaint referencing that methodology, and Document C is an assembly guide that superseded both. These relationships are invisible to embeddings.

What semantic layers provide:
- Entity resolution across data silos. When "John Smith" in your CRM, "J. Smith" in email, and "john.smith@company.com" in logs all map to the same person node, agents can traverse the complete context.
- Cross-domain entity linking through knowledge graphs. Products in your database connect to assembly guides, which link to customer reviews, which reference support tickets. Single-query traversal instead of application-level joins.
- Provenance-tracked derivations. Every extracted entity, inferred relationship, and generated embedding maintains lineage to source data. Critical for regulatory compliance and debugging agent behavior.
- Ontology-grounded reasoning. Financial instruments mapped to FIBO standards. Products mapped to domain taxonomies. Agents reason with structured vocabulary, not statistical word associations.

The technical implementation pattern:
- Layer 1: Unified graph database supporting vector, structured, and semi-structured data types in single queries.
- Layer 2: Entity extraction pipeline with coreference resolution and deduplication across sources.
- Layer 3: Relationship inference and cross-domain linking using both explicit identifiers and contextual signals.
- Layer 4: Separation of first-party data from derived artifacts with clear tagging for safe regeneration.

The result: Agents can traverse "Product → described_in → AssemblyGuide → improved_by → CommunityTip → authored_by → Expert" in a single graph query instead of five API calls with application-level joins.

Model Context Protocol is emerging as the open standard for semantic tool modeling. Not just describing APIs, but encoding what tools do, when to use them, and how outputs compose. This enables agents to discover and reason about capabilities dynamically. The competitive moat isn't your model choice. The moat is your knowledge graph architecture and the accumulated entity relationships that took years to build.
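The multi-hop traversal quoted in the post can be sketched with a toy in-memory graph; this is illustrative only (edges held in a Python dict, standing in for what a real graph database would answer in one query):

```python
# Toy in-memory knowledge graph: edges stored as (subject, predicate) -> objects.
# Illustrates the path traversal from the post; a real system would run this
# as one query in a graph database rather than five application-level joins.
from collections import defaultdict

edges = defaultdict(list)

def add_edge(s, p, o):
    edges[(s, p)].append(o)

def traverse(start, predicates):
    """Follow a chain of predicates, returning all reachable end nodes."""
    frontier = [start]
    for p in predicates:
        frontier = [o for s in frontier for o in edges[(s, p)]]
    return frontier

add_edge("Product:Desk42", "described_in", "AssemblyGuide:7")
add_edge("AssemblyGuide:7", "improved_by", "CommunityTip:19")
add_edge("CommunityTip:19", "authored_by", "Expert:jane")

experts = traverse("Product:Desk42",
                   ["described_in", "improved_by", "authored_by"])
print(experts)  # -> ['Expert:jane']
```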
·linkedin.com·
Let's talk ontologies. They are all the rage.
Let's talk ontologies. They are all the rage. I've been drawing what I now know is a 'triple' on whiteboards for years. It's one of the standard ways I know to start to understand a business.

A triple is: subject, predicate, object. I cannot overstate how useful this practice has been. Understanding how everything links together is useful, for people and AI.

I'm now stuck on what that gets stored in. I'm reading about triplestores and am unclear on the action needed. Years ago some colleagues and I used Neo4j to do this. I liked the visual interaction of the output but I'm not sure that is the best path here.

Who can help me understand how to move from whiteboarding to something more formal? Where to actually store all these triples? At what point does it become a 'knowledge graph'? Are there tools or products that help with this? Or is there a new language to learn to store it properly? (I think yes) #ontology #help
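For anyone at the same whiteboard-to-code stage, here is a deliberately tiny sketch of the idea behind a triplestore (plain Python tuples with wildcard matching; real triplestores store triples natively and are queried with SPARQL, which is the "new language to learn" the post suspects exists):

```python
# Minimal triple store: triples as (subject, predicate, object) tuples,
# with None acting as a wildcard in queries. A first step off the whiteboard;
# production systems use a real triplestore queried via SPARQL.

triples = {
    ("Order", "placed_by", "Customer"),
    ("Order", "contains", "Product"),
    ("Invoice", "bills", "Order"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the given pattern; None matches anything."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(s="Order"))   # everything we know about Order
print(match(p="bills"))   # -> [('Invoice', 'bills', 'Order')]
```

Once such a store holds identified entities and typed relationships at scale, and is queryable, it is reasonable to start calling it a knowledge graph.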
·linkedin.com·
Unified Foundational Ontology tutorial
As requested, these are the FIRST set of slides for my Ontobras Tutorial on the Unified Foundational Ontology i.e., the upcoming ISO/IEC CD 21838-5 (https://lnkd.in/egrMiCvG), and as announced here: https://lnkd.in/eeKmVW-5. The Brazilian community is one of the most active and lively communities in ontologies these days and the event joined many people from academia, government and industry. The slides for the SECOND part can be found here: https://lnkd.in/eD2xhPKj Thanks again for the invitation Jose M Parente de Oliveira. #ontology #ontologies #conceptualmodeling #semantics Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·
Why is it so hard to build an ontology-first architecture yet so necessary for the future of organizations?
Why is it so hard to build an ontology-first architecture yet so necessary for the future of organizations? Because it forces you to slow down before you speed up. It means defining what exists in your organization before building systems to act on it. It requires clarity, discipline, and the ability to model multiple perspectives without losing coherence.

Ontology-first doesn’t mean everyone must agree; it means connecting different views through layers: application ontologies for context, domain ontologies for shared objects, mid-level ontologies for reusable patterns, and a top-level ontology for common sense. Without a shared map of what things mean, every new system just adds noise.

Ontology-first architecture isn’t about technology; it’s about truth, structure, and long-term adaptability. It’s the foundation that allows AI to enhance human power and impact without losing context or control. It’s hard because it demands that we think, model, and connect before we automate. But that’s also why it’s the only path toward a world where human ingenuity can truly be enhanced with AI.
·linkedin.com·
Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence
We love to talk about scaling graphs: billions of nodes, trillions of relationships and distributed clusters. But, in practice, larger graphs often become harder to understand. As Labelled Property Graphs (LPGs) grow, their structure remains sound, but their meaning starts to drift. Queries still run, but the answers become useless.

In my latest post, I explore why semantic coherence collapses faster than infrastructure can scale up, what 'cognitive coherence' really means in graph systems and how the flexibility of LPGs can empower and endanger knowledge integrity.

Full article: 'Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence' https://lnkd.in/epmwGM9u

#GraphRAG #KnowledgeGraph #LabeledPropertyGraph #LPG #SemanticAI #AIExplainability #GraphThinking #RDF #AKG #KGL
·linkedin.com·
OpenAI Emerging Semantic Layer | LinkedIn
Following yesterday's announcements from OpenAI, brands start to have real ways to operate inside ChatGPT. At a very high-level this is the map for anyone considering entering (or expanding) into the ChatGPT ecosystem: Conversational Prompts / UX: optimize how ChatGPT “asks” for or surfaces brand se
·linkedin.com·
An infographic that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph
Inspired by the talented Jessica Talisman, here is a new infographic microsim that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph: https://lnkd.in/g66HRBhn You can include this interactive microsim in all of your semantics/ontology and agentic AI courses with just a single line of HTML.
·linkedin.com·
Automatic Ontology Generation Still Falls Short & Why Applied Ontologists Deliver the ROI | LinkedIn
For all the excitement around large language models, the latest research from Simona-Vasilica Oprea and Georgiana Stănescu (Electronics 14:1313, 2025) offers a reality check. Automatic ontology generation, even with novel prompting techniques like Memoryless CQ-by-CQ and Ontogenia, remains a partial
·linkedin.com·
Ever heard of "knowledge engineering"?
Ever heard of "knowledge engineering"? It’s what we called AI before AI was cool. I just pulled this out of the deep archives, Stanford University, 1980. Feigenbaum’s HPP report. The bones of modern context engineering were already there.

↳ What they did:
➤ Curated knowledge bases, not giant prompts
➤ Rule “evocation” to gate relevance
➤ Certainty factors to track confidence
➤ Shells + blackboards to orchestrate tools
➤ Traceable logic so humans could audit decisions

↳ What we do now:
➤ Trimmed RAG context instead of bloated prompts
➤ Retrieval + reranking + policy checks for gating
➤ Scores, evals, and guardrails to manage uncertainty
➤ Tool calling, MCPs, workflow engines for execution
➤ Logs + decision docs for explainability

↳ The through-line for UX:
➤ Performance comes from shaping context: what to include, when to include it, and how to prove it worked.

If you're building AI agents, you're standing on those shoulders. Start with context, not cleverness. Follow for human-centered AI + UX. Reshare if your team ships with context discipline.
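The "certainty factors" item refers to the MYCIN-era technique: the classic rule for combining two positive certainty factors that support the same conclusion is cf1 + cf2·(1 − cf1), sketched below:

```python
# The classic MYCIN combination rule for two positive certainty factors
# (each in 0..1) supporting the same hypothesis: cf = cf1 + cf2 * (1 - cf1).
# Confidence grows with each supporting rule but never exceeds 1.

def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two positive certainty factors for the same conclusion."""
    assert 0 <= cf1 <= 1 and 0 <= cf2 <= 1
    return cf1 + cf2 * (1 - cf1)

# Two independent rules each suggest the same diagnosis:
print(combine_cf(0.6, 0.5))
```

The same shape of reasoning survives today in the scores and eval thresholds the post lists on the modern side of the ledger.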
·linkedin.com·
Semantics in use part 5: an interview with Anikó Gerencsér, Team leader - Reference data team @Publication Office of the European Union | LinkedIn
What is your role? I am working in the Publications Office of the European Union as the team leader of the Reference data team. The Publications Office of the European Union is the official provider of publishing services to all EU institutions, bodies and agencies.
·linkedin.com·
Recent Trends and Insights in Semantic Web and Ontology-Driven Knowledge Representation Across Disciplines Using Topic Modeling
This research aims to investigate the roles of ontology and Semantic Web Technologies (SWT) in modern knowledge representation and data management. By analyzing a dataset of 10,037 academic articles from Web of Science (WoS) published in the last 6 years (2019–2024) across several fields, such as computer science, engineering, and telecommunications, our research identifies important trends in the use of ontologies and semantic frameworks. Through bibliometric and semantic analyses, Natural Language Processing (NLP), and topic modeling using Latent Dirichlet Allocation (LDA) and BERT-clustering approach, we map the evolution of semantic technologies, revealing core research themes such as ontology engineering, knowledge graphs, and linked data. Furthermore, we address existing research gaps, including challenges in the semantic web, dynamic ontology updates, and scalability in Big Data environments. By synthesizing insights from the literature, our research provides an overview of the current state of semantic web research and its prospects. With a 0.75 coherence score and perplexity = 48, the topic modeling analysis identifies three distinct thematic clusters: (1) Ontology-Driven Knowledge Representation and Intelligent Systems, which focuses on the use of ontologies for AI integration, machine interpretability, and structured knowledge representation; (2) Bioinformatics, Gene Expression and Biological Data Analysis, highlighting the role of ontologies and semantic frameworks in biomedical research, particularly in gene expression, protein interactions and biological network modeling; and (3) Advanced Bioinformatics, Systems Biology and Ethical-Legal Implications, addressing the intersection of biological data sciences with ethical, legal and regulatory challenges in emerging technologies. The clusters derived from BERT embeddings and clustering show thematic overlap with the LDA-derived topics but with some notable differences in emphasis and granularity. 
Our contributions extend beyond theoretical discussions, offering practical implications for enhancing data accessibility, semantic search, and automated knowledge discovery.
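As a small concrete companion to the coherence figure reported in the abstract, here is a toy computation in the style of UMass topic coherence (tiny hand-made corpus, purely illustrative; real pipelines compute this with a library such as gensim):

```python
# Toy UMass-style topic coherence over a tiny corpus of word sets.
# For a topic's top words, coherence sums log((D(wi, wj) + 1) / D(wj))
# over word pairs, where D counts documents containing the word(s).
import math
from itertools import combinations

docs = [
    {"ontology", "knowledge", "graph"},
    {"ontology", "graph", "linked"},
    {"gene", "expression", "protein"},
]

def doc_freq(*words):
    """Number of documents containing all the given words."""
    return sum(1 for d in docs if all(w in d for w in words))

def umass_coherence(topic_words):
    score = 0.0
    for wi, wj in combinations(topic_words, 2):
        score += math.log((doc_freq(wi, wj) + 1) / doc_freq(wj))
    return score

# Words that co-occur often score higher than words that never do.
print(umass_coherence(["ontology", "graph"]))
```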
·mdpi.com·
Protocols move bits. Semantics move value.
Protocols move bits. Semantics move value. The reports on agents are starting to sound samey: go vertical not horizontal; redesign workflows end-to-end; clean your data; stop doing pilots that automate inefficiencies; price for outcomes when the agent does the work. All true. All necessary. All needing repetition ad nauseam.

So it’s refreshing to see a switch-up in Bain’s Technology Report 2025: the real leverage now sits with semantics. A shared layer of meaning. Bain notes that protocols are maturing. MCP and A2A let agents pass tool calls, tokens, and results between layers. Useful plumbing. But there’s still no shared vocabulary that says what an invoice, policy, or work order is, how it moves through states, and how it maps to APIs, tables, and approvals. Without that, cross-vendor reliability will keep stalling.

They go further: whoever lands a pragmatic semantic layer first gets winner-takes-most network effects. Define the dictionary and you steer the value flow. This isn’t just a feature. It’s a control point.

Bain frames the stack clearly:
- Systems of record (data, rules, compliance)
- Agent operating systems (orchestration, planning, memory)
- Outcome interfaces (natural language requests, user-facing actions)

The bottleneck is semantics. And there’s a pricing twist. If agents do the work, semantics define what “done” means. That unlocks outcome-based pricing, charging for tasks completed or value delivered, not log-ons.

Bain is blunt: the open, any-to-any agent utopia will smash against vendor incentives, messy data, IP, and security. Translation: walled gardens lead first. Start where governance is clear and data is good enough, then use that traction to shape the semantics others will later adopt.

This is where I’m seeing convergence. In practice, a knowledge graph can provide that shared meaning: identity, relationships, and policy. One workable pattern: the agent plans with an LLM, resolves entities and checks rules in the graph, then acts through typed APIs, writing back as events the graph can audit. That’s the missing vocabulary and the enforcement that protocols alone can’t cover.

Tony Seale puts it well: “Neural and symbolic systems are not rivals; they are complements… a knowledge graph provides the symbolic backbone… to ground AI in shared semantics and enforce consistency.”

To me, this is optimistic, because it moves the conversation from “make the model smarter” to “make the system understandable.” Agents don’t need perfection if they are predictable, composable, and auditable. Semantics deliver that. It’s also how smaller players compete with hyperscalers: you don’t need to win the model race to win the meaning race. With semantics, agents become infrastructure. The next few years won’t be won by who builds the biggest model. It’ll be won by who defines the smallest shared meaning.
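The "workable pattern" in the post (plan with an LLM, resolve and check in the graph, act through typed APIs, write back auditable events) can be sketched end-to-end with stubs; everything here, including the planner, the graph contents, and the pay_invoice API, is hypothetical scaffolding, not any vendor's interface:

```python
# Minimal plan -> ground -> act -> audit loop, with every component stubbed.
# A real system would call an LLM, a graph database, and typed service APIs.

graph = {
    "entities": {"INV-001": {"type": "invoice", "state": "approved"}},
    "rules": {"pay": lambda e: e["type"] == "invoice" and e["state"] == "approved"},
}
audit_log = []  # events written back for the graph to audit

def plan(request):             # stand-in for an LLM planner
    return {"action": "pay", "entity_id": "INV-001"}

def pay_invoice(entity_id):    # stand-in for a typed API call
    return {"status": "paid", "entity_id": entity_id}

def run_agent(request):
    step = plan(request)
    entity = graph["entities"][step["entity_id"]]        # resolve entity
    if not graph["rules"][step["action"]](entity):       # check policy rule
        raise PermissionError("blocked by graph policy")
    result = pay_invoice(step["entity_id"])              # act via typed API
    audit_log.append({"step": step, "result": result})   # write back event
    return result

print(run_agent("pay invoice INV-001"))
# -> {'status': 'paid', 'entity_id': 'INV-001'}
```

The point of the sketch is the separation of duties: the planner proposes, the graph grounds and enforces, the API executes, and the audit trail makes the whole loop inspectable.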
·linkedin.com·
Announcing the formation of a Data Façades W3C Community Group
I am excited to announce the formation of a Data Façades W3C Community Group. Façade-X, initially introduced at SEMANTICS 2021 and successfully implemented by the SPARQL Anything project, provides a simple yet powerful, homogeneous view over diverse and heterogeneous data sources (e.g., CSV, JSON, XML, and many others). With the recent v1.0.0 release of SPARQL Anything, the time was right to work on the long-term stability and widespread adoption of this approach by developing an open, vendor-neutral technology.

The Façade-X concept was born to allow SPARQL users to query data in any structured format in plain SPARQL. Therefore, the choice of a W3C community group to lead efforts on specifications is just natural. Specifications will enhance its reliability, foster innovation, and encourage various vendors and projects, including graph database developers, to provide their own compatible implementations.

The primary goals of the Data Façades Community Group are to:
- Define the core specification of the Façade-X method.
- Define Standard Mappings: Formalize the required mappings and profiles for connecting Façade-X to common data formats.
- Define the specification of the query dialect: Provide a reference for the SPARQL dialect, configuration conventions (like SERVICE IRIs), and the functions/magic properties used.
- Establish Governance: Create a monitored, robust process for adding support for new data formats.
- Foster Collaboration: Build connections with relevant W3C groups (e.g., RDF & SPARQL, Data Shapes) and encourage involvement from developers, businesses, and adopters.

Join us! With Luigi Asprino Ivo Velitchkov Justin Dowdy Paul Mulholland Andy Seaborne Ryan Shaw ... CG: https://lnkd.in/eSxuqsvn Github: https://lnkd.in/dkHGT8N3 SPARQL Anything #RDF #SPARQL #W3C #FX
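To make the façade idea concrete, here is a toy sketch that exposes parsed JSON as a homogeneous set of triples a SPARQL-like engine could then query; the mapping shown is an illustration of the general approach, not the actual Façade-X specification the Community Group will define:

```python
# Toy illustration of the façade idea: expose heterogeneous data (here JSON)
# as a uniform set of (subject, predicate, object) triples. The mapping is a
# simplification; the real Façade-X mapping is what the CG will specify.
import json

def json_to_triples(node, subject="_:root"):
    """Flatten a parsed JSON value into a list of triples."""
    triples = []
    if isinstance(node, dict):
        for key, value in node.items():
            triples += describe(subject, key, value)
    elif isinstance(node, list):
        for i, value in enumerate(node):
            triples += describe(subject, f"_{i + 1}", value)  # positional slots
    return triples

def describe(subject, predicate, value):
    if isinstance(value, (dict, list)):
        child = f"{subject}/{predicate}"   # mint a node for nested containers
        return [(subject, predicate, child)] + json_to_triples(value, child)
    return [(subject, predicate, value)]

doc = json.loads('{"name": "Façade-X", "formats": ["CSV", "JSON"]}')
for t in json_to_triples(doc):
    print(t)
```

The payoff of such a homogeneous view is that one query language can reach into every source format, which is exactly the property the planned specifications aim to make vendor-neutral and stable.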
·linkedin.com·
SHACL Practitioner pre-order
Help!! I just let the pre-order go live! 😬 All of you who signed up for more information should have received an e-mail with the pre-order option now. This is the scariest sh*t I've done in a long time. Seeing pre-orders ticking in for something I've created---together with the most amazing guest authors---is a super weird feeling. THANK YOU! ❤️

It's been quite a process. From an idea planted in 2022, to now seeing the light at the end of the tunnel. I've spent hours and hours inside TeXworks, nerding around with LaTeX and Ti𝘬Z. Numerous moments at pubs, while waiting for someone, to edit, edit and edit. Taking vacation off work to isolate myself to write (so effective, btw!). Having a gold team of proofreaders providing super valuable feedback. Working with awesome SHACL practitioners to tell great SHACL stories to you! IT HAS BEEN GREAT FUN!

This week, I have been focusing on final touches (thank you Data Treehouse for letting me do this!!). Indexing like a hero. Soon the words, bits and blobs will hit the printing press, and the first copies will ship on 𝐍𝐨𝐯𝐞𝐦𝐛𝐞𝐫 3𝐫𝐝. If you want to pre-order, head to https://lnkd.in/dER72USX for more information. All pre-orders will get a tiny SHACL surprise inside their book. 😇

Btw: the final product will probably not look like this, I got good help from our mutual friend ChatGPT, but I know it will be yellow at least. 💛
·linkedin.com·
SHACL Practitioner pre-order
Semantic Quality Is the Missing Risk Control in Financial AI and GraphRAG | LinkedIn
by Timothy Coleman and J Bittner Picture this: an AI system confidently delivers a financial report, but it misclassifies $100M in assets as liabilities. Errors of this kind are already appearing in financial AI systems, and the stakes only grow as organizations adopt Retrieval-Augmented Generation
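The kind of semantic risk control the article argues for can be as simple as a disjointness check over extracted classifications before a report is generated; a minimal sketch, with illustrative class names and data:

```python
# Sketch of a semantic quality gate: a disjointness check that flags any
# instrument classified as both Asset and Liability before reporting.
# Class names and the example data are illustrative, not from the article.

DISJOINT = {("Asset", "Liability")}

facts = {
    "Bond-123": {"Asset"},
    "Loan-9":   {"Liability", "Asset"},   # the $100M-style misclassification
}

def violations(facts):
    """Return (entity, class_a, class_b) for every disjointness breach."""
    found = []
    for entity, classes in facts.items():
        for a, b in DISJOINT:
            if a in classes and b in classes:
                found.append((entity, a, b))
    return found

print(violations(facts))  # -> [('Loan-9', 'Asset', 'Liability')]
```

In a full GraphRAG pipeline this check would run against an ontology (e.g. owl:disjointWith axioms or SHACL constraints) rather than a hard-coded pair.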
·linkedin.com·