GraphNews

4902 bookmarks
Q²Forge: Minting Competency Questions and SPARQL Queries for Question-Answering Over Knowledge Graphs
The SPARQL query language is the standard way to access knowledge graphs (KGs). However, formulating SPARQL queries is a significant challenge for non-expert users and remains time-consuming even for experienced ones. Best practices recommend documenting KGs with competency questions and example queries to contextualise the knowledge they contain and illustrate their potential applications. In practice, however, such examples are either missing or provided only in limited numbers. Large Language Models (LLMs) are being used in conversational agents and are proving to be an attractive solution for a wide range of applications, from simple question-answering about common knowledge to generating code in a targeted programming language. However, training and testing these models to produce high-quality SPARQL queries from natural-language questions requires substantial datasets of question-query pairs. In this paper, we present Q²Forge, which addresses the challenge of generating new competency questions for a KG and corresponding SPARQL queries. It iteratively validates those queries with human feedback and an LLM as judge. Q²Forge is open source, generic, extensible and modular, meaning that the different modules of the application (CQ generation, query generation and query refinement) can be used separately, as an integrated pipeline, or replaced by alternative services. The result is a complete pipeline from competency question formulation to query evaluation, supporting the creation of reference query sets for any target KG.
·hal.science·
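As a hedged illustration of the kind of competency-question and SPARQL-query pair such a pipeline aims to produce (not an example from the paper), the sketch below runs one reference query with SPARQLWrapper; the question, the public DBpedia endpoint, and the query itself are assumptions chosen purely for illustration.

```python
# Hypothetical competency question paired with a reference SPARQL query,
# in the spirit of what a Q²Forge-style pipeline would generate and validate.
# The endpoint (public DBpedia) and the query are illustrative assumptions,
# not taken from the paper.
from SPARQLWrapper import SPARQLWrapper, JSON

competency_question = "Which films did Christopher Nolan direct?"

query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?film WHERE {
  ?film dbo:director dbr:Christopher_Nolan .
}
LIMIT 10
"""

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(query)
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(competency_question, "->", binding["film"]["value"])
```

Validated pairs like this are exactly the question-query datasets the abstract argues are needed to train and test LLM-based SPARQL generators.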
Kalisi is a single governance library, backed by a Neo4j graph model that captures how requirements, controls, evidence, systems, and processes actually relate
·linkedin.com·
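As a rough sketch of what such a governance graph might look like, the snippet below seeds and traverses requirement-control-evidence-system-process relationships with the official Neo4j Python driver; the node labels, relationship types, identifiers, and credentials are assumptions for illustration, not Kalisi's actual schema.

```python
# Hedged sketch of a governance graph of the kind described above.
# Labels, relationship types, and credentials are assumptions for
# illustration only, not Kalisi's real data model.
from neo4j import GraphDatabase

SEED = """
MERGE (r:Requirement {id: $req})
MERGE (c:Control     {id: $ctl})
MERGE (e:Evidence    {id: $evd})
MERGE (s:System      {id: $sys})
MERGE (p:Process     {id: $prc})
MERGE (c)-[:SATISFIES]->(r)
MERGE (e)-[:SUPPORTS]->(c)
MERGE (s)-[:IMPLEMENTS]->(c)
MERGE (p)-[:RUNS_ON]->(s)
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    # Seed one requirement with its control, evidence, system, and process.
    session.run(SEED, req="ISO27001-A.8.1", ctl="CTL-042",
                evd="EVD-2024-17", sys="SYS-payments", prc="PRC-onboarding")
    # Traverse from a requirement to the evidence that ultimately backs it.
    rows = session.run(
        "MATCH (r:Requirement {id: $req})<-[:SATISFIES]-(:Control)"
        "<-[:SUPPORTS]-(e:Evidence) RETURN e.id AS evidence",
        req="ISO27001-A.8.1")
    print([row["evidence"] for row in rows])
driver.close()
```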
Clustering, Cognition, and Ontology Engineering: The Proven Foundations
We’re witnessing a powerful convergence in modern knowledge engineering. At its heart lies a decades-old pairing: the TBox-ABox architecture of description logics. This split between a schema layer (TBox: classes, axioms, rules) and an assertional layer (ABox: instances, facts) is more than a modeling convenience; it is the structural backbone for scalable, interpretable, and efficient knowledge systems.

1. Bidirectional TBox-ABox Reasoning: The TBox-ABox relationship isn’t just a one-way street. Modern ontology platforms implement feedback cycles: TBox-driven reasoning enriches ABox data, while patterns discovered in ABox instances suggest new or improved TBox axioms. Automated systems like DL-Learner and AMIE+ exemplify this virtuous loop, blending logical inference with machine-driven discovery.

2. Iterative Refinement and Machine Learning Analogies: While not mathematically identical, the iterative cycles in clustering algorithms (such as EM or K-Means) echo the knowledge-refinement process in ontologies. In one, the cluster centroids and assignments are repeatedly updated to improve fit. In the other, TBox rules and ABox facts are successively refined for logical coherence and empirical richness. This analogy, though conceptual, powerfully illustrates the self-improving nature of mature knowledge systems.

3. Horn Logic for Computational Efficiency: Horn logic unlocks polynomial-time reasoning, the foundation of OWL 2 RL and similar profiles. By restricting to Horn clauses, systems achieve tractable, scalable inferences that keep production-grade knowledge graphs accurate and responsive, even at enterprise scale.

4. DL Safety and Rule Guarantees: Combining rules (SWRL) with expressive ontologies (OWL DL) is notoriously tricky; adding rules risks undecidability. The DL-safety condition was a breakthrough, enforcing restrictions that ensure automated reasoning stays sound and terminates, without sacrificing the power of rule-based schema enrichment.

5. Dual Validation for Trustworthy Knowledge: Ontology-based data access (OBDA) completes the story by empirically grounding logic-driven schemas in live production data, while reasoners like HermiT enforce logical consistency. This dual validation ensures that knowledge systems are both scientifically sound and operationally reliable.

Key Takeaway: Today’s most robust knowledge systems don’t rely on black-box heuristics or wishful analogies. Their power comes from rigorous, validated architectures: bidirectional ontology refinement, tractable logic fragments, and principled integration of logical and statistical learning. This is the true engine behind scalable, explainable, and trustworthy enterprise AI.

#OntologyEngineering #KnowledgeGraphs #MachineLearning #SemanticWeb
·linkedin.com·
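A minimal sketch of the TBox/ABox split and OWL 2 RL-style rule reasoning the post describes, using rdflib together with the owlrl reasoner; the example ontology terms are invented, and the post itself does not prescribe any particular toolkit.

```python
# Sketch of the TBox/ABox split with tractable OWL 2 RL forward chaining.
# The ontology terms (GasGiant, Planet, CelestialBody) are invented for
# illustration only.
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")
g = Graph()

# TBox: schema-level axioms (a Horn-style subclass chain).
g.add((EX.GasGiant, RDFS.subClassOf, EX.Planet))
g.add((EX.Planet, RDFS.subClassOf, EX.CelestialBody))

# ABox: instance-level assertion.
g.add((EX.Jupiter, RDF.type, EX.GasGiant))

# OWL 2 RL closure: rule-based, polynomial-time inference over TBox + ABox.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The inferred type appears in the graph after reasoning.
print((EX.Jupiter, RDF.type, EX.CelestialBody) in g)  # True
```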
Building GraphRAG Agents with ADK | Google Codelabs
This codelab teaches you how to build intelligent multi-agent systems using Google’s Agent Development Kit (ADK) combined with Neo4j graph databases and the Model Context Protocol (MCP) Toolbox. You’ll learn to create specialized GraphRAG-powered agents that leverage knowledge graphs for context-aware query responses, implement agent orchestration patterns, and deploy pre-validated database queries as reusable tools. By the end, you’ll have built a production-ready investment research system demonstrating best practices for next-generation retrieval agents.
·codelabs.developers.google.com·
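A hedged sketch of the pattern the codelab teaches: an ADK agent whose tool wraps a pre-validated Neo4j query. The codelab serves such queries through the MCP Toolbox, whereas here the driver is called directly to keep the sketch self-contained; the model name, credentials, graph schema, and the quickstart-style Agent constructor are assumptions rather than the codelab's exact code.

```python
# Sketch only: an ADK agent grounded in a Neo4j knowledge graph.
# The graph schema (Company, Sector), credentials, and model name are
# assumptions; the codelab routes queries through the MCP Toolbox instead
# of calling the driver directly.
from google.adk.agents import Agent
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def companies_by_sector(sector: str) -> list[str]:
    """Return company names in the given sector from the knowledge graph."""
    with driver.session() as session:
        rows = session.run(
            "MATCH (c:Company)-[:IN_SECTOR]->(:Sector {name: $sector}) "
            "RETURN c.name AS name LIMIT 25", sector=sector)
        return [row["name"] for row in rows]

research_agent = Agent(
    name="investment_research_agent",
    model="gemini-2.0-flash",
    description="Answers investment questions grounded in a Neo4j knowledge graph.",
    instruction="Ground every answer in the graph tool; do not invent companies.",
    tools=[companies_by_sector],
)
```

Exposing only pre-validated queries as tools, as the codelab recommends, keeps the agent's database access predictable instead of letting the LLM write arbitrary Cypher.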
A research project on the history of knowledge graphs using Gemini's Deep Research Agent
I conducted a deep research project on the history of knowledge graphs using Gemini's Deep Research Agent and extracted a knowledge graph from the research report in one pipeline. You can find the code that I used to build the agent inside a Colab notebook labelled "KG_Research_Agent.ipynb" inside this GitHub repository: https://lnkd.in/dXsYb_4V

I gave the agent the simple prompt: "Research the history of knowledge graphs", and it came up with a detailed report, along with references, which you can read here: https://lnkd.in/erv-aPqK

Connected Data Oxford Semantic Technologies Google DeepMind Gephi GitHub
#knowledgegraphs #agents #deepresearch #semanticweb
·linkedin.com·
Axiomatic Inheritance vs. Taxonomic Lineage: Why BFO Implementation Strategy Matters | LinkedIn
A Response to Critical Discourse on Ontology-Grounded Knowledge Graphs Yesterday I published an article on the engineering realities of building BFO-grounded knowledge graphs at scale. The response from the enterprise architecture community surfaced a critical architectural question that deserves de
·linkedin.com·
Engineering Reality: What It Actually Takes to Build a BFO-Grounded Knowledge Graph at Scale | LinkedIn
Beyond the Hype: A Practitioner's Perspective on Ontology-First Architecture The knowledge graph market has exploded with vendors claiming semantic capabilities, ontological reasoning, and enterprise-grade inference. Yet a significant gap exists between marketing collateral and engineering reality.
·linkedin.com·
Generating Taxonomies Promptly: Practical LLM applications for human-centric taxonomy development | LinkedIn
Co-authored with Gigi Shannon

Every taxonomist knows the pressure of delivering the most accurate models under tight timelines and tighter budgets. Now that AI has permeated enterprise tools and strategic plans, the question isn’t whether they should use LLMs, but how to make them work most effectiv
·linkedin.com·
Before LLMs, Palantir was competing with Snowflake and Databricks. Post-LLMs, they do not believe they have any competitors. Why?
Before LLMs, Palantir was competing with Snowflake and Databricks. Post-LLMs, they do not believe they have any competitors. Why?
·linkedin.com·