GraphNews

4497 bookmarks
AI-Assisted Ontology Mapping
AI-Assisted Ontology Mapping
AI-Assisted Ontology Mapping

Ontology alignment, glossary mapping, semantic integration - none are new. For decades: TF-IDF, WordNet, property matching, supervised models. They work - but remain rule-bounded. The new Google + Harvard research (2025-09-08) signals a paradigm shift: ontologies are no longer static. Every conceptual decision can be treated as a measurable task.

Ontologies as Living Systems
An ontology is not a document. It is a formalized knowledge backbone, where:
- Concepts are expressed declaratively (OWL, RDF, OntoUML)
- Relations exist as axioms
- Every inference is machine-checkable
In this world, the semantic layer isn’t a BI artifact - it’s the formal contract of meaning: business glossaries, KPIs, and data attributes all refer to the same conceptual entities.

Measuring Ontological Precision
The Google–Harvard approach reframes ontology engineering as scorable tasks:
- Mapping-F1 → accuracy of mappings between glossaries and semantic layers.
- Alignment% → conceptual overlap between ontologies.
- Consistency → are KPI definitions aligned with their OWL/RDF axioms?
Once we define these metrics, semantic mappings stop being static deliverables. They become living quality signals - ontological KPIs.

AI as a Sandbox Co-Scientist
The breakthrough is not automation. It’s the ability to generate, test, and validate conceptual hypotheses iteratively:
- An LLM proposes alternative mapping strategies: embeddings, synonym discovery, definition-based similarity.
- Tree search explores promising branches, sandbox-validating each.
- Research injection pulls external knowledge - papers, books, benchmarks - into the loop.
In one small-scale ontology alignment task:
- Task: map 20 glossary terms into a semantic layer.
- Baseline: manual mapping → Mapping-F1 = 0.55.
- AI loop: hypotheses generated, sandbox-validated.
- Breakthrough: after 8 iterations, Mapping-F1 reached 0.91.
This isn’t “AI hallucination.” It’s measured, validated ontology evolution.
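The post treats Mapping-F1 as a scorable metric without defining it; a minimal sketch, assuming it is standard F1 computed over (glossary term, target concept) pairs — the example terms and concept names below are hypothetical, not from the research:

```python
def mapping_f1(predicted, gold):
    """F1 over (glossary_term, layer_concept) mapping pairs."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)          # correctly proposed mappings
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical glossary-to-semantic-layer mappings:
gold = {("revenue", "fin:Revenue"), ("churn", "cx:ChurnRate"),
        ("margin", "fin:GrossMargin")}
pred = {("revenue", "fin:Revenue"), ("churn", "cx:CustomerCount")}
print(round(mapping_f1(pred, gold), 2))  # 0.4
```

Once a gold set exists, each AI-generated mapping hypothesis can be scored the same way, which is what makes the iterative loop measurable.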
The Ontological Cockpit
An ontology cockpit tracks the health of your knowledge model:
- Mapping-F1 trends - how well glossaries and layers align.
- Alignment% by domain - where conceptual drift emerges.
- Consistency-break log - where KPI definitions diverge from formal models.
- Drift detection - alerts when semantics shift silently.
This cockpit is the dynamic mirror of formalism. BI 2.0 dashboards can later inherit these metrics.

AI-Supported Formalism
Jessica Talisman - this is close to what you’ve been advocating: formal knowledge models supported, not replaced, by AI.
- Sandbox validation ensures every hypothesis is tested and versioned.
- Research injection integrates state-of-the-art ontological heuristics.
- Ontologies evolve iteratively, without compromising formal rigor.
The Google + Harvard research shows us: a semantic backbone that learns, an ontology that continuously integrates new knowledge, and a future where conceptual precision is measurable, auditable, and improvable. | 73 comments on LinkedIn
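The "drift detection" item above can be as simple as comparing each new score against a trailing baseline; a minimal sketch, where the window size and tolerance are illustrative choices rather than anything from the post:

```python
def drift_alerts(history, window=3, tolerance=0.05):
    """Flag indices where a score drops more than `tolerance`
    below the mean of the preceding `window` observations."""
    alerts = []
    for i in range(window, len(history)):
        baseline = sum(history[i - window:i]) / window
        if history[i] < baseline - tolerance:
            alerts.append(i)
    return alerts

# Mapping-F1 trend: stable, then a silent semantic shift at step 5.
print(drift_alerts([0.88, 0.89, 0.90, 0.89, 0.90, 0.72]))  # [5]
```

The same check applies to any of the cockpit's time-series metrics, turning "semantics shift silently" into an explicit alert.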
AI-Assisted Ontology Mapping
·linkedin.com·
AI-Assisted Ontology Mapping
Stanford Graph Learning Workshop 2025
Stanford Graph Learning Workshop 2025
🚀 I’m excited to announce the Stanford Graph Learning Workshop 2025, happening Tuesday, October 14, 2025 at Stanford University (with online livestream). Free registration! Submit a talk/poster.

📍 This year’s workshop will spotlight three fast-moving frontiers in AI & data science:
- Agents — Autonomous systems reshaping how we interact with tech
- Relational Foundation Models — Unlocking structure and meaning in complex data
- Fast LLM Inference — Pushing the boundaries of speed & scalability for large language models

We’re bringing together researchers, innovators, and practitioners for a full day of cutting-edge talks, interactive sessions, and collaborative discussions. Whether you’re working in industry, academia, or startup land, there will be something to spark your curiosity and drive your work forward.

🔍 Want to share your work? A Call for Contributed Talks and Posters/Demos is open now.
✅ Register now (free): https://lnkd.in/dm9JUnH6
📅 Save the date: Oct 14, 2025 | 13 comments on LinkedIn
Stanford Graph Learning Workshop 2025
·linkedin.com·
Stanford Graph Learning Workshop 2025
Showrooms vs. Production Reality: Why is RDF still not widely used?
Showrooms vs. Production Reality: Why is RDF still not widely used?
Showrooms vs. Production Reality: Why is RDF still not widely used?

The debate around RDF never really goes away. Advocates highlight its strong foundations, interoperability, and precision. However, critics point to its steep learning curve, unwieldy tools, and limited adoption beyond academia and government circles. So why is RDF still a hard sell in enterprise settings? The answer lies less in ignorance and more in reality.

Enterprises operate in dynamic environments. Data is constantly being created, updated, versioned, and retired. Complex CRUD operations, integration pipelines, and governance processes are not exceptions but part of the daily routine. RDF, with its emphasis on formal representation, often struggles to keep up with this level of operational activity.

Performance matters, too. Systems that appear elegant in theory often encounter scaling and latency issues in practice. Enterprises cannot afford philosophical debates when customers expect instant results and compliance teams demand verifiable evidence.

Usability is another factor. While RDF tooling is powerful, it is geared towards specialists. Enterprises need platforms that are usable by architects, data stewards, analysts, and developers, without requiring them to master semantic web standards.

Meanwhile, pragmatic approaches to GraphRAG - combining graph models with embeddings - are gaining traction. While they may lack the rigour of RDF, they offer faster integration, better performance, and easier adoption. For many enterprises, 'good enough and working' is preferable to 'perfect but unused'.

This doesn’t mean that RDF has no place. It remains relevant in classical information systems where interoperability and formal semantics are essential, such as healthcare, government, and regulated industries. However, the centre of gravity has shifted.
In today's LLM and GraphRAG pipelines, with all their complexity and pragmatic constraints, enterprises prioritise solutions that work, scale and can be trusted. Therefore, the real question may no longer be “Why don’t enterprises adopt RDF?”, but rather, “Can RDF remain relevant in the noisy, fast-moving world of enterprise AI?” #KnowledgeGraphs #EnterpriseAI #GraphRAG #RDF #DataArchitecture #AIinEnterprise #LLM #AIAdoption | 22 comments on LinkedIn
Showrooms vs. Production Reality: Why is RDF still not widely used?
·linkedin.com·
Showrooms vs. Production Reality: Why is RDF still not widely used?
Ontology-driven vibe coding
Ontology-driven vibe coding
Ontology-driven vibe coding

Build a reliable app in a matter of minutes in just five steps:
1. Define concepts
2. Define relationships
3. Connect concepts through relationships
4. Define attributes
5. Connect attributes to concepts
Then click on 'go to app' and you are ready to go!

What does Hapsah.org provide?
- Business glossary with terms and definitions
- Conceptual modelling environment
- Business rule authoring tool
- App running environment
- Admin environment
- APIs for operational data access (for data manipulation)
- APIs for metadata access (glossary, conceptual model, and business rules)

Making changes or additions to your app is just as easy. You never run into debugging issues. There is no spaghetti codebase created and managed under the hood. So no debugging hell, just a smoothly running app.

#vibecoding #nocode #ontology #semantic #app #development #businessrules | 18 comments on LinkedIn
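The five steps map onto a very small conceptual-model structure. This is not Hapsah.org's actual API — just a hypothetical sketch of what the steps produce, with made-up concept and attribute names:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    attributes: dict = field(default_factory=dict)  # steps 4-5

@dataclass
class Relationship:
    name: str
    source: "Concept"                               # steps 2-3
    target: "Concept"

customer = Concept("Customer")                      # step 1: define concepts
order = Concept("Order")
places = Relationship("places", customer, order)    # steps 2-3
customer.attributes["email"] = str                  # steps 4-5
order.attributes["total"] = float

print(places.source.name, places.name, places.target.name)
```

In a no-code environment, these structures would be built visually; the point is that the app is generated from the model, so there is no handwritten codebase to debug.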
Ontology-driven vibe coding
·linkedin.com·
Ontology-driven vibe coding
A Knowledge Graph of code by GitLab
A Knowledge Graph of code by GitLab
If you could hire the smartest engineers and drop them into your code base, would you expect miracles overnight? No, of course not! Even if they are the best of coders, they don’t have context on your project, engineering processes and culture, security and compliance rules, user personas, business priorities, etc. The same is true of the very best agents: they may know how to write (mostly) technically correct code, and have the context of your source code, but they’re still missing tons of context.

Building agents that can deliver high-quality outcomes, faster, is going to require much more than your source code, rules, and a few prompts. Agents need the same full lifecycle context your engineers gain after months and years on the job. LLMs will never have access to your company’s engineering systems to train on, so something has to bridge the knowledge gap, and it shouldn’t be you, one prompt at a time.

This is why we're building what we call our Knowledge Graph at GitLab. It's not just indexing files and code; it's mapping the relationships across your entire development environment. When an agent understands that a particular code block contains three security vulnerabilities, impacts two downstream services, and connects to a broader epic about performance improvements, it can make smarter recommendations and changes than just technically correct code. This kind of contextual reasoning is what separates valuable AI agents from expensive, slow, LLM-driven search tools.

We're moving toward a world where institutional knowledge becomes portable and queryable. The context of a veteran engineer who knows "why we built it this way" or "what happened last time we tried this approach" can now be captured, connected, and made available to both human teammates and AI agents.

See the awesome demos below, and I look forward to sharing more later this month in our 18.4 beta update!
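The "code block with three vulnerabilities, two downstream services, and an epic" scenario is a graph query. This is not GitLab's implementation — a toy sketch with hypothetical node names, showing the kind of relationship lookup such a graph enables:

```python
# Edges as (source, relation, target) triples over the development environment.
edges = [
    ("block:auth.py#L10", "has_vulnerability", "CVE-A"),
    ("block:auth.py#L10", "has_vulnerability", "CVE-B"),
    ("block:auth.py#L10", "has_vulnerability", "CVE-C"),
    ("block:auth.py#L10", "impacts", "svc:billing"),
    ("block:auth.py#L10", "impacts", "svc:notifications"),
    ("block:auth.py#L10", "part_of", "epic:performance"),
]

def neighbors(node, relation):
    """All targets reachable from `node` via `relation`."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

block = "block:auth.py#L10"
print(len(neighbors(block, "has_vulnerability")))  # 3
print(neighbors(block, "impacts"))
```

An agent that can run this kind of query before editing the block has context no amount of source-only indexing provides.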
·linkedin.com·
A Knowledge Graph of code by GitLab
GraphRAG doesn’t lack ideas, it struggles to scale up.
GraphRAG doesn’t lack ideas, it struggles to scale up.
GraphRAG doesn’t lack ideas, it struggles to scale up.

It’s easy to be impressed by a demo that runs on a few documents and carefully curated questions. In that controlled environment, the answers appear seamless, latency is low, and everything seems reliable. But the reality of enterprise is very different. Production workloads involve gigabytes of content, thousands of questions, and tens of thousands of documents that are constantly changing. In such an environment, manual review is no longer an option. You can’t hire teams to check every answer against every evolving dataset.

For GraphRAG to succeed in enterprise production, it must rely on automated control mechanisms that continuously validate efficiency. Validation cannot be based on subjective impressions of 'good answers'. What is needed is a synthetic index of accuracy: a measurable framework that automatically tests and reflects performance at each stage of the workflow. This means validating ingestion (are we capturing the correct data?), embeddings (are entities represented consistently?), retrieval (are relevant entities retrieved reliably?), and reasoning (is the output aligned with the validated context?). Each step must be monitored and tested continuously as data and queries evolve.

Another critical requirement is repeatability. In chatbot use cases, a degree of LLM creativity might be tolerated. In enterprise environments, however, it undermines trust. If the same query over the same dataset yields different answers each time, the system cannot be relied upon. Reducing the LLM’s freedom to enforce repeatable, auditable answers is essential for GraphRAG to transition from prototype to production.

The real differentiator will not be which graph model is 'purest', or which demo looks smoothest, but which implementation can demonstrate efficiency within enterprise constraints. This requires automation, accuracy, repeatability, and resilience at scale. Without these features, GraphRAG will remain an experimental solution rather than a practical one.

#GraphRAG #RAG #AITrust #AutomatedValidation #AIBenchmark
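The repeatability requirement is directly testable: run the same query several times and demand identical answers. A minimal sketch, not from the post — `answer_fn` stands in for a whole GraphRAG pipeline:

```python
def is_repeatable(answer_fn, query, runs=3):
    """True iff `runs` invocations of the pipeline on the same
    query produce byte-identical answers."""
    answers = [answer_fn(query) for _ in range(runs)]
    return len(set(answers)) == 1

# A deterministic pipeline (e.g. temperature 0, fixed retrieval order) passes:
deterministic = lambda q: f"answer({q})"
print(is_repeatable(deterministic, "total revenue 2024"))  # True

# A pipeline with residual nondeterminism fails:
counter = iter(range(1000))
unstable = lambda q: f"answer({q})/{next(counter)}"
print(is_repeatable(unstable, "total revenue 2024"))  # False
```

Checks like this can run continuously as part of the automated control mechanisms the post calls for, alongside per-stage accuracy tests for ingestion, embeddings, retrieval, and reasoning.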
GraphRAG doesn’t lack ideas, it struggles to scale up.
·linkedin.com·
GraphRAG doesn’t lack ideas, it struggles to scale up.
GoAI: Enhancing AI Students' Learning Paths and Idea Generation via Graph of AI Ideas
GoAI: Enhancing AI Students' Learning Paths and Idea Generation via Graph of AI Ideas
💡 Graph of Ideas -- LLMs paired with knowledge graphs can be great partners for ideation, exploration, and research.

We've all seen the classic detective corkboard, with pinned notes and pictures, all strung together with red twine. 🕵️ The digital version could be a mind-map, but you still have to draw everything by hand. What if you could just build one from a giant pile of documents?

Enter GoAI, a fascinating approach that just dropped on arXiv, combining knowledge graphs with LLMs for AI research idea generation. While the paper focuses on a graph of research papers, the approach is generalizable. Here's what caught my attention:
🔗 It builds knowledge graphs from AI papers where nodes are papers/concepts and edges capture semantic citation relationships - basically mapping how ideas actually connect and build on each other
🎯 The "Idea Studio" feature gives you feedback on innovation, clarity, and feasibility of your research ideas - like having a research mentor in your pocket
📈 Experiments show it helps produce clearer, more novel, and more impactful research ideas compared to traditional LLM approaches

The key insight? Current LLMs miss the semantic structure and prerequisite relationships in academic knowledge. This framework bridges that gap by making the connections explicit. As AI research accelerates, this approach can be used for any situation where you're looking for what's missing, rather than answering a question about what exists.

Read all the details in the paper... https://lnkd.in/ekGtCx9T
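The "prerequisite relationships" idea can be illustrated with a tiny builds-on graph. The topic names below are hypothetical, not GoAI's data or code — just a sketch of the traversal such a graph makes possible:

```python
# Edges: each topic lists what it directly builds on (semantic citations).
builds_on = {
    "GraphRAG survey": ["RAG", "Knowledge graphs"],
    "RAG": ["Transformers"],
    "Knowledge graphs": [],
    "Transformers": [],
}

def prerequisites(topic, seen=None):
    """Depth-first walk: everything `topic` transitively builds on."""
    if seen is None:
        seen = set()
    for dep in builds_on.get(topic, []):
        if dep not in seen:
            seen.add(dep)
            prerequisites(dep, seen)
    return seen

print(sorted(prerequisites("GraphRAG survey")))
```

With citations made explicit like this, a learning path is just a topological order over the prerequisites, and a "missing link" is a pair of related topics with no connecting path.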
Graph of Ideas -- LLMs paired with knowledge graphs can be great partners for ideation, exploration, and research.
·linkedin.com·
GoAI: Enhancing AI Students' Learning Paths and Idea Generation via Graph of AI Ideas
RDF Sketch extension for VS Code now works directly in the browser.
RDF Sketch extension for VS Code now works directly in the browser.
Our RDF Sketch extension (https://lnkd.in/d_T4SUGX) for VS Code now works directly in the browser. You can use it in:  - https://vscode.dev - https://github.dev - GitLab Web IDE We’d love your feedback if you try it out. #RDF #LinkedData #KnowledgeGraphs #VSCode #DevTools #SemanticWeb
RDF Sketch extension (https://lnkd.in/d_T4SUGX) for VS Code now works directly in the browser.
·linkedin.com·
RDF Sketch extension for VS Code now works directly in the browser.
Are you sure that Knowledge Graphs cannot support decision making based on probability? | LinkedIn
Are you sure that Knowledge Graphs cannot support decision making based on probability? | LinkedIn
There are people who seem to reject Knowledge Graphs while claiming that they do not allow AI Agents to make decisions under uncertainty. This article aims at refuting this claim, showing that, apart from supporting decisions based on reasoning grounded in logic, they are also capable of supporting decision making based on probability.
·linkedin.com·
Are you sure that Knowledge Graphs cannot support decision making based on probability? | LinkedIn
Integrating Knowledge Graphs into the Debian Ecosystem | Alexander Belikov
Integrating Knowledge Graphs into the Debian Ecosystem | Alexander Belikov
In an era where software systems are increasingly complex and interconnected, effectively managing the relationships between packages, maintainers, dependencies, and vulnerabilities is both a challenge and a necessity. This paper explores the integration of knowledge graphs into the Debian ecosystem as a powerful means to bring structure, semantics, and coherence to diverse sources of package-related data. By unifying information such as package metadata, security advisories, and reproducibility reports into a single graph-based representation, we enable richer visibility into the ecosystem's structure and behavior. Beyond constructing the DebKG graph, we demonstrate how it supports practical, high-impact applications — such as tracing vulnerability propagation and identifying gaps between community needs and development activity — thereby offering a foundation for smarter, data-informed decision-making within Debian.
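Tracing vulnerability propagation, as described above, amounts to walking the reverse dependency graph. A toy sketch with hypothetical package names — not the DebKG implementation:

```python
# Forward dependencies: package -> what it depends on.
depends_on = {
    "webapp": ["libssl", "libjson"],
    "mailer": ["libssl"],
    "libjson": [],
    "libssl": ["libc"],
    "libc": [],
}

# Invert to "who depends on me" for propagation.
reverse = {}
for pkg, deps in depends_on.items():
    for dep in deps:
        reverse.setdefault(dep, []).append(pkg)

def affected_by(vulnerable_pkg):
    """All packages transitively depending on a vulnerable package."""
    frontier, hit = [vulnerable_pkg], set()
    while frontier:
        pkg = frontier.pop()
        for parent in reverse.get(pkg, []):
            if parent not in hit:
                hit.add(parent)
                frontier.append(parent)
    return hit

print(sorted(affected_by("libc")))  # a libc advisory reaches everything above it
```

In the real graph the same traversal would start from a security advisory node and cross package, version, and maintainer metadata unified in DebKG.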
·alexander-belikov.github.io·
Integrating Knowledge Graphs into the Debian Ecosystem | Alexander Belikov
Understanding ecological systems using knowledge graphs: an application to highly pathogenic avian influenza | Bioinformatics Advances | Oxford Academic
Understanding ecological systems using knowledge graphs: an application to highly pathogenic avian influenza | Bioinformatics Advances | Oxford Academic
Abstract. Motivation: Ecological systems are complex. Representing heterogeneous knowledge about ecological systems is a pervasive challenge because data are…
·academic.oup.com·
Understanding ecological systems using knowledge graphs: an application to highly pathogenic avian influenza | Bioinformatics Advances | Oxford Academic
Ontologies as Living Systems | LinkedIn
Ontologies as Living Systems | LinkedIn
Earlier this week I came across a post by Miklós Molnár that sparked something I think the ontology community has needed to articulate for a long time. The post described a shift in how we might think about ontology mapping and alignment in the age of AI.
·linkedin.com·
Ontologies as Living Systems | LinkedIn
Semantics in use part 4: an interview with Michael Pool, Semantic Technology Product Leader @Bloomberg | LinkedIn
Semantics in use part 4: an interview with Michael Pool, Semantic Technology Product Leader @Bloomberg | LinkedIn
What is your role? I am a product manager in the Office of the CTO at Bloomberg, where I am responsible for developing products that help to deploy semantic solutions that facilitate our data integration and delivery. Bloomberg is a global provider of financial news and information, including real-time…
·linkedin.com·
Semantics in use part 4: an interview with Michael Pool, Semantic Technology Product Leader @Bloomberg | LinkedIn
Graph training: Graph Tech Demystified
Graph training: Graph Tech Demystified
Calling all data scientists, developers, and managers! 📢 Looking to level up your team's knowledge of graph technology? We're excited to share the recorded two-part training series, "Graph Tech Demystified", with the amazing Paco Nathan. This is your chance to get up to speed on graph fundamentals.

In Part 1: Intro to Graph Technologies, you'll learn:
- Core concepts in graph tech.
- Common pitfalls and what graph technology won't solve.
- The focus of graph analytics and measuring quality.
🎥 Recording https://lnkd.in/gCtCCZH5
📖 Slides https://lnkd.in/gbCnUjQN

In Part 2: Advanced Topics in Graph Technologies, we explore:
- Sophisticated graph patterns like motifs and probabilistic subgraphs.
- The intersection of Graph Neural Networks (GNNs) and Reinforcement Learning.
- Multi-agent systems and Graph RAG.
🎥 Recording https://lnkd.in/g_5B8nNC
📖 Slides https://lnkd.in/g6iMbJ_Z

Insider tip: the resources alone are enough to keep you busy far longer than the time it takes to watch the training!
Graph Tech Demystified
·linkedin.com·
Graph training: Graph Tech Demystified