Alhamdulillah, ATOM is finally here! A scalable and fast approach that can build and continuously update temporal knowledge graphs, inspired by atomic bonds.
Just as matter is formed from atoms, and galaxies are formed from stars, knowledge is likely to be formed from atomic knowledge graphs.
Atomic knowledge graphs were born from our intention to solve two common problems in LLM-based KG construction: lack of exhaustivity and lack of stability. These methods often produce unstable KGs that change when the construction process is rerun, even when nothing has changed. Moreover, they fail to capture all the facts in the input documents and usually overlook the temporal and dynamic aspects of real-world data.
What is the solution? Atomic facts that are temporally aware.
Instead of constructing knowledge graphs from raw documents, we split them into atomic facts, which are self-contained and concise propositions. Temporal atomic KGs are constructed from each atomic fact. Then we define how the temporal atomic KGs are merged at the atomic level and how their temporal aspects are handled. We design a binary merge algorithm that combines two TKGs and a parallel merge process that merges all TKGs simultaneously. The entire architecture operates in parallel.
ATOM employs dual-time modeling that distinguishes observation time from validity time, and it has three main modules:
- Module 1 (Atomic Fact Decomposition) splits input documents observed at time t into atomic facts using LLM-based prompting, where each temporal atomic fact is a short, self-contained snippet that conveys exactly one piece of information.
- Module 2 (Atomic TKGs Construction) extracts 5-tuples in parallel from each atomic fact to construct atomic temporal KGs, while embedding nodes and relations and addressing temporal resolution during extraction.
- Module 3 (Parallel Atomic Merge) employs a binary merge algorithm to merge pairs of atomic TKGs through iterative pairwise merging in parallel until convergence, with three resolution phases: (1) entity resolution, (2) relation name resolution, and (3) temporal resolution that merges observation and validity time sets for relations with similar (e_s, r_p, e_o). The resulting TKG snapshot is then merged with the previous DTKG to yield the updated DTKG.
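For intuition, here is a minimal sketch of how the three modules could compose. It is illustrative only: `decompose_llm` and `extract_tkg_llm` are hypothetical stubs for ATOM's LLM prompts, and the merge keys on exact (e_s, r_p, e_o) matches where the real pipeline uses embedding-based similarity for entity and relation resolution.

```python
# Illustrative sketch of ATOM's pipeline, not the authors' implementation.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class AtomicTKG:
    # 5-tuples: (subject, predicate, object, validity_times, observation_times)
    # with the two time components stored as frozensets so tuples stay hashable
    tuples: set = field(default_factory=set)

def decompose_llm(document: str) -> list[str]:
    """Module 1 (stub): prompt an LLM to split a document into atomic facts,
    each a short, self-contained snippet carrying exactly one piece of info."""
    raise NotImplementedError("call your LLM here")

def extract_tkg_llm(fact: str, t_obs: str) -> AtomicTKG:
    """Module 2 (stub): extract a temporal 5-tuple from one atomic fact."""
    raise NotImplementedError("call your LLM here")

def binary_merge(a: AtomicTKG, b: AtomicTKG) -> AtomicTKG:
    """Module 3: merge two TKGs. For relations sharing (e_s, r_p, e_o),
    union their validity and observation time sets."""
    merged: dict = {}
    for s, p, o, valid, obs in a.tuples | b.tuples:
        key = (s, p, o)  # toy keying; real ATOM matches by embedding similarity
        if key in merged:
            merged[key] = (merged[key][0] | valid, merged[key][1] | obs)
        else:
            merged[key] = (valid, obs)
    return AtomicTKG({(s, p, o, v, t) for (s, p, o), (v, t) in merged.items()})

def parallel_merge(graphs: list[AtomicTKG]) -> AtomicTKG:
    """Iterative pairwise merging, in parallel, until one TKG remains."""
    with ThreadPoolExecutor() as pool:
        while len(graphs) > 1:
            pairs = list(zip(graphs[0::2], graphs[1::2]))
            nxt = list(pool.map(lambda ab: binary_merge(*ab), pairs))
            if len(graphs) % 2:      # odd one out: carry it to the next round
                nxt.append(graphs[-1])
            graphs = nxt
    return graphs[0]
```

The same binary merge can then fold the resulting snapshot into the previous DTKG, which is how the graph keeps updating continuously.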
Results: Empirical evaluations show that ATOM achieves ~18% higher exhaustivity, ~17% better stability, and over 90% latency reduction compared to baseline methods (including iText2KG), demonstrating strong scalability potential for dynamic TKG construction.
Check out ATOM's architecture and code:
Preprint Paper: https://lnkd.in/dsJzDaQc
Code: https://lnkd.in/drZUyisV
Website: (coming soon)
Example use cases: (coming soon)
Special thanks to the dream team: Ludovic Moncla, Khalid Benabdeslem, Rémy Cazabet, Pierre Cléau
Pseudo-Knowledge Graphs for Better RAG | by Devashish Datt Mamgain | Oct, 2025 | Towards AI
Retrieval-Augmented Generation (RAG) was supposed to give Large Language Models perfect memory: ask a question, fetch the exact facts, and generate a fluent and …
Cognee - AI Agents with LangGraph + cognee: Persistent Semantic Memory
Build AI agents with LangGraph and cognee: persistent semantic memory across sessions for cleaner context and higher accuracy. See the demo—get started now.
Is OpenAI quietly moving toward knowledge graphs?
Yesterday’s OpenAI DevDay was all about new no-code tools to create agents. Impressive. But what caught my attention wasn’t what they announced… it’s what they didn’t talk about.
During the summer, OpenAI released a Cookbook update introducing the concept of Temporal Agents (see below), connecting it to Subject–Predicate–Object triples: the very foundation of a knowledge graph.
If you’ve ever worked with graphs, you know this means something big:
they’re not just building agents anymore; they’re building memory, relationships, and meaning.
When you see “London – isCapitalOf – United Kingdom” in their official docs, you realize they’re experimenting with how to represent knowledge itself.
And with any good knowledge graph… comes an ontology.
So here’s my prediction:
ChatGPT-6 will come with a built-in graph that connects everything about you.
The question is: do you want their AI to know everything about you?
Or do you want to build your own sovereign AI, one that you own, built from open-source intelligence and collective knowledge?
Would love to know what you think. Am I hallucinating, or is this a weak signal? 👇
Algorithmic vs. Symbolic Reasoning: Is Graph Data Science a critical, transformative layer for GraphRAG?
Is Graph Data Science a critical, transformative layer for GraphRAG? The field of enterprise Artificial Intelligence (AI) is undergoing a significant architectural evolution. The initial enthusiasm for Large Language Models (LLMs) has matured into a pragmatic recognition of their limitations, partic…
Today, I'd like to introduce the GitLab Knowledge Graph. This release includes a code indexing engine, written in Rust, that turns your codebase into a live, embeddable graph database for LLM RAG. You can install it with a simple one-line script, parse local repositories directly in your editor, and connect via MCP to query your workspace and over 50,000 files in under 100 milliseconds.
We also saw GKG agents scoring up to 10% higher on the SWE-Bench-lite benchmark, with just a few tools and a small prompt added to opencode (an open-source coding agent). On average, we observed a 7% accuracy gain across our eval runs, and GKG agents solved tasks that the baseline agents could not. You can read more from the team's research here: https://lnkd.in/egiXXsaE.
This release is just the first step: we aim for this local version to serve as the backbone of a Knowledge Graph service that enables you to query the entire GitLab Software Development Life Cycle—from an Issue down to a single line of code.
I am incredibly proud of the work the team has done. Thank you, Michael U., Jean-Gabriel Doyon, Bohdan Parkhomchuk, Dmitry Gruzd, Omar Qunsul, and Jonathan Shobrook. You can watch Bill Staples and me present this and more in the GitLab 18.4 release here: https://lnkd.in/epvjrhqB
Try today at: https://lnkd.in/eAypneFA
Roadmap: https://lnkd.in/eXNYQkEn
Watch more below for a complete, in-depth tutorial on what we've built.
GraphSearch: An Agentic Deep‑Search Workflow for Graph Retrieval‑Augmented Generation
Why Current AI Search Falls Short When You Need Real Answers
What happens when you ask an AI system a complex question that requires connecting multiple pieces of information? Most current approaches retrieve some relevant documents, generate an answer, and call it done. But this single-pass strategy often misses critical evidence.
👉 The Problem with Shallow Retrieval
Traditional retrieval-augmented generation (RAG) systems work like a student who only skims the first few search results before writing an essay. They grab what seems relevant on the surface but miss deeper connections that would lead to better answers.
When researchers tested these systems on complex multi-hop questions, they found a consistent pattern: the AI would confidently provide answers based on incomplete evidence, leading to logical gaps and missing key facts.
👉 A New Approach: Deep Searching with Dual Channels
Researchers from IDEA Research and Hong Kong University of Science and Technology developed GraphSearch, which works more like a thorough investigator than a quick searcher.
The system breaks down complex questions into smaller, manageable pieces, then searches through both text documents and structured knowledge graphs. Think of it as having two different research assistants: one excellent at finding descriptive information in documents, another skilled at tracing relationships between entities.
👉 How It Actually Works
Instead of one search-and-answer cycle, GraphSearch uses six coordinated modules:
- Query decomposition splits complex questions into atomic sub-questions
- Context refinement filters out noise from retrieved information
- Query grounding fills in missing details from previous searches
- Logic drafting organizes evidence into coherent reasoning chains
- Evidence verification checks if the reasoning holds up
- Query expansion generates new searches to fill identified gaps
The system continues this process until it has sufficient evidence to provide a well-grounded answer.
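Read as pseudocode, that loop is straightforward. The sketch below is a schematic of the six modules as described in the post, not the authors' code; every helper is a hypothetical stub that a real system would back with an LLM prompt, a text retriever, or a graph retriever.

```python
# Schematic of the six-module GraphSearch loop (illustrative stubs only).
def decompose(q: str) -> list[str]: return [q]             # query decomposition
def ground(q: str, ev: list[str]) -> str: return q         # query grounding
def retrieve_text(q: str) -> list[str]: return []          # text channel
def retrieve_graph(q: str) -> list[str]: return []         # graph channel
def refine(hits: list[str]) -> list[str]: return hits      # context refinement
def draft_logic(q: str, ev: list[str]) -> str: return " -> ".join(ev) or q
def verify(draft: str, ev: list[str]) -> tuple[bool, list[str]]:
    return bool(ev), []                                    # evidence verification
def expand(gaps: list[str]) -> list[str]: return gaps      # query expansion

def graph_search(question: str, max_rounds: int = 5) -> str:
    sub_queries = decompose(question)
    evidence: list[str] = []
    draft = question
    for _ in range(max_rounds):
        for q in sub_queries:
            grounded = ground(q, evidence)
            # dual channels: documents for descriptions, graph for relations
            hits = retrieve_text(grounded) + retrieve_graph(grounded)
            evidence += refine(hits)
        draft = draft_logic(question, evidence)
        ok, gaps = verify(draft, evidence)
        if ok:                         # enough evidence: stop searching
            break
        sub_queries = expand(gaps)     # otherwise, search to fill the gaps
    return draft                       # grounded reasoning chain -> answer
```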
👉 Real Performance Gains
Testing across six different question-answering benchmarks showed consistent improvements. On the MuSiQue dataset, for example, answer accuracy jumped from 35% to 51% when GraphSearch was integrated with existing graph-based systems.
The approach works particularly well under constrained conditions: when you have limited computational resources for retrieval, the iterative searching strategy maintains performance better than single-pass methods.
This research points toward more reliable AI systems that can handle the kind of complex reasoning we actually need in practice.
Paper: "GraphSearch: An Agentic Deep Searching Workflow for Graph Retrieval-Augmented Generation" by Yang et al.
Cognee - Graph-Aware Embeddings by cognee: For Even Smarter Retrieval
Cognee introduces graph-aware embeddings: graph signals boost semantic search for faster and more precise retrievals in paid plans. Learn more and book a call.
HiRAG: Retrieval-Augmented Generation with Hierarchical Knowledge
Graph-based Retrieval-Augmented Generation (RAG) methods have significantly enhanced the performance of large language models (LLMs) in domain-specific tasks. However, existing RAG methods do not...
T-Box: The secret sauce of knowledge graphs and AI
Ever wondered how knowledge graphs “understand” the world? Meet the T-Box, the part that tells your graph what exists and how it can relate.
Think of it like building a LEGO set:
T-Box (Terminological Box) = the instruction manual (defines the pieces and how they fit)
A-Box (Assertional Box) = the LEGO pieces you actually have (your data, your instances)
Why it’s important for RDF knowledge graphs:
- Gives your data structure and rules, so your graph doesn’t turn into spaghetti
- Enables reasoning, letting the system infer new facts automatically
- Keeps your graph consistent and maintainable, even as it grows
Why it’s better than other models:
- Traditional databases just store rows and columns; relationships have no meaning
- RDF + T-Box = data that can explain itself and connect across domains
Why AI loves it:
- AI can reason over knowledge, not just crunch numbers
- Enables smarter recommendations, insights, and predictions based on structured knowledge
Quick analogy:
T-Box = blueprint/instruction manual (the ontology / what is possible)
A-Box = the real-world building (the facts / what is true)
Together = AI-friendly, smart knowledge graph
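To make the split concrete, here is a toy sketch using Python's rdflib; the ex: vocabulary is invented for illustration, and the hand-applied rule stands in for a full RDFS/OWL reasoner such as owlrl.

```python
# Toy T-Box / A-Box example with rdflib (pip install rdflib).
# The ex: vocabulary is invented for illustration.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# T-Box: the instruction manual -- what exists and how it can relate
g.add((EX.City, RDF.type, RDFS.Class))
g.add((EX.Country, RDF.type, RDFS.Class))
g.add((EX.isCapitalOf, RDFS.domain, EX.City))
g.add((EX.isCapitalOf, RDFS.range, EX.Country))

# A-Box: the pieces you actually have -- your facts
g.add((EX.London, EX.isCapitalOf, EX.UnitedKingdom))

# Reasoning: one RDFS rule (domain) applied by hand; a real reasoner
# such as owlrl would apply all the rules automatically
for prop, cls in list(g.subject_objects(RDFS.domain)):
    for s, _ in list(g.subject_objects(prop)):
        g.add((s, RDF.type, cls))   # infers: ex:London is an ex:City

print(g.serialize(format="turtle"))
```

Notice that "London is a City" was never stated: the T-Box plus one rule produced it, which is exactly the "infer new facts automatically" point above.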
#KnowledgeGraph #RDF #AI #SemanticWeb #DataScience #GraphData
Youtu-GraphRAG: Vertically Unified Agents for Graph...
Graph retrieval-augmented generation (GraphRAG) has effectively enhanced large language models in complex reasoning by organizing fragmented knowledge into explicitly structured graphs. Prior...