ATOM is finally here! A scalable and fast approach that can build and continuously update temporal knowledge graphs, inspired by atomic bonds.
Alhamdulillah, ATOM is finally here! A scalable and fast approach that builds and continuously updates temporal knowledge graphs, inspired by atomic bonds. Just as matter is formed from atoms and galaxies from stars, knowledge is likely formed from atomic knowledge graphs.

Atomic knowledge graphs were born from our intention to address two recurring shortcomings of LLM-based KG construction methods: exhaustivity and stability. These methods often produce unstable KGs that change when the construction process is rerun, even with no changes to the input. Moreover, they fail to capture all facts in the input documents and usually overlook the temporal and dynamic aspects of real-world data.

What is the solution? Temporally aware atomic facts. Instead of constructing knowledge graphs from raw documents, we split them into atomic facts: self-contained, concise propositions. Temporal atomic KGs are constructed from each atomic fact. We then defined how the temporal atomic KGs are merged at the atomic level and how the temporal aspects are handled. We designed a binary merge algorithm that combines two TKGs and a parallel merge process that merges all TKGs simultaneously; the entire architecture operates in parallel.

ATOM employs dual-time modeling that distinguishes observation time from validity time and has 3 main modules:
- Module 1 (Atomic Fact Decomposition) splits input documents observed at time t into atomic facts using LLM-based prompting, where each temporal atomic fact is a short, self-contained snippet that conveys exactly one piece of information.
- Module 2 (Atomic TKGs Construction) extracts 5-tuples in parallel from each atomic fact to construct atomic temporal KGs, while embedding nodes and relations and addressing temporal resolution during extraction.
- Module 3 (Parallel Atomic Merge) employs a binary merge algorithm that merges pairs of atomic TKGs through iterative pairwise merging in parallel until convergence, with three resolution phases: (1) entity resolution, (2) relation name resolution, and (3) temporal resolution that merges observation and validity time sets for relations with similar (e_s, r_p, e_o).

The resulting TKG snapshot is then merged with the previous DTKG to yield the updated DTKG.

Results: Empirical evaluations demonstrate that ATOM achieves ~18% higher exhaustivity, ~17% better stability, and over 90% latency reduction compared to baseline methods (including iText2KG), demonstrating strong scalability potential for dynamic TKG construction.

Check out ATOM's architecture and code:
Preprint paper: https://lnkd.in/dsJzDaQc
Code: https://lnkd.in/drZUyisV
Website: (coming soon)
Example use cases: (coming soon)

Special thanks to the dream team: Ludovic Moncla, Khalid Benabdeslem, Rémy Cazabet, Pierre Cléau 📚📡
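The merge is the heart of the approach. Here is a minimal Python sketch of the binary merge and the iterative pairwise merging, assuming exact key matching stands in for ATOM's embedding-based entity and relation resolution; the Fact structure and all names are illustrative, not the paper's code.

```python
# Illustrative sketch of ATOM-style parallel pairwise merging. Exact
# (e_s, r_p, e_o) key matching stands in for the paper's embedding-based
# entity/relation resolution; none of these names are ATOM's own API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Fact:
    subj: str  # e_s
    pred: str  # r_p
    obj: str   # e_o
    observed: frozenset = field(default_factory=frozenset)  # observation times
    valid: frozenset = field(default_factory=frozenset)     # validity times

def binary_merge(g1: dict, g2: dict) -> dict:
    """Merge two atomic TKGs keyed by (e_s, r_p, e_o); matching relations
    get their observation and validity time sets unioned (phase 3)."""
    merged = dict(g1)
    for key, fact in g2.items():
        if key in merged:
            old = merged[key]
            merged[key] = Fact(*key,
                               observed=old.observed | fact.observed,
                               valid=old.valid | fact.valid)
        else:
            merged[key] = fact
    return merged

def parallel_merge(graphs: list) -> dict:
    """Iterative pairwise merging, each round in parallel, until one
    TKG snapshot remains."""
    while len(graphs) > 1:
        firsts, seconds = graphs[0::2], graphs[1::2]
        leftover = [firsts.pop()] if len(firsts) > len(seconds) else []
        with ThreadPoolExecutor() as pool:
            graphs = list(pool.map(binary_merge, firsts, seconds)) + leftover
    return graphs[0]

# Two atomic TKGs asserting the same fact at different observation times:
key = ("ATOM", "introducedAt", "2025")
g1 = {key: Fact(*key, observed=frozenset({"t1"}), valid=frozenset({"2025"}))}
g2 = {key: Fact(*key, observed=frozenset({"t2"}), valid=frozenset({"2025"}))}
print(parallel_merge([g1, g2])[key].observed)  # frozenset({'t1', 't2'})
```

The dual-time modeling shows up in the two separate time sets: merging unions both, so a fact observed twice keeps one validity interval but two observation stamps.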
·linkedin.com·
Is OpenAI quietly moving toward knowledge graphs?
Is OpenAI quietly moving toward knowledge graphs?

Yesterday's OpenAI DevDay was all about new no-code tools to create agents. Impressive. But what caught my attention wasn't what they announced… it's what they didn't talk about.

During the summer, OpenAI released a Cookbook update introducing the concept of Temporal Agents (see below), connecting it to Subject–Predicate–Object triples: the very foundation of a knowledge graph. If you've ever worked with graphs, you know this means something big: they're not just building agents anymore; they're building memory, relationships, and meaning. When you see "London – isCapitalOf – United Kingdom" in their official docs, you realize they're experimenting with how to represent knowledge itself. And with any good knowledge graph… comes an ontology.

So here's my prediction: ChatGPT-6 will come with a built-in graph that connects everything about you. The question is: do you want their AI to know everything about you? Or do you want to build your own sovereign AI, one that you own, built from open-source intelligence and collective knowledge?

Would love to know what you think. Is this me hallucinating, or is it a weak signal? 👇
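For readers who haven't met triples before, here is a minimal plain-Python sketch of the Subject–Predicate–Object idea the post refers to (illustrative only; this is not OpenAI's Cookbook code):

```python
# A knowledge graph is, at its core, a set of (subject, predicate, object)
# triples; a temporal agent would additionally stamp each triple with the
# time during which it was observed or held true.
triples = {
    ("London", "isCapitalOf", "United Kingdom"),
    ("United Kingdom", "memberOf", "G7"),
}

def objects_of(subject: str, predicate: str) -> set:
    """Follow one edge type out of a node: a one-hop graph query."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects_of("London", "isCapitalOf"))  # {'United Kingdom'}
```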
·linkedin.com·
Algorithmic vs. Symbolic Reasoning: Is Graph Data Science a critical, transformative layer for GraphRAG?
Is Graph Data Science a critical, transformative layer for GraphRAG? The field of enterprise Artificial Intelligence (AI) is undergoing a significant architectural evolution. The initial enthusiasm for Large Language Models (LLMs) has matured into a pragmatic recognition of their limitations, particularly…
·linkedin.com·
Flexible-GraphRAG
Flexible GraphRAG or RAG is now flexing to the max using LlamaIndex: it supports 7 graph databases, 10 vector databases, 13 data sources, LLMs, Docling doc processing, auto-created KGs, GraphRAG, hybrid search, and AI chat (shown with the Hyland products web page as data source). Apache 2.0 open source.

Graph: Neo4j, ArcadeDB, FalkorDB, Kuzu, NebulaGraph (coming: Memgraph and Amazon Neptune)
Vector: Qdrant, Elastic, OpenSearch, Neo4j vector, Milvus (coming: Weaviate, Chroma, Pinecone, PostgreSQL + pgvector, LanceDB)
Document processing: Docling
Data sources (via LlamaIndex readers): working: web pages, Wikipedia, YouTube; untested: Google Drive, Microsoft OneDrive, S3, Azure Blob, GCS, Box, SharePoint; previous: filesystem, Alfresco, CMIS
LLMs: LlamaIndex LLMs (OpenAI, Ollama, Claude, Gemini, etc.)
UIs and servers: React, Vue, Angular UIs; MCP server; FastAPI server

GitHub stevereiner/flexible-graphrag: https://lnkd.in/eUEeF2cN
X.com post on Flexible GraphRAG or RAG max flexing: https://lnkd.in/gHpTupAr
Integrated Semantics blog: https://lnkd.in/ehpjTV7d
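To give a feel for the kind of pipeline this wires together, here is a generic LlamaIndex sketch (web source, property graph in Neo4j, then a query). This is not Flexible-GraphRAG's own API; the classes are standard LlamaIndex integrations, and the URL and credentials are placeholders.

```python
# Generic LlamaIndex property-graph pipeline sketch, NOT the
# flexible-graphrag API. Assumes OPENAI_API_KEY is set for the default
# LLM/embeddings, and a local Neo4j instance with placeholder credentials.
from llama_index.core import PropertyGraphIndex
from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore
from llama_index.readers.web import SimpleWebPageReader

# Data source: one of the LlamaIndex web readers.
docs = SimpleWebPageReader(html_to_text=True).load_data(
    ["https://www.hyland.com/en/products"]  # placeholder URL
)

# Graph database backend (swap in ArcadeDB, FalkorDB, Kuzu, ... similarly).
graph_store = Neo4jPropertyGraphStore(
    username="neo4j", password="password", url="bolt://localhost:7687"
)

# Auto-create a KG from the documents and ask a question over it.
index = PropertyGraphIndex.from_documents(docs, property_graph_store=graph_store)
print(index.as_query_engine().query("Which products are mentioned?"))
```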
·linkedin.com·
Introducing the GitLab Knowledge Graph
Today, I'd like to introduce the GitLab Knowledge Graph. This release includes a code indexing engine, written in Rust, that turns your codebase into a live, embeddable graph database for LLM RAG. You can install it with a simple one-line script, parse local repositories directly in your editor, and connect via MCP to query your workspace and over 50,000 files in under 100 milliseconds.

We also saw GKG agents scoring up to 10% higher on the SWE-bench Lite benchmark, with just a few tools and a small prompt added to opencode (an open-source coding agent). On average, we observed a 7% accuracy gain across our eval runs, and GKG agents were able to solve new tasks compared to the baseline agents. You can read more from the team's research here: https://lnkd.in/egiXXsaE

This release is just the first step: we aim for this local version to serve as the backbone of a Knowledge Graph service that enables you to query the entire GitLab Software Development Life Cycle, from an issue down to a single line of code.

I am incredibly proud of the work the team has done. Thank you, Michael U., Jean-Gabriel Doyon, Bohdan Parkhomchuk, Dmitry Gruzd, Omar Qunsul, and Jonathan Shobrook.

You can watch Bill Staples and me present this and more in the GitLab 18.4 release here: https://lnkd.in/epvjrhqB
Try today at: https://lnkd.in/eAypneFA
Roadmap: https://lnkd.in/eXNYQkEn
Watch more below for a complete, in-depth tutorial on what we've built:
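Since the graph is exposed over MCP, any MCP client can query it. Below is a minimal sketch using the reference `mcp` Python SDK; the server command (`gkg mcp`) and the tool name are hypothetical placeholders, so check the GKG docs for the real ones.

```python
# Minimal MCP client sketch for querying a local knowledge-graph server.
# The server command and tool name are hypothetical placeholders -- only
# the `mcp` SDK calls themselves are real.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="gkg", args=["mcp"])  # placeholder
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print([t.name for t in tools.tools])
            result = await session.call_tool(   # hypothetical tool name
                "search_codebase", {"query": "definition of parse_repo"}
            )
            print(result.content)

asyncio.run(main())
```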
·linkedin.com·
GraphSearch: An Agentic Deep‑Search Workflow for Graph Retrieval‑Augmented Generation
GraphSearch: An Agentic Deep-Search Workflow for Graph Retrieval-Augmented Generation

Why Current AI Search Falls Short When You Need Real Answers
What happens when you ask an AI system a complex question that requires connecting multiple pieces of information? Most current approaches retrieve some relevant documents, generate an answer, and call it done. But this single-pass strategy often misses critical evidence.

👉 The Problem with Shallow Retrieval
Traditional retrieval-augmented generation (RAG) systems work like a student who only skims the first few search results before writing an essay. They grab what seems relevant on the surface but miss deeper connections that would lead to better answers. When researchers tested these systems on complex multi-hop questions, they found a consistent pattern: the AI would confidently provide answers based on incomplete evidence, leading to logical gaps and missing key facts.

👉 A New Approach: Deep Searching with Dual Channels
Researchers from IDEA Research and Hong Kong University of Science and Technology developed GraphSearch, which works more like a thorough investigator than a quick searcher. The system breaks down complex questions into smaller, manageable pieces, then searches through both text documents and structured knowledge graphs. Think of it as having two different research assistants: one excellent at finding descriptive information in documents, another skilled at tracing relationships between entities.

👉 How It Actually Works
Instead of one search-and-answer cycle, GraphSearch uses six coordinated modules (sketched in code below):
- Query decomposition splits complex questions into atomic sub-questions
- Context refinement filters out noise from retrieved information
- Query grounding fills in missing details from previous searches
- Logic drafting organizes evidence into coherent reasoning chains
- Evidence verification checks if the reasoning holds up
- Query expansion generates new searches to fill identified gaps
The system continues this process until it has sufficient evidence to provide a well-grounded answer.

👉 Real Performance Gains
Testing across six different question-answering benchmarks showed consistent improvements. On the MuSiQue dataset, for example, answer accuracy jumped from 35% to 51% when GraphSearch was integrated with existing graph-based systems. The approach works particularly well under constrained conditions: when you have limited computational resources for retrieval, the iterative searching strategy maintains performance better than single-pass methods.

This research points toward more reliable AI systems that can handle the kind of complex reasoning we actually need in practice.

Paper: "GraphSearch: An Agentic Deep Searching Workflow for Graph Retrieval-Augmented Generation" by Yang et al.
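The control flow the post describes can be sketched in plain Python. Everything below is an illustrative stub, not the authors' code: in the real system each module is backed by LLM prompts and dual-channel (text + graph) retrievers.

```python
# Illustrative stub of GraphSearch's agentic deep-search loop. Module
# names follow the post; function bodies are placeholders, not the
# authors' implementation.
from dataclasses import dataclass, field

@dataclass
class State:
    question: str
    sub_questions: list = field(default_factory=list)
    evidence: list = field(default_factory=list)

def decompose(q: str) -> list:              # Module: query decomposition
    return [q]                              # stub: one atomic sub-question

def retrieve(sq: str) -> list:              # dual-channel retrieval stub
    return [f"text/graph evidence for: {sq}"]

def refine(ev: list) -> list:               # Module: context refinement
    return [e for e in ev if e]             # stub: drop empty chunks

def ground(sq: str, state: State) -> str:   # Module: query grounding
    return sq                               # stub: fill slots from prior answers

def draft_logic(state: State) -> str:       # Module: logic drafting
    return " -> ".join(state.evidence)

def verify(chain: str) -> bool:             # Module: evidence verification
    return len(chain) > 0                   # stub: accept non-empty chains

def expand(state: State) -> list:           # Module: query expansion
    return []                               # stub: no gaps identified

def graph_search(question: str, max_rounds: int = 3) -> str:
    state = State(question, sub_questions=decompose(question))
    for _ in range(max_rounds):
        for sq in state.sub_questions:
            state.evidence += refine(retrieve(ground(sq, state)))
        chain = draft_logic(state)
        if verify(chain):                   # enough evidence: answer
            return chain
        state.sub_questions = expand(state) # else search the gaps
    return draft_logic(state)

print(graph_search("Who directed the film whose sequel won Best Picture?"))
```

The key contrast with single-pass RAG is the outer loop: verification can send the system back to retrieval with expanded queries instead of answering from incomplete evidence.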
·linkedin.com·
T-Box: The secret sauce of knowledge graphs and AI
T-Box: The secret sauce of knowledge graphs and AI

Ever wondered how knowledge graphs "understand" the world? Meet the T-Box, the part that tells your graph what exists and how it can relate.

Think of it like building a LEGO set:
T-Box (Terminological Box) = the instruction manual (defines the pieces and how they fit)
A-Box (Assertional Box) = the LEGO pieces you actually have (your data, your instances)

Why it's important for RDF knowledge graphs:
- Gives your data structure and rules, so your graph doesn't turn into spaghetti
- Enables reasoning, letting the system infer new facts automatically
- Keeps your graph consistent and maintainable, even as it grows

Why it's better than other models:
- Traditional databases just store rows and columns; relationships have no meaning
- RDF + T-Box = data that can explain itself and connect across domains

Why AI loves it:
- AI can reason over knowledge, not just crunch numbers
- Enables smarter recommendations, insights, and predictions based on structured knowledge

Quick analogy (see the sketch below):
T-Box = blueprint/instruction manual (the ontology / what is possible)
A-Box = the real-world building (the facts / what is true)
Together = AI-friendly, smart knowledge graph

#KnowledgeGraph #RDF #AI #SemanticWeb #DataScience #GraphData
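Here is a minimal sketch of the T-Box/A-Box split in Python with rdflib. The `ex:` vocabulary is invented for illustration, and a SPARQL property path stands in for what a full RDFS reasoner would infer.

```python
# T-Box vs. A-Box in RDF, sketched with rdflib. The ex: vocabulary is
# invented; the property path rdfs:subClassOf* mimics RDFS inference.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# T-Box: the terminology -- what kinds of things exist and how they relate.
g.add((EX.City, RDF.type, RDFS.Class))
g.add((EX.Capital, RDFS.subClassOf, EX.City))   # every Capital is a City
g.add((EX.isCapitalOf, RDFS.domain, EX.Capital))
g.add((EX.isCapitalOf, RDFS.range, EX.Country))

# A-Box: the assertions -- the concrete facts (the LEGO pieces you have).
g.add((EX.London, RDF.type, EX.Capital))
g.add((EX.London, EX.isCapitalOf, EX.UnitedKingdom))

# "Reasoning": London counts as a City even though no triple says so
# directly, because the T-Box declares Capital a subclass of City.
q = "SELECT ?thing WHERE { ?thing a/rdfs:subClassOf* ex:City }"
for row in g.query(q, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.thing)  # -> http://example.org/London
```

A dedicated reasoner (e.g. the owlrl package) would materialize such inferred triples into the graph itself; the query-time path above is just the lightest way to see the T-Box doing work.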
·linkedin.com·