GraphNews

4353 bookmarks
DRAG introduces a novel distillation framework that transfers RAG capabilities from LLMs to SLMs through Evidence-based distillation and Graph-based structuring
Small Models, Big Knowledge: How DRAG Bridges the AI Efficiency-Accuracy Gap

👉 Why This Matters
Modern AI systems face a critical tension: large language models (LLMs) deliver impressive knowledge recall but demand massive computational resources, while smaller models (SLMs) struggle with factual accuracy and "hallucinations." Traditional retrieval-augmented generation (RAG) systems amplify this problem by requiring constant updates to vast knowledge bases.

👉 The Innovation
DRAG introduces a novel distillation framework that transfers RAG capabilities from LLMs to SLMs through two key mechanisms:
1. Evidence-based distillation: Filters and ranks factual snippets from teacher LLMs
2. Graph-based structuring: Converts retrieved knowledge into relational graphs to preserve critical connections
This dual approach reduces model size requirements by 10-100x while improving factual accuracy by up to 27.7% compared to prior methods like MiniRAG.

👉 How It Works
1. Evidence generation: A large teacher LLM produces multiple context-relevant facts
2. Semantic filtering: Combines cosine similarity and LLM scoring to retain top evidence
3. Knowledge graph creation: Extracts entity relationships to form structured context
4. Distilled inference: SLMs generate answers using both filtered text and graph data
The process mimics how humans combine raw information with conceptual understanding, enabling smaller models to "think" like their larger counterparts without the computational overhead.

👉 Privacy Bonus
DRAG adds a privacy layer by:
- Local query sanitization before cloud processing
- Returning only de-identified knowledge graphs
Tests show a 95.7% reduction in potential personal data leakage while maintaining answer quality.

👉 Why It’s Significant
This work addresses three critical challenges simultaneously:
- Makes advanced RAG capabilities accessible on edge devices
- Reduces hallucination rates through structured knowledge grounding
- Preserves user privacy in cloud-based AI interactions
The GitHub repository provides full implementation details, enabling immediate application in domains like healthcare diagnostics, legal analysis, and educational tools where accuracy and efficiency are non-negotiable.
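As a rough illustration of the semantic-filtering step described above, here is a minimal sketch that ranks teacher-generated evidence by blending embedding similarity with an LLM relevance score. embed() and llm_score() are hypothetical placeholders supplied by the caller, not functions from the DRAG repository.

```python
# Minimal sketch of DRAG-style evidence filtering (step 2 of the pipeline).
# embed() and llm_score() are hypothetical placeholders, not the paper's code.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def filter_evidence(query: str, evidences: list[str], embed, llm_score, top_k: int = 5):
    """Rank teacher-generated evidence by a blend of similarity and LLM rating."""
    q = embed(query)
    scored = []
    for ev in evidences:
        sim = cosine(q, embed(ev))      # semantic similarity to the query
        rating = llm_score(query, ev)   # e.g. a 0-1 relevance score from the teacher LLM
        scored.append((0.5 * sim + 0.5 * rating, ev))
    scored.sort(reverse=True)
    return [ev for _, ev in scored[:top_k]]
```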
·linkedin.com·
Synalinks release 0.3 focuses on the Knowledge Graph layer
Your agents, multi-agent systems and LM apps are still failing at basic logic? We got you covered. Today we're excited to announce Synalinks 0.3, our Keras-based neuro-symbolic framework that bridges the gap between neural networks and symbolic reasoning. Our latest release focuses entirely on the Knowledge Graph layer, delivering production-ready solutions for real-world applications:
- Fully constrained KG extraction powered by Pydantic, ensuring that relations connect to the correct entity types.
- Seamless integration with our Agents/Chain-of-Thought and Self-Critique modules.
- Automatic entity alignment with HNSW.
- KG extraction and retrieval optimizable with OPRO and RandomFewShot algorithms.
- 100% reliable Cypher query generation through logic-enhanced hybrid triplet retrieval (works with local models too!).
- We took extra care to avoid Cypher injection vulnerabilities (yes, we're looking at you, LangGraph 👀).
- The retriever doesn't need the graph schema, as it is included in the way we constrain the generation, avoiding context pollution (and hence improving accuracy).
- We also fixed the Synalinks CLI for Windows users, along with some minor bug fixes.
Our technology combines constrained structured output with in-context reinforcement learning, making enterprise-grade reasoning both highly efficient and cost-effective. Currently supporting Neo4j, with plans to expand to other graph databases. We built this initially for a client project, but the results were too good not to share with the community. Want to add support for your preferred graph database? It's just one file to implement! Drop a comment and let's make it happen!
#AI #MachineLearning #KnowledgeGraphs #NeuralNetworks #Keras #Neo4j #AIAgents #TechInnovation #OpenSource
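The type-constrained extraction idea can be sketched with plain Pydantic models: relation fields are typed to specific entity classes, so a constrained decoder cannot emit a relation between the wrong kinds of entities. This is an illustration of the concept only, assuming hypothetical Person/Company types, not Synalinks' actual API.

```python
# Illustrative sketch of type-constrained KG extraction with Pydantic
# (the idea behind constrained relations, not Synalinks' actual classes).
from typing import Literal
from pydantic import BaseModel

class Person(BaseModel):
    label: Literal["Person"] = "Person"
    name: str

class Company(BaseModel):
    label: Literal["Company"] = "Company"
    name: str

class WorksAt(BaseModel):
    # The relation only accepts the correct endpoint types, so a schema-constrained
    # decoder cannot emit e.g. Company -> WorksAt -> Company.
    subject: Person
    object: Company

class ExtractionResult(BaseModel):
    entities: list[Person | Company]
    relations: list[WorksAt]
```

A constrained-generation layer would then force the LLM's output to validate against ExtractionResult, which is what makes relation endpoints reliable by construction.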
·linkedin.com·
From Dictionaries to Ontologies: Bridging Human Understanding and Machine Reasoning | LinkedIn
In the long tradition of dictionaries, the essence of meaning has always relied on two elements: a symbol (usually a word or a phrase) and a definition—an intelligible explanation composed using other known terms. This recursive practice builds a web of meanings, where each term is explained using other terms.
·linkedin.com·
Ontology for knowledge sharing in design
In complex engineering systems, how can we ensure that design knowledge doesn't get lost in spreadsheets, silos, or forgotten documents? One of the greatest challenges in the design domain and product development isn't a lack of data, but a lack of meaningful, connected knowledge. This is where ontologies come in.

An ontology is more than just a taxonomy or glossary. It's a formal representation of concepts and relationships that enables shared understanding across teams, tools, and disciplines. In the design domain, ontologies serve as a semantic backbone, helping engineers and systems interpret, reuse, and reason over knowledge that would otherwise remain trapped in silos.

Why does this matter? Because design decisions are rarely made in isolation. Whether it's integrating functional models, analysing field failures, or updating risk assessment documents, we need a way to bridge data across multiple sources and domains. Ontologies enable that integration by creating a common language and structured relationships, allowing information to flow intelligently from design to deployment.

Ontology-driven systems also support human decision-making by enhancing traceability, contextualising feedback, and enabling AI-powered insights. It's not about replacing designers; it's about augmenting their intuition with structured, reusable knowledge. As we move towards more data-driven and model-based approaches in engineering, ontologies are key to unlocking collaboration, innovation, and resilience in product development.

#Ontology #KnowledgeEngineering #SystemsThinking #DesignThinking #SystemEngineering #AI #DigitalEngineering #MBSE #KnowledgeSharing #DecisionSupport #AugmentedIntelligence
·linkedin.com·
Semantically Composable Architectures
I'm happy to share the draft of the "Semantically Composable Architectures" mini-paper. It is the culmination of approximately four years' work, which began with Coreless Architectures and has now evolved into something much bigger. LLMs are impressive, but a real breakthrough will occur once we surpass the cognitive capabilities of a single human brain. Enabling autonomous large-scale system reverse engineering and large-scale autonomous transformation with minimal to no human involvement, while still keeping it understandable to humans if they choose to look, is a central pillar of making truly groundbreaking changes. We hope the ideas we shared will be beneficial to humanity and advance our civilization further. It is not final and will require some clarification and improvements, but the key concepts are present. Happy to hear your thoughts and feedback. Some of these concepts underpin the design of the Product X system. Part of the core team plus external contributors: Andrew Barsukov, Andrey Kolodnitsky, Sapta Girisa N, Keith E. Glendon, Gurpreet Sachdeva, Saurav Chandra, Mike Diachenko, Oleh Sinkevych
·linkedin.com·
Unlocking graph insights with Raphtory, an advanced in-memory graph tool designed to facilitate efficient exploration of evolving networks
Unlocking graph insights with Raphtory, an advanced in-memory graph tool designed to facilitate efficient exploration of evolving networks.
🔹 Scalability & Performance: Handles large-scale graph data seamlessly, enabling fast computations.
🔹 Temporal Analysis: Investigate how networks change over time, identifying trends and key shifts.
🔹 Multi-layer Modeling: Incorporate diverse data sources into a unified, structured framework for deeper insights.
🔹 Integration: Works easily with existing pipelines via Python APIs, ensuring a smooth workflow for professionals.
#Graphs #GraphDB #NetworkAnalysis #TemporalData
https://www.raphtory.com/
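A minimal sketch of the temporal workflow hinted at above is below. Method names follow the Raphtory Python API as commonly documented (Graph, add_edge, window), but exact names and signatures may differ between releases, so treat this as an approximation rather than a verified example.

```python
# Sketch: load a timestamped edge list into Raphtory and query a time window.
# Method names may vary by Raphtory version (e.g. count_edges vs num_edges).
from raphtory import Graph

g = Graph()
edges = [(1, "alice", "bob"), (3, "bob", "carol"), (7, "alice", "carol")]
for t, src, dst in edges:
    g.add_edge(t, src, dst)        # each edge carries its own timestamp

view = g.window(0, 5)              # restrict analysis to the interval [0, 5)
print(view.count_edges())          # edges active in that window
```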
·linkedin.com·
Graph Modeling Mastery — GraphGeeks
In our GraphGeeks Talk with Max De Marzi, we unpack what makes a graph model solid, what tends to break things, and how to design with both your data and your queries in mind.
·graphgeeks.org·
Want to explore Anthropic's Transformer Circuits as a queryable graph?
Want to explore Anthropic's Transformer Circuits as a queryable graph? Wrote a script to import the graph JSON into Neo4j - code in the Gist. https://lnkd.in/eT4NjQgY https://lnkd.in/e38TfQpF Next step - write directly from the circuit-tracer library to the graph db. https://lnkd.in/eVU_t6mS
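A condensed sketch of that import idea follows, using the official neo4j Python driver. The "nodes"/"links" keys, the id/source/target/weight property names, and the INFLUENCES relationship type are assumptions about the exported circuit JSON, not the actual schema from the Gist.

```python
# Sketch: push a nodes/links JSON graph into Neo4j with the official driver.
# Keys and property names are assumptions about the export format; adjust as needed.
import json
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with open("circuit_graph.json") as f:
    data = json.load(f)

with driver.session() as session:
    # Create one node per JSON node, copying all its properties.
    session.run(
        "UNWIND $nodes AS n MERGE (x:Feature {id: n.id}) SET x += n",
        nodes=data["nodes"],
    )
    # Create a weighted edge per JSON link.
    session.run(
        "UNWIND $links AS l "
        "MATCH (a:Feature {id: l.source}), (b:Feature {id: l.target}) "
        "MERGE (a)-[r:INFLUENCES]->(b) SET r.weight = l.weight",
        links=data["links"],
    )
driver.close()
```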
·linkedin.com·
Semantic Querying with SAP HANA Cloud Knowledge Graph using RDF, SPARQL, and Generative AI in Python
SAP Knowledge Graph is now generally available (Q1 2025) and is poised to fundamentally change how data relationships are mapped and queried. With grounded intelligence, knowledge graphs are crucial for enabling AI agents to reason and retrieve with context and high accuracy. SAP Knowledge Graph...
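For readers who just want to see the shape of a SPARQL call from Python, here is a generic sketch using SPARQLWrapper against a placeholder endpoint. It is not SAP's client API; the endpoint URL and the predicate are made up for illustration, and HANA Cloud would be reached through its own connectivity layer.

```python
# Generic SPARQL-from-Python sketch; endpoint and predicate are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.com/sparql")   # hypothetical endpoint
sparql.setQuery("""
    SELECT ?product ?supplier WHERE {
        ?product <http://example.org/suppliedBy> ?supplier .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["product"]["value"], row["supplier"]["value"])
```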
·community.sap.com·
Introducing FACT: Fast Augmented Context Tools (3.2x faster, 90% cost reduction vs RAG)
Introducing FACT: Fast Augmented Context Tools (3.2x faster, 90% cost reduction vs RAG)

RAG had its run, but it's not built for agentic systems. Vectors are fuzzy, slow, and blind to context. They work fine for static data, but once you enter recursive, real-time workflows, where agents need to reason, act, and reflect, RAG collapses under its own ambiguity. That's why I built FACT: Fast Augmented Context Tools.

Traditional Approach: User Query → Database → Processing → Response (2-5 seconds)
FACT Approach: User Query → Intelligent Cache → [If Miss] → Optimized Processing → Response (50ms)

It replaces vector search in RAG pipelines with a combination of intelligent prompt caching and deterministic tool execution via MCP. Instead of guessing which chunk is relevant, FACT explicitly retrieves structured data, SQL queries, live APIs, and internal tools, then intelligently caches the result if it's useful downstream.

The prompt caching isn't just basic storage. It's intelligent, using the prompt cache from Anthropic and other LLM providers, tuned for feedback-driven loops: static elements get reused, transient ones expire, and the system adapts in real time. Some things you always want cached: schemas, domain prompts. Others, like live data, need freshness. Traditional RAG is particularly bad at this; ask anyone forced to frequently update vector DBs.

I'm also using Arcade.dev to handle secure, scalable execution across both local and cloud environments, giving FACT hybrid intelligence for complex pipelines and automatic tool selection.

If you're building serious agents, skip the embeddings. RAG is a workaround. FACT is a foundation. It's cheaper, faster, and designed for how agents actually work: with tools, memory, and intent.

To get started, point your favorite coding agent at: https://lnkd.in/gek_akem
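A toy version of that cache-first flow looks roughly like the sketch below; the class and function names, and the TTL policy, are illustrative assumptions rather than the FACT implementation.

```python
# Toy sketch of a cache-first flow: query -> cache -> deterministic tool on miss.
import time

class ContextCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit and hit[1] > time.time():
            return hit[0]
        return None

    def put(self, key, value, ttl):
        self._store[key] = (value, time.time() + ttl)

def answer(query, cache, run_tool):
    cached = cache.get(query)
    if cached is not None:                     # cache hit: skip retrieval entirely
        return cached
    result = run_tool(query)                   # deterministic tool call (SQL, API, MCP tool)
    ttl = 3600 if "schema" in query else 30    # static context lives longer than live data
    cache.put(query, result, ttl)
    return result
```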
·linkedin.com·
A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution
🏯🏇 A-MEM transforms AI agent memory with the Zettelkasten method, atomic notes, dynamic linking and continuous evolution! This novel memory replaces rigid structures with adaptable, evolving, and interconnected knowledge networks, delivering 2x performance in complex reasoning tasks. This is what I learned:

》 Why traditional memory falls short
Most AI agents today rely on simplistic storage and retrieval but break down when faced with complex, multi-step reasoning tasks.
✸ Common limitations:
☆ Fixed schemas: conventional memory systems require predefined structures that limit flexibility.
☆ Limited adaptability: when new information arises, old memories remain static and disconnected, reducing an agent's ability to build on past experiences.
☆ Ineffective long-term retention: AI agents often struggle to recall relevant past interactions, leading to redundant processing and inefficiencies.

》 A-MEM: atomic notes and dynamic linking
A-MEM organizes knowledge in a way that mirrors how humans create and refine ideas over time.
✸ How it works:
☆ Atomic notes: information is broken down into small, self-contained knowledge units, ensuring clarity and easy integration with future knowledge.
☆ Dynamic linking: instead of relying on static categories, A-MEM automatically creates connections between related knowledge, forming a network of interrelated ideas.

》 Proven performance advantage
A-MEM delivers measurable improvements.
✸ Empirical results demonstrate:
☆ Over 2x performance improvement in complex reasoning tasks, where AI must synthesize multiple pieces of information across different timeframes.
☆ Superior efficiency across top foundation models, including GPT, Llama, and Qwen, proving its versatility and broad applicability.

》 Inside A-MEM
✸ Note construction:
☆ AI-generated structured notes capture essential details and contextual insights.
☆ Each memory is assigned metadata, including keywords and summaries, for faster retrieval.
✸ Link generation:
☆ The system autonomously connects new memories to relevant past knowledge.
☆ Relationships between concepts emerge naturally, allowing AI to recognize patterns over time.
✸ Memory evolution:
☆ Older memories are continuously updated as new insights emerge.
☆ The system dynamically refines knowledge structures, mimicking the way human memory strengthens connections over time.

⫸ Want to build real-world AI agents? Join my hands-on AI Agent 4-in-1 training today! 480+ already enrolled.
➠ Build real-world AI agents for healthcare, finance, smart cities, and sales
➠ Learn 4 frameworks: LangGraph | PydanticAI | CrewAI | OpenAI Swarm
➠ Work with text, audio, video and tabular data
👉 Enroll now (45% discount): https://lnkd.in/eGuWr4CH
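A compact sketch of the atomic-note and dynamic-linking idea follows; it is not the A-MEM codebase, and embed() is a placeholder embedding function supplied by the caller.

```python
# Sketch of atomic notes with similarity-based dynamic linking (illustrative only).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Note:
    text: str
    keywords: list[str]
    vector: np.ndarray
    links: list[int] = field(default_factory=list)   # indices of related notes

def add_note(memory: list[Note], text: str, keywords: list[str], embed, threshold: float = 0.8):
    v = embed(text)
    note = Note(text, keywords, v)
    for i, other in enumerate(memory):
        sim = float(v @ other.vector /
                    (np.linalg.norm(v) * np.linalg.norm(other.vector) + 1e-9))
        if sim >= threshold:          # dynamic linking: connect related atomic notes
            note.links.append(i)
            other.links.append(len(memory))
    memory.append(note)
    return note
```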
·linkedin.com·
RAG vs Graph RAG, explained visually
RAG vs Graph RAG, explained visually. (It's a popular LLM interview question.)

Imagine you have a long document, say a biography, about an individual (X) who has accomplished several things in this life.
↳ Chapter 1: Talks about Accomplishment-1.
↳ Chapter 2: Talks about Accomplishment-2.
...
↳ Chapter 10: Talks about Accomplishment-10.

Summarizing all these accomplishments via RAG might never be possible, since doing so requires the entire context, but one might only be fetching the top-k relevant chunks from the vector db. Moreover, since traditional RAG systems retrieve each chunk independently, this can often leave the LLM to infer the connections between them (provided the chunks are retrieved).

Graph RAG solves this. The idea is to first create a graph (entities & relationships) from the documents and then do traversal over that graph during the retrieval phase. See how Graph RAG solves the above problems.
- First, a system (typically an LLM) will create the graph by understanding the biography.
- This will produce a full graph of entities & relationships, and a subgraph will look like this:
↳ X → [relationship] → Accomplishment-1.
↳ X → [relationship] → Accomplishment-2.
...
↳ X → [relationship] → Accomplishment-N.

When summarizing these accomplishments, the retrieval phase can do a graph traversal to fetch all the relevant context related to X's accomplishments. This context, when passed to the LLM, will produce a more coherent and complete answer as opposed to traditional RAG.

Another reason why Graph RAG systems are so effective is that LLMs are inherently adept at reasoning with structured data. Graph RAG instills that structure into them through its retrieval mechanism.

👉 Over to you: What are some other issues with traditional RAG systems that Graph RAG solves?
____
Find me → Avi Chawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
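The contrast can be made concrete with a small sketch: top-k vector retrieval versus a bounded traversal over an adjacency structure. The graph format and function names here are illustrative; entity and relation extraction is assumed to have happened upstream.

```python
# Minimal contrast: top-k vector retrieval vs. graph-traversal retrieval.
def vector_rag(query_vec, chunks, embed, k=3):
    # May miss accomplishments that fall outside the top-k most similar chunks.
    scored = sorted(chunks, key=lambda c: -float(query_vec @ embed(c)))
    return scored[:k]

def graph_rag(entity, graph, max_hops=2):
    # graph: {"X": [("relationship", "Accomplishment-1"), ...], ...}
    frontier, context = {entity}, []
    for _ in range(max_hops):
        next_frontier = set()
        for node in frontier:
            for relation, neighbor in graph.get(node, []):
                context.append((node, relation, neighbor))
                next_frontier.add(neighbor)
        frontier = next_frontier
    return context   # every fact reachable from X within max_hops, not just top-k
```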
·linkedin.com·
Standing on Giants' Shoulders: What Happens When Formal Ontology Meets Modern Verification? 🚀 | LinkedIn
Building on Decades of Foundational Research
The formal ontology community has given us incredible foundations - Barry Smith's BFO framework, Alan Ruttenberg's CLIF axiomatizations, and Microsoft Research's Z3 theorem prover. What happens when we combine these mature technologies with modern graph databases?
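To make the combination concrete, here is a toy Z3 (z3-solver) example that checks a BFO-style disjointness axiom against asserted facts; the axiom is drastically simplified compared to real CLIF axiomatizations, and the predicate names are illustrative only.

```python
# Toy consistency check of an ontology-style axiom with Z3 (simplified for illustration).
from z3 import DeclareSort, Function, BoolSort, Const, ForAll, And, Not, Solver

Thing = DeclareSort("Thing")
Continuant = Function("Continuant", Thing, BoolSort())
Occurrent = Function("Occurrent", Thing, BoolSort())
x = Const("x", Thing)

s = Solver()
# Axiom: nothing is both a continuant and an occurrent (disjointness).
s.add(ForAll([x], Not(And(Continuant(x), Occurrent(x)))))

# Assert a fact that violates the axiom and let Z3 report the inconsistency.
a = Const("a", Thing)
s.add(Continuant(a), Occurrent(a))
print(s.check())   # -> unsat: the assertions contradict the disjointness axiom
```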
·linkedin.com·
Unified Foundational Ontology
On request, this is the complete slide deck I used in my course at the C-FORS summer school on Foundational Ontologies (see https://lnkd.in/e9Af5JZF) at the University of Oslo, Norway. If you want to know more, here are some papers related to the talk:

On the ontology itself:
a) for a gentle introduction to UFO: https://lnkd.in/egS5FsQ
b) to understand the UFO history and ecosystem (including OntoUML): https://lnkd.in/emCaX5pF
c) a more formal paper on the axiomatization of UFO but also with examples (in OntoUML): https://lnkd.in/e_bUuTMa
d) focusing on UFO's theory of Types and Taxonomic Structures: https://lnkd.in/eGPXHeh
e) focusing on its Theory of Relations (including relationship reification): https://lnkd.in/eTFFRBy8 and https://lnkd.in/eMNmi7-B
f) focusing on Qualities and Modes (aspect reification): https://lnkd.in/eNXbrKrW and https://lnkd.in/eQtNC9GH
g) focusing on events and processes: https://lnkd.in/e3Z8UrCD, https://lnkd.in/ePZEaJh9, https://lnkd.in/eYnirFv6, https://lnkd.in/ev-cb7_e, https://lnkd.in/e_nTwBc7

On the tools:
a) Model Auto-repair and Constraint Learning: https://lnkd.in/esuYSU9i
b) Model Validation and Anti-Pattern Detection: https://lnkd.in/e2SxvVzS
c) Ontological Patterns and Pattern Grammars: https://lnkd.in/exMFMgpT and https://lnkd.in/eCeRtMNz
d) Multi-Level Modeling: https://lnkd.in/eVavvURk and https://lnkd.in/e8t3sMdU
e) Complexity Management: https://lnkd.in/eq3xWp-U
f) FAIR catalog of models and Pattern Mining: https://lnkd.in/eaN5d3QR and https://lnkd.in/ecjhfp8e
g) Anti-Patterns on Wikidata: https://lnkd.in/eap37SSU
h) Model Transformation/implementation: https://lnkd.in/eh93u5Hg, https://lnkd.in/e9bU_9NC, https://lnkd.in/eQtNC9GH, https://lnkd.in/esGS8ZTb

#ontology #UFO #ontologies #foundationalontology #toplevelontology #TLO
Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·
A Pragmatic Introduction to Knowledge Graphs | LinkedIn
Audience: This blog is written for engineering leaders, architects, and decision-makers who want to understand what a knowledge graph is, when it makes sense, and when it doesn’t. It is not a deep technical dive, but a strategic overview.
·linkedin.com·