GraphNews

4343 bookmarks
Want to explore Anthropic's Transformer Circuits as a queryable graph?
Want to explore Anthropic's Transformer Circuits as a queryable graph? Wrote a script to import the graph JSON into Neo4j - code in the Gist. https://lnkd.in/eT4NjQgY https://lnkd.in/e38TfQpF Next step: write directly from the circuit-tracer library to the graph DB. https://lnkd.in/eVU_t6mS
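For flavor, here is a minimal sketch of that kind of import, not the author's Gist: it assumes the attribution-graph JSON exposes "nodes" and "links" arrays with "id", "source", "target", and "weight" fields, which is an assumption about the circuit-tracer export format.

```python
# Hedged sketch: load an attribution-graph JSON export and write it to Neo4j.
# The field names ("nodes", "links", "id", "source", "target", "weight") are
# assumptions about the export format, not a confirmed circuit-tracer schema.
import json
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def import_graph(path: str) -> None:
    with open(path) as f:
        graph = json.load(f)
    with driver.session() as session:
        for node in graph["nodes"]:
            # Assumes flat, primitive-valued node properties.
            props = {k: v for k, v in node.items() if k != "id"}
            session.run(
                "MERGE (n:Feature {id: $id}) SET n += $props",
                id=node["id"], props=props,
            )
        for edge in graph["links"]:
            session.run(
                "MATCH (a:Feature {id: $src}), (b:Feature {id: $dst}) "
                "MERGE (a)-[r:INFLUENCES]->(b) SET r.weight = $w",
                src=edge["source"], dst=edge["target"], w=edge.get("weight"),
            )

import_graph("attribution_graph.json")  # hypothetical file name
```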
·linkedin.com·
Semantic Querying with SAP HANA Cloud Knowledge Graph using RDF, SPARQL, and Generative AI in Python
SAP Knowledge Graph is now generally available (Q1 2025) and is poised to fundamentally change how data relationships are mapped and queried. By grounding intelligence in explicit semantics, knowledge graphs are crucial to enabling AI agents to reason and retrieve with context and high accuracy. SAP Knowledge Graph...
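The SAP-specific connection details are in the post; as a generic illustration of the RDF + SPARQL querying pattern in Python, here is a hedged sketch using rdflib on an in-memory graph (the namespace and triples are made up):

```python
# Generic RDF + SPARQL pattern in Python with rdflib; the SAP HANA Cloud
# endpoint and client setup from the post are not reproduced here.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()
g.add((EX.order42, RDF.type, EX.Order))
g.add((EX.order42, EX.placedBy, EX.acme))
g.add((EX.acme, EX.locatedIn, Literal("Germany")))

# Semantic query: which orders were placed by customers located in Germany?
for row in g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?order WHERE {
        ?order a ex:Order ;
               ex:placedBy ?customer .
        ?customer ex:locatedIn "Germany" .
    }
"""):
    print(row.order)  # -> http://example.org/order42
```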
·community.sap.com·
Introducing FACT: Fast Augmented Context Tools (3.2x faster, 90% cost reduction vs RAG)
Introducing FACT: Fast Augmented Context Tools (3.2x faster, 90% cost reduction vs RAG). RAG had its run, but it's not built for agentic systems. Vectors are fuzzy, slow, and blind to context. They work fine for static data, but once you enter recursive, real-time workflows, where agents need to reason, act, and reflect, RAG collapses under its own ambiguity. That's why I built FACT: Fast Augmented Context Tools.

Traditional approach: User Query → Database → Processing → Response (2-5 seconds)
FACT approach: User Query → Intelligent Cache → [If Miss] → Optimized Processing → Response (50ms)

FACT replaces vector search in RAG pipelines with a combination of intelligent prompt caching and deterministic tool execution via MCP. Instead of guessing which chunk is relevant, FACT explicitly retrieves structured data (SQL queries, live APIs, internal tools), then intelligently caches the result if it's useful downstream.

The prompt caching isn't just basic storage. It builds on the prompt caching from Anthropic and other LLM providers, tuned for feedback-driven loops: static elements get reused, transient ones expire, and the system adapts in real time. Some things you always want cached: schemas, domain prompts. Others, like live data, need freshness. Traditional RAG is particularly bad at this; ask anyone forced to frequently update vector DBs.

I'm also using Arcade.dev to handle secure, scalable execution across both local and cloud environments, giving FACT hybrid intelligence for complex pipelines and automatic tool selection.

If you're building serious agents, skip the embeddings. RAG is a workaround; FACT is a foundation. It's cheaper, faster, and designed for how agents actually work: with tools, memory, and intent. To get started, point your favorite coding agent at: https://lnkd.in/gek_akem
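The cache-first flow above is essentially a cache-aside pattern with per-entry freshness. A toy sketch of that flow, not FACT's actual code (the TTL values and the tool call are illustrative):

```python
# Cache-aside sketch of the described flow: query -> cache -> on miss, run the
# deterministic tool and store the result. Static items (schemas, domain
# prompts) would get long TTLs; live data gets short ones.
import time

class ContextCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit and hit[1] > time.monotonic():
            return hit[0]          # cache hit: the fast path
        return None

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

cache = ContextCache()

def answer(query: str, run_tool) -> str:
    cached = cache.get(query)
    if cached is not None:
        return cached
    result = run_tool(query)       # deterministic tool: SQL, API, MCP call
    cache.put(query, result, ttl_seconds=60)  # illustrative freshness window
    return result

print(answer("total Q3 revenue", lambda q: "$1.2M"))  # miss -> tool
print(answer("total Q3 revenue", lambda q: "$1.2M"))  # hit -> cache
```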
·linkedin.com·
A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution
🏯🏇 A-MEM transforms AI agent memory with the Zettelkasten method: atomic notes, dynamic linking, and continuous evolution. This novel memory design replaces rigid structures with adaptable, evolving, and interconnected knowledge networks, delivering 2x performance in complex reasoning tasks. This is what I learned:

》Why Traditional Memory Falls Short
Most AI agents today rely on simplistic storage and retrieval and break down when faced with complex, multi-step reasoning tasks. Common limitations:
☆ Fixed schemas: conventional memory systems require predefined structures that limit flexibility.
☆ Limited adaptability: when new information arrives, old memories remain static and disconnected, reducing an agent's ability to build on past experiences.
☆ Ineffective long-term retention: AI agents often struggle to recall relevant past interactions, leading to redundant processing and inefficiencies.

》A-MEM: Atomic Notes and Dynamic Linking
A-MEM organizes knowledge the way humans create and refine ideas over time. How it works:
☆ Atomic notes: information is broken down into small, self-contained knowledge units, ensuring clarity and easy integration with future knowledge.
☆ Dynamic linking: instead of relying on static categories, A-MEM automatically creates connections between related knowledge, forming a network of interrelated ideas.

》Proven Performance Advantage
A-MEM delivers measurable improvements. Empirical results demonstrate:
☆ Over 2x performance improvement in complex reasoning tasks, where AI must synthesize multiple pieces of information across different timeframes.
☆ Superior efficiency across top foundation models, including GPT, Llama, and Qwen, proving its versatility and broad applicability.

》Inside A-MEM
☆ Note construction: AI-generated structured notes capture essential details and contextual insights; each memory is assigned metadata, including keywords and summaries, for faster retrieval.
☆ Link generation: the system autonomously connects new memories to relevant past knowledge; relationships between concepts emerge naturally, allowing the AI to recognize patterns over time.
☆ Memory evolution: older memories are continuously updated as new insights emerge; the system dynamically refines knowledge structures, mimicking the way human memory strengthens connections over time.

⫸ Want to build real-world AI agents? Join my hands-on AI Agent 4-in-1 training (480+ already enrolled):
➠ Build real-world AI agents for healthcare, finance, smart cities, and sales
➠ Learn 4 frameworks: LangGraph | PydanticAI | CrewAI | OpenAI Swarm
➠ Work with text, audio, video, and tabular data
👉 Enroll now (45% discount): https://lnkd.in/eGuWr4CH
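A toy sketch of note construction and dynamic linking: the paper uses LLM-generated summaries, keywords, and embedding similarity, while this substitutes simple keyword overlap purely to show the data flow.

```python
# Toy sketch of A-MEM-style atomic notes and dynamic linking. Keyword overlap
# stands in for the paper's LLM metadata + embedding similarity.
from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    keywords: set[str]                      # in A-MEM: LLM-extracted metadata
    links: list["Note"] = field(default_factory=list)

class Memory:
    def __init__(self, min_overlap: int = 1):
        self.notes: list[Note] = []
        self.min_overlap = min_overlap

    def add(self, text: str, keywords: set[str]) -> Note:
        note = Note(text, keywords)
        for other in self.notes:            # dynamic linking to related notes
            if len(note.keywords & other.keywords) >= self.min_overlap:
                note.links.append(other)
                other.links.append(note)    # memory evolution: old notes gain links
        self.notes.append(note)
        return note

mem = Memory()
mem.add("User prefers SQL over ORMs", {"sql", "preference"})
n = mem.add("Generated a SQL report for Q3", {"sql", "report"})
print([o.text for o in n.links])  # -> ['User prefers SQL over ORMs']
```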
·linkedin.com·
RAG vs Graph RAG, explained visually
RAG vs Graph RAG, explained visually. (It's a popular LLM interview question.) Imagine you have a long document, say a biography, about an individual (X) who has accomplished several things in their life:
↳ Chapter 1 talks about Accomplishment-1.
↳ Chapter 2 talks about Accomplishment-2.
...
↳ Chapter 10 talks about Accomplishment-10.

Summarizing all these accomplishments via RAG may never be possible, since it requires the entire context, while one might only be fetching the top-k relevant chunks from the vector DB. Moreover, since traditional RAG systems retrieve each chunk independently, the LLM is often left to infer the connections between them (provided the chunks are retrieved at all).

Graph RAG solves this. The idea is to first create a graph (entities and relationships) from the documents and then traverse that graph during the retrieval phase:
- First, a system (typically an LLM) creates the graph by understanding the biography.
- This produces a full graph of entities and relationships, in which a subgraph looks like this:
↳ X → Accomplishment-1
↳ X → Accomplishment-2
...
↳ X → Accomplishment-N

When summarizing these accomplishments, the retrieval phase can traverse the graph to fetch all the relevant context related to X's accomplishments. This context, when passed to the LLM, produces a more coherent and complete answer than traditional RAG.

Another reason Graph RAG systems are so effective is that LLMs are inherently adept at reasoning over structured data, and Graph RAG's retrieval mechanism supplies exactly that structure.

👉 Over to you: what are some other issues with traditional RAG systems that Graph RAG solves?
____
Find me → Avi Chawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
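A tiny illustration of the retrieval difference, using networkx and made-up names: vector RAG would return only the top-k chunks, while a traversal from the entity collects all of X's accomplishments.

```python
# Graph RAG retrieval step in miniature: traverse from the entity instead of
# running a similarity search. Names and relation labels are made up.
import networkx as nx

g = nx.DiGraph()
for i in range(1, 11):
    g.add_edge("X", f"Accomplishment-{i}", relation="achieved")

# Traversal fetches the full neighborhood, not the top-k nearest chunks.
context = [dst for _, dst, d in g.out_edges("X", data=True)
           if d["relation"] == "achieved"]
print(len(context))  # 10 -- the entire context for the summary
```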
·linkedin.com·
Standing on Giants' Shoulders: What Happens When Formal Ontology Meets Modern Verification? 🚀
Building on Decades of Foundational Research The formal ontology community has given us incredible foundations - Barry Smith's BFO framework, Alan Ruttenberg's CLIF axiomatizations, and Microsoft Research's Z3 theorem prover. What happens when we combine these mature technologies with modern graph d...
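To give a flavor of the combination, here is a hedged Z3 sketch checking a toy mereology entailment; this is the style of verification such a stack enables, not BFO's actual CLIF axioms.

```python
# Toy "formal ontology meets verification" check with Z3 (pip install z3-solver):
# encode transitivity of part_of, then ask the solver whether an entailment
# holds by asserting its negation and expecting unsat.
from z3 import (DeclareSort, Function, BoolSort, Consts, ForAll, Implies,
                And, Not, Solver, unsat)

Entity = DeclareSort("Entity")
part_of = Function("part_of", Entity, Entity, BoolSort())
x, y, z = Consts("x y z", Entity)

transitivity = ForAll([x, y, z],
    Implies(And(part_of(x, y), part_of(y, z)), part_of(x, z)))

a, b, c = Consts("a b c", Entity)
s = Solver()
s.add(transitivity, part_of(a, b), part_of(b, c))
s.add(Not(part_of(a, c)))        # negate the expected consequence
print(s.check() == unsat)        # True: part_of(a, c) is entailed
```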
·linkedin.com·
Unified Foundational Ontology
On request, this is the complete slide deck I used in my course at the C-FORS summer school on Foundational Ontologies (see https://lnkd.in/e9Af5JZF) at the University of Oslo, Norway. If you want to know more, here are some papers related to the talk.

On the ontology itself:
a) a gentle introduction to UFO: https://lnkd.in/egS5FsQ
b) the UFO history and ecosystem (including OntoUML): https://lnkd.in/emCaX5pF
c) a more formal paper on the axiomatization of UFO, with examples in OntoUML: https://lnkd.in/e_bUuTMa
d) UFO's theory of types and taxonomic structures: https://lnkd.in/eGPXHeh
e) its theory of relations (including relationship reification): https://lnkd.in/eTFFRBy8 and https://lnkd.in/eMNmi7-B
f) qualities and modes (aspect reification): https://lnkd.in/eNXbrKrW and https://lnkd.in/eQtNC9GH
g) events and processes: https://lnkd.in/e3Z8UrCD, https://lnkd.in/ePZEaJh9, https://lnkd.in/eYnirFv6, https://lnkd.in/ev-cb7_e, https://lnkd.in/e_nTwBc7

On the tools:
a) model auto-repair and constraint learning: https://lnkd.in/esuYSU9i
b) model validation and anti-pattern detection: https://lnkd.in/e2SxvVzS
c) ontological patterns and pattern grammars: https://lnkd.in/exMFMgpT and https://lnkd.in/eCeRtMNz
d) multi-level modeling: https://lnkd.in/eVavvURk and https://lnkd.in/e8t3sMdU
e) complexity management: https://lnkd.in/eq3xWp-U
f) FAIR catalog of models and pattern mining: https://lnkd.in/eaN5d3QR and https://lnkd.in/ecjhfp8e
g) anti-patterns on Wikidata: https://lnkd.in/eap37SSU
h) model transformation/implementation: https://lnkd.in/eh93u5Hg, https://lnkd.in/e9bU_9NC, https://lnkd.in/eQtNC9GH, https://lnkd.in/esGS8ZTb

#ontology #UFO #ontologies #foundationalontology #toplevelontology #TLO Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·
A Pragmatic Introduction to Knowledge Graphs
Audience: This blog is written for engineering leaders, architects, and decision-makers who want to understand what a knowledge graph is, when it makes sense, and when it doesn’t. It is not a deep technical dive, but a strategic overview.
·linkedin.com·
Graph RAG open source stack to generate and visualize knowledge graphs
A serious knowledge graph effort is much more than a bit of GitHub, but customers and adventurous minds keep asking me if there is an easy-to-use (read: POC, click-and-go) graph RAG open source stack they can use to generate knowledge graphs. So, here is the list of projects I keep an eye on. Mind, there is nothing simple if you venture into graphs, despite all the claims and marketing. Things like graph machine learning, graph layout, and distributed graph analytics are more than a bit of pip install. The best solutions are hidden inside multinationals, custom-made. Equity firms and investors sometimes ask me to evaluate innovations; it's amazing what talented people develop that never shows up in the news, or on GitHub.

TrustGraph - The Knowledge Platform for AI: https://trustgraph.ai/ The only one with a distributed architecture, made for enterprise KGs.
itext2kg - https://lnkd.in/e-eQbwV5 Clean and plain. Wrapped prompts done right.
Fast GraphRAG - https://lnkd.in/e7jZ9GZH Popular, with some basic visualization.
ZEP - https://lnkd.in/epxtKtCU Geared towards agentic memory.
Triplex - https://lnkd.in/eGV8FR56 An LLM to extract triples.
GraphRAG Local with UI - https://lnkd.in/ePGeqqQE Another starting point for small KG efforts. Or to convince your investors.
GraphRAG visualizer - https://lnkd.in/ePuMmfkR Makes pretty pictures, but not for drill-downs.
Neo4j's GraphRAG - https://lnkd.in/ex_A52RU A Python package focused on getting data into Neo4j.
OpenSPG - https://lnkd.in/er4qUFJv Has a different, more academic take.
Microsoft GraphRAG - https://lnkd.in/e_a-mPum A classic, but I don't think anyone is using this beyond experimentation.
yWorks - https://www.yworks.com If you are serious about interactive graph layout.
Ogma - https://lnkd.in/evwnJCBK If you are serious about graph data viz.
Orbifold Consulting - https://lnkd.in/e-Dqg4Zx If you are serious about your KG journey.

#GraphRAG #GraphViz #GraphMachineLearning #KnowledgeGraphs
·linkedin.com·
LLMs generate possibilities; knowledge graphs remember what works
LLMs generate possibilities; knowledge graphs remember what works. Together, they forge the recursive memory and creative engine that enables AI systems to truly evolve themselves. Combining neural components (like large language models) with symbolic verification creates a powerful framework for self-evolution that overcomes the limitations of either approach used independently.

AlphaEvolve demonstrates that self-evolving systems face a fundamental tension between generating novel solutions and ensuring those solutions actually work. The paper shows how AlphaEvolve addresses this through a hybrid architecture where:
- Neural components (LLMs) provide creative generation of code modifications by drawing on patterns learned from vast training data.
- Symbolic components (code execution) provide ground-truth verification through deterministic evaluation.
Without this combination, a system would either generate interesting but incorrect solutions (neural-only) or be limited to small, safe modifications within known patterns (symbolic-only).

The system can operate at multiple levels of abstraction depending on the problem: raw solution evolution, constructor function evolution, search algorithm evolution, or co-evolution of intermediate solutions and search algorithms. This capability emanates directly from the neurosymbolic integration, where:
- Neural networks excel at working with continuous, high-dimensional spaces and recognizing patterns across abstraction levels.
- Symbolic systems provide precise representations of discrete structures and logical relationships.
This enables AlphaEvolve to modify everything from specific lines of code to entire algorithmic approaches.

While AlphaEvolve currently uses an evolutionary database, a knowledge graph structure could significantly enhance self-evolution by:
- Capturing evolutionary relationships between solutions
- Identifying patterns of code changes that consistently lead to improvements
- Representing semantic connections between different solution approaches
- Supporting transfer learning across problem domains

Automated, objective evaluation is the core foundation enabling self-evolution; the main limitation of AlphaEvolve is that it only handles problems for which an automated evaluator can be devised. This evaluation component provides the ground-truth feedback that guides evolution, allowing the system to:
- Differentiate between successful and unsuccessful modifications
- Create selection pressure toward better-performing solutions
- Avoid hallucinations or non-functional solutions that might emerge from neural components alone

When applied to optimize Gemini's training kernels, the system essentially improved the very LLM technology that powers it.
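A schematic of the generate, verify, select loop described above, with the LLM stubbed out and a toy deterministic evaluator standing in for AlphaEvolve's automated scorer:

```python
# Hedged schematic of a neurosymbolic evolution loop, not AlphaEvolve's code.
# The "LLM" is a random mutator and the "evaluator" scores against a fixed
# target; in the real system the evaluator executes code and measures it.
import random

TARGET = "fast_kernel_v2"  # stand-in for an artifact with a measurable score

def llm_propose(candidate: str) -> str:
    """Stub for the neural component: creatively mutate the candidate."""
    chars = list(candidate)
    i = random.randrange(len(chars))
    chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz_0123456789")
    return "".join(chars)

def evaluate(candidate: str) -> int:
    """Stub for the symbolic component: deterministic, automated scoring."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(generations: int = 5000) -> str:
    best = "x" * len(TARGET)
    best_score = evaluate(best)
    for _ in range(generations):
        child = llm_propose(best)      # neural: generate possibilities
        score = evaluate(child)        # symbolic: ground-truth verification
        if score > best_score:         # selection pressure toward what works
            best, best_score = child, score
    return best

print(evolve())  # converges toward TARGET under the toy objective
```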
·linkedin.com·
I added a Knowledge Graph to Cursor using MCP
I added a Knowledge Graph to Cursor using MCP. You gotta see this working! Knowledge graphs are a game-changer for AI agents, and this is one example of how you can take advantage of them.

How this works:
1. Cursor connects to Graphiti's MCP server. Graphiti is a very popular open-source knowledge graph library for AI agents.
2. Graphiti connects to Neo4j running locally.
Now, every time I interact with Cursor, the information is synthesized and stored in the knowledge graph. In short, Cursor now "remembers" everything about our project. Huge!

Here is the video I recorded. To get this working on your computer, follow the instructions at this link: https://lnkd.in/eeZ_4dkb

Something super cool about using Graphiti's MCP server: you can use one model to develop the requirements and a completely different model to implement the code. This is a huge plus because you can use the stronger model at each stage.

Also, Graphiti supports custom entities, which you can use when running the MCP server. You can use these custom entities to structure and recall domain-specific information, which can improve the accuracy of your results tenfold. Here is an example of what these look like: https://lnkd.in/efv7kTaH

By the way, knowledge graphs for agents are a big thing. A few ridiculous and eye-opening benchmarks comparing an AI agent using knowledge graphs with state-of-the-art methods:
• 94.8% accuracy versus 93.4% on the Deep Memory Retrieval (DMR) benchmark.
• 71.2% accuracy versus 60.2% on conversations simulating real-world enterprise use cases.
• 2.58s of latency versus 28.9s.
• 38.4% improvement in temporal reasoning.
You'll find these benchmarks in this paper: https://fnf.dev/3CLQjBK
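For illustration, a hedged sketch of Graphiti custom entities following its documented Pydantic-model pattern; exact parameter names may differ by version, so treat the linked instructions as authoritative.

```python
# Hedged sketch of Graphiti custom entities (pip install graphiti-core).
# The Pydantic-model pattern is Graphiti's documented approach; parameter
# names below are best-effort and may differ across versions.
from datetime import datetime, timezone
from pydantic import BaseModel, Field
from graphiti_core import Graphiti

class Requirement(BaseModel):
    """A project requirement captured from the conversation."""
    project: str = Field(description="Project the requirement belongs to")
    priority: str = Field(description="e.g. must-have or nice-to-have")

async def remember(graphiti: Graphiti, text: str) -> None:
    await graphiti.add_episode(
        name="cursor-session",
        episode_body=text,
        source_description="Cursor chat",           # assumed parameter value
        reference_time=datetime.now(timezone.utc),
        entity_types={"Requirement": Requirement},  # structured, domain-specific recall
    )

# Usage (sketch): graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
# then: asyncio.run(remember(graphiti, "The exporter must support CSV (must-have)."))
```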
·linkedin.com·
Efficient Graph Storage for Entity Resolution Using Clique-Based Compression | Towards Data Science
Entity resolution systems face challenges with dense, interconnected graphs, and clique-based graph compression offers an efficient solution by reducing storage overhead and improving system performance during data deletion and reprocessing.
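The core trick, sketched with networkx (a toy, not the article's implementation): a k-clique needs k(k-1)/2 edges, but k edges to a virtual clique node preserve the same co-reference information.

```python
# Clique-based compression in miniature: replace intra-clique edges with a
# star around a virtual node. Record names and the clique id are made up.
from itertools import combinations
import networkx as nx

def compress_clique(g: nx.Graph, members: list[str], clique_id: str) -> None:
    """Swap the clique's O(k^2) edges for k edges to a virtual node."""
    g.remove_edges_from(combinations(members, 2))
    g.add_node(clique_id, virtual=True)
    g.add_edges_from((m, clique_id) for m in members)

g = nx.complete_graph(["a1", "a2", "a3", "a4"])  # 4 matched records, 6 edges
compress_clique(g, ["a1", "a2", "a3", "a4"], "entity:42")
print(g.number_of_edges())  # 4 -- and deleting "a2" now touches 1 edge, not 3
```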
·towardsdatascience.com·
Personal Knowledge Domain
Thought for the Day: what if we could encapsulate everything a person knows, their entire bubble of knowledge, into what I'd call a Personal Knowledge Domain or, better, our Semantic Self, and represent it in an RDF graph? From that foundation, we could create personal agents that act on our behalf. Each of us would own our agent, with the ability to share or lease it for collaboration with other agents. If we could make these agents secure, continuously updatable, and interoperable, what kind of power might we unlock for the human race?

Is this idea so far-fetched? It has solid grounding in knowledge representation, identity theory, and agent-based systems, and it fits right in with current trends: AI assistants, the semantic web, Web3 identity, and digital twins. Yes, the technical and ethical hurdles are significant, but this could become the backbone of a future architecture for personalized AI and cooperative knowledge ecosystems.

Pieces of the puzzle already exist: Tim Berners-Lee's Solid project, digital twins for individuals, personal AI platforms like personal.ai, retrieval-augmented language model agents (ReALM), Web3 identity efforts such as SpruceID, architectures such as MCP, and inter-agent protocols such as A2A. We also see movement in human-centric knowledge graphs like FOAF and SIOC, learning analytics, personal learning environments, and LLM-graph hybrids.

What we still need is a unified architecture that:
* Employs RDF or similar for semantic richness
* Ensures user ownership and true portability
* Enables secure agent-to-agent collaboration
* Supports continuous updates and trust mechanisms
* Integrates with LLMs for natural, contextual reasoning

These are certainly not novel notions. For example:
* MyPDDL (My Personal Digital Life) and the PDS (Personal Data Store) concept from MIT and the EU's DECODE project.
* The Human-Centric AI Group at Stanford and the Augmented Social Cognition group at PARC have published research on lifelong personal agents and social memory systems.

However, one wonders if anyone is working on combining all of the ingredients into a fully baked cake, after which we can enjoy dessert while our personal agents do our bidding.
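A minimal sketch of what a "Semantic Self" could look like in RDF, using rdflib and the FOAF vocabulary the post mentions; the identifiers are hypothetical.

```python
# Tiny "Semantic Self" sketch in RDF with rdflib and FOAF. A real Personal
# Knowledge Domain would add provenance, access control, and agent protocols.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("https://example.org/people/")  # hypothetical identifiers
g = Graph()
me = EX.alice
g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Alice")))
g.add((me, FOAF.knows, EX.bob))
g.add((me, FOAF.topic_interest,
       URIRef("https://example.org/topics/knowledge-graphs")))

print(g.serialize(format="turtle"))  # portable, agent-readable "bubble"
```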
·linkedin.com·