Semantically Composable Architectures
I'm happy to share the draft of the "Semantically Composable Architectures" mini-paper. It is the culmination of approximately four years' work, which began with Coreless Architectures and has evolved into something much bigger. LLMs are impressive, but the real breakthrough will come once we surpass the cognitive capabilities of a single human brain. Enabling autonomous large-scale system reverse engineering and autonomous large-scale transformation with minimal to no human involvement, while keeping the results understandable to humans who choose to engage, is a central pillar of making truly groundbreaking changes. We hope the ideas we share will benefit humanity and advance our civilization. The draft is not final and will need some clarification and improvement, but the key concepts are present. Happy to hear your thoughts and feedback. Some of these concepts underpin the design of the Product X system. Core team plus external contributors: Andrew Barsukov, Andrey Kolodnitsky, Sapta Girisa N, Keith E. Glendon, Gurpreet Sachdeva, Saurav Chandra, Mike Diachenko, Oleh Sinkevych.
·linkedin.com·
A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution
๐Ÿฏ๐Ÿ‡ A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution! This Novel Memory fixes rigid structures with adaptable, evolving, and interconnected knowledge networks, delivering 2x performance in complex reasoning tasks. ๐—ง๐—ต๐—ถ๐˜€ ๐—ถ๐˜€ ๐˜„๐—ต๐—ฎ๐˜ ๐—œ ๐—น๐—ฒ๐—ฎ๐—ฟ๐—ป๐—ฒ๐—ฑ: ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹ ๐—ช๐—ต๐˜† ๐—ง๐—ฟ๐—ฎ๐—ฑ๐—ถ๐˜๐—ถ๐—ผ๐—ป๐—ฎ๐—น ๐— ๐—ฒ๐—บ๐—ผ๐—ฟ๐˜† ๐—™๐—ฎ๐—น๐—น ๐—ฆ๐—ต๐—ผ๐—ฟ๐˜ Most AI agents today rely on simplistic storage and retrieval but break down when faced with complex, multi-step reasoning tasks. โœธ Common Limitations: โ˜† Fixed schemas: Conventional memory systems require predefined structures that limit flexibility. โ˜† Limited adaptability: When new information arises, old memories remain static and disconnected, reducing an agentโ€™s ability to build on past experiences. โ˜† Ineffective long-term retention: AI agents often struggle to recall relevant past interactions, leading to redundant processing and inefficiencies. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹๐—”-๐— ๐—˜๐— : ๐—”๐˜๐—ผ๐—บ๐—ถ๐—ฐ ๐—ป๐—ผ๐˜๐—ฒ๐˜€ ๐—ฎ๐—ป๐—ฑ ๐——๐˜†๐—ป๐—ฎ๐—บ๐—ถ๐—ฐ ๐—น๐—ถ๐—ป๐—ธ๐—ถ๐—ป๐—ด A-MEM organizes knowledge in a way that mirrors how humans create and refine ideas over time. โœธ How it Works: โ˜† Atomic notes: Information is broken down into small, self-contained knowledge units, ensuring clarity and easy integration with future knowledge. โ˜† Dynamic linking: Instead of relying on static categories, A-MEM automatically creates connections between related knowledge, forming a network of interrelated ideas. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹ ๐—ฃ๐—ฟ๐—ผ๐˜ƒ๐—ฒ๐—ป ๐—ฃ๐—ฒ๐—ฟ๐—ณ๐—ผ๐—ฟ๐—บ๐—ฎ๐—ป๐—ฐ๐—ฒ ๐—”๐—ฑ๐˜ƒ๐—ฎ๐—ป๐˜๐—ฎ๐—ด๐—ฒ A-MEM delivers measurable improvements. โœธ Empirical results demonstrate: โ˜† Over 2x performance improvement in complex reasoning tasks, where AI must synthesize multiple pieces of information across different timeframes. โ˜† Superior efficiency across top foundation models, including GPT, Llama, and Qwenโ€”proving its versatility and broad applicability. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹ ๐—œ๐—ป๐˜€๐—ถ๐—ฑ๐—ฒ ๐—”-๐— ๐—˜๐—  โœธ Note Construction: โ˜† AI-generated structured notes that capture essential details and contextual insights. โ˜† Each memory is assigned metadata, including keywords and summaries, for faster retrieval. โœธ Link Generation: โ˜† The system autonomously connects new memories to relevant past knowledge. โ˜† Relationships between concepts emerge naturally, allowing AI to recognize patterns over time. โœธ Memory Evolution: โ˜† Older memories are continuously updated as new insights emerge. โ˜† The system dynamically refines knowledge structures, mimicking the way human memory strengthens connections over time. โ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃ โซธ๊†› Want to build Real-World AI agents? Join My ๐—›๐—ฎ๐—ป๐—ฑ๐˜€-๐—ผ๐—ป ๐—”๐—œ ๐—”๐—ด๐—ฒ๐—ป๐˜ ๐Ÿฐ-๐—ถ๐—ป-๐Ÿญ ๐—ง๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด TODAY! ๐Ÿฐ๐Ÿด๐Ÿฌ+ already Enrolled. โž  Build Real-World AI Agents for Healthcare, Finance,Smart Cities,Sales โž  Learn 4 Framework: LangGraph | PydanticAI | CrewAI | OpenAI Swarm โž  Work with Text, Audio, Video and Tabular Data ๐Ÿ‘‰๐—˜๐—ป๐—ฟ๐—ผ๐—น๐—น ๐—ก๐—ข๐—ช (๐Ÿฐ๐Ÿฑ% ๐—ฑ๐—ถ๐˜€๐—ฐ๐—ผ๐˜‚๐—ป๐˜): https://lnkd.in/eGuWr4CH | 27 comments on LinkedIn
·linkedin.com·
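To make the mechanics above concrete, here is a minimal sketch of an A-MEM-style store: atomic notes carrying metadata, dynamic linking via embedding similarity, and a crude form of memory evolution in which linked notes absorb new keywords. The class names, the `embed` callback, the stub embedder, and the 0.75 threshold are illustrative assumptions, not the A-MEM authors' implementation.

```python
# Sketch of an A-MEM-style memory: atomic notes + dynamic linking.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Note:
    text: str                                     # atomic, self-contained content
    keywords: list[str]                           # metadata for faster retrieval
    embedding: np.ndarray                         # vector used for dynamic linking
    links: set[int] = field(default_factory=set)  # ids of related notes


class Memory:
    def __init__(self, embed, link_threshold: float = 0.75):
        self.embed = embed                 # any text -> unit-vector function
        self.link_threshold = link_threshold
        self.notes: list[Note] = []

    def add(self, text: str, keywords: list[str]) -> int:
        new_id = len(self.notes)
        note = Note(text, list(keywords), self.embed(text))
        # Dynamic linking: connect the new note to sufficiently similar
        # existing notes instead of filing it under a fixed category.
        for i, old in enumerate(self.notes):
            if float(old.embedding @ note.embedding) >= self.link_threshold:
                note.links.add(i)
                old.links.add(new_id)
                # Memory evolution: the old note absorbs the new keywords.
                old.keywords = list(dict.fromkeys(old.keywords + keywords))
        self.notes.append(note)
        return new_id


def toy_embed(text: str) -> np.ndarray:
    # Stub embedder (hash-seeded unit vectors); a real system would call
    # a text-embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=32)
    return v / np.linalg.norm(v)


mem = Memory(toy_embed)
mem.add("Hinton won the Nobel Prize.", ["Hinton", "Nobel Prize"])
```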
LLMs generate possibilities; knowledge graphs remember what works
LLMs generate possibilities; knowledge graphs remember what works. Together they form the recursive memory and creative engine that lets AI systems truly evolve themselves. Combining neural components (large language models) with symbolic verification creates a framework for self-evolution that overcomes the limitations of either approach used on its own.

AlphaEvolve demonstrates that self-evolving systems face a fundamental tension between generating novel solutions and ensuring those solutions actually work. The paper shows how AlphaEvolve resolves this with a hybrid architecture in which:
- Neural components (LLMs) provide creative generation of code modifications by drawing on patterns learned from vast training data.
- Symbolic components (code execution) provide ground-truth verification through deterministic evaluation.
Without this combination, a system would either generate interesting but incorrect solutions (a neural-only approach) or be limited to small, safe modifications within known patterns (a symbolic-only approach).

The system can operate at multiple levels of abstraction depending on the problem: raw solution evolution, constructor-function evolution, search-algorithm evolution, or co-evolution of intermediate solutions and search algorithms. This flexibility follows directly from the neurosymbolic integration: neural networks excel at continuous, high-dimensional spaces and at recognizing patterns across abstraction levels, while symbolic systems provide precise representations of discrete structures and logical relationships. AlphaEvolve can therefore modify anything from a specific line of code to an entire algorithmic approach.

While AlphaEvolve currently uses an evolutionary database, a knowledge-graph structure could significantly enhance self-evolution by:
- capturing evolutionary relationships between solutions,
- identifying patterns of code changes that consistently lead to improvements,
- representing semantic connections between different solution approaches,
- supporting transfer learning across problem domains.

Automated, objective evaluation is the core foundation that enables self-evolution; the main limitation of AlphaEvolve is that it only handles problems for which an automated evaluator can be devised. The evaluation component provides the ground-truth feedback that guides evolution, allowing the system to differentiate successful from unsuccessful modifications, create selection pressure toward better-performing solutions, and avoid the hallucinated or non-functional solutions that might emerge from neural components alone. When applied to optimizing Gemini's training kernels, the system improved the very LLM technology that powers it. (A sketch of the generate-and-verify loop follows this entry.)
·linkedin.com·
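As a rough illustration of the generate-and-verify loop described above, the sketch below pairs an LLM-style proposal step with a deterministic evaluator and keeps only candidates that score well. `propose_variant` and `evaluate` are stand-in stubs (a real system would call a model and run tests or benchmarks); none of this is AlphaEvolve's actual code.

```python
# Sketch of a neural-proposal / symbolic-verification evolution loop.
import random


def evaluate(program: str) -> float:
    # Symbolic side: deterministic, automated fitness. A real evaluator
    # runs tests or benchmarks a kernel; this stub just rewards brevity.
    return -float(len(program))


def propose_variant(parent: str) -> str:
    # Neural side: stands in for an LLM asked to modify `parent`. Stubbed
    # as a random single-character deletion so the loop runs end to end.
    if not parent:
        return parent
    i = random.randrange(len(parent))
    return parent[:i] + parent[i + 1:]


def evolve(seed: str, generations: int = 100, pool_size: int = 20) -> str:
    pool = [(evaluate(seed), seed)]            # the "evolutionary database"
    for _ in range(generations):
        _, parent = random.choice(pool)        # sample a parent program
        child = propose_variant(parent)        # generate: creative mutation
        try:
            score = evaluate(child)            # verify: ground-truth check
        except Exception:
            continue                           # broken candidates never enter
        pool.append((score, child))
        pool = sorted(pool, reverse=True)[:pool_size]  # selection pressure
    return max(pool)[1]
```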
NodeRAG restructures knowledge into a heterograph: a rich, layered, musical graph where each node plays a different role
NodeRAG restructures knowledge into a heterograph: a rich, layered, musical graph where each node plays a different role. It's not just smarter retrieval; it's structured memory for AI agents.

Why NodeRAG? Most Retrieval-Augmented Generation (RAG) methods retrieve chunks of text, which is good enough until you need reasoning, precision, and multi-hop understanding. NodeRAG solves these problems in four steps (a toy code sketch follows this entry):

Step 1: Graph decomposition. NodeRAG decomposes raw text into smart building blocks:
- Semantic units (S): little event nuggets ("Hinton won the Nobel Prize.")
- Entities (N): key names or concepts ("Hinton", "Nobel Prize")
- Relationships (R): links between entities ("awarded to")
This is like teaching your AI to recognize the actors, actions, and scenes inside any document.

Step 2: Graph augmentation. Decomposition alone isn't enough, so NodeRAG augments the graph by identifying important hubs:
- Node importance: K-core and betweenness centrality find critical nodes; important entities get their attributes summarized into attribute nodes (A).
- Community detection: related nodes are grouped into communities and summarized into high-level insight nodes (H), and each community gets a "headline" overview node (O) for quick retrieval.
It's like adding context and intuition to raw facts.

Step 3: Graph enrichment. Knowledge without detail is brittle, so NodeRAG enriches the graph:
- Original text: full chunks are linked back into the graph as text nodes (T).
- Semantic edges: HNSW provides fast, meaningful similarity connections.
Only selected nodes are embedded (not everything), saving substantial storage, and dual search (exact plus vector) makes retrieval sharp.

Step 4: Graph searching. Dual search first finds strong entry points (by name or by meaning), then shallow Personalized PageRank (PPR) expands carefully from those entry points to nearby relevant nodes, with no wandering into irrelevant parts of the graph. Retrieval returns fine-grained semantic units, attributes, and high-level elements: everything you need, nothing you don't. It's like sending agents into a city who return not with everything they saw, but with exactly what you asked for, summarized and structured.

Results: compared to GraphRAG, LightRAG, NaiveRAG, and HyDE, NodeRAG wins across every major domain tested: tech, science, writing, recreation, and finance. NodeRAG isn't just a better graph; it's a new operating system for memory.
·linkedin.com·
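A toy illustration of Steps 1-4: typed nodes in a small heterograph and a shallow Personalized PageRank expansion from an entry point, using networkx. The node labels follow the post's S/N/R/A/H/O/T typology, but the graph contents, the alpha value, and the helper names are invented for illustration and are not NodeRAG's implementation.

```python
# Sketch of a NodeRAG-style heterograph with shallow PPR search.
import networkx as nx

G = nx.Graph()
G.add_node("s1", type="S", text="Hinton won the Nobel Prize.")  # semantic unit
G.add_node("n1", type="N", text="Hinton")                       # entity
G.add_node("n2", type="N", text="Nobel Prize")                  # entity
G.add_node("r1", type="R", text="awarded to")                   # relationship
G.add_node("t1", type="T", text="<full original text chunk>")   # text node
G.add_edges_from([("s1", "n1"), ("s1", "n2"),
                  ("r1", "n1"), ("r1", "n2"), ("t1", "s1")])


def shallow_ppr(graph, entry_points, top_k=3, alpha=0.5):
    # Expand outward from the entry points found by dual (exact + vector)
    # search; a low alpha restarts often, keeping the walk shallow.
    personalization = {n: 1.0 if n in entry_points else 0.0 for n in graph}
    scores = nx.pagerank(graph, alpha=alpha, personalization=personalization)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [n for n in ranked if n not in entry_points][:top_k]


# Pretend the query matched the "Hinton" entity node directly.
print(shallow_ppr(G, {"n1"}))
```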
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning

Why this matters: most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning), but there is a hidden variable: how you translate the graph into text for the model. The researchers found that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving one in five more problems correctly just by adjusting how you present the data.

What they built: KG-LLM-Bench is a new benchmark for testing how language models reason over knowledge graphs. It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest-path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet condition X?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic-web formats like RDF Turtle.

Key insights:
1. Format matters more than assumed: structured JSON and edge lists performed best overall, but results varied by task. For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat: replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, indicating that models rely on the provided context, not memorized knowledge.
3. Token efficiency: edge lists used ~2,600 tokens versus JSON-LD's ~13,500, and shorter formats free up context space for complex reasoning. But concise is not always better: structured formats improved accuracy on tasks requiring grouped data.
4. Models struggle with directionality: counting outgoing edges ("Which countries does France border?") is easier than counting incoming ones ("Which countries border France?"), likely due to formatting biases.

Practical takeaways:
- Optimize for your task: use JSON for aggregation, edge lists for centrality.
- Test your model: the best format depends on the LLM. Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: masking real names has minimal impact on performance, which is useful for sensitive data.
The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself. (A small formatting sketch follows this entry.)

Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs. Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan.
·linkedin.com·
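To see why format choice changes token cost and task fit, here is a small sketch rendering the same toy triples two ways: a compact edge list and entity-grouped JSON, the two formats the post reports performing best overall. The triples and helper functions are invented for illustration and are not the benchmark's code.

```python
# Sketch: two "textualizations" of the same knowledge-graph triples.
import json
from collections import defaultdict

triples = [
    ("France", "borders", "Spain"),
    ("France", "borders", "Belgium"),
    ("Spain", "capital", "Madrid"),
]


def to_edge_list(triples):
    # One line per fact: cheap in tokens, and repeated names make
    # central nodes easy to spot.
    return "\n".join(f"({s}, {p}, {o})" for s, p, o in triples)


def to_grouped_json(triples):
    # Entity-grouped: costlier in tokens, but aggregation questions
    # ("how many countries does France border?") read off directly.
    grouped = defaultdict(lambda: defaultdict(list))
    for s, p, o in triples:
        grouped[s][p].append(o)
    return json.dumps(grouped, indent=2)


print(to_edge_list(triples))
print(to_grouped_json(triples))
```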
What is really Graph RAG?
What is really Graph RAG? Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
·linkedin.com·
SimGRAG is a novel method for knowledge-graph-driven RAG that transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric
·linkedin.com·
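The bookmarked summary is a single sentence, so the sketch below is only one plausible reading of it: a query-derived graph pattern is scored against candidate subgraphs by summing embedding distances over position-aligned nodes, and the closest subgraphs win. The alignment scheme, the stub embedder, and all names here are assumptions, not SimGRAG's actual metric or code.

```python
# Sketch: aligning a query graph pattern with candidate subgraphs.
import numpy as np


def toy_embed(text: str) -> np.ndarray:
    # Stub embedder (hash-seeded unit vectors). It only fixes the
    # interface; alignment is meaningful only with a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)


def semantic_distance(pattern, candidate, embed=toy_embed):
    # Sum of cosine distances between position-aligned node texts;
    # a smaller total means the subgraph matches the pattern better.
    return sum(1.0 - float(embed(p) @ embed(c))
               for p, c in zip(pattern, candidate))


def retrieve(pattern, candidate_subgraphs, k=1):
    # Rank candidate subgraphs by distance to the query pattern.
    return sorted(candidate_subgraphs,
                  key=lambda sg: semantic_distance(pattern, sg))[:k]


pattern = ["person", "won", "award"]                # derived from a query
candidates = [["Hinton", "received", "Nobel Prize"],
              ["Paris", "capital of", "France"]]
print(retrieve(pattern, candidates))
```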