I'm happy to share the draft of the "Semantically Composable Architectures" mini-paper.
It is the culmination of approximately four years' work, which began with Coreless Architectures and has now evolved into something much bigger.
LLMs are impressive, but a real breakthrough will occur once we surpass the cognitive capabilities of a single human brain.
Enabling autonomous reverse engineering and transformation of large-scale systems, with minimal to no human involvement yet with results that remain understandable to humans who choose to inspect them, is a central pillar of making truly groundbreaking changes.
We hope the ideas we shared will be beneficial to humanity and advance our civilization further.
It is not final and will require some clarification and improvements, but the key concepts are present. Happy to hear your thoughts and feedback.
Some of these concepts underpin the design of the Product X system.
Part of the core team + external contributors:
Andrew Barsukov, Andrey Kolodnitsky, Sapta Girisa N, Keith E. Glendon, Gurpreet Sachdeva, Saurav Chandra, Mike Diachenko, Oleh Sinkevych
A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution
This novel memory system replaces rigid structures with adaptable, evolving, and interconnected knowledge networks, delivering 2x performance in complex reasoning tasks.
This is what I learned:

Why Traditional Memory Falls Short
Most AI agents today rely on simplistic storage and retrieval but break down when faced with complex, multi-step reasoning tasks.
▸ Common Limitations:
✔ Fixed schemas: Conventional memory systems require predefined structures that limit flexibility.
✔ Limited adaptability: When new information arises, old memories remain static and disconnected, reducing an agent's ability to build on past experiences.
✔ Ineffective long-term retention: AI agents often struggle to recall relevant past interactions, leading to redundant processing and inefficiencies.
A-MEM: Atomic Notes and Dynamic Linking
A-MEM organizes knowledge in a way that mirrors how humans create and refine ideas over time.
▸ How It Works:
✔ Atomic notes: Information is broken down into small, self-contained knowledge units, ensuring clarity and easy integration with future knowledge.
✔ Dynamic linking: Instead of relying on static categories, A-MEM automatically creates connections between related knowledge, forming a network of interrelated ideas.
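To make the atomic-note idea concrete, here is a minimal sketch of one possible shape for such a unit; the class and field names are illustrative assumptions, not A-MEM's actual API:

```python
from dataclasses import dataclass, field

# Illustrative sketch only; field names are assumptions, not A-MEM's API.
@dataclass
class AtomicNote:
    note_id: str
    content: str                                  # one self-contained knowledge unit
    keywords: list[str] = field(default_factory=list)
    summary: str = ""
    links: set[str] = field(default_factory=set)  # ids of related notes

def link_notes(a: AtomicNote, b: AtomicNote) -> None:
    """Dynamic linking: relate two notes directly instead of filing them in a fixed schema."""
    a.links.add(b.note_id)
    b.links.add(a.note_id)
```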
Proven Performance Advantage
A-MEM delivers measurable improvements.
▸ Empirical results demonstrate:
✔ Over 2x performance improvement in complex reasoning tasks, where AI must synthesize multiple pieces of information across different timeframes.
✔ Superior efficiency across top foundation models, including GPT, Llama, and Qwen, proving its versatility and broad applicability.
Inside A-MEM
▸ Note Construction:
✔ AI-generated structured notes capture essential details and contextual insights.
✔ Each memory is assigned metadata, including keywords and summaries, for faster retrieval.
▸ Link Generation:
✔ The system autonomously connects new memories to relevant past knowledge.
✔ Relationships between concepts emerge naturally, allowing the AI to recognize patterns over time.
▸ Memory Evolution:
✔ Older memories are continuously updated as new insights emerge.
✔ The system dynamically refines knowledge structures, mimicking the way human memory strengthens connections over time.
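Read together, the three stages suggest a simple write path. The sketch below is a guess at that loop, with a hash-seeded toy `embed` standing in for a real sentence-embedding model and the LLM re-summarization step left as a comment:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real sentence-embedding model (hash-seeded, unit-norm)."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def add_memory(store: dict, note_id: str, content: str, top_k: int = 3, thresh: float = 0.5) -> None:
    """Sketch of the A-MEM cycle: construct a note, link it, let older notes evolve."""
    vec = embed(content)
    # Note construction: keywords/summary metadata would be LLM-generated here.
    note = {"content": content, "vec": vec, "links": set()}
    # Link generation: connect to the most similar existing notes (cosine via unit vectors).
    nearest = sorted(store, key=lambda nid: float(vec @ store[nid]["vec"]), reverse=True)[:top_k]
    for nid in nearest:
        if float(vec @ store[nid]["vec"]) >= thresh:
            note["links"].add(nid)
            # Memory evolution: the older note gains a back-link and could be
            # re-summarized by an LLM now that new context exists.
            store[nid]["links"].add(note_id)
    store[note_id] = note

store: dict = {}
add_memory(store, "n1", "Hinton won the Nobel Prize.")
add_memory(store, "n2", "The Nobel Prize was awarded in Stockholm.")
```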
LLMs generate possibilities; knowledge graphs remember what works. Together, they forge the recursive memory and creative engine that enables AI systems to truly evolve themselves.
Combining neural components (like large language models) with symbolic verification creates a powerful framework for self-evolution that overcomes the limitations of either approach used on its own.
AlphaEvolve demonstrates that self-evolving systems face a fundamental tension between generating novel solutions and ensuring those solutions actually work.
The paper shows how AlphaEvolve addresses this through a hybrid architecture where:
Neural components (LLMs) provide creative generation of code modifications by drawing on patterns learned from vast training data
Symbolic components (code execution) provide ground truth verification through deterministic evaluation
Without this combination, a system would either generate interesting but incorrect solutions (neural-only approach) or be limited to small, safe modifications within known patterns (symbolic-only approach).
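A toy version of that generate-and-verify loop, with a random text mutation standing in for the LLM proposer and plain execution as the symbolic verifier (an illustration of the principle, not AlphaEvolve's actual machinery):

```python
import random

# Ground truth for a tiny symbolic task: recover f(x) = 2x + 1 as an expression.
CASES = [(x, 2 * x + 1) for x in range(10)]
TOKENS = ["x", "1", "2", "+", "*", "(", ")"]

def evaluate(expr: str) -> float:
    """Symbolic verification: execute the candidate and score it deterministically."""
    try:
        return -sum(abs(eval(expr, {"x": x}) - y) for x, y in CASES)
    except Exception:
        return float("-inf")   # non-functional programs are filtered out

def propose(expr: str) -> str:
    """Stand-in for the neural proposer: an LLM would edit the code here."""
    return expr + random.choice(["+", "*"]) + random.choice(TOKENS)

best = "x"
for _ in range(300):           # selection pressure toward better-scoring programs
    child = propose(best)
    if evaluate(child) > evaluate(best):
        best = child
print(best, evaluate(best))
```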
The system can operate at multiple levels of abstraction depending on the problem: raw solution evolution, constructor function evolution, search algorithm evolution, or co-evolution of intermediate solutions and search algorithms.
This capability emanates directly from the neurosymbolic integration, where:
Neural networks excel at working with continuous, high-dimensional spaces and recognizing patterns across abstraction levels
Symbolic systems provide precise representations of discrete structures and logical relationships
This enables AlphaEvolve to modify everything from specific lines of code to entire algorithmic approaches.
While AlphaEvolve currently uses an evolutionary database, a knowledge graph structure could significantly enhance self-evolution by:
Capturing evolutionary relationships between solutions
Identifying patterns of code changes that consistently lead to improvements
Representing semantic connections between different solution approaches
Supporting transfer learning across problem domains
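As a hypothetical sketch of that idea, a lineage graph over solutions could be stored with networkx; all node and edge attributes here are invented for illustration:

```python
import networkx as nx

# Hypothetical lineage graph: solutions as nodes, edits as typed edges.
G = nx.MultiDiGraph()
G.add_node("sol_1", score=0.62, domain="matmul_kernel")
G.add_node("sol_2", score=0.71, domain="matmul_kernel")
G.add_edge("sol_1", "sol_2", relation="mutated_into", edit="tile_size_16_to_32")

# Query: which edits consistently led to improvements?
winning_edits = [
    d["edit"]
    for u, v, d in G.edges(data=True)
    if G.nodes[v]["score"] > G.nodes[u]["score"]
]
```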
Automated, objective evaluation is the core foundation enabling self-evolution:
The main limitation of AlphaEvolve is that it can only handle problems for which an automated evaluator can be devised.
This evaluation component provides the "ground truth" feedback that guides evolution, allowing the system to:
Differentiate between successful and unsuccessful modifications
Create selection pressure toward better-performing solutions
Avoid hallucinations or non-functional solutions that might emerge from neural components alone.
When applied to optimize Gemini's training kernels, the system essentially improved the very LLM technology that powers it.
NodeRAG restructures knowledge into a heterograph: a rich, layered, musical graph where each node plays a different role.
It's not just smarter retrieval. It's structured memory for AI agents.
Why NodeRAG?
Most Retrieval-Augmented Generation (RAG) methods retrieve chunks of text. Good enough, until you need reasoning, precision, and multi-hop understanding.
Here is how NodeRAG solves these problems:
🔹 Step 1: Graph Decomposition
NodeRAG begins by decomposing raw text into smart building blocks:
▸ Semantic Units (S): little event nuggets ("Hinton won the Nobel Prize.")
▸ Entities (N): key names or concepts ("Hinton", "Nobel Prize")
▸ Relationships (R): links between entities ("awarded to")
→ This is like teaching your AI to recognize the actors, actions, and scenes inside any document.
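In code, the decomposition might produce typed nodes like the following (a sketch; the letter codes follow the post, the class shape is assumed):

```python
from dataclasses import dataclass

# Sketch of NodeRAG's node types after decomposition (class shape assumed).
@dataclass
class Node:
    kind: str   # "S" semantic unit, "N" entity, "R" relationship
    text: str

# "Hinton won the Nobel Prize." decomposes into four typed nodes:
s  = Node("S", "Hinton won the Nobel Prize.")
n1 = Node("N", "Hinton")
n2 = Node("N", "Nobel Prize")
r  = Node("R", "awarded to")

# Edges wire the relationship to its entities, and the semantic unit to all three.
edges = [(r, n1), (r, n2), (s, n1), (s, n2), (s, r)]
```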
🔹 Step 2: Graph Augmentation
Decomposition alone isn't enough. NodeRAG augments the graph by identifying important hubs:
▸ Node Importance: using K-Core and Betweenness Centrality to find critical nodes
→ Important entities get special attention: their attributes are summarized into new attribute nodes (A).
▸ Community Detection: grouping related nodes into communities and summarizing them into high-level insight nodes (H).
→ Each community gets a "headline" overview node (O) for quick retrieval.
It's like adding context and intuition to raw facts.
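The augmentation primitives are standard graph analysis; a rough sketch with networkx, using a built-in toy graph in place of a decomposed document:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()   # toy stand-in for a decomposed document graph

# Node importance: hubs via k-core membership and betweenness centrality.
core_nodes = set(nx.k_core(G, k=4).nodes())
betweenness = nx.betweenness_centrality(G)
important = core_nodes | {n for n, c in betweenness.items() if c > 0.1}

# Community detection: each community would then be summarized into a
# high-level insight node (H) topped by an overview node (O).
communities = greedy_modularity_communities(G)
```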
🔹 Step 3: Graph Enrichment
Knowledge without detail is brittle, so NodeRAG enriches the graph:
▸ Original Text: full chunks are linked back into the graph (Text nodes, T)
▸ Semantic Edges: HNSW provides fast, meaningful similarity connections
→ Only key nodes are embedded (not everything!), saving huge storage space.
→ Dual search (exact + vector) makes retrieval laser-sharp.
It's like turning a 2D map into a 3D living world.
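For the semantic edges, an HNSW index such as hnswlib can supply approximate nearest neighbors; the dimensions and counts below are placeholders:

```python
import hnswlib
import numpy as np

dim = 384                                   # embedding width (illustrative)
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=10_000, ef_construction=200, M=16)

# Only the "smart" nodes (semantic units S, attributes A, high-level H/O)
# are embedded; raw text nodes (T) stay linked but unembedded.
vecs = np.random.rand(100, dim).astype(np.float32)   # stand-in embeddings
index.add_items(vecs, np.arange(100))

# Nearest neighbors become semantic edges in the heterograph.
labels, dists = index.knn_query(vecs[:1], k=5)
```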
🔹 Step 4: Graph Searching
Now comes the magic.
▸ Dual Search: first find strong entry points (by name or by meaning)
▸ Shallow Personalized PageRank (PPR): expand carefully from entry points to nearby relevant nodes.
→ No wandering into irrelevant parts of the graph; the search is surgical.
→ Retrieval includes fine-grained semantic units, attributes, and high-level elements: everything you need, nothing you don't.
It's like sending agents into a city: they return not with everything they saw, but with exactly what you asked for, summarized and structured.
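networkx's pagerank accepts a personalization vector, so the shallow-PPR expansion can be sketched directly; the entry points and low damping factor are illustrative choices:

```python
import networkx as nx

G = nx.les_miserables_graph()        # toy stand-in for the enriched heterograph

# Dual search would select entry points by exact match plus vector similarity;
# two character nodes are hard-coded here for illustration.
entry_points = {"Valjean": 0.5, "Javert": 0.5}

# Shallow PPR: a low damping factor keeps probability mass near the entries.
ppr = nx.pagerank(G, alpha=0.5, personalization=entry_points)
retrieved = sorted(ppr, key=ppr.get, reverse=True)[:10]   # surgical retrieval set
```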
Results: NodeRAG's Performance
Compared to GraphRAG, LightRAG, NaiveRAG, and HyDE, NodeRAG wins across every major domain: Tech, Science, Writing, Recreation, and Finance.
NodeRAG isn't just a better graph. NodeRAG is a new operating system for memory.
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: how you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving 1 in 5 more problems correctly just by adjusting how you present data.
What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs.
It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet condition X?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.
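To see why the formats diverge in token cost and structure, here are two of the textualizations rendered for a two-triple toy graph (my own minimal reconstruction, not the benchmark's exact serializers):

```python
import json

triples = [
    ("France", "borders", "Spain"),
    ("France", "capital", "Paris"),
]

# Edge list: one terse line per fact (the cheapest format; edge lists ran
# ~2,600 tokens on the benchmark graphs).
edge_list = "\n".join(f"{s} {p} {o}" for s, p, o in triples)

# Structured JSON: verbose (JSON-LD ran ~13,500 tokens) but groups facts
# by entity, which is exactly what aggregation questions need.
grouped: dict = {}
for s, p, o in triples:
    grouped.setdefault(s, {}).setdefault(p, []).append(o)
as_json = json.dumps(grouped, indent=2)
```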
Key Insights
1. Format matters more than assumed:
   - Structured JSON and edge lists performed best overall, but results varied by task.
   - For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat:
   Replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, proving models rely on context, not memorized knowledge.
3. Token efficiency:
   - Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
   - But concise ≠ always better: structured formats improved accuracy for tasks requiring grouped data.
4. Models struggle with directionality:
   Counting outgoing edges (e.g., "Which countries does France border?") is easier than counting incoming ones ("Which countries border France?"), likely due to formatting biases.
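The directionality gap is easy to picture on a toy edge list: the outgoing query matches lines that start with the subject, while the incoming query must spot it in object position:

```python
# Toy edge list; France-first lines are easy, France-as-object lines are not.
edges = [("France", "borders", "Spain"), ("Belgium", "borders", "France")]

outgoing = [o for s, p, o in edges if s == "France" and p == "borders"]  # France borders...
incoming = [s for s, p, o in edges if o == "France" and p == "borders"]  # ...borders France
```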
Practical Takeaways
- Optimize for your task: use JSON for aggregation, edge lists for centrality.
- Test your model: the best format depends on the LLM. Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: masking real names minimally impacts performance, which is useful for sensitive data.
The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.
Paper: [KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs]
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
Terminology-Augmented Generation (TAG)? Recently some fellow terminologists have proposed the new term "Terminology-Augmented Generation (TAG)" to refer to…
What really is Graph RAG? Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
SimGRAG is a novel method for knowledge-graph-driven RAG: it transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric.
ReLiK: Retrieve and LinK, Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget
✨ Attention Information Extraction Enthusiasts ✨ I am excited to announce the release of our latest paper and model family, ReLiK, a cutting-edge…
Unlocking the Secrets of Scientific Discovery with AI and Knowledge Graphs
Have you ever wondered how AI could revolutionize the way we conduct scientific…