how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
๐™๐™๐™ค๐™ช๐™œ๐™๐™ฉ ๐™›๐™ค๐™ง ๐™ฉ๐™๐™š ๐™™๐™–๐™ฎ: I've been mulling over how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates. In this way, the LLM can still be used for assessment and sensory feedback, but it augments the graph, not the other way around. OWL and SHACL serve different roles. SHACL is not just a preprocessing validator; it can play an active role in constraining, guiding, or triggering decisions, especially when integrated into AI pipelines. However, OWL is typically more central to inferencing and reasoning tasks. SHACL can actively participate in decision-making, especially when decisions require data integrity, constraint enforcement, or trigger-based logic. In complex agents, OWL provides the inferencing engine, while SHACL acts as the constraint gatekeeper and occasionally contributes to rule-based decision-making. For example, an AI agent processes RDF data describing an applicant's skills, degree, and experience. SHACL validates the data's structure, ensuring required fields are present and correctly formatted. OWL reasoning infers that the applicant is qualified for a technical role and matches the profile of a backend developer. SHACL is then used again to check policy compliance. With all checks passed, the applicant is shortlisted, and a follow-up email is triggered. In AI agent decision-making, OWL and SHACL often work together in complementary ways. SHACL is commonly used as a preprocessing step to validate incoming RDF data. If the data fails validation, it's flagged or excluded, ensuring only clean, structurally sound data reaches the OWL reasoner. In this role, SHACL acts as a gatekeeper. They can also operate in parallel or in an interleaved manner within a pipeline. As decisions evolve, SHACL shapes may be checked mid-process. Some AI agents even use SHACL as a rule engineโ€”to trigger alerts, detect actionable patterns, or constrain reasoning pathsโ€”while OWL continues to handle more complex semantic inferences, such as class hierarchies or property logic. Finally, SHACL can augment decision-making by confirming whether OWL-inferred actions comply with specific constraints. OWL may infer that โ€œA is a type of B, so do X,โ€ and SHACL then determines whether doing X adheres to a policy or requirement. Because SHACL supports closed-world assumptions (which OWL does not), it plays a valuable role in enforcing policies or compliance rules during decision execution. Illustrated:
·linkedin.com·
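To make the post's validate → reason → enforce loop concrete, here is a minimal sketch using rdflib, pySHACL, and owlrl. The applicant data, shapes, and three-year policy are invented for illustration, and owlrl's RDFS closure stands in for a fuller OWL reasoner:

```python
# Sketch of the pipeline above: SHACL gate -> RDFS/OWL inference -> SHACL policy check.
# Requires: pip install rdflib pyshacl owlrl. All ex: names are illustrative assumptions.
from rdflib import Graph
from pyshacl import validate
import owlrl

data = Graph().parse(format="turtle", data="""
@prefix ex: <http://example.org/> .
ex:alice a ex:Applicant ;
    ex:hasDegree ex:csDegree ;
    ex:yearsExperience 5 .
ex:csDegree a ex:ComputerScienceDegree .
""")

# Step 1: SHACL as gatekeeper -- structural validation of incoming RDF.
structure_shapes = Graph().parse(format="turtle", data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex: <http://example.org/> .
ex:ApplicantShape a sh:NodeShape ;
    sh:targetClass ex:Applicant ;
    sh:property [ sh:path ex:hasDegree ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:yearsExperience ; sh:minCount 1 ; sh:datatype xsd:integer ] .
""")
ok, _, report = validate(data, shacl_graph=structure_shapes)
if not ok:
    raise SystemExit(f"Rejected at the gate:\n{report}")

# Step 2: ontology + inference. owlrl's rule-based closure infers that a
# computer science degree is a technical degree (subclass propagation).
data.parse(format="turtle", data="""
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex: <http://example.org/> .
ex:ComputerScienceDegree rdfs:subClassOf ex:TechnicalDegree .
""")
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(data)

# Step 3: SHACL again, now as closed-world policy check over inferred facts.
policy_shapes = Graph().parse(format="turtle", data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:PolicyShape a sh:NodeShape ;
    sh:targetClass ex:Applicant ;
    sh:property [ sh:path ex:hasDegree ; sh:class ex:TechnicalDegree ] ;
    sh:property [ sh:path ex:yearsExperience ; sh:minInclusive 3 ] .
""")
ok, _, report = validate(data, shacl_graph=policy_shapes)
print("Shortlist and send follow-up email" if ok else f"Policy violation:\n{report}")
```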
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
Universal tool to visualize any Claude user's memory.json in interactive graphs. Transform your Claude Memory MCP data into interactive visualizations to see how your AI assistant's knowledge connects and evolves over time. Enterprise teams using Claude lack visibility into how their AI assistant accumulates and organizes institutional knowledge. Claude Memory Viz provides zero-configuration visualization that automatically finds memory files and displays 72 entities with 93 relationships in real-time force-directed layouts. Teams can filter by entity type, search across all data, and explore detailed connections through rich tooltips. The technical implementation supports Claude's standard NDJSON memory format, automatically detecting and color-coding entity types, from personality profiles to technical tools. Node size reflects connection count, while adjustable physics parameters enable optimal spacing for large knowledge graphs. Built with Cytoscape.js for performance, and guided by the philosophy "solve it once and for all," the tool works for any Claude user with zero configuration: the visualizer automatically searches common memory file locations, provides a demo-data fallback, and offers clear guidance when files aren't found. Integration requires just a git clone and one command. This matters because AI memory has been invisible to users, creating trust and accountability gaps in enterprise AI deployment. When teams can visualize how their AI assistant organizes knowledge, they gain insight into decision-making patterns and can optimize their AI collaboration strategies. https://lnkd.in/e__RQh_q
·linkedin.com·
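The viewer's core statistic (node size proportional to connection count) takes only a few lines before any rendering. A sketch: the record fields (`type`, `name`, `from`, `to`) follow the commonly seen MCP memory server NDJSON format, and the file path is an assumption; the real tool searches several locations.

```python
# Hedged sketch: read a Claude Memory MCP memory.json (NDJSON) and compute each
# entity's degree, which the viewer maps to node size. Field names and the file
# location are assumptions, not guaranteed by the tool.
import json
from collections import Counter
from pathlib import Path

path = Path.home() / "memory.json"  # assumption; adjust to your memory file location

entities, relations = [], []
with open(path, encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("type") == "entity":
            entities.append(rec)
        elif rec.get("type") == "relation":
            relations.append(rec)

degree = Counter()
for rel in relations:
    degree[rel["from"]] += 1  # each relation adds a connection at both ends
    degree[rel["to"]] += 1

print(f"{len(entities)} entities, {len(relations)} relations")
for name, d in degree.most_common(10):
    print(f"{name}: {d} connections")
```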
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
๐๐จ๐จ๐ค ๐ฉ๐ซ๐จ๐ฆ๐จ๐ญ๐ข๐จ๐ง ๐›๐ž๐œ๐š๐ฎ๐ฌ๐ž ๐ญ๐ก๐ข๐ฌ ๐จ๐ง๐ž ๐ข๐ฌ ๐ฐ๐จ๐ซ๐ญ๐ก ๐ข๐ญ.. ๐€๐ ๐ž๐ง๐ญ๐ข๐œ ๐€๐ˆ ๐š๐ญ ๐ข๐ญ๐ฌ ๐›๐ž๐ฌ๐ญ.. This masterpiece was published by Salvatore Raieli and Gabriele Iuculano, and it is available for orders from today, and it's already a ๐๐ž๐ฌ๐ญ๐ฌ๐ž๐ฅ๐ฅ๐ž๐ซ! While many resources focus on LLMs or basic agentic workflows, what makes this book stand out is its deep dive into grounding LLMs with real-world data and action through the powerful combination of ๐˜™๐˜ฆ๐˜ต๐˜ณ๐˜ช๐˜ฆ๐˜ท๐˜ข๐˜ญ-๐˜ˆ๐˜ถ๐˜จ๐˜ฎ๐˜ฆ๐˜ฏ๐˜ต๐˜ฆ๐˜ฅ ๐˜Ž๐˜ฆ๐˜ฏ๐˜ฆ๐˜ณ๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ (๐˜™๐˜ˆ๐˜Ž) ๐˜ข๐˜ฏ๐˜ฅ ๐˜’๐˜ฏ๐˜ฐ๐˜ธ๐˜ญ๐˜ฆ๐˜ฅ๐˜จ๐˜ฆ ๐˜Ž๐˜ณ๐˜ข๐˜ฑ๐˜ฉ๐˜ด. This isn't just about building Agents; it's about building AI that reasons, retrieves accurate information, and acts autonomously by leveraging structured knowledge alongside advanced LLMs. The book offers a practical roadmap, packed with concrete Python examples and real-world case studies, guiding you from concept to deployment of intelligent, robust, and hallucination-minimized AI solutions, even orchestrating multi-agent systems. Order your copy here - https://packt.link/RpzGM #AI #LLMs #KnowledgeGraphs #AIAgents #RAG #GenerativeAI #MachineLearning
·linkedin.com·
Foundation Models Know Enough
LLMs already contain overlapping world models; you just have to ask them right. Ontologists reply to an LLM output, "That's not a real ontology; it's not a formal conceptualization." But that's just the No True Scotsman fallacy dressed up in OWL. Boring. Not growth-oriented. Look forward, angel. A foundation model is a compression of human knowledge. The real problem isn't that we "lack a conceptualization." The real problem with an FM is that it contains too many. FMs contain conceptualizations, plural. Messy? Sure. But usable. At Stardog, we're turning this latent structure into real ontologies using symbolic knowledge distillation: prompt orchestration → structure extraction → formal encoding. OWL, SHACL, and friends. Shake till mixed. Rinse. Repeat. Secret sauce simmered and reduced. This isn't theoretical-hard; we avoid that. It's merely engineering-hard; we lean into that! But the payoff means bootstrapping rich, new ontologies at scale: faster, cheaper, with lineage. It's the intersection of FM latent space, formal ontology, and user intent expressed via competency questions (CQs). We call it the Symbolic Latent Layer (SLL). Cute, eh? The future of enterprise AI isn't just documents. It's distilling structured symbolic knowledge from LLMs and plugging it into agents, workflows, and reasoning engines. You don't need a priesthood to get a formal ontology anymore. You need a good prompt, a smarter pipeline, and the right EKG platform. There's a lot more to say about this, so I said it at Stardog Labs: https://lnkd.in/eY5Sibed
·linkedin.com·
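Stardog's actual pipeline is proprietary, so the sketch below only illustrates the shape of the loop the post names (prompt orchestration → structure extraction → formal encoding). The llm() stub and its JSON contract are invented; only the rdflib calls are real APIs.

```python
# Illustrative-only sketch of symbolic knowledge distillation: prompt an FM,
# extract structure, encode it formally as OWL. Not Stardog's implementation.
import json
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

def llm(prompt: str) -> str:
    """Stand-in for a foundation-model call (the 'prompt orchestration' step)."""
    return json.dumps({"classes": [
        {"name": "Pump", "parent": "Equipment"},
        {"name": "CentrifugalPump", "parent": "Pump"},
    ]})

# 1. Prompt orchestration: a competency question drives the extraction prompt.
cq = "What kinds of equipment does a maintenance agent reason about?"
raw = llm(f"List domain classes answering: {cq}. Reply as JSON.")

# 2. Structure extraction: parse FM output into candidate axioms.
candidates = json.loads(raw)["classes"]

# 3. Formal encoding: emit OWL with rdflib (lineage, SHACL, and review omitted).
EX = Namespace("http://example.org/ontology#")
g = Graph()
g.bind("ex", EX)
for c in candidates:
    g.add((EX[c["name"]], RDF.type, OWL.Class))
    g.add((EX[c["name"]], RDFS.subClassOf, EX[c["parent"]]))
    g.add((EX[c["name"]], RDFS.label, Literal(c["name"])))

print(g.serialize(format="turtle"))
```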
Graph is the new star schema. Change my mind.
Graph is the new star schema. Change my mind. Why? Your agents can't be autonomous unless your structured data is a graph. It is really very simple.
1. To act autonomously, an agent must reason across structured data. Every autonomous decision, human or agent, hinges on a judgment: have I done enough? "Enough" boils down to driving the probability of success over some threshold.
2. You can't just point the agent at your structured data store. Context windows are too small. Schema sprawl is too real. If you think it works, you probably haven't tried it.
3. The agent must first retrieve, with RAG, the right tables, columns, and snippets. Decision-making is a retrieval problem before it's a reasoning problem.
4. Standard RAG breaks on enterprise metadata. The corpus is too entity-rich. Semantic similarity already struggles on enterprise help articles; it won't perform on column descriptions.
5. To make structured RAG work, you need a graph. Just like unstructured RAG needed links between articles, structured RAG needs links between tables, fields, and, most importantly, meaning.
Yes, graphs are painful. But so was deep learning, until the return was undeniable. Agents need reasoning over structured data. That makes graphs non-optional. The rest is just engineering. Let's stop modeling for reporting and start modeling for autonomy.
·linkedin.com·
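Point 5 is concrete enough to sketch: link tables, columns, and business meaning in a small metadata graph, then let retrieval walk the links instead of relying on similarity over column descriptions alone. networkx stands in for a real graph store, and the toy schema is invented.

```python
# Minimal sketch of "structured RAG needs links between tables, fields, and meaning".
# networkx stands in for a production graph store; the schema is a toy assumption.
import networkx as nx

g = nx.Graph()
g.add_edge("table:orders", "column:orders.customer_id", kind="has_column")
g.add_edge("table:customers", "column:customers.id", kind="has_column")
g.add_edge("column:orders.customer_id", "column:customers.id", kind="foreign_key")
g.add_edge("column:orders.customer_id", "concept:Customer", kind="means")
g.add_edge("table:customers", "concept:Customer", kind="means")

def retrieve_context(seed: str, hops: int = 2) -> list[str]:
    """Everything within `hops` links of a seed node: the connected context an
    agent needs before it can judge whether it has 'done enough'."""
    return sorted(nx.single_source_shortest_path_length(g, seed, cutoff=hops))

# An agent asked about customers starts from the business concept, not a table guess.
print(retrieve_context("concept:Customer"))
```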
How can you turn business questions into production-ready agentic knowledge graphs?
โ“ How can you turn business questions into production-ready agentic knowledge graphs? Join Prashanth Rao and Dennis Irorere at the Agentic AI Summit to find out. Prashanth is an AI Engineer and DevRel lead at Kรนzu Inc.โ€”the open-source graph database startupโ€”where he blends NLP, ML, and data engineering to power agentic workflows. Dennis is a Data Engineer at Tripadvisorโ€™s Viator Marketing Technology team and Director of Innovation at GraphGeeks, driving scalable, AI-driven graph solutions for customer growth. In โ€œAgentic Workflows for Graph RAG: Building Production-Ready Knowledge Graphs,โ€ theyโ€™ll guide you through three hands-on lessons: ๐Ÿ”น From Business Question to Graph Schema โ€“ Modeling your domain for downstream agents and LLMs, using live data sources like AskNews. ๐Ÿ”น From Unstructured Data to Agent-Ready Graphs with BAML โ€“ Writing declarative pipelines that reliably extract entities and relationships at scale. ๐Ÿ”น Agentic Graph RAG in Action โ€“ Completing the loop: translating NL queries into Cypher, retrieving graph data, and synthesizing responsesโ€”with fallback strategies when matches are missing. If youโ€™re building internal tools or public-facing AI agents that rely on knowledge graphs, this workshop is for you. ๐Ÿ—“๏ธ Learn more & register free: https://hubs.li/Q03qHnpQ0 #AgenticAI #GraphRAG #KnowledgeGraphs #AgentWorkflows #AIEngineering #ODSC #Kuzu #Tripadvisor
·linkedin.com·
The Developer's Guide to GraphRAG
Find out how to combine a knowledge graph with RAG to build GraphRAG and produce more complete GenAI outputs.
You've built a RAG system and grounded it in your own data. Then you ask a complex question that needs to draw from multiple sources, and your heart sinks when the answers you get are vague or plain wrong. How could this happen? Traditional vector-only RAG grounds its outputs in whatever chunks look similar to the words in your prompt. Because it pulls isolated chunks from different documents and data structures, it misses valuable context and the bigger, more connected picture. Your AI needs a mental model of your data with all its context and nuance. A knowledge graph provides just that by mapping your data as connected entities and relationships. Pair it with RAG to create a GraphRAG architecture that feeds your LLM information about dependencies, sequences, hierarchies, and deeper meaning. Check out The Developer's Guide to GraphRAG. You'll learn how to:
- Prepare a knowledge graph for GraphRAG
- Combine a knowledge graph with native vector search
- Implement three GraphRAG retrieval patterns
·neo4j.com·
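One common GraphRAG retrieval pattern, vector search for entry points followed by graph expansion, can be sketched against Neo4j 5. The index name, labels, and relationship types below are assumptions about a hypothetical schema, not something the guide prescribes.

```python
# Hedged sketch of a GraphRAG retrieval pattern: vector search finds entry chunks,
# graph expansion supplies connected context. Assumes a Neo4j 5 instance with a
# vector index named 'chunkEmbeddings' and (:Chunk)-[:MENTIONS]->(:Entity) data;
# both names are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
CALL db.index.vector.queryNodes('chunkEmbeddings', 5, $embedding)
YIELD node AS chunk, score
MATCH (chunk)-[:MENTIONS]->(e:Entity)-[r]-(neighbor)
RETURN chunk.text AS text, e.name AS entity,
       type(r) AS rel, neighbor.name AS related, score
ORDER BY score DESC
"""

def graphrag_retrieve(query_embedding: list[float]) -> list[dict]:
    records, _, _ = driver.execute_query(CYPHER, embedding=query_embedding)
    # Chunks ground the answer; entity neighborhoods supply the dependencies
    # and hierarchies that vector-only retrieval misses.
    return [r.data() for r in records]
```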
AI Engineer World's Fair 2025: GraphRAG Track Spotlight
AI Engineer World's Fair 2025: GraphRAG Track spotlight! So grateful to have hosted the GraphRAG Track at the Fair. The sessions were great, highlighting the depth and breadth of graph thinking for AI. Shoutouts to:
- Mitesh Patel: "HybridRAG," a fusion of graph and vector retrieval designed to master complex data interpretation and specialized terminology for question answering.
- Chin Keong Lam: "Wisdom Discovery at Scale," using Knowledge Augmented Generation (KAG) in a multi-agent system with n8n.
- Sam Julien: "When Vectors Break Down," carefully explaining how a graph-based RAG architecture achieved a whopping 86.31% accuracy on dense enterprise knowledge.
- Daniel Chalef: "Stop Using RAG as Memory," exploring temporally aware knowledge graphs, built by the open-source Graphiti framework, that provide precise, context-rich memory for agents.
- Ola Mabadeje: "Witness the Power of Multi-Agent AI & Network Knowledge Graphs," showing dramatic improvements in ticket-resolution efficiency and overall execution quality in network operations.
- Thomas Smoker: "Beyond Documents," casually mentioning scraping the entire internet to distill a knowledge graph for legal agents.
- Mark Bain, for hosting an excellent Agentic Memory with Knowledge Graphs lunch-and-learn, with expansive thoughts and demos from Vasilije Markovic, Daniel Chalef, and Alexander Gilmore.
Also, of course, huge congrats to Shawn swyx W and Benjamin Dunphy on an excellent conference. #graphrag Neo4j AI Engineer
·linkedin.com·
Want to Fix LLM Hallucination? Neurosymbolic Alone Wonโ€™t Cut It
Want to fix LLM hallucination? Neurosymbolic alone won't cut it. The Conversation's new piece makes a clear case for neurosymbolic AI, integrating symbolic logic with statistical learning, as the long-term fix for LLM hallucinations. It's a timely and necessary argument: "No matter how large a language model gets, it can't escape its fundamental lack of grounding in rules, logic, or real-world structure. Hallucination isn't a bug, it's the default." But what's crucial, and often glossed over, is that symbolic logic alone isn't enough. The real leap comes from adding formal ontologies and semantic constraints that make meaning machine-computable. OWL, the Shapes Constraint Language (SHACL), and frameworks like BFO, the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE), the Suggested Upper Merged Ontology (SUMO), and the Common Core Ontologies (CCO) don't just "represent rules"; they define what exists, what can relate, and under what conditions inference is valid. That's the difference between decorating a knowledge graph and engineering one that can detect, explain, and prevent hallucinations in practice. I'd go further:
- Most enterprise LLM hallucinations are just semantic errors: mislabeling, misattribution, or class confusion that only formal ontologies can prevent.
- Neurosymbolic systems only deliver if their symbolic half is grounded in ontological reality, not just handcrafted rules or taxonomies.
The upshot: we need to move beyond mere integration of symbols and neurons. We need semantic scaffolding, ontologies as infrastructure, to ensure AI isn't just fluent but actually right. Curious if others are layering formal ontologies (BFO, DOLCE, SUMO) into their AI stacks yet? Or are we still hoping that more compute and prompt engineering will do the trick? #NeuroSymbolicAI #SemanticAI #Ontology #LLMs #AIHallucination #KnowledgeGraphs #AITrust #AIReasoning
·linkedin.com·
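The "semantic errors" claim is easy to make concrete: a closed-world SHACL check over class constraints flags a mislabeled LLM extraction that a vector store would happily ingest. The shape and the bad triple below are invented for illustration (pip install rdflib pyshacl).

```python
# Sketch: catching a class-confusion hallucination with SHACL. An LLM has
# "extracted" that a company was born in a city -- a property reserved for persons
# in this invented mini-ontology.
from rdflib import Graph
from pyshacl import validate

extracted = Graph().parse(format="turtle", data="""
@prefix ex: <http://example.org/> .
ex:AcmeCorp a ex:Organization ;
    ex:bornIn ex:Berlin .          # semantic error: organizations are not born
""")

shapes = Graph().parse(format="turtle", data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:BornInShape a sh:NodeShape ;
    sh:targetSubjectsOf ex:bornIn ;
    sh:class ex:Person .           # only persons may carry ex:bornIn
""")

conforms, _, report = validate(extracted, shacl_graph=shapes)
print(conforms)  # False: the hallucinated triple is rejected before it reaches the KG
print(report)
```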
Introducing FACT: Fast Augmented Context Tools (3.2x faster, 90% cost reduction vs RAG)
Introducing FACT: Fast Augmented Context Tools (3.2x faster, 90% cost reduction vs RAG). RAG had its run, but it's not built for agentic systems. Vectors are fuzzy, slow, and blind to context. They work fine for static data, but once you enter recursive, real-time workflows where agents need to reason, act, and reflect, RAG collapses under its own ambiguity. That's why I built FACT: Fast Augmented Context Tools.
Traditional approach: User Query → Database → Processing → Response (2-5 seconds)
FACT approach: User Query → Intelligent Cache → [if miss] → Optimized Processing → Response (50 ms)
It replaces vector search in RAG pipelines with a combination of intelligent prompt caching and deterministic tool execution via MCP. Instead of guessing which chunk is relevant, FACT explicitly retrieves structured data (SQL queries, live APIs, internal tools), then intelligently caches the result if it's useful downstream. The prompt caching isn't just basic storage: it builds on the prompt cache from Anthropic and other LLM providers, tuned for feedback-driven loops. Static elements get reused, transient ones expire, and the system adapts in real time. Some things you always want cached: schemas, domain prompts. Others, like live data, need freshness. Traditional RAG is particularly bad at this; ask anyone forced to frequently update vector DBs. I'm also using Arcade.dev to handle secure, scalable execution across both local and cloud environments, giving FACT hybrid intelligence for complex pipelines and automatic tool selection. If you're building serious agents, skip the embeddings. RAG is a workaround; FACT is a foundation. It's cheaper, faster, and designed for how agents actually work: with tools, memory, and intent. To get started, point your favorite coding agent at: https://lnkd.in/gek_akem
·linkedin.com·
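FACT's internals live in the linked repo, so the sketch below only mirrors the control flow in the post's diagram: cache hit, or deterministic tool execution on a miss, with per-category freshness. All names and TTL numbers are illustrative.

```python
# Sketch of the FACT-style control flow: intelligent cache first, deterministic
# tool execution on a miss. Per-category TTLs capture "schemas stay cached,
# live data needs freshness". Everything here is illustrative, not FACT's code.
import time

TTL = {"schema": 24 * 3600, "domain_prompt": 24 * 3600, "live_data": 30}  # seconds
_cache: dict[str, tuple[float, object]] = {}

def run_tool(query: str) -> object:
    """Stand-in for deterministic execution via MCP: SQL, a live API, an internal tool."""
    return f"result of {query!r}"

def answer(query: str, category: str) -> object:
    now = time.monotonic()
    hit = _cache.get(query)
    if hit and now - hit[0] < TTL[category]:
        return hit[1]                      # cache-hit path: no model, no vector store
    result = run_tool(query)               # miss: explicit, deterministic retrieval
    _cache[query] = (now, result)
    return result

print(answer("SELECT * FROM orders LIMIT 1", "live_data"))
print(answer("SELECT * FROM orders LIMIT 1", "live_data"))  # served from cache within 30 s
```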
A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution
๐Ÿฏ๐Ÿ‡ A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution! This Novel Memory fixes rigid structures with adaptable, evolving, and interconnected knowledge networks, delivering 2x performance in complex reasoning tasks. ๐—ง๐—ต๐—ถ๐˜€ ๐—ถ๐˜€ ๐˜„๐—ต๐—ฎ๐˜ ๐—œ ๐—น๐—ฒ๐—ฎ๐—ฟ๐—ป๐—ฒ๐—ฑ: ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹ ๐—ช๐—ต๐˜† ๐—ง๐—ฟ๐—ฎ๐—ฑ๐—ถ๐˜๐—ถ๐—ผ๐—ป๐—ฎ๐—น ๐— ๐—ฒ๐—บ๐—ผ๐—ฟ๐˜† ๐—™๐—ฎ๐—น๐—น ๐—ฆ๐—ต๐—ผ๐—ฟ๐˜ Most AI agents today rely on simplistic storage and retrieval but break down when faced with complex, multi-step reasoning tasks. โœธ Common Limitations: โ˜† Fixed schemas: Conventional memory systems require predefined structures that limit flexibility. โ˜† Limited adaptability: When new information arises, old memories remain static and disconnected, reducing an agentโ€™s ability to build on past experiences. โ˜† Ineffective long-term retention: AI agents often struggle to recall relevant past interactions, leading to redundant processing and inefficiencies. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹๐—”-๐— ๐—˜๐— : ๐—”๐˜๐—ผ๐—บ๐—ถ๐—ฐ ๐—ป๐—ผ๐˜๐—ฒ๐˜€ ๐—ฎ๐—ป๐—ฑ ๐——๐˜†๐—ป๐—ฎ๐—บ๐—ถ๐—ฐ ๐—น๐—ถ๐—ป๐—ธ๐—ถ๐—ป๐—ด A-MEM organizes knowledge in a way that mirrors how humans create and refine ideas over time. โœธ How it Works: โ˜† Atomic notes: Information is broken down into small, self-contained knowledge units, ensuring clarity and easy integration with future knowledge. โ˜† Dynamic linking: Instead of relying on static categories, A-MEM automatically creates connections between related knowledge, forming a network of interrelated ideas. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹ ๐—ฃ๐—ฟ๐—ผ๐˜ƒ๐—ฒ๐—ป ๐—ฃ๐—ฒ๐—ฟ๐—ณ๐—ผ๐—ฟ๐—บ๐—ฎ๐—ป๐—ฐ๐—ฒ ๐—”๐—ฑ๐˜ƒ๐—ฎ๐—ป๐˜๐—ฎ๐—ด๐—ฒ A-MEM delivers measurable improvements. โœธ Empirical results demonstrate: โ˜† Over 2x performance improvement in complex reasoning tasks, where AI must synthesize multiple pieces of information across different timeframes. โ˜† Superior efficiency across top foundation models, including GPT, Llama, and Qwenโ€”proving its versatility and broad applicability. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ใ€‹ ๐—œ๐—ป๐˜€๐—ถ๐—ฑ๐—ฒ ๐—”-๐— ๐—˜๐—  โœธ Note Construction: โ˜† AI-generated structured notes that capture essential details and contextual insights. โ˜† Each memory is assigned metadata, including keywords and summaries, for faster retrieval. โœธ Link Generation: โ˜† The system autonomously connects new memories to relevant past knowledge. โ˜† Relationships between concepts emerge naturally, allowing AI to recognize patterns over time. โœธ Memory Evolution: โ˜† Older memories are continuously updated as new insights emerge. โ˜† The system dynamically refines knowledge structures, mimicking the way human memory strengthens connections over time. โ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃ โซธ๊†› Want to build Real-World AI agents? Join My ๐—›๐—ฎ๐—ป๐—ฑ๐˜€-๐—ผ๐—ป ๐—”๐—œ ๐—”๐—ด๐—ฒ๐—ป๐˜ ๐Ÿฐ-๐—ถ๐—ป-๐Ÿญ ๐—ง๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด TODAY! ๐Ÿฐ๐Ÿด๐Ÿฌ+ already Enrolled. โž  Build Real-World AI Agents for Healthcare, Finance,Smart Cities,Sales โž  Learn 4 Framework: LangGraph | PydanticAI | CrewAI | OpenAI Swarm โž  Work with Text, Audio, Video and Tabular Data ๐Ÿ‘‰๐—˜๐—ป๐—ฟ๐—ผ๐—น๐—น ๐—ก๐—ข๐—ช (๐Ÿฐ๐Ÿฑ% ๐—ฑ๐—ถ๐˜€๐—ฐ๐—ผ๐˜‚๐—ป๐˜): https://lnkd.in/eGuWr4CH | 27 comments on LinkedIn
·linkedin.com·
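The three-stage pipeline (note construction, link generation, memory evolution) reduces to a small loop. In the sketch below, keyword overlap (Jaccard) stands in for the learned embedding similarity the real system uses; the fields and threshold are invented.

```python
# Sketch of the A-MEM loop described above: atomic notes with metadata, dynamic
# linking on arrival, and evolution of older notes. Keyword overlap stands in
# for learned similarity; all fields and the threshold are invented.
from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    keywords: set[str]                      # metadata assigned at construction time
    links: set[int] = field(default_factory=set)

memory: list[Note] = []

def add_note(text: str, keywords: set[str], threshold: float = 0.2) -> int:
    note_id = len(memory)
    note = Note(text, keywords)
    for i, old in enumerate(memory):
        overlap = len(keywords & old.keywords) / len(keywords | old.keywords)
        if overlap >= threshold:            # dynamic linking
            note.links.add(i)
            old.links.add(note_id)          # memory evolution: old notes gain links
            old.keywords |= keywords        # ...and refine their own metadata
    memory.append(note)
    return note_id

add_note("Hinton pioneered backprop.", {"hinton", "backprop", "neural-nets"})
add_note("Hinton won the Nobel Prize.", {"hinton", "nobel-prize"})
print(memory[0].links, memory[1].links)     # the two atomic notes are now linked
```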
RAG vs Graph RAG, explained visually
RAG vs Graph RAG, explained visually (it's a popular LLM interview question). Imagine you have a long document, say a biography, about an individual (X) who has accomplished several things in life:
- Chapter 1 covers Accomplishment 1.
- Chapter 2 covers Accomplishment 2.
...
- Chapter 10 covers Accomplishment 10.
Summarizing all these accomplishments via RAG might never be possible, since the task requires the entire context while one might only be fetching the top-k relevant chunks from the vector DB. Moreover, since traditional RAG systems retrieve each chunk independently, the LLM is often left to infer the connections between them (provided the chunks are retrieved at all). Graph RAG solves this. The idea is to first create a graph (entities and relationships) from the documents and then traverse that graph during the retrieval phase. Here is how Graph RAG solves the above problems. First, a system (typically an LLM) creates the graph by understanding the biography. This produces a full graph of entity nodes and relationships, in which a subgraph will look like this:
- X → Accomplishment 1
- X → Accomplishment 2
...
- X → Accomplishment N
When summarizing these accomplishments, the retrieval phase can do a graph traversal to fetch all the relevant context related to X's accomplishments. This context, when passed to the LLM, produces a more coherent and complete answer than traditional RAG. Another reason Graph RAG systems are so effective is that LLMs are inherently adept at reasoning with structured data; Graph RAG instills that structure through its retrieval mechanism. Over to you: what other issues with traditional RAG systems does Graph RAG solve?
·linkedin.com·
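The biography example maps directly onto code: a top-k chunk fetch misses most accomplishments by construction, while a one-hop traversal from X collects all of them. Plain dictionaries stand in for a vector store and a graph database in this sketch.

```python
# The post's biography example in code: top-k chunk retrieval vs graph traversal.
# Plain dicts stand in for a vector store and a graph database.
chunks = {f"chapter-{i}": f"X achieved Accomplishment-{i}." for i in range(1, 11)}

def vector_rag(query: str, k: int = 3) -> list[str]:
    # A top-k similarity fetch returns only k chapters: 7 accomplishments never
    # reach the LLM, so the summary is incomplete by construction.
    return list(chunks.values())[:k]

graph = {"X": [f"Accomplishment-{i}" for i in range(1, 11)]}  # entity -> neighbors

def graph_rag(entity: str) -> list[str]:
    # Traversal from the entity node gathers *all* connected context in one hop.
    return graph.get(entity, [])

print(len(vector_rag("summarize X's accomplishments")))  # 3
print(len(graph_rag("X")))                               # 10
```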
Graph RAG open source stack to generate and visualize knowledge graphs
A serious knowledge graph effort is much more than a bit of GitHub, but customers and adventurous minds keep asking me if there is an easy-to-use (read: POC, click-and-go) graph RAG open source stack they can use to generate knowledge graphs. So here is the list of projects I keep an eye on. Mind, there is nothing simple if you venture into graphs, despite all the claims and marketing. Things like graph machine learning, graph layout, and distributed graph analytics are more than a bit of pip install. The best solutions are hidden inside multinationals, custom made. Equity firms and investors sometimes ask me to evaluate innovations; it's amazing what talented people develop that never shows up in the news, or on GitHub.
- TrustGraph, the Knowledge Platform for AI (https://trustgraph.ai/): the only one with a distributed architecture, made for enterprise KG.
- itext2kg (https://lnkd.in/e-eQbwV5): clean and plain; wrapped prompts done right.
- Fast GraphRAG (https://lnkd.in/e7jZ9GZH): popular, with some basic visualization.
- ZEP (https://lnkd.in/epxtKtCU): geared towards agentic memory.
- Triplex (https://lnkd.in/eGV8FR56): an LLM to extract triples.
- GraphRAG Local with UI (https://lnkd.in/ePGeqqQE): another starting point for small KG efforts, or to convince your investors.
- GraphRAG visualizer (https://lnkd.in/ePuMmfkR): makes pretty pictures, but not for drill-downs.
- Neo4j's GraphRAG (https://lnkd.in/ex_A52RU): a Python package focused on getting data into Neo4j.
- OpenSPG (https://lnkd.in/er4qUFJv): a different, more academic take.
- Microsoft GraphRAG (https://lnkd.in/e_a-mPum): a classic, but I don't think anyone is using this beyond experimentation.
- yWorks (https://www.yworks.com): if you are serious about interactive graph layout.
- Ogma (https://lnkd.in/evwnJCBK): if you are serious about graph data viz.
- Orbifold Consulting (https://lnkd.in/e-Dqg4Zx): if you are serious about your KG journey.
#GraphRAG #GraphViz #GraphMachineLearning #KnowledgeGraphs
·linkedin.com·
LLMs generate possibilities; knowledge graphs remember what works
LLMs generate possibilities; knowledge graphs remember what works. Together, they forge the recursive memory and creative engine that enables AI systems to truly evolve themselves. Combining neural components (like large language models) with symbolic verification creates a powerful framework for self-evolution that overcomes the limitations of either approach used independently. AlphaEvolve demonstrates that self-evolving systems face a fundamental tension between generating novel solutions and ensuring those solutions actually work. The paper shows how AlphaEvolve addresses this through a hybrid architecture where:
- Neural components (LLMs) provide creative generation of code modifications by drawing on patterns learned from vast training data.
- Symbolic components (code execution) provide ground-truth verification through deterministic evaluation.
Without this combination, a system would either generate interesting but incorrect solutions (neural-only) or be limited to small, safe modifications within known patterns (symbolic-only). The system can operate at multiple levels of abstraction depending on the problem: raw solution evolution, constructor function evolution, search algorithm evolution, or co-evolution of intermediate solutions and search algorithms. This capability emanates directly from the neurosymbolic integration, where:
- Neural networks excel at working with continuous, high-dimensional spaces and recognizing patterns across abstraction levels.
- Symbolic systems provide precise representations of discrete structures and logical relationships.
This enables AlphaEvolve to modify everything from specific lines of code to entire algorithmic approaches. While AlphaEvolve currently uses an evolutionary database, a knowledge graph structure could significantly enhance self-evolution by:
- Capturing evolutionary relationships between solutions
- Identifying patterns of code changes that consistently lead to improvements
- Representing semantic connections between different solution approaches
- Supporting transfer learning across problem domains
Automated, objective evaluation is the core foundation enabling self-evolution: the main limitation of AlphaEvolve is that it handles only problems for which it is possible to devise an automated evaluator. This evaluation component provides the ground-truth feedback that guides evolution, allowing the system to:
- Differentiate between successful and unsuccessful modifications
- Create selection pressure toward better-performing solutions
- Avoid hallucinations or non-functional solutions that might emerge from neural components alone
When applied to optimize Gemini's training kernels, the system essentially improved the very LLM technology that powers it.
·linkedin.com·
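AlphaEvolve itself is not open source, but the generate-verify-select loop the post describes fits in a few lines: a random mutator stands in for the LLM proposer, and an automated evaluator that executes candidates supplies the symbolic ground truth.

```python
# Sketch of the neurosymbolic loop described above: a proposer (stub for the LLM)
# suggests code modifications, an automated evaluator executes and scores them,
# and selection keeps what verifiably works. AlphaEvolve's real machinery differs.
import random

def propose(program: str) -> str:
    """Stand-in for the neural half: 'mutate' a candidate heuristic."""
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    return f"lambda x: {a:+.3f} * x {b:+.3f}"

def evaluate(program: str) -> float:
    """Symbolic half: execute the candidate and measure it against ground truth."""
    f = eval(program)                       # deterministic verification by execution
    data = [(x, 2 * x + 1) for x in range(-5, 6)]
    return -sum((f(x) - y) ** 2 for x, y in data)   # higher is better

best, best_score = "lambda x: 0.0 * x +0.0", float("-inf")
for _ in range(2000):
    candidate = propose(best)
    score = evaluate(candidate)             # broken/hallucinated candidates score poorly
    if score > best_score:                  # selection pressure toward what works
        best, best_score = candidate, score

print(best, best_score)                     # converges toward 2*x + 1
```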
NodeRAG restructures knowledge into a heterograph: a rich, layered, musical graph where each node plays a different role
NodeRAG restructures knowledge into a heterograph: a rich, layered, musical graph where each node plays a different role. It's not just smarter retrieval; it's structured memory for AI agents.

Why NodeRAG? Most Retrieval-Augmented Generation (RAG) methods retrieve chunks of text. Good enough, until you need reasoning, precision, and multi-hop understanding. Here is how NodeRAG solves these problems.

Step 1: Graph decomposition. NodeRAG begins by decomposing raw text into smart building blocks:
- Semantic units (S): little event nuggets ("Hinton won the Nobel Prize.")
- Entities (N): key names or concepts ("Hinton", "Nobel Prize")
- Relationships (R): links between entities ("awarded to")
It's like teaching your AI to recognize the actors, actions, and scenes inside any document.

Step 2: Graph augmentation. Decomposition alone isn't enough, so NodeRAG augments the graph by identifying important hubs:
- Node importance: k-core and betweenness centrality find critical nodes. Important entities get special attention; their attributes are summarized into new attribute nodes (A).
- Community detection: related nodes are grouped into communities and summarized into high-level insights (H), and each community gets a "headline" overview node (O) for quick retrieval.
It's like adding context and intuition to raw facts.

Step 3: Graph enrichment. Knowledge without detail is brittle, so NodeRAG enriches the graph:
- Original text: full chunks are linked back into the graph as text nodes (T).
- Semantic edges: HNSW provides fast, meaningful similarity connections. Only the smart nodes are embedded (not everything), saving huge amounts of storage, and dual search (exact + vector) makes retrieval laser-sharp.
It's like turning a 2D map into a 3D living world.

Step 4: Graph searching. Now comes the magic:
- Dual search: first find strong entry points (by name or by meaning).
- Shallow personalized PageRank (PPR): expand carefully from entry points to nearby relevant nodes.
No wandering into irrelevant parts of the graph; the search is surgical. Retrieval includes fine-grained semantic units, attributes, and high-level elements: everything you need, nothing you don't. It's like sending agents into a city who return not with everything they saw, but with exactly what you asked for, summarized and structured.

Results: compared to GraphRAG, LightRAG, NaiveRAG, and HyDE, NodeRAG wins across every major domain: tech, science, writing, recreation, and finance. NodeRAG isn't just a better graph; it's a new operating system for memory.
·linkedin.com·
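Step 4 has a direct networkx analogue: nx.pagerank's personalization vector does the "expand carefully from entry points" work. The tiny heterograph below is invented; node roles follow the post's letters (S, N, A, O).

```python
# Sketch of NodeRAG's step 4: shallow personalized PageRank from dual-search
# entry points over a small heterograph. The toy graph is invented.
import networkx as nx

G = nx.Graph()
G.add_nodes_from([
    ("N:Hinton", {"role": "entity"}),
    ("N:NobelPrize", {"role": "entity"}),
    ("S:hinton-won", {"role": "semantic_unit", "text": "Hinton won the Nobel Prize."}),
    ("A:hinton-attrs", {"role": "attribute", "text": "Pioneer of deep learning."}),
    ("O:ai-awards", {"role": "overview", "text": "Major AI awards."}),
])
G.add_edges_from([
    ("N:Hinton", "S:hinton-won"), ("N:NobelPrize", "S:hinton-won"),
    ("N:Hinton", "A:hinton-attrs"), ("S:hinton-won", "O:ai-awards"),
])

# Dual search (exact + vector) would supply the entry points; here they are fixed.
entry_points = {"N:Hinton": 1.0}
scores = nx.pagerank(G, alpha=0.5, personalization=entry_points)  # shallow PPR

retrieved = [n for n, s in sorted(scores.items(), key=lambda kv: -kv[1])
             if G.nodes[n]["role"] != "entity"][:3]
print(retrieved)  # semantic units, attributes, overviews nearest the entry point
```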
Announcing general availability of Amazon Bedrock Knowledge Bases GraphRAG with Amazon Neptune Analytics | Amazon Web Services
Today, Amazon Web Services (AWS) announced the general availability of GraphRAG, a capability in Amazon Bedrock Knowledge Bases that enhances Retrieval-Augmented Generation (RAG) with graph data in Amazon Neptune Analytics. In this post, we discuss the benefits of GraphRAG and how to get started with it in Amazon Bedrock Knowledge Bases.
·aws.amazon.com·
๐—ง๐—ต๐—ฒ ๐—™๐˜‚๐˜๐˜‚๐—ฟ๐—ฒ ๐—œ๐˜€ ๐—–๐—น๐—ฒ๐—ฎ๐—ฟ: ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ช๐—ถ๐—น๐—น ๐—ก๐—˜๐—˜๐—— ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต ๐—ฅ๐—”๐—š
๐—ง๐—ต๐—ฒ ๐—™๐˜‚๐˜๐˜‚๐—ฟ๐—ฒ ๐—œ๐˜€ ๐—–๐—น๐—ฒ๐—ฎ๐—ฟ: ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ช๐—ถ๐—น๐—น ๐—ก๐—˜๐—˜๐—— ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต ๐—ฅ๐—”๐—š
๐Ÿคบ ๐—ง๐—ต๐—ฒ ๐—™๐˜‚๐˜๐˜‚๐—ฟ๐—ฒ ๐—œ๐˜€ ๐—–๐—น๐—ฒ๐—ฎ๐—ฟ: ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ช๐—ถ๐—น๐—น ๐—ก๐—˜๐—˜๐—— ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต ๐—ฅ๐—”๐—š Why? It combines Multi-hop reasoning, Non-Parameterized / Learning-Based Retrieval, Topology-Aware Prompting. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ๐Ÿคบ ๐—ช๐—ต๐—ฎ๐˜ ๐—œ๐˜€ ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต-๐—˜๐—ป๐—ต๐—ฎ๐—ป๐—ฐ๐—ฒ๐—ฑ ๐—ฅ๐—ฒ๐˜๐—ฟ๐—ถ๐—ฒ๐˜ƒ๐—ฎ๐—น-๐—”๐˜‚๐—ด๐—บ๐—ฒ๐—ป๐˜๐—ฒ๐—ฑ ๐—š๐—ฒ๐—ป๐—ฒ๐—ฟ๐—ฎ๐˜๐—ถ๐—ผ๐—ป (๐—ฅ๐—”๐—š)? โœฉ LLMs hallucinate. โœฉ LLMs forget. โœฉ LLMs struggle with complex reasoning. Graphs connect facts. They organize knowledge into neat, structured webs. So when RAG retrieves from a graph, the LLM doesn't just guess โ€” it reasons. It follows the map. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ๐Ÿคบ ๐—ง๐—ต๐—ฒ ๐Ÿฐ-๐—ฆ๐˜๐—ฒ๐—ฝ ๐—ช๐—ผ๐—ฟ๐—ธ๐—ณ๐—น๐—ผ๐˜„ ๐—ผ๐—ณ ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต ๐—ฅ๐—”๐—š 1๏ธโƒฃ โ€” User Query: The user asks a question. ("Tell me how Einstein used Riemannian geometry?") 2๏ธโƒฃ โ€” Retrieval Module: The system fetches the most structurally relevant knowledge from a graph. (Entities: Einstein, Grossmann, Riemannian Geometry.) 3๏ธโƒฃ โ€” Prompting Module: Retrieved knowledge is reshaped into a golden prompt โ€” sometimes as structured triples, sometimes as smart text. 4๏ธโƒฃ โ€” Output Response: LLM generates a fact-rich, logically sound answer. ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ๏นŒ ๐Ÿคบ ๐—ฆ๐˜๐—ฒ๐—ฝ ๐Ÿญ: ๐—•๐˜‚๐—ถ๐—น๐—ฑ ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต-๐—ฃ๐—ผ๐˜„๐—ฒ๐—ฟ๐—ฒ๐—ฑ ๐——๐—ฎ๐˜๐—ฎ๐—ฏ๐—ฎ๐˜€๐—ฒ๐˜€ โœฉ Use Existing Knowledge Graphs like Freebase or Wikidata โ€” structured, reliable, but static. โœฉ Or Build New Graphs From Text (OpenIE, instruction-tuned LLMs) โ€” dynamic, adaptable, messy but powerful. ๐Ÿคบ ๐—ฆ๐˜๐—ฒ๐—ฝ ๐Ÿฎ: ๐—ฅ๐—ฒ๐˜๐—ฟ๐—ถ๐—ฒ๐˜ƒ๐—ฎ๐—น ๐—ฎ๐—ป๐—ฑ ๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜๐—ถ๐—ป๐—ด ๐—”๐—น๐—ด๐—ผ๐—ฟ๐—ถ๐˜๐—ต๐—บ๐˜€ โœฉ Non-Parameterized Retrieval (Deterministic, Probabilistic, Heuristic) โ˜… Think Dijkstra's algorithm, PageRank, 1-hop neighbors. Fast but rigid. โœฉ Learning-Based Retrieval (GNNs, Attention Models) โ˜… Think "graph convolution" or "graph attention." Smarter, deeper, but heavier. โœฉ Prompting Approaches: โ˜… Topology-Aware: Preserve graph structure โ€” multi-hop reasoning. โ˜… Text Prompting: Flatten into readable sentences โ€” easier for vanilla LLMs. ๐Ÿคบ ๐—ฆ๐˜๐—ฒ๐—ฝ ๐Ÿฏ: ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต-๐—ฆ๐˜๐—ฟ๐˜‚๐—ฐ๐˜๐˜‚๐—ฟ๐—ฒ๐—ฑ ๐—ฃ๐—ถ๐—ฝ๐—ฒ๐—น๐—ถ๐—ป๐—ฒ๐˜€ โœฉ Sequential Pipelines: Straightforward query โž” retrieve โž” prompt โž” answer. โœฉ Loop Pipelines: Iterative refinement until the best evidence is found. โœฉ Tree Pipelines: Parallel exploration โž” multiple knowledge paths at once. ๐Ÿคบ ๐—ฆ๐˜๐—ฒ๐—ฝ ๐Ÿฐ: ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต-๐—ข๐—ฟ๐—ถ๐—ฒ๐—ป๐˜๐—ฒ๐—ฑ ๐—ง๐—ฎ๐˜€๐—ธ๐˜€ โœฉ Knowledge Graph QA (KGQA): Answering deep, logical questions with graphs. โœฉ Graph Tasks: Node classification, link prediction, graph summarization. โœฉ Domain-Specific Applications: Biomedicine, law, scientific discovery, finance. โ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃโ‰ฃ Join my ๐—›๐—ฎ๐—ป๐—ฑ๐˜€-๐—ผ๐—ป ๐—”๐—œ ๐—”๐—ด๐—ฒ๐—ป๐˜ ๐—ง๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด. Skip the fluff and build real AI agents โ€” fast. 
๐—ช๐—ต๐—ฎ๐˜ ๐˜†๐—ผ๐˜‚ ๐—ด๐—ฒ๐˜: โœ… Create Smart Agents + Powerful RAG Pipelines โœ… Master ๐—Ÿ๐—ฎ๐—ป๐—ด๐—–๐—ต๐—ฎ๐—ถ๐—ป, ๐—–๐—ฟ๐—ฒ๐˜„๐—”๐—œ & ๐—ฆ๐˜„๐—ฎ๐—ฟ๐—บ โ€“ all in one training โœ… Projects with Text, Audio, Video & Tabular Data ๐Ÿฐ๐Ÿฒ๐Ÿฌ+ engineers already enrolled ๐—˜๐—ป๐—ฟ๐—ผ๐—น๐—น ๐—ป๐—ผ๐˜„ โ€” ๐Ÿฏ๐Ÿฐ% ๐—ผ๐—ณ๐—ณ, ๐—ฒ๐—ป๐—ฑ๐˜€ ๐˜€๐—ผ๐—ผ๐—ป:ย https://lnkd.in/eGuWr4CH | 35 comments on LinkedIn
๐—ง๐—ต๐—ฒ ๐—™๐˜‚๐˜๐˜‚๐—ฟ๐—ฒ ๐—œ๐˜€ ๐—–๐—น๐—ฒ๐—ฎ๐—ฟ: ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ช๐—ถ๐—น๐—น ๐—ก๐—˜๐—˜๐—— ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต ๐—ฅ๐—”๐—š
ยทlinkedin.comยท
๐—ง๐—ต๐—ฒ ๐—™๐˜‚๐˜๐˜‚๐—ฟ๐—ฒ ๐—œ๐˜€ ๐—–๐—น๐—ฒ๐—ฎ๐—ฟ: ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ช๐—ถ๐—น๐—น ๐—ก๐—˜๐—˜๐—— ๐—š๐—ฟ๐—ฎ๐—ฝ๐—ต ๐—ฅ๐—”๐—š