Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
Universal tool to visualize any Claude user's memory.json as interactive graphs. Transform your Claude Memory MCP data into interactive visualizations and see how your AI assistant's knowledge connects and evolves over time.

Enterprise teams using Claude lack visibility into how their AI assistant accumulates and organizes institutional knowledge. Claude Memory Viz provides zero-configuration visualization that automatically finds memory files and displays 72 entities with 93 relationships in a real-time force-directed layout. Teams can filter by entity type, search across all data, and explore detailed connections through rich tooltips.

The technical implementation supports Claude's standard NDJSON memory format, automatically detecting and color-coding entity types, from personality profiles to technical tools. Node size reflects connection count, and adjustable physics parameters enable optimal spacing for large knowledge graphs. The tool is built with Cytoscape.js for performance.

Built with the philosophy "solve it once and for all," it works for any Claude user with zero configuration: the visualizer automatically searches common memory-file locations, falls back to demo data, and offers clear guidance when files aren't found. Integration requires just a git clone and a single command.

This matters because AI memory has been invisible to users, creating trust and accountability gaps in enterprise AI deployment. When teams can visualize how their AI assistant organizes knowledge, they gain insight into its decision-making patterns and can optimize their AI collaboration strategies. 👩‍💻 https://lnkd.in/e__RQh_q
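A minimal sketch of the core transformation, for readers who want to peek under the hood: reading the NDJSON memory file and emitting the elements list Cytoscape.js consumes. It assumes the standard MCP memory-server record shapes (`entity` and `relation` lines with `name`, `entityType`, `from`, `to`, and `relationType` fields); adjust if your memory.json differs.

```python
import json
from collections import Counter

def memory_to_cytoscape(path: str) -> list[dict]:
    """Parse an NDJSON memory file into Cytoscape.js elements."""
    nodes, edges = [], []
    degree = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            if record.get("type") == "entity":
                nodes.append(record)
            elif record.get("type") == "relation":
                edges.append(record)
                degree[record["from"]] += 1
                degree[record["to"]] += 1
    elements = [
        {"data": {"id": n["name"],
                  "label": n["name"],
                  "entityType": n.get("entityType", "unknown"),
                  # node size mirrors connection count, as the post describes
                  "size": 20 + 5 * degree[n["name"]]}}
        for n in nodes
    ]
    elements += [
        {"data": {"source": e["from"], "target": e["to"],
                  "label": e.get("relationType", "")}}
        for e in edges
    ]
    return elements

# json.dump(memory_to_cytoscape("memory.json"), open("elements.json", "w"))
```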
·linkedin.com·
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
Book promotion, because this one is worth it: agentic AI at its best. This book was written by Salvatore Raieli and Gabriele Iuculano, is available to order today, and is already a bestseller. While many resources focus on LLMs or basic agentic workflows, what makes this book stand out is its deep dive into grounding LLMs in real-world data and action through the powerful combination of Retrieval-Augmented Generation (RAG) and knowledge graphs. This isn't just about building agents; it's about building AI that reasons, retrieves accurate information, and acts autonomously by leveraging structured knowledge alongside advanced LLMs. The book offers a practical roadmap, packed with concrete Python examples and real-world case studies, guiding you from concept to deployment of intelligent, robust, hallucination-minimized AI solutions, including orchestration of multi-agent systems. Order your copy here: https://packt.link/RpzGM #AI #LLMs #KnowledgeGraphs #AIAgents #RAG #GenerativeAI #MachineLearning
·linkedin.com·
Foundation Models Know Enough
LLMs already contain overlapping world models. You just have to ask them right. Ontologists reply to an LLM output, "That's not a real ontology: it's not a formal conceptualization." But that's just the No True Scotsman fallacy dressed up in OWL. Boring. Not growth-oriented. Look forward, angel.

A foundation model is a compression of human knowledge. The real problem isn't that we "lack a conceptualization"; the real problem is that FMs contain too many. FMs contain conceptualizations, plural. Messy? Sure. But usable.

At Stardog, we're turning this latent structure into real ontologies using symbolic knowledge distillation: prompt orchestration → structure extraction → formal encoding. OWL, SHACL, and friends. Shake till mixed. Rinse. Repeat. Secret sauce simmered and reduced.

This isn't theoretical-hard. We avoid that. It's merely engineering-hard. We lean into that! But the payoff means bootstrapping rich new ontologies at scale: faster, cheaper, with lineage. It's the intersection of FM latent space, formal ontology, and user intent expressed via competency questions (CQs). We call it the Symbolic Latent Layer (SLL). Cute, eh?

The future of enterprise AI isn't just documents. It's distilling structured symbolic knowledge from LLMs and plugging it into agents, workflows, and reasoning engines. You don't need a priesthood to get a formal ontology anymore. You need a good prompt, a smarter pipeline, and the right EKG platform. There's a lot more to say about this, so I said it at Stardog Labs: https://lnkd.in/eY5Sibed
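For the curious, a hedged sketch of what one turn of that distillation loop might look like, with rdflib doing the formal encoding. `ask_llm` is a placeholder for whatever model client you use; the prompt and output contract are assumptions, not Stardog's actual pipeline.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

EX = Namespace("http://example.com/ontology#")

def ask_llm(prompt: str) -> list[tuple[str, str]]:
    """Stand-in for an FM call returning (subclass, superclass) pairs."""
    raise NotImplementedError

def distill_ontology(domain: str, competency_questions: list[str]) -> Graph:
    # Prompt orchestration + structure extraction (assumed contract)
    pairs = ask_llm(
        f"For the domain '{domain}', answering: {competency_questions}, "
        "list subclass/superclass pairs as JSON tuples."
    )
    # Formal encoding into OWL
    g = Graph()
    g.bind("ex", EX)
    for sub, sup in pairs:
        g.add((EX[sub], RDF.type, OWL.Class))
        g.add((EX[sup], RDF.type, OWL.Class))
        g.add((EX[sub], RDFS.subClassOf, EX[sup]))
        g.add((EX[sub], RDFS.label, Literal(sub)))
    return g  # g.serialize(format="turtle") for review; rinse, repeat
```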
·linkedin.com·
Graph is the new star schema. Change my mind.
Graph is the new star schema. Change my mind. Why? Your agents can't be autonomous unless your structured data is a graph. It is really very simple.

1️⃣ To act autonomously, an agent must reason across structured data. Every autonomous decision, human or agent, hinges on a judgment: have I done enough? "Enough" boils down to driving the probability of success over some threshold.

2️⃣ You can't just point the agent at your structured data store. Context windows are too small. Schema sprawl is too real. If you think it works, you probably haven't tried it.

3️⃣ The agent must first retrieve, with RAG, the right tables, columns, and snippets. Decision-making is a retrieval problem before it's a reasoning problem.

4️⃣ Standard RAG breaks on enterprise metadata. The corpus is too entity-rich. Semantic similarity already struggles on enterprise help articles; it won't perform on column descriptions.

5️⃣ To make structured RAG work, you need a graph. Just as unstructured RAG needed links between articles, structured RAG needs links between tables, fields, and, most importantly, meaning (see the sketch below).

Yes, graphs are painful. But so was deep learning, until the return was undeniable. Agents need reasoning over structured data. That makes graphs non-optional. The rest is just engineering. Let's stop modeling for reporting and start modeling for autonomy.
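The sketch mentioned in point 5, using networkx: a toy metadata graph linking tables, columns, and business terms, with retrieval as neighborhood expansion rather than a raw schema dump. All names and the naive keyword matcher are illustrative assumptions.

```python
import networkx as nx

# Metadata graph: tables, columns, foreign keys, and business meaning
g = nx.Graph()
g.add_edge("orders", "orders.customer_id", kind="has_column")
g.add_edge("customers", "customers.id", kind="has_column")
g.add_edge("orders.customer_id", "customers.id", kind="foreign_key")
g.add_edge("customers", "churn", kind="relates_to_term")

def retrieve_context(question: str, hops: int = 2) -> set[str]:
    """Match question words to nodes, then expand to connected metadata."""
    seeds = {n for n in g.nodes
             if any(w in n for w in question.lower().split())}
    context = set()
    for seed in seeds:
        context |= set(nx.ego_graph(g, seed, radius=hops).nodes)
    return context  # feed these tables/columns into the agent's prompt

print(retrieve_context("which customers churned?"))
```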
·linkedin.com·
How can you turn business questions into production-ready agentic knowledge graphs?
❓ How can you turn business questions into production-ready agentic knowledge graphs? Join Prashanth Rao and Dennis Irorere at the Agentic AI Summit to find out.

Prashanth is an AI engineer and DevRel lead at Kùzu Inc., the open-source graph database startup, where he blends NLP, ML, and data engineering to power agentic workflows. Dennis is a data engineer on Tripadvisor's Viator Marketing Technology team and Director of Innovation at GraphGeeks, driving scalable, AI-driven graph solutions for customer growth.

In "Agentic Workflows for Graph RAG: Building Production-Ready Knowledge Graphs," they'll guide you through three hands-on lessons:

🔹 From Business Question to Graph Schema – modeling your domain for downstream agents and LLMs, using live data sources like AskNews.
🔹 From Unstructured Data to Agent-Ready Graphs with BAML – writing declarative pipelines that reliably extract entities and relationships at scale.
🔹 Agentic Graph RAG in Action – completing the loop: translating NL queries into Cypher, retrieving graph data, and synthesizing responses, with fallback strategies when matches are missing (outlined below).

If you're building internal tools or public-facing AI agents that rely on knowledge graphs, this workshop is for you.

🗓️ Learn more & register free: https://hubs.li/Q03qHnpQ0

#AgenticAI #GraphRAG #KnowledgeGraphs #AgentWorkflows #AIEngineering #ODSC #Kuzu #Tripadvisor
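To make the third lesson concrete, a hedged outline of the NL-to-Cypher loop with a fallback. `nl_to_cypher` and `summarize` stand in for LLM calls, and the Neo4j driver stands in for whichever Cypher engine (e.g. Kùzu) the workshop actually uses; connection details are placeholders.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

def nl_to_cypher(question: str) -> str:
    raise NotImplementedError  # LLM translation step

def summarize(question: str, rows: list[dict]) -> str:
    raise NotImplementedError  # LLM synthesis step

def answer(question: str) -> str:
    cypher = nl_to_cypher(question)
    with driver.session() as session:
        rows = [r.data() for r in session.run(cypher)]
    if not rows:  # fallback strategy when the graph has no match
        return "No graph match; fall back to a broader keyword search."
    return summarize(question, rows)
```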
·linkedin.com·
The Developer's Guide to GraphRAG
Find out how to combine a knowledge graph with RAG for GraphRAG. Provide more complete GenAI outputs.
You've built a RAG system and grounded it in your own data. Then you ask a complex question that needs to draw from multiple sources, and your heart sinks when the answers come back vague or plain wrong.

How could this happen? Traditional vector-only RAG bases its outputs on just the words you use in your prompt. It misses valuable context because it pulls from scattered documents and data structures; it misses the bigger, more connected picture.

Your AI needs a mental model of your data with all its context and nuances. A knowledge graph provides just that by mapping your data as connected entities and relationships. Pair it with RAG in a GraphRAG architecture to feed your LLM information about dependencies, sequences, hierarchies, and deeper meaning.

Check out The Developer's Guide to GraphRAG. You'll learn how to:

- Prepare a knowledge graph for GraphRAG
- Combine a knowledge graph with native vector search
- Implement three GraphRAG retrieval patterns
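As a taste of one such retrieval pattern, a sketch that pairs Neo4j's native vector index with a graph hop: vector search finds entry-point chunks, then a traversal pulls in connected entities as context. The index name, labels, and relationship types are assumptions for illustration, not the guide's exact schema.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

CYPHER = """
CALL db.index.vector.queryNodes('chunk_embeddings', $k, $embedding)
YIELD node, score
MATCH (node)-[:MENTIONS]->(e:Entity)-[r]-(neighbor:Entity)
RETURN node.text AS chunk, e.name AS entity,
       type(r) AS relation, neighbor.name AS neighbor, score
"""

def graphrag_retrieve(question_embedding: list[float],
                      k: int = 5) -> list[dict]:
    """Vector entry points plus one graph hop of connected context."""
    with driver.session() as session:
        return [r.data()
                for r in session.run(CYPHER, k=k,
                                     embedding=question_embedding)]
```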
·neo4j.com·
Introducing RAG-Anything: All-in-One RAG System
🚀 Introducing RAG-Anything: All-in-One RAG System! ⚡ LightRAG + Multi-Modal = RAG-Anything

🔗 Get started today: https://lnkd.in/gF3D8rnc
📦 Install: pip install raganything

No more switching between multiple tools or losing critical visual information! With RAG-Anything, you get ONE unified solution that understands your documents as completely as you do ✨

🌟 What makes RAG-Anything innovative:
- 🔄 End-to-End Multimodal Pipeline: complete workflow from document ingestion and parsing to intelligent multimodal query answering.
- 📄 Universal Document Support: seamless processing of PDFs, Office documents (DOC/DOCX/PPT/PPTX/XLS/XLSX), images, and diverse file formats.
- 🧠 Specialized Content Analysis: dedicated processors for images, tables, mathematical equations, and heterogeneous content types.
- 🔗 Multimodal Knowledge Graph: automatic entity extraction and cross-modal relationship discovery for enhanced understanding.
- ⚡ Adaptive Processing Modes: flexible MinerU-based parsing or direct multimodal content injection workflows.
- 🎯 Hybrid Intelligent Retrieval: advanced search spanning textual and multimodal content with contextual understanding.

💡 Well-suited for:
- 🎓 Academic research with complex documents
- 📋 Technical documentation processing
- 💼 Financial report analysis
- 🏢 Enterprise knowledge management
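A hypothetical usage sketch based on this announcement: ingest a mixed-media document, then ask a question that spans text and visuals. The class and method names are assumptions inferred from the post and may not match the library's actual API, so check the linked repo before relying on them.

```python
import asyncio
from raganything import RAGAnything  # pip install raganything

async def main():
    # Constructor options (models, parser config) omitted; see the docs.
    rag = RAGAnything()
    # Parse a PDF end to end: text, tables, equations, images (assumed name).
    await rag.process_document_complete("financial_report.pdf",
                                        output_dir="./out")
    # Hybrid retrieval over textual and visual content (assumed name).
    answer = await rag.query_with_multimodal(
        "Summarize the revenue table and the chart on page 3",
        mode="hybrid",
    )
    print(answer)

asyncio.run(main())
```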
·linkedin.com·
Towards Multi-modal Graph Large Language Model
Multi-modal graphs are everywhere in the digital world, yet the tools used to understand them haven't evolved as much as one would expect. What if the same model could handle your social network analysis, molecular discovery, AND urban planning tasks?

A new paper from Tsinghua University proposes Multi-modal Graph Large Language Models (MG-LLM), a paradigm shift in how we process complex interconnected data that combines text, images, audio, and structured relationships. Think of it as ChatGPT for graphs but, metaphorically speaking, with eyes, ears, and structural understanding.

Their key insight? Treating all graph tasks as generative problems. Instead of training separate models for node classification, link prediction, or graph reasoning, MG-LLM frames everything as transforming one multi-modal graph into another (see the sketch below). This unified approach means the same model that predicts protein interactions could also analyze social media networks or urban traffic patterns.

What makes this particularly exciting is the vision of natural language interaction with graph data. Imagine querying complex molecular structures or editing knowledge graphs in plain English, without learning specialized query languages.

The challenges remain substantial, from handling the multi-granularity of data (pixels to full images) to managing multi-scale tasks (entire graph input, single node output). But if successful, this could fundamentally change the level of graph-based insight across industries that have barely scratched the surface of AI adoption.

↓ Want to keep up? Join my newsletter with 50k+ readers and be the first to learn about the latest AI research: llmwatch.com 💡
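The sketch referenced above: a conceptual rendering (not the paper's code) of the graph-in, graph-out framing, where every task shares one signature. All names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MMNode:
    id: str
    text: str | None = None
    image: bytes | None = None   # pixel-level modality
    audio: bytes | None = None

@dataclass
class MMGraph:
    nodes: dict[str, MMNode] = field(default_factory=dict)
    edges: list[tuple[str, str, str]] = field(default_factory=list)

def node_classification(g: MMGraph, node_id: str) -> MMGraph:
    """Output graph = input graph plus a label node on the target node.
    Link prediction, graph reasoning, etc. fit the same graph-to-graph
    signature, which is the unification the paper argues for."""
    out = MMGraph(dict(g.nodes), list(g.edges))
    out.nodes["label"] = MMNode(id="label", text="predicted_class")
    out.edges.append((node_id, "has_label", "label"))
    return out
```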
·linkedin.com·
AI Engineer World's Fair 2025: GraphRAG Track Spotlight
📣 AI Engineer World's Fair 2025: GraphRAG Track spotlight! 🚀 So grateful to have hosted the GraphRAG Track at the Fair. The sessions were great, highlighting the depth and breadth of graph thinking for AI. Shoutouts to...

- Mitesh Patel: "HybridRAG," a fusion of graph and vector retrieval designed to master complex data interpretation and specialized terminology in question answering.
- Chin Keong Lam: "Wisdom Discovery at Scale," using Knowledge Augmented Generation (KAG) in a multi-agent system built with n8n.
- Sam Julien: "When Vectors Break Down," carefully explaining how a graph-based RAG architecture achieved a whopping 86.31% accuracy on dense enterprise knowledge.
- Daniel Chalef: "Stop Using RAG as Memory," exploring temporally aware knowledge graphs, built with the open-source Graphiti framework, that provide precise, context-rich memory for agents.
- Ola Mabadeje: "Witness the Power of Multi-Agent AI & Network Knowledge Graphs," showing dramatic improvements in ticket-resolution efficiency and overall execution quality in network operations.
- Thomas Smoker: "Beyond Documents," casually mentioning scraping the entire internet to distill a focused knowledge graph for legal agents.
- Mark Bain, hosting an excellent Agentic Memory with Knowledge Graphs lunch & learn, with expansive thoughts and demos from Vasilije Markovic, Daniel Chalef, and Alexander Gilmore.

Also, of course, huge congrats to Shawn swyx W and Benjamin Dunphy on an excellent conference. 🎩

#graphrag Neo4j AI Engineer
·linkedin.com·
Want to Fix LLM Hallucination? Neurosymbolic Alone Won’t Cut It
The Conversation's new piece makes a clear case for neurosymbolic AI, integrating symbolic logic with statistical learning, as the long-term fix for LLM hallucinations. It's a timely and necessary argument: "No matter how large a language model gets, it can't escape its fundamental lack of grounding in rules, logic, or real-world structure. Hallucination isn't a bug, it's the default."

But what's crucial, and often glossed over, is that symbolic logic alone isn't enough. The real leap comes from adding formal ontologies and semantic constraints that make meaning machine-computable. OWL, the Shapes Constraint Language (SHACL), and frameworks like the Basic Formal Ontology (BFO), the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE), the Suggested Upper Merged Ontology (SUMO), and the Common Core Ontologies (CCO) don't just "represent rules"; they define what exists, what can relate, and under what conditions inference is valid. That's the difference between decorating a knowledge graph and engineering one that can detect, explain, and prevent hallucinations in practice (see the validation sketch below).

I'd go further:
• Most enterprise LLM hallucinations are just semantic errors: mislabeling, misattribution, or class confusion that only formal ontologies can prevent.
• Neurosymbolic systems only deliver if their symbolic half is grounded in ontological reality, not just handcrafted rules or taxonomies.

The upshot: we need to move beyond the mere integration of symbols and neurons. We need semantic scaffolding, ontologies as infrastructure, to ensure AI isn't just fluent but actually right.

Curious whether others are layering formal ontologies (BFO, DOLCE, SUMO) into their AI stacks yet, or are we still hoping that more compute and prompt engineering will do the trick?

#NeuroSymbolicAI #SemanticAI #Ontology #LLMs #AIHallucination #KnowledgeGraphs #AITrust #AIReasoning
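The validation sketch mentioned above, a concrete instance of ontologies as infrastructure: checking LLM-extracted triples against a SHACL shape with pyshacl before they enter the graph, so class confusion is caught mechanically. The shapes and namespace are illustrative; `pyshacl.validate` is the real entry point.

```python
from rdflib import Graph
from pyshacl import validate

# LLM-extracted triples, with a typical semantic error baked in.
data = Graph().parse(data="""
@prefix ex: <http://example.com/> .
ex:Acme a ex:Person ;          # the LLM mislabeled a company as a person
    ex:employs ex:Bob .
""", format="turtle")

# Ontological constraint: only organizations may employ.
shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.com/> .
ex:EmployerShape a sh:NodeShape ;
    sh:targetSubjectsOf ex:employs ;
    sh:class ex:Organization .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: the class confusion is detected, not generated away
print(report)
```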
·linkedin.com·
AutoSchemaKG: Building Billion-Node Knowledge Graphs Without Human Schemas
👉 Why This Matters
Traditional knowledge graphs face a paradox: they require expert-crafted schemas to organize information, creating bottlenecks for scalability and adaptability. This limits their ability to handle dynamic real-world knowledge or cross-domain applications effectively.

👉 What Changed
AutoSchemaKG eliminates manual schema design through three innovations:
1. Dynamic schema induction: LLMs automatically create conceptual hierarchies while extracting entities and events.
2. Event-aware modeling: captures temporal relationships and procedural knowledge missed by entity-only approaches.
3. Multi-level conceptualization: organizes instances into semantic categories through abstraction layers.

The system processed 50M+ documents to build ATLAS, a family of KGs with:
- 900M+ nodes (entities/events/concepts)
- 5.9B+ relationships
- 95% alignment with human-created schemas (zero manual intervention)

👉 How It Works (a sketch follows below)
1. Triple-extraction pipeline: LLMs identify entity-entity, entity-event, and event-event relationships, processing text at the document level rather than the sentence level to preserve context.
2. Schema induction: automatically groups instances into conceptual categories and creates hierarchical relationships between specific facts and abstract concepts.
3. Scale optimization: handles web-scale corpora through GPU-accelerated batch processing while maintaining semantic consistency across three distinct domains (Wikipedia, academic papers, Common Crawl).

👉 Proven Impact
- Boosts multi-hop QA accuracy by 12-18% over state-of-the-art baselines
- Improves LLM factuality by up to 9% on specialized domains like medicine and law
- Enables complex reasoning through conceptual bridges between disparate facts

👉 Key Insight
The research demonstrates that billion-scale KGs with dynamic schemas can effectively complement parametric knowledge in LLMs once they reach critical mass (1B+ facts). This challenges the assumption that retrieval augmentation needs domain-specific tuning to be effective.

Question for discussion: as autonomous KG construction becomes viable, how should we rethink the role of human expertise in knowledge representation? Should curation shift from schema design to validation and ethical oversight?
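The sketch promised above: a hedged outline of the two-stage pipeline, with `llm` as a placeholder client. The prompts and JSON contracts are assumptions, not the paper's exact implementation.

```python
import json

def extract_triples(document: str, llm) -> list[dict]:
    """Stage 1: document-level extraction of entity-entity, entity-event,
    and event-event triples, keeping whole-document context."""
    prompt = (
        "Extract knowledge triples from the document below as JSON: "
        '[{"head": ..., "relation": ..., "tail": ..., "kind": '
        '"entity-entity|entity-event|event-event"}]\n\n' + document
    )
    return json.loads(llm(prompt))

def induce_schema(triples: list[dict], llm) -> dict[str, str]:
    """Stage 2: schema induction. Map each instance to an abstract concept,
    building the conceptual hierarchy with no hand-written schema."""
    terms = sorted({t["head"] for t in triples} | {t["tail"] for t in triples})
    prompt = (
        "Assign each term a general concept, as JSON {term: concept}:\n"
        + json.dumps(terms)
    )
    return json.loads(llm(prompt))
```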
·linkedin.com·