Found 109 bookmarks
A Knowledge Graph of code by GitLab
If you could hire the smartest engineers and drop them into your code base, would you expect miracles overnight? No, of course not! Even the best coders lack context on your project, engineering processes and culture, security and compliance rules, user personas, business priorities, etc. The same is true of the very best agents: they may know how to write (mostly) technically correct code, and have the context of your source code, but they're still missing tons of context.

Building agents that can deliver high-quality outcomes, faster, is going to require much more than your source code, rules, and a few prompts. Agents need the same full-lifecycle context your engineers gain after months and years on the job. LLMs will never have access to your company's engineering systems to train on, so something has to bridge the knowledge gap, and it shouldn't be you, one prompt at a time.

This is why we're building what we call our Knowledge Graph at GitLab. It's not just indexing files and code; it's mapping the relationships across your entire development environment. When an agent understands that a particular code block contains three security vulnerabilities, impacts two downstream services, and connects to a broader epic about performance improvements, it can make smarter recommendations and changes than code that is merely technically correct. This kind of contextual reasoning is what separates valuable AI agents from expensive, slow, LLM-driven search tools.

We're moving toward a world where institutional knowledge becomes portable and queryable. The context of a veteran engineer who knows "why we built it this way" or "what happened last time we tried this approach" can now be captured, connected, and made available to both human teammates and AI agents. See the awesome demos below, and I look forward to sharing more later this month in our 18.4 beta update!
·linkedin.com·
Tried Automating Knowledge Graphs — Ended Up Rewriting Everything I Knew
This post captures the desire for a shortcut to #KnowledgeGraphs, the inability of #LLMs to reliably generate #StructuredKnowledge, and the lengths folks will go to realize even basic #semantic queries (the author manually encoded 1,000 #RDF triples, but didn't use #OWL). https://lnkd.in/eJE_27gS

#Ontologists by nature are generally rigorous, if not a tad bit pedantic, as they seek to structure #domain knowledge. 25 years of #SemanticWeb, and this is still primarily a manual, tedious, time-consuming, and error-prone process. In part, #DeepLearning is a reaction to #structured, #labelled, manually #curated #data (#SymbolicAI). When #GenAI exploded on the scene a couple of years ago, #Ontologists were quick to note the limitations of LLMs.

Now some #Ontologists are having a "Road to Damascus" moment: they aspirationally look to language models as an interface for #Ontologies, to lower the barrier to ontology creation and use, with the results then used for #GraphRAG. But this is a circular firing squad, given the LLM weaknesses they have decried. This isn't a solution; it's a Hail Mary. They are lowering the standards on quality and setting up the even more tedious task of identifying non-obvious, low-level LLM errors in an #Ontology (the same issue developers have run into with LLM CodeGen: good for prototypes, not for production code).

The answer is not to resign ourselves and subordinate ontologies to LLMs, but to take the high road, using #UpperOntologies to ease and speed the design, use, and maintenance of #KGs. An upper ontology is a graph of high-level concepts, types, and policies independent of a specific #domain implementation. It provides an abstraction layer with reusable primitives, building blocks, and services that streamline and automate domain modeling tasks (i.e., a #DSL for DSLs). Importantly, an upper ontology drives well-formed, consistent objects and relationships and provides for governance (e.g., security/identity, change management). This is what we do at EnterpriseWeb. #Deterministic, reliable, trusted ontologies should be the center of #BusinessArchitecture, not a side-car to an LLM.
·linkedin.com·
Blue Morpho: A new solution for building AI apps on top of knowledge bases
Blue Morpho helps you build AI agents that understand your business context, using ontologies and knowledge graphs. Knowledge graphs work great with LLMs; the problem is that building KGs from unstructured data is hard. Blue Morpho promises a system that turns PDFs and text files into knowledge graphs. The KGs are then used to augment LLMs with the right context to answer queries, make decisions, produce reports, and automate workflows.

How it works:
1. Upload documents (PDF or TXT).
2. Define your ontology: concepts, properties, and relationships. (Coming soon: ontology generation via AI assistant.)
3. Extract a knowledge graph from documents based on that ontology. Entities are automatically deduplicated across chunks and documents, so every mention of "Walmart," for example, resolves to the same node.
4. Build agents on top. Connect external ones via MCP, or use Blue Morpho's Q&A ("text-to-cypher") and Dashboard Generation agents.

Blue Morpho differentiation:
- Strong focus on reliability, with guardrails in place to make sure LLMs follow instructions and the ontology.
- Entity deduplication, with AI reviewing edge cases.
- Easy iteration on ontologies: ontologies are versioned, extraction runs are versioned with all their parameters, and changes only trigger the necessary recomputes.
- Vector embeddings are used only in very special circumstances, coupled with other techniques.

Link in comments. Jérémy Thomas

#KnowledgeGraph #AI #Agents #MCP #NewRelease #Ontology #LLMs #GenAI #Application

Connected Data London 2025 is coming! 20-21 November, Leonardo Royal Hotel London Tower Bridge. Join us for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech #Ontology. 🎟️ Ticket sales are open; benefit from early-bird prices with discounts up to 30%: https://lnkd.in/diXHEXNE 📺 Sponsorship opportunities are available; maximize your exposure with early onboarding. Contact us at info@connected-data.london for more.
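Step 4's "text-to-cypher" agent is easy to picture with a short sketch. Everything below is an assumption for illustration, not Blue Morpho's actual API: the toy schema string, the canned ask_llm() stub, and the query it returns. A real deployment would call an LLM constrained to the user-defined ontology.

```python
# Minimal sketch of a text-to-cypher Q&A step: the ontology is given to the
# LLM as schema context, and the model's job is to emit a Cypher query that
# only uses labels and relationship types from that schema.
ONTOLOGY = """
(:Company {name})-[:SUPPLIES]->(:Company)
(:Company)-[:MENTIONED_IN]->(:Document {title})
"""

def ask_llm(prompt: str) -> str:
    # Placeholder: a real agent would call an LLM with guardrails that
    # reject queries referencing anything outside the schema.
    return "MATCH (c:Company)-[:SUPPLIES]->(w:Company {name:'Walmart'}) RETURN c.name"

def text_to_cypher(question: str) -> str:
    return ask_llm(f"Schema:\n{ONTOLOGY}\nQuestion: {question}\nCypher:")

print(text_to_cypher("Who supplies Walmart?"))
# The generated query would then be executed against the extracted graph.
```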
·linkedin.com·
A new notebook exploring Semantic Entity Resolution & Extraction using DSPy and Google's new LangExtract library.
Just released a new notebook exploring Semantic Entity Resolution & Extraction using DSPy (Community) and Google's new LangExtract library. Inspired by Russell Jurney's excellent work on semantic entity resolution, this demo follows his approach of combining:
✅ embeddings,
✅ kNN blocking,
✅ and LLM matching with DSPy (Community).

On top of that, I added a general extraction layer to test-drive LangExtract, a Gemini-powered, open-source Python library for reliable structured information extraction. The goal? Detect and merge mentions of the same real-world entities across text. It's an end-to-end flow tackling one of the most persistent data challenges.

Check it out, experiment with your own data, enjoy the summer, and let me know your thoughts! cc Paco Nathan, you might like this 😉 https://wor.ai/8kQ2qa
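The three-step recipe (embed, block with kNN, confirm with an LLM) fits in a short sketch. This is a minimal illustration of the technique, not the notebook's code: the library choices (sentence-transformers, scikit-learn), the model name, and the stubbed llm_same_entity() heuristic are all assumptions; in the notebook the match step is a DSPy module calling a real LLM.

```python
# Embed -> kNN-block -> LLM-match: blocking keeps the expensive pairwise
# comparison off the full n^2 space of mention pairs.
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import NearestNeighbors

mentions = ["Walmart Inc.", "Wal-Mart", "Target Corp.", "Walmart", "Target"]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(mentions, normalize_embeddings=True)

# Blocking: only each mention's k nearest neighbors become candidate pairs.
knn = NearestNeighbors(n_neighbors=3, metric="cosine").fit(embeddings)
_, indices = knn.kneighbors(embeddings)

def _norm(s: str) -> str:
    return s.lower().replace("-", "").replace(" ", "").rstrip(".")

def llm_same_entity(a: str, b: str) -> bool:
    """Placeholder for the DSPy-driven match step (ask an LLM whether two
    mentions are the same entity); stubbed with a naive containment check
    so the sketch runs without an API key."""
    na, nb = _norm(a), _norm(b)
    return na in nb or nb in na

matches = set()
for i, neighbors in enumerate(indices):
    for j in neighbors:
        if i < j and llm_same_entity(mentions[i], mentions[j]):
            matches.add((mentions[i], mentions[j]))

print(matches)  # candidate merges, e.g. ("Walmart Inc.", "Walmart")
```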
·linkedin.com·
Agentic Knowledge Graph Construction
Stop manually building your company's brain. ❌

Having reviewed the excellent DeepLearning.AI lecture on Agentic Knowledge Graph Construction by Andreas Kollegger, and writing a book on agentic graph systems with Sam Julien, it is clear that agentic systems represent a shift in how we build and maintain knowledge graphs (KGs).

Most organizations are sitting on a goldmine of data spread across CSVs, documents, and databases. The dream is to connect it all into a unified knowledge graph, an intelligent brain that understands your entire business. The reality? It's a brutal, expensive, and unscalable manual process. But a new approach is changing everything. Here's the new playbook for building intelligent systems:

🧠 Deploy an AI Agent Workforce
Instead of rigid scripts, you use a cognitive assembly line of specialized AI agents. A Proposer agent designs the data model, a Critic refines it, and an Extractor pulls the facts. This modular approach reduces errors and improves the accuracy and coherence of the final graph.

🎨 Treat AI as a Designer, Not Just a Doer
The agents act as data architects. In discovery mode, they analyze unstructured data (like customer reviews) and propose a new logical structure from scratch. In an enterprise with an existing data model, they switch to alignment mode, mapping new information to the established structure.

🏛️ Use a 3-Part Graph Architecture
This technique is key to managing data quality and uncertainty. You create three interconnected graphs:
- The Domain Graph: your single source of truth, built from trusted, structured data.
- The Lexical Graph: the raw, original text from your documents, preserving the evidence.
- The Subject Graph: an AI-generated bridge that connects them. It holds extracted insights that are validated before being linked to your trusted data.

Jaro-Winkler is a string comparison algorithm that measures the similarity between two strings. Here it is used for entity resolution: identifying and linking entities from the unstructured text (Subject Graph) to the official entities in the structured database (Domain Graph). For example, the algorithm compares a product name extracted from a customer review (e.g., "the gothenburg table") with the official product names in the database. If the Jaro-Winkler similarity score is above a certain threshold, the system automatically creates a CORRESPONDS_TO relationship, effectively linking the customer's comment to the correct product in the supply chain graph (see the sketch below).

🤝 Augment Humans, Don't Replace Them
The workflow is propose, then approve. AI does the heavy lifting, but a human expert makes the final call. This process is made reliable by tools like Pydantic and Outlines, which enforce a rigid contract on the AI's output, ensuring every piece of data is perfectly structured and consistent. And once discovered and validated, a schema can be enforced.
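The Jaro-Winkler linking step is simple enough to sketch end to end. The jellyfish library, the 0.85 threshold, and the toy product list are assumptions for illustration; the threshold-then-link logic is the technique the lecture describes.

```python
# Compare an extracted mention against official product names and create a
# CORRESPONDS_TO link when the Jaro-Winkler similarity clears a threshold.
import jellyfish

official_products = ["Gothenburg Table", "Stockholm Desk", "Malmo Chair"]
extracted_mention = "the gothenburg table"

def link_entity(mention: str, candidates: list[str], threshold: float = 0.85):
    """Return (best_match, score) if the best score clears the threshold,
    else None -- i.e., leave the mention unlinked for human review."""
    cleaned = mention.lower().removeprefix("the ").strip()
    scored = [(c, jellyfish.jaro_winkler_similarity(cleaned, c.lower()))
              for c in candidates]
    best, score = max(scored, key=lambda pair: pair[1])
    return (best, score) if score >= threshold else None

match = link_entity(extracted_mention, official_products)
if match:
    product, score = match
    # In a graph store this would become:
    # (review_mention)-[:CORRESPONDS_TO {score: ...}]->(product)
    print(f"CORRESPONDS_TO: '{extracted_mention}' -> '{product}' ({score:.2f})")
```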
·linkedin.com·
FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs
Sharing our recent research, FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs. It is the largest financial knowledge graph built from unstructured data. The preprint of our article is out on arXiv now (link is in the comments). It is coauthored with Abhinav Arun, Fabrizio Dimino, and Tejas Prakash Agrawal.

While LLMs make it easier than ever to generate knowledge graphs, the real challenge lies in ensuring quality: avoiding hallucinations while maintaining strong coverage, precision, comprehensiveness, and relevance. FinReflectKG tackles this through an iterative, evaluation-driven agentic approach, carefully optimized across multiple evaluation metrics to deliver a trustworthy, high-quality knowledge graph.

Designed to power use cases like entity search, question answering, signal generation, predictive modeling, and financial network analysis, FinReflectKG sets a new benchmark for building reliable financial KGs and showcases the potential of agentic workflows in LLM-driven systems. We will be creating a suite of benchmarks using FinReflectKG for KG-related tasks in financial services. More details to come soon.
·linkedin.com·
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge graphs help you understand the relationships between the objects, events, situations, and concepts in your data so you can readily identify important patterns and make better decisions. This book provides tools and techniques for efficiently labeling data, modeling a knowledge graph, and using it to derive useful insights.

In Knowledge Graphs and LLMs in Action you will learn how to:
- Model knowledge graphs with an iterative top-down approach based in business needs
- Create a knowledge graph starting from ontologies, taxonomies, and structured data
- Use machine learning algorithms to hone and complete your graphs
- Build knowledge graphs from unstructured text data sources
- Reason on the knowledge graph and apply machine learning algorithms

Move beyond analyzing data and start making decisions based on useful, contextual knowledge. The cutting-edge knowledge graph (KG) approach puts that power in your hands. In Knowledge Graphs and LLMs in Action, you'll discover the theory of knowledge graphs and learn how to build services that can demonstrate intelligent behavior. You'll learn to create KGs from first principles and go hands-on to develop advisor applications for real-world domains like healthcare and finance.
·manning.com·
Most people talk about AI agents like they’re already reliable. They aren’t.
Most people talk about AI agents like they're already reliable. They aren't. They follow instructions. They spit out results. But they forget what they did, why it mattered, or how circumstances have changed. There's no continuity. No memory. No grasp of unfolding context. Today's agents can respond, but they can't reflect, reason, or adapt over time.

OpenAI's new cookbook, Temporal Agents with Knowledge Graphs, lays out just how limiting that is and offers a credible path forward. It introduces a new class of temporal agents: systems built not around isolated prompts, but around structured, persistent memory. At the core is a knowledge graph that acts as an evolving world model - not a passive record, but a map of what happened, why it mattered, and what it connects to. This lets agents handle questions like:
- "What changed since last week?"
- "Why was this decision made?"
- "What's still pending, and what's blocking it?"

It's an architectural shift that turns time, intent, and interdependence into first-class elements.

This mirrors Tony Seale's argument about enterprise data: most data products don't fail because of missing pipelines - they fail because they don't align with how the business actually thinks. Data lives in tables and schemas. Business lives in concepts like churn, margin erosion, customer health, or risk exposure. Tony's answer is a business ontology: a formal, machine-readable layer that defines the language of the business and anchors data products to it. It's a shift from structure to semantics - from warehouse to shared understanding.

That's the same shift OpenAI is proposing for agents. In both cases, what's missing isn't infrastructure. It's interpretation. The challenge isn't access. It's alignment. If we want agents that behave reliably in real-world settings, it's not enough to fine-tune them on PDFs or dump Slack threads into context windows. They need to be wired into shared ontologies - concept-level scaffolding like: Who are our customers? What defines success? What risks are emerging, and how are they evolving?

The temporal knowledge graph becomes more than just memory. It becomes an interface - a structured bridge between reasoning and meaning. This goes far beyond another agent-orchestration blueprint. It points to something deeper: without time and meaning, there is no true delegation. We don't need agents that mimic tasks. We need agents that internalise context and navigate change. That means building systems that don't just handle data, but understand how it fits into the changing world we care about.

OpenAI's temporal memory graphs and Tony's business ontologies aren't separate ideas. They're converging on the same missing layer: AI that reasons in the language of time and meaning.

H/T Vin Vashishta for the pointer to the OpenAI cookbook, and image nicked from Tony (as usual).
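To make "what changed since last week?" concrete, here is a minimal sketch of timestamped facts, assuming a simple (subject, predicate, object, valid_from) model. This is an illustration of the idea, not the cookbook's actual schema, which is richer.

```python
# Facts carry a validity timestamp, so the graph is an evolving record:
# "what changed?" is a time filter, "current state" is latest-assertion-wins.
from datetime import datetime, timedelta

# (subject, predicate, object, valid_from)
facts = [
    ("ticket-42", "status", "open",         datetime(2025, 8, 1)),
    ("ticket-42", "status", "blocked",      datetime(2025, 8, 20)),
    ("ticket-42", "blocked_by", "ticket-7", datetime(2025, 8, 20)),
    ("ticket-7",  "status", "in_review",    datetime(2025, 8, 22)),
]

def changes_since(facts, cutoff: datetime):
    """Facts asserted after the cutoff: the 'what changed?' query."""
    return [f for f in facts if f[3] > cutoff]

def current_value(facts, subject: str, predicate: str):
    """Latest assertion wins, so stale facts stay as history, not truth."""
    history = [f for f in facts if f[0] == subject and f[1] == predicate]
    return max(history, key=lambda f: f[3])[2] if history else None

last_week = datetime(2025, 8, 25) - timedelta(days=7)
print(changes_since(facts, last_week))              # asserted after Aug 18
print(current_value(facts, "ticket-42", "status"))  # "blocked"
```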
·linkedin.com·
Graph-R1: Towards Agentic GraphRAG Framework via End-to-end Reinforcement Learning
Graph-R1: a new RAG framework just dropped! It combines agents, GraphRAG, and RL. Here are my notes:

It introduces a novel RAG framework that moves beyond traditional one-shot or chunk-based retrieval by integrating graph-structured knowledge, agentic multi-turn interaction, and RL. Graph-R1 is an agent that reasons over a knowledge-hypergraph environment by iteratively issuing queries and retrieving subgraphs in a multi-step "think-retrieve-rethink-generate" loop. Unlike prior GraphRAG systems that perform fixed retrieval, Graph-R1 dynamically explores the graph based on evolving agent state.

Retrieval is modeled as a dual-path mechanism: entity-based hyperedge retrieval and direct hyperedge similarity, fused via reciprocal rank aggregation to return semantically rich subgraphs. These are used to ground subsequent reasoning steps. The agent is trained end-to-end using GRPO with a composite reward that incorporates structural format adherence and answer correctness. Rewards are only granted if reasoning follows the proper format, encouraging interpretable and complete reasoning traces.

On six RAG benchmarks (e.g., HotpotQA, 2WikiMultiHopQA), Graph-R1 achieves state-of-the-art F1 and generation scores, outperforming prior methods including HyperGraphRAG, R1-Searcher, and Search-R1. It shows particularly strong gains on harder, multi-hop datasets and under OOD conditions. The authors find that Graph-R1's performance degrades sharply without its three key components: hypergraph construction, multi-turn interaction, and RL. The ablation study supports that graph-based, multi-turn retrieval improves information density and accuracy, while end-to-end RL bridges the gap between structure and language.

Paper: https://lnkd.in/eGbf4HhX
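A schematic sketch of the "think-retrieve-rethink-generate" loop may help. This is not the paper's code: the tag protocol, the canned llm() stub, and retrieve_subgraph() are stand-ins for the RL-trained policy and the dual-path hypergraph retriever.

```python
# The agent keeps issuing retrieval queries until the policy decides it has
# enough evidence, then generates an answer grounded in the retrieved facts.
def llm(prompt: str) -> str:
    """Stand-in for the RL-trained policy; returns canned text so the
    control flow is runnable without a model."""
    return "<search>who founded X?</search>" if "<evidence>" not in prompt \
        else "<answer>the founder of X</answer>"

def retrieve_subgraph(query: str) -> str:
    """Stand-in for dual-path hyperedge retrieval fused via reciprocal rank."""
    return "<evidence>(X, founded_by, Y)</evidence>"

def graph_agent(question: str, max_turns: int = 4) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_turns):
        step = llm(prompt)
        if step.startswith("<answer>"):       # the policy chose to generate
            return step
        query = step.removeprefix("<search>").removesuffix("</search>")
        prompt += "\n" + retrieve_subgraph(query)   # rethink with new evidence
    return llm(prompt + "\nFinal answer:")

print(graph_agent("Who founded X?"))
```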
·linkedin.com·
Millions of G∈AR-s: Extending GraphRAG to Millions of Documents
Scaling GraphRAG to Millions of Documents: Lessons from the SIGIR 2025 LiveRAG Challenge

👉 WHY THIS MATTERS
Retrieval-augmented generation (RAG) struggles with multi-hop questions that require connecting information across documents. While graph-based RAG methods like GEAR improve reasoning by structuring knowledge as entity-relationship triples, scaling these approaches to web-sized datasets (millions or billions of documents) remains a bottleneck. The culprit? Traditional methods rely heavily on LLMs to extract triples, a process too slow and expensive for large corpora.

👉 WHAT THEY DID
Researchers from Huawei and the University of Edinburgh reimagined GEAR to sidestep costly offline triple extraction. Their solution:
- Pseudo-alignment: link retrieved passages to existing triples in Wikidata via sparse retrieval.
- Iterative expansion: use a lightweight LLM (Falcon-3B-Instruct) to iteratively rewrite queries and retrieve additional evidence through Wikidata's graph structure.
- Multi-step filtering: combine Reciprocal Rank Fusion (RRF) and prompt-based filtering to reconcile noisy alignments between Wikidata and document content.

This approach achieved 87.6% correctness and 53% faithfulness on the SIGIR 2025 LiveRAG benchmark, despite challenges in aligning Wikidata's generic triples with domain-specific document content.

👉 KEY INSIGHTS
1. Trade-offs in alignment: linking Wikidata triples to documents works best for general knowledge but falters with niche topics (e.g., "Pacific geoduck reproduction" mapped incorrectly to oyster biology).
2. Cost efficiency: avoiding LLM-based triple extraction reduced computational overhead, enabling scalability.
3. The multi-step advantage: query rewriting and iterative retrieval improved performance on complex questions requiring 2+ reasoning hops.

👉 OPEN QUESTIONS
- How can we build asymmetric semantic models to better align text and graph data?
- Can hybrid alignment strategies (e.g., blending domain-specific KGs with Wikidata) mitigate topic drift?
- Does graph expansion improve linearly with scale, or are diminishing returns inevitable?

Why read this paper? It's a pragmatic case study in balancing scalability with reasoning depth in RAG systems. The code and prompts are fully disclosed, offering a blueprint for adapting GraphRAG to real-world, large-scale applications.

Paper: "Millions of G∈AR-s: Extending GraphRAG to Millions of Documents" (Shen et al., SIGIR 2025). Preprint: arXiv:2307.17399.
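Reciprocal Rank Fusion, the fusion step named above, fits in a few lines. The k=60 constant is the value commonly used in the RRF literature, and the two ranked lists are made up for illustration; the paper fuses sparse passage retrieval with graph-expanded evidence.

```python
# RRF: each item scores sum(1 / (k + rank)) across the ranked lists it
# appears in, so items ranked well by several retrievers rise to the top.
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

sparse = ["doc3", "doc1", "doc7"]   # e.g. sparse retrieval over passages
graph  = ["doc1", "doc9", "doc3"]   # e.g. evidence expanded via Wikidata
print(rrf([sparse, graph]))         # doc1 and doc3 rise to the top
```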
·linkedin.com·
Baking π and Building Better AI | LinkedIn
I've spent long, hard years learning how to talk about knowledge graphs and semantics with software engineers who have little training in linguistics. I feel quite fluent at this point, after investing huge amounts of effort into understanding statistics (I was a humanities undergrad) and into unpac…
·linkedin.com·
how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
Thought for the day: I've been mulling over how both OWL and SHACL can be employed during the decision-making phase for AI agents when using a knowledge graph instead of relying on an LLM that hallucinates. In this way, the LLM can still be used for assessment and sensory feedback, but it augments the graph, not the other way around.

OWL and SHACL serve different roles. SHACL is not just a preprocessing validator; it can play an active role in constraining, guiding, or triggering decisions, especially when integrated into AI pipelines. OWL, however, is typically more central to inferencing and reasoning tasks. SHACL can actively participate in decision-making, especially when decisions require data integrity, constraint enforcement, or trigger-based logic. In complex agents, OWL provides the inferencing engine, while SHACL acts as the constraint gatekeeper and occasionally contributes to rule-based decision-making.

For example, an AI agent processes RDF data describing an applicant's skills, degree, and experience. SHACL validates the data's structure, ensuring required fields are present and correctly formatted. OWL reasoning infers that the applicant is qualified for a technical role and matches the profile of a backend developer. SHACL is then used again to check policy compliance. With all checks passed, the applicant is shortlisted, and a follow-up email is triggered.

In AI agent decision-making, OWL and SHACL often work together in complementary ways. SHACL is commonly used as a preprocessing step to validate incoming RDF data. If the data fails validation, it's flagged or excluded, ensuring only clean, structurally sound data reaches the OWL reasoner. In this role, SHACL acts as a gatekeeper. They can also operate in parallel, or interleaved within a pipeline. As decisions evolve, SHACL shapes may be checked mid-process. Some AI agents even use SHACL as a rule engine, to trigger alerts, detect actionable patterns, or constrain reasoning paths, while OWL continues to handle more complex semantic inferences, such as class hierarchies or property logic.

Finally, SHACL can augment decision-making by confirming whether OWL-inferred actions comply with specific constraints. OWL may infer that "A is a type of B, so do X," and SHACL then determines whether doing X adheres to a policy or requirement. Because SHACL supports closed-world assumptions (which OWL does not), it plays a valuable role in enforcing policies or compliance rules during decision execution.

Illustrated:
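The applicant example above can be sketched with rdflib and pySHACL. The shapes, the toy data, and the 3-year threshold are invented for illustration; the SHACL-gate-then-reason pipeline is the pattern the post describes.

```python
# Step 1 of the pipeline: SHACL as gatekeeper, validating incoming RDF
# before any reasoning or action runs.
from rdflib import Graph
from pyshacl import validate

data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:alice a ex:Applicant ;
    ex:hasDegree "MSc Computer Science" ;
    ex:yearsExperience 5 .
""", format="turtle")

shapes = Graph().parse(data="""
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:ApplicantShape a sh:NodeShape ;
    sh:targetClass ex:Applicant ;
    sh:property [ sh:path ex:hasDegree ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:yearsExperience ;
                  sh:datatype xsd:integer ; sh:minInclusive 3 ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes, inference="rdfs")
print("passes gate:", conforms)

# Step 2 would run OWL-style reasoning over the validated graph, e.g.
# owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(data), to infer
# "qualified for a technical role". Step 3 would re-run validate() against
# policy shapes before the shortlist action fires.
```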
·linkedin.com·
I'm trying to build a Knowledge Graph.
I'm trying to build a Knowledge Graph. Our team has done experiments with the current libraries available (LlamaIndex, Microsoft's GraphRAG, LightRAG, Graphiti, etc.). From a product perspective, they seem to be missing basic, common-sense features:

Stick to a Fixed Template: My business organizes information in a specific way. I need the system to use our predefined entities and relationships, not invent its own. The output has to be consistent and predictable every time (see the sketch below for one way to enforce this).

Start with What We Already Know: We already have lists of our products, departments, and key employees. The AI shouldn't have to guess this information from documents. I want to seed this data upfront so that the graph can be built on this foundation of truth.

Clean Up and Merge Duplicates: The graph I currently get is messy. It sees "First Quarter Sales" and "Q1 Sales Report" as two completely different things. This is probably easy, but I want to make sure it does not happen.

Flag When Sources Disagree: If one chunk says our sales were $10M and another says $12M, I need the library to flag this disagreement, not just silently pick one. It also needs to show me exactly which documents the numbers came from so we can investigate.

Has anyone solved this? I'm looking for a library that gets these fundamentals right.
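One common answer to the "fixed template" ask is schema-constrained extraction: define the closed set of entity and relation types as a Pydantic model and reject anything off-schema. The types below are invented for illustration; tools like Outlines or Instructor can force an LLM to emit JSON that parses against such a model.

```python
# A closed ontology as a Pydantic contract: extraction output either matches
# the predefined types or fails loudly instead of silently polluting the graph.
from enum import Enum
from pydantic import BaseModel, ValidationError

class EntityType(str, Enum):
    PRODUCT = "Product"
    DEPARTMENT = "Department"
    EMPLOYEE = "Employee"

class RelationType(str, Enum):
    OWNED_BY = "OWNED_BY"
    WORKS_IN = "WORKS_IN"

class Entity(BaseModel):
    name: str
    type: EntityType

class Relation(BaseModel):
    source: str
    target: str
    type: RelationType

class ExtractedGraph(BaseModel):
    entities: list[Entity]
    relations: list[Relation]

try:
    ExtractedGraph.model_validate({
        "entities": [{"name": "Q1 Sales Report", "type": "Document"}],
        "relations": [],
    })
except ValidationError:
    print("rejected: 'Document' is not in the predefined ontology")
```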
·linkedin.com·
❓ Why I Wrote This Book?
❓ Why I Wrote This Book?

In the past two to three years, we've witnessed a revolution: first ChatGPT, and now autonomous AI agents. This is only the beginning. In the years ahead, AI will transform not only how we work but how we live. At the core of this transformation lies a single breakthrough technology: large language models (LLMs). That's why I decided to write this book.

This book explores what an LLM is, how it works, and how it develops its remarkable capabilities. It also shows how to put these capabilities into practice, like turning an LLM into the beating heart of an AI agent. Dissatisfied with the overly simplified or fragmented treatments found in many current books, I've aimed to provide both solid theoretical foundations and hands-on demonstrations. You'll learn how to build agents using LLMs, integrate technologies like retrieval-augmented generation (RAG) and knowledge graphs, and explore one of today's most fascinating frontiers: multi-agent systems. Finally, I've included a section on open research questions (areas where today's models still fall short, ethical issues, doubts, and so on) and where tomorrow's breakthroughs may lie.

🧠 Who is this book for? Anyone curious about LLMs, how they work, and how to use them effectively. Whether you're just starting out or already have experience, this book offers both accessible explanations and practical guidance. It's for those who want to understand the theory and apply it in the real world.

🛑 Who is this book not for? Those who dismiss AI as a passing fad or have no interest in what lies ahead. For everyone else, this book is for you. Because AI agents are no longer speculative. They're real, and they're here.

A huge thanks to my co-author Gabriele Iuculano, and the Packt team: Gebin George, Sanjana Gupta, Ali A., Sonia Chauhan, Vignesh Raju, Malhar Deshpande.

#AI #LLMs #KnowledgeGraphs #AIagents #RAG #GenerativeAI #MachineLearning #NLP #Agents #DeepLearning
·linkedin.com·
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
Transform Claude's hidden memory into interactive knowledge graphs: a universal tool to visualize any Claude user's memory.json as interactive graphs. Transform your Claude Memory MCP data into visualizations that show how your AI assistant's knowledge connects and evolves over time.

Enterprise teams using Claude lack visibility into how their AI assistant accumulates and organizes institutional knowledge. Claude Memory Viz provides zero-configuration visualization that automatically finds memory files and, in the demo, displays 72 entities with 93 relationships in a real-time force-directed layout. Teams can filter by entity type, search across all data, and explore detailed connections through rich tooltips.

The technical implementation supports Claude's standard NDJSON memory format, automatically detecting and color-coding entity types from personality profiles to technical tools. Node size reflects connection count, while adjustable physics parameters enable optimal spacing for large knowledge graphs. Built with Cytoscape.js for performance.

Built with the philosophy "solve it once and for all," the tool works for any Claude user with zero configuration. The visualizer automatically searches common memory file locations, provides a demo-data fallback, and offers clear guidance when files aren't found. Integration requires just a git clone and one command.

This matters because AI memory has been invisible to users, creating trust and accountability gaps in enterprise AI deployment. When teams can visualize how their AI assistant organizes knowledge, they gain insight into decision-making patterns and can optimize their AI collaboration strategies.

👩‍💻 https://lnkd.in/e__RQh_q
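Reading that NDJSON format into nodes and edges is a few lines of Python. The field names below follow the reference MCP memory server ("entityType", "relationType", "from", "to"); treat them as assumptions if your memory.json differs, and this is a sketch of the parsing idea, not the tool's own code.

```python
# NDJSON memory -> node and edge lists, the shape a graph visualizer needs.
import json

nodes, edges = [], []
with open("memory.json") as f:
    for line in f:                      # NDJSON: one JSON object per line
        record = json.loads(line)
        if record.get("type") == "entity":
            nodes.append({"id": record["name"],
                          "kind": record.get("entityType", "unknown")})
        elif record.get("type") == "relation":
            edges.append({"source": record["from"],
                          "target": record["to"],
                          "label": record.get("relationType", "")})

print(f"{len(nodes)} entities, {len(edges)} relationships")
# nodes/edges can be handed to Cytoscape.js as its elements array.
```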
·linkedin.com·
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
Book promotion, because this one is worth it: agentic AI at its best. This masterpiece was published by Salvatore Raieli and Gabriele Iuculano; it is available for orders from today, and it's already a Bestseller!

While many resources focus on LLMs or basic agentic workflows, what makes this book stand out is its deep dive into grounding LLMs with real-world data and action through the powerful combination of Retrieval-Augmented Generation (RAG) and Knowledge Graphs. This isn't just about building agents; it's about building AI that reasons, retrieves accurate information, and acts autonomously by leveraging structured knowledge alongside advanced LLMs.

The book offers a practical roadmap, packed with concrete Python examples and real-world case studies, guiding you from concept to deployment of intelligent, robust, and hallucination-minimized AI solutions, even orchestrating multi-agent systems.

Order your copy here: https://packt.link/RpzGM

#AI #LLMs #KnowledgeGraphs #AIAgents #RAG #GenerativeAI #MachineLearning
·linkedin.com·