Found 494 bookmarks
FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs
FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs
Sharing our recent research, FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs. It is the largest financial knowledge graph built from unstructured data. The preprint is out on arXiv now (link is in the comments), coauthored with Abhinav Arun, Fabrizio Dimino, and Tejas Prakash Agrawal. While LLMs make it easier than ever to generate knowledge graphs, the real challenge lies in ensuring quality without hallucinations, with strong coverage, precision, comprehensiveness, and relevance. FinReflectKG tackles this through an iterative, evaluation-driven agentic approach, carefully optimized across multiple evaluation metrics to deliver a trustworthy, high-quality knowledge graph. Designed to power use cases like entity search, question answering, signal generation, predictive modeling, and financial network analysis, FinReflectKG sets a new benchmark for building reliable financial KGs and showcases the potential of agentic workflows in LLM-driven systems. We will be creating a suite of benchmarks using FinReflectKG for KG-related tasks in financial services. More details to come soon.
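The post stays at the concept level, but the evaluation-driven reflection loop it describes can be sketched in a few lines of Python. Everything below (the `call_llm` stub, the groundedness heuristic, the thresholds) is an illustrative assumption, not the FinReflectKG implementation:

```python
import json

def call_llm(prompt: str) -> str:
    """Stub for an LLM call; replace with your provider's client.
    Returns a JSON list of {"head", "relation", "tail"} triples."""
    return json.dumps([{"head": "Apple Inc.", "relation": "reported",
                        "tail": "Q3 revenue of $85.8B"}])

def grounded_fraction(triples, chunk):
    """Crude faithfulness proxy: fraction of triples whose head and tail
    both appear verbatim in the source chunk."""
    ok = [t for t in triples if t["head"] in chunk and t["tail"] in chunk]
    return len(ok) / max(len(triples), 1)

def reflective_extract(chunk: str, threshold: float = 0.9, max_rounds: int = 3):
    """Extract triples, score them, and re-prompt with a critique until the
    score clears a threshold: the iterative, evaluation-driven pattern."""
    critique, triples = "", []
    for _ in range(max_rounds):
        prompt = f"Extract (head, relation, tail) triples as JSON.\n{critique}\nText:\n{chunk}"
        triples = json.loads(call_llm(prompt))
        score = grounded_fraction(triples, chunk)
        if score >= threshold:
            break
        critique = f"Previous attempt scored {score:.2f} on groundedness; drop unsupported triples."
    return triples

print(reflective_extract("Apple Inc. reported Q3 revenue of $85.8B."))
```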
·linkedin.com·
FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs
From raw data to a knowledge graph with SynaLinks
From raw data to a knowledge graph with SynaLinks
SynaLinks is an open-source framework designed to make it easier to partner language models (LMs) with your graph technologies. Since most companies are not in a position to train their own language models from scratch, SynaLinks empowers you to adapt existing LMs on the market to specialized tasks.
·gdotv.com·
From raw data to a knowledge graph with SynaLinks
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge graphs help you understand the relationships between the objects, events, situations, and concepts in your data so you can readily identify important patterns and make better decisions. This book provides tools and techniques for efficiently labeling data, modeling a knowledge graph, and using it to derive useful insights. In Knowledge Graphs and LLMs in Action you will learn how to:
- Model knowledge graphs with an iterative top-down approach based on business needs
- Create a knowledge graph starting from ontologies, taxonomies, and structured data
- Use machine learning algorithms to hone and complete your graphs
- Build knowledge graphs from unstructured text data sources
- Reason on the knowledge graph and apply machine learning algorithms
Move beyond analyzing data and start making decisions based on useful, contextual knowledge. The cutting-edge knowledge graph (KG) approach puts that power in your hands. In Knowledge Graphs and LLMs in Action, you'll discover the theory of knowledge graphs and learn how to build services that can demonstrate intelligent behavior. You'll learn to create KGs from first principles and go hands-on to develop advisor applications for real-world domains like healthcare and finance.
·manning.com·
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Most people talk about AI agents like they’re already reliable. They aren’t.
Most people talk about AI agents like they’re already reliable. They aren’t.
Most people talk about AI agents like they're already reliable. They aren't. They follow instructions. They spit out results. But they forget what they did, why it mattered, or how circumstances have changed. There's no continuity. No memory. No grasp of unfolding context. Today's agents can respond, but they can't reflect, reason, or adapt over time.

OpenAI's new cookbook, Temporal Agents with Knowledge Graphs, lays out just how limiting that is and offers a credible path forward. It introduces a new class of temporal agents: systems built not around isolated prompts, but around structured, persistent memory. At the core is a knowledge graph that acts as an evolving world model - not a passive record, but a map of what happened, why it mattered, and what it connects to. This lets agents handle questions like: "What changed since last week?" "Why was this decision made?" "What's still pending, and what's blocking it?" It's an architectural shift that turns time, intent, and interdependence into first-class elements.

This mirrors Tony Seale's argument about enterprise data: most data products don't fail because of missing pipelines - they fail because they don't align with how the business actually thinks. Data lives in tables and schemas. Business lives in concepts like churn, margin erosion, customer health, or risk exposure. Tony's answer is a business ontology: a formal, machine-readable layer that defines the language of the business and anchors data products to it. It's a shift from structure to semantics - from warehouse to shared understanding.

That's the same shift OpenAI is proposing for agents. In both cases, what's missing isn't infrastructure. It's interpretation. The challenge isn't access. It's alignment. If we want agents that behave reliably in real-world settings, it's not enough to fine-tune them on PDFs or dump Slack threads into context windows. They need to be wired into shared ontologies: concept-level scaffolding like: Who are our customers? What defines success? What risks are emerging, and how are they evolving?

The temporal knowledge graph becomes more than just memory. It becomes an interface - a structured bridge between reasoning and meaning. This goes far beyond another agent orchestration blueprint. It points to something deeper: without time and meaning, there is no true delegation. We don't need agents that mimic tasks. We need agents that internalise context and navigate change. That means building systems that don't just handle data, but understand how it fits into the changing world we care about.

OpenAI's temporal memory graphs and Tony's business ontologies aren't separate ideas. They're converging on the same missing layer: AI that reasons in the language of time and meaning.

H/T Vin Vashishta for the pointer to the OpenAI cookbook, and image nicked from Tony (as usual).
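To make the "evolving world model" idea concrete, here is a minimal sketch of a temporal knowledge graph whose facts carry validity intervals, so an agent can answer "what changed since last week?" questions. The data model is an assumption for illustration and is not taken from the OpenAI cookbook:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    valid_from: datetime
    valid_to: datetime | None = None  # None means the fact is still current

class TemporalKG:
    def __init__(self):
        self.facts: list[Fact] = []

    def assert_fact(self, s, p, o, when):
        # Close out any currently-valid fact for the same subject/predicate,
        # then record the new value -- this is what preserves history.
        for f in self.facts:
            if f.subject == s and f.predicate == p and f.valid_to is None:
                f.valid_to = when
        self.facts.append(Fact(s, p, o, when))

    def changed_since(self, when):
        """Facts created or superseded after a cutoff: 'what changed since last week?'"""
        return [f for f in self.facts
                if f.valid_from >= when or (f.valid_to and f.valid_to >= when)]

kg = TemporalKG()
kg.assert_fact("ticket-42", "status", "open", datetime(2025, 8, 1))
kg.assert_fact("ticket-42", "status", "blocked", datetime(2025, 8, 12))
for fact in kg.changed_since(datetime(2025, 8, 10)):
    print(fact)
```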
Most people talk about AI agents like they’re already reliable. They aren’t.
·linkedin.com·
Most people talk about AI agents like they’re already reliable. They aren’t.
Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement.
Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement.
Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement. That post struck a chord. With GPT-5 now here, it's the right moment to revisit the idea.

Back then, GPT-3.5 and GPT-4 could draft ontology structures, but there were limits in context, reasoning, and abstraction. With GPT-5 (and other frontier models), that's changing:
🔹 Larger context windows let entire ontologies sit in working memory at once.
🔹 Test-time compute enables better abstraction of concepts.
🔹 Multimodal input can turn diagrams, tables, and videos into structured ontology scaffolds.
🔹 Tool use allows ontologies to be validated, aligned, and extended in one flow.

But some fundamentals remain. GPT-5 is still curve-fitting to a training set - and that brings limits:
🔹 The flipside of flexibility is hallucination. OpenAI has reduced it, but GPT-5 still scores 0.55 on SimpleQA, with a 5% hallucination rate on its own public-question dataset.
🔹 The model is bound by the landscape of its training data. That landscape is vast, but it excludes your private, proprietary data - and increasingly, an organisation's edge will track directly to the data it owns outside that distribution.

Fortunately, the benefits flow both ways. LLMs can help build ontologies, but ontologies and knowledge graphs can also help improve LLMs. The two systems can work in tandem. Ontologies bring structure, consistency, and domain-specific context. LLMs bring adaptability, speed, and pattern recognition that ontologies can't achieve in isolation. Each offsets the other's weaknesses - and together they make both stronger.

The feedback loop is no longer theory - we've been proving it: Better LLM → Better Ontology → Better LLM, in your domain.

There is a lot of hype around AI. GPT-5 is good, but not ground-breaking. Still, the progress over two years is remarkable. For the foreseeable future, we are living in a world where models keep improving - but where we must pair classic formal symbolic systems with these new probabilistic models. For organisations, the challenge is to match growing model power with equally strong growth in the power of their proprietary symbolic formalisation. Not all formalisations are equal. We want fewer brittle IF statements buried in application code, and more rich, flexible abstractions embedded in the data itself. That's what ontologies and knowledge graphs promise to deliver.

Two years ago, this was a hopeful idea. Today, it's looking less like a nice-to-have and more like the only sensible way forward for organisations.

⭕ Neural-Symbolic Loop: https://lnkd.in/eJ7S22hF
🔗 Turn your data into a competitive edge: https://lnkd.in/eDd-5hpV
·linkedin.com·
Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement.
Palantir hit $175/share because they understand what 99% of AI companies don't: ontologies
Palantir hit $175/share because they understand what 99% of AI companies don't: ontologies
Palantir hit $175/share because they understand what 99% of AI companies don't: ontologies. In 2021, the word "ontology" appeared 0 times in their earnings calls. By Q3 2024? 9 times. Their US commercial revenue is growing 153% YoY. Why? Because LLMs are becoming the commodity, while ontologies are becoming the moat. Let me explain why most enterprise AI initiatives are failing without one.

Every enterprise has the same problem:
❗️ 47 different systems
❗️ 19 definitions of "customer"
❗️ 34 versions of "product"
❗️ business logic scattered across 100+ applications
You throw AI at something like this? It hallucinates. But if you build an ontology first, it gains the context and data depth to be able to reason. Palantir figured this out years ago.

But here's what Palantir doesn't do: verticalize at scale. They're brilliant at defense, government, and contracting. But specialized industries need specialized ontologies. Take telecommunications. A telco's "customer" isn't just a record - it's:
➕ a subscriber with multiple services
➕ a hierarchy of accounts and sub-accounts
➕ real-time network states
➕ billing cycles across geographies
➕ regulatory compliance per jurisdiction
Orgs have tried to standardize this before, but standards aren't ontologies. They're just vocabularies.

This is why Totogi has spent so much time and effort building their telco-specific ontology layer. While Palantir was perfecting horizontal enterprise ontologies, we went deep on telecom's unique semantic complexity. Now telcos can deploy AI that takes one action - 'activate new customer' - and correctly translates it across systems that call it 'create subscriber' (BSS), 'provision user' (network), 'establish account' (billing), and 'initialize profile' (CRM). No more manual steps, no more dropped handoffs between systems.

Palantir proved the model. But they can't be everywhere. The future belongs to industry-specific semantic platforms like Totogi's BSS Magic 🚀
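The "activate new customer" example is essentially a semantic mapping from one canonical business concept to each system's local vocabulary. A toy sketch follows; the system names and operation strings are invented for illustration, not Totogi's or Palantir's actual interfaces:

```python
# Minimal sketch: a canonical ontology term resolved to per-system operations.
ACTION_ONTOLOGY = {
    "ActivateCustomer": {
        "BSS":     "create subscriber",
        "network": "provision user",
        "billing": "establish account",
        "CRM":     "initialize profile",
    }
}

def plan(action: str) -> list[str]:
    """Translate one business-level action into the system-local steps."""
    bindings = ACTION_ONTOLOGY[action]
    return [f"{system}: {operation}" for system, operation in bindings.items()]

print(plan("ActivateCustomer"))
```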
Palantir hit $175/share because they understand what 99% of AI companies don't: ontologies
·linkedin.com·
Palantir hit $175/share because they understand what 99% of AI companies don't: ontologies
A gentle introduction to DSPy for graph data enrichment | Kuzu
A gentle introduction to DSPy for graph data enrichment | Kuzu

📢 Check out our latest blog post by Prashanth Rao, where we introduce the DSPy framework to help you build composable pipelines with LLMs and graphs. In the post, we dive into a fascinating dataset of Nobel laureates and their mentorship networks for a data enrichment task. 👇🏽

✅ The source data that contains the tree structures is enriched with data from the official Nobel Prize API.

✅ We showcase a 2-step methodology that combines the benefits of Kuzu's vector search capabilities with DSPy's powerful primitives to build an LLM-as-a-judge pipeline that helps disambiguate entities in the data (a sketch of this judging step follows after this list).

✅ The DSPy approach is scalable, low-cost and efficient, and is flexible enough to apply to a wide variety of domains and use cases.
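A rough idea of what the LLM-as-a-judge step can look like in DSPy, assuming a recent DSPy release and an OpenAI-hosted model; the signature, field names, and hard-coded candidate are illustrative, not the code from the Kuzu post:

```python
import dspy

# Assumed model name; swap in whatever LM you have access to.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class SameLaureate(dspy.Signature):
    """Judge whether two name mentions refer to the same Nobel laureate."""
    mention: str = dspy.InputField(desc="name as it appears in the mentorship tree")
    candidate: str = dspy.InputField(desc="candidate name returned by vector search")
    context: str = dspy.InputField(desc="prize year, category, and affiliation of the candidate")
    same_entity: bool = dspy.OutputField()

judge = dspy.ChainOfThought(SameLaureate)

# In the real pipeline the candidate would come from Kuzu's vector index;
# here it is hard-coded purely to show the judging step.
verdict = judge(
    mention="M. Curie",
    candidate="Marie Curie",
    context="Nobel Prize in Physics 1903; Sorbonne",
)
print(verdict.same_entity)
```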

·blog.kuzudb.com·
A gentle introduction to DSPy for graph data enrichment | Kuzu
Graph-R1: Towards Agentic GraphRAG Framework via End-to-end Reinforcement Learning
Graph-R1: Towards Agentic GraphRAG Framework via End-to-end Reinforcement Learning
Graph-R1: new RAG framework just dropped! It combines agents, GraphRAG, and RL. Here are my notes:

It introduces a novel RAG framework that moves beyond traditional one-shot or chunk-based retrieval by integrating graph-structured knowledge, agentic multi-turn interaction, and RL. Graph-R1 is an agent that reasons over a knowledge hypergraph environment by iteratively issuing queries and retrieving subgraphs using a multi-step "think-retrieve-rethink-generate" loop. Unlike prior GraphRAG systems that perform fixed retrieval, Graph-R1 dynamically explores the graph based on the evolving agent state.

Retrieval is modeled as a dual-path mechanism: entity-based hyperedge retrieval and direct hyperedge similarity, fused via reciprocal rank aggregation to return semantically rich subgraphs. These are used to ground subsequent reasoning steps. The agent is trained end-to-end using GRPO with a composite reward that incorporates structural format adherence and answer correctness. Rewards are only granted if reasoning follows the proper format, encouraging interpretable and complete reasoning traces.

On six RAG benchmarks (e.g., HotpotQA, 2WikiMultiHopQA), Graph-R1 achieves state-of-the-art F1 and generation scores, outperforming prior methods including HyperGraphRAG, R1-Searcher, and Search-R1. It shows particularly strong gains on harder, multi-hop datasets and under OOD conditions. The authors find that Graph-R1's performance degrades sharply without its three key components: hypergraph construction, multi-turn interaction, and RL. The ablation study supports that graph-based and multi-turn retrieval improve information density and accuracy, while end-to-end RL bridges the gap between structure and language.

Paper: https://lnkd.in/eGbf4HhX
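The dual-path fusion step is plain reciprocal rank fusion, which is easy to sketch; the hyperedge ids below are made up, and this is not the authors' code:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of hyperedge ids with RRF:
    score(d) = sum over lists of 1 / (k + rank(d))."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Entity-based hyperedge retrieval and direct hyperedge similarity each
# produce a ranking; RRF merges them into one subgraph candidate list.
entity_path = ["he3", "he7", "he1"]
similarity_path = ["he7", "he2", "he3"]
print(reciprocal_rank_fusion([entity_path, similarity_path]))
```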
Graph-R1: Towards Agentic GraphRAG Framework via End-to-end Reinforcement Learning
·linkedin.com·
Graph-R1: Towards Agentic GraphRAG Framework via End-to-end Reinforcement Learning
iText2KG v0.0.8 is out
iText2KG v0.0.8 is out
Alhamdulillah, iText2KG v0.0.8 is finally out! (Yes, I've been quite busy these past few months 😅) It can now build dynamic knowledge graphs. The GIF below shows a dynamic KG generated from OpenAI tweets between June 18 and July 17. (Note: temporal/logical conflicts aren't handled yet in this version, but you can still resolve them with a post-processing filter.) Here are the main updated features:

- iText2KG_Star: introduces a simpler and more efficient version of iText2KG that eliminates the separate entity extraction step. Instead of extracting entities and relations separately, iText2KG_Star extracts triplets directly from text. This approach is more efficient because it reduces processing time and token consumption and does not need to handle invented or isolated entities.
- Facts-Based KG Construction: enhances the framework with facts-based knowledge graph construction, using the Document Distiller to extract structured facts from documents, which are then used for incremental KG building. This approach provides more exhaustive and precise knowledge graphs.
- Dynamic Knowledge Graphs: iText2KG now supports building dynamic knowledge graphs that evolve over time. By leveraging the incremental nature of the framework and document snapshots with observation dates, users can track how knowledge changes and grows.

Check out the new version and an example of OpenAI dynamic KG construction in the first comment.
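The release notes describe direct triplet extraction plus incremental, date-stamped construction. The snippet below sketches that incremental idea generically; it is not the iText2KG API, and the merge policy is an assumption:

```python
from datetime import date

def merge_snapshot(kg: dict, triples: list[tuple[str, str, str]], observed: date) -> dict:
    """Incrementally fold a snapshot of triples into a dynamic KG,
    recording when each fact was first and last observed."""
    for triple in triples:
        entry = kg.setdefault(triple, {"first_seen": observed, "last_seen": observed})
        entry["last_seen"] = observed
    return kg

kg: dict = {}
merge_snapshot(kg, [("OpenAI", "announced", "new model")], date(2025, 6, 18))
merge_snapshot(kg, [("OpenAI", "announced", "new model"),
                    ("OpenAI", "partnered_with", "Oracle")], date(2025, 7, 17))
for triple, span in kg.items():
    print(triple, span)
```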
iText2KG v0.0.8 is finally out
·linkedin.com·
iText2KG v0.0.8 is out
Millions of G∈AR-s: Extending GraphRAG to Millions of Documents
Millions of G∈AR-s: Extending GraphRAG to Millions of Documents
Scaling GraphRAG to Millions of Documents: Lessons from the SIGIR 2025 LiveRAG Challenge

👉 WHY THIS MATTERS
Retrieval-augmented generation (RAG) struggles with multi-hop questions that require connecting information across documents. While graph-based RAG methods like GEAR improve reasoning by structuring knowledge as entity-relationship triples, scaling these approaches to web-sized datasets (millions or billions of documents) remains a bottleneck. The culprit? Traditional methods rely heavily on LLMs to extract triples, a process too slow and expensive for large corpora.

👉 WHAT THEY DID
Researchers from Huawei and the University of Edinburgh reimagined GEAR to sidestep costly offline triple extraction. Their solution:
- Pseudo-alignment: link retrieved passages to existing triples in Wikidata via sparse retrieval.
- Iterative expansion: use a lightweight LLM (Falcon-3B-Instruct) to iteratively rewrite queries and retrieve additional evidence through Wikidata's graph structure.
- Multi-step filtering: combine Reciprocal Rank Fusion (RRF) and prompt-based filtering to reconcile noisy alignments between Wikidata and document content.
This approach achieved 87.6% correctness and 53% faithfulness on the SIGIR 2025 LiveRAG benchmark, despite challenges in aligning Wikidata's generic triples with domain-specific document content.

👉 KEY INSIGHTS
1. Trade-offs in alignment: linking Wikidata triples to documents works best for general knowledge but falters with niche topics (e.g., "Pacific geoduck reproduction" mapped incorrectly to oyster biology).
2. Cost efficiency: avoiding LLM-based triple extraction reduced computational overhead, enabling scalability.
3. The multi-step advantage: query rewriting and iterative retrieval improved performance on complex questions requiring 2+ reasoning hops.

👉 OPEN QUESTIONS
- How can we build asymmetric semantic models to better align text and graph data?
- Can hybrid alignment strategies (e.g., blending domain-specific KGs with Wikidata) mitigate topic drift?
- Does graph expansion improve linearly with scale, or are diminishing returns inevitable?

Why read this paper? It's a pragmatic case study in balancing scalability with reasoning depth in RAG systems. The code and prompts are fully disclosed, offering a blueprint for adapting GraphRAG to real-world, large-scale applications.

Paper: "Millions of G∈AR-s: Extending GraphRAG to Millions of Documents" (Shen et al., SIGIR 2025). Preprint: arXiv:2307.17399.
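Pseudo-alignment boils down to sparse retrieval over verbalised triples instead of LLM extraction. A toy sketch using the rank_bm25 package; the triple strings and tokenizer are stand-ins for the Wikidata-scale index described in the paper:

```python
from rank_bm25 import BM25Okapi

# Toy stand-in for a Wikidata triple index; in the paper this is a sparse
# index over millions of verbalised triples.
triples = [
    "Marie Curie award received Nobel Prize in Physics",
    "Pacific geoduck parent taxon Panopea",
    "Edinburgh country United Kingdom",
]

def tokenize(text: str) -> list[str]:
    return text.lower().split()

index = BM25Okapi([tokenize(t) for t in triples])

def pseudo_align(passage: str, top_n: int = 2) -> list[str]:
    """Link a retrieved passage to existing triples via sparse retrieval,
    instead of extracting new triples with an LLM."""
    return index.get_top_n(tokenize(passage), triples, n=top_n)

print(pseudo_align("How do Pacific geoducks reproduce?"))
```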
Millions of G∈AR-s: Extending GraphRAG to Millions of Documents
·linkedin.com·
Millions of G∈AR-s: Extending GraphRAG to Millions of Documents
Baking π and Building Better AI | LinkedIn
Baking π and Building Better AI | LinkedIn
I've spent long, hard years learning how to talk about knowledge graphs and semantics with software engineers who have little training in linguistics. I feel quite fluent at this point, after investing huge amounts of effort into understanding statistics (I was a humanities undergrad) and into unpac
·linkedin.com·
Baking π and Building Better AI | LinkedIn
What’s the difference between context engineering and ontology engineering?
What’s the difference between context engineering and ontology engineering?
What's the difference between context engineering and ontology engineering? We hear a lot about "context engineering" these days in AI wonderland. A lot of good things are being said, but it's worth noting what's missing. Yes, context matters. But context without structure is narrative, not knowledge. And if AI is going to scale beyond demos and copilots into systems that reason, track memory, and interoperate across domains, then context alone isn't enough. We need ontology engineering.

Here's the difference:
- Context engineering is about curating inputs: prompts, memory, user instructions, embeddings. It's the art of framing.
- Ontology engineering is about modeling the world: defining entities, relations, axioms, and constraints that make reasoning possible.
In other words: context guides attention. Ontology shapes understanding.

What's dangerous is that many teams stop at context, assuming that if you feed the right words to an LLM, you'll get truth, traceability, or decisions you can trust. This is what I call the "hallucination of control". Ontologies provide what LLMs lack: grounding, consistency, and interoperability. But they are hard to build without the right methods, adapted from the original discipline that started 20+ years ago with the semantic web; now it's time to work it out for the LLM era.

If you're serious about scaling AI across business processes or mission-critical systems, the real challenge is more than context: it's shared meaning. And tech alone cannot solve this. That's why we need to put the ontology discussion in the boardroom, because integrating AI into organizations is much more complicated than just providing the right context in a prompt or a context window.

That's it for today. More tomorrow! I'm trying to get back to journaling here every day. 🤙 Hope you will find something useful in what I write.
What’s the difference between context engineering and ontology engineering?
·linkedin.com·
What’s the difference between context engineering and ontology engineering?
how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
Thought for the day: I've been mulling over how both OWL and SHACL can be employed during the decision-making phase for AI agents when using a knowledge graph instead of relying on an LLM that hallucinates. In this way, the LLM can still be used for assessment and sensory feedback, but it augments the graph, not the other way around.

OWL and SHACL serve different roles. SHACL is not just a preprocessing validator; it can play an active role in constraining, guiding, or triggering decisions, especially when integrated into AI pipelines. However, OWL is typically more central to inferencing and reasoning tasks. SHACL can actively participate in decision-making, especially when decisions require data integrity, constraint enforcement, or trigger-based logic. In complex agents, OWL provides the inferencing engine, while SHACL acts as the constraint gatekeeper and occasionally contributes to rule-based decision-making.

For example, an AI agent processes RDF data describing an applicant's skills, degree, and experience. SHACL validates the data's structure, ensuring required fields are present and correctly formatted. OWL reasoning infers that the applicant is qualified for a technical role and matches the profile of a backend developer. SHACL is then used again to check policy compliance. With all checks passed, the applicant is shortlisted, and a follow-up email is triggered.

In AI agent decision-making, OWL and SHACL often work together in complementary ways. SHACL is commonly used as a preprocessing step to validate incoming RDF data. If the data fails validation, it's flagged or excluded, ensuring only clean, structurally sound data reaches the OWL reasoner. In this role, SHACL acts as a gatekeeper. They can also operate in parallel or in an interleaved manner within a pipeline. As decisions evolve, SHACL shapes may be checked mid-process. Some AI agents even use SHACL as a rule engine, to trigger alerts, detect actionable patterns, or constrain reasoning paths, while OWL continues to handle more complex semantic inferences, such as class hierarchies or property logic.

Finally, SHACL can augment decision-making by confirming whether OWL-inferred actions comply with specific constraints. OWL may infer that "A is a type of B, so do X," and SHACL then determines whether doing X adheres to a policy or requirement. Because SHACL supports closed-world assumptions (which OWL does not), it plays a valuable role in enforcing policies or compliance rules during decision execution.
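A compact sketch of that validate-then-reason pattern with rdflib, pySHACL, and owlrl; the applicant data, ontology axiom, and shape are invented for illustration:

```python
from rdflib import Graph, Namespace, RDF
from pyshacl import validate
from owlrl import DeductiveClosure, OWLRL_Semantics

EX = Namespace("http://example.org/")

data = Graph().parse(format="turtle", data="""
@prefix ex: <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# OWL axiom: every backend developer profile is a qualified applicant.
ex:BackendDeveloper a owl:Class ; rdfs:subClassOf ex:QualifiedApplicant .
ex:QualifiedApplicant a owl:Class .

ex:alice a ex:BackendDeveloper ;
    ex:hasSkill "Python" ;
    ex:yearsExperience 5 .
""")

shapes = Graph().parse(format="turtle", data="""
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:ApplicantShape a sh:NodeShape ;
    sh:targetClass ex:BackendDeveloper ;
    sh:property [ sh:path ex:hasSkill ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:yearsExperience ;
                  sh:datatype xsd:integer ; sh:minInclusive 3 ] .
""")

# SHACL as the gatekeeper: structural and policy constraints (closed-world).
conforms, _, report = validate(data, shacl_graph=shapes)
print("SHACL conforms:", conforms)

# OWL as the inference engine: materialise entailed facts (open-world).
DeductiveClosure(OWLRL_Semantics).expand(data)
print("Shortlist Alice:", (EX.alice, RDF.type, EX.QualifiedApplicant) in data)
```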
how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
·linkedin.com·
how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
I'm trying to build a Knowledge Graph.
I'm trying to build a Knowledge Graph.
I'm trying to build a Knowledge Graph. Our team has done experiments with the current libraries available (LlamaIndex, Microsoft's GraphRAG, LightRAG, Graphiti, etc.). From a product perspective, they seem to be missing basic, common-sense features:

Stick to a Fixed Template: My business organizes information in a specific way. I need the system to use our predefined entities and relationships, not invent its own. The output has to be consistent and predictable every time.

Start with What We Already Know: We already have lists of our products, departments, and key employees. The AI shouldn't have to guess this information from documents. I want to seed this data upfront so that the graph can be built on this foundation of truth.

Clean Up and Merge Duplicates: The graph I currently get is messy. It sees "First Quarter Sales" and "Q1 Sales Report" as two completely different things. This is probably easy, but I want to make sure it does not happen.

Flag When Sources Disagree: If one chunk says our sales were $10M and another says $12M, I need the library to flag this disagreement, not just silently pick one. It also needs to show me exactly which documents the numbers came from so we can investigate.

Has anyone solved this? I'm looking for a library that gets these fundamentals right.
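None of the libraries named above expose exactly these hooks out of the box, but the last two asks (merging aliases into canonical entities and flagging disagreements with provenance) can be layered on top of whatever extractor you use. A minimal sketch, with invented aliases and records:

```python
from collections import defaultdict

# Seeded canonical entities: known aliases resolve to one id instead of new nodes.
ALIASES = {
    "first quarter sales": "q1_sales_report",
    "q1 sales report": "q1_sales_report",
}

def canonical(name: str) -> str:
    return ALIASES.get(name.strip().lower(), name.strip().lower())

def build_graph(extracted):
    """extracted: (entity, attribute, value, source_doc) tuples from any extractor.
    Values are grouped per canonical entity/attribute so disagreements surface
    instead of being silently overwritten."""
    graph = defaultdict(lambda: defaultdict(list))
    for entity, attribute, value, source in extracted:
        graph[canonical(entity)][attribute].append((value, source))
    conflicts = [
        (entity, attribute, observations)
        for entity, attrs in graph.items()
        for attribute, observations in attrs.items()
        if len({value for value, _ in observations}) > 1
    ]
    return graph, conflicts

rows = [
    ("First Quarter Sales", "revenue", "$10M", "board_deck.pdf"),
    ("Q1 Sales Report", "revenue", "$12M", "finance_memo.docx"),
]
graph, conflicts = build_graph(rows)
for entity, attribute, observations in conflicts:
    print(f"CONFLICT on {entity}.{attribute}: {observations}")
```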
·linkedin.com·
I'm trying to build a Knowledge Graph.
❓ Why I Wrote This Book?
❓ Why I Wrote This Book?
❓ Why I Wrote This Book? In the past two to three years, we've witnessed a revolution: first with ChatGPT, and now with autonomous AI agents. This is only the beginning. In the years ahead, AI will transform not only how we work but how we live. At the core of this transformation lies a single breakthrough technology: large language models (LLMs). That's why I decided to write this book.

This book explores what an LLM is, how it works, and how it develops its remarkable capabilities. It also shows how to put these capabilities into practice, like turning an LLM into the beating heart of an AI agent. Dissatisfied with the overly simplified or fragmented treatments found in many current books, I've aimed to provide both solid theoretical foundations and hands-on demonstrations. You'll learn how to build agents using LLMs, integrate technologies like retrieval-augmented generation (RAG) and knowledge graphs, and explore one of today's most fascinating frontiers: multi-agent systems. Finally, I've included a section on open research questions (areas where today's models still fall short, ethical issues, doubts, and so on), and where tomorrow's breakthroughs may lie.

🧠 Who is this book for? Anyone curious about LLMs, how they work, and how to use them effectively. Whether you're just starting out or already have experience, this book offers both accessible explanations and practical guidance. It's for those who want to understand the theory and apply it in the real world.

🛑 Who is this book not for? Those who dismiss AI as a passing fad or have no interest in what lies ahead. But for everyone else, this book is for you. Because AI agents are no longer speculative. They're real, and they're here.

A huge thanks to my co-author Gabriele Iuculano, and the Packt team: Gebin George, Sanjana Gupta, Ali A., Sonia Chauhan, Vignesh Raju, Malhar Deshpande

#AI #LLMs #KnowledgeGraphs #AIagents #RAG #GenerativeAI #MachineLearning #NLP #Agents #DeepLearning
·linkedin.com·
❓ Why I Wrote This Book?
GraphRAG in Action: A Simple Agent for Know-Your-Customer Investigations | Towards Data Science
GraphRAG in Action: A Simple Agent for Know-Your-Customer Investigations | Towards Data Science
This blog post provides a hands-on guide for AI engineers and developers on how to build an initial KYC agent prototype with the OpenAI Agents SDK. We'll explore how to equip our agent with a suite of tools (including MCP Server tools) to uncover and investigate potential fraud patterns.
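In the spirit of the article, a minimal skeleton using the openai-agents package; the tool body and fake graph lookup are placeholders rather than the KYC graph queries from the post:

```python
from agents import Agent, Runner, function_tool

@function_tool
def related_parties(customer_id: str) -> str:
    """Return entities linked to a customer in the KYC knowledge graph.
    Placeholder: swap in a real graph query (or an MCP tool) here."""
    fake_graph = {"C-1001": ["C-1001 shares a registered address with C-2044",
                             "C-1001 is a director of ShellCo Ltd"]}
    return "\n".join(fake_graph.get(customer_id, ["no links found"]))

kyc_agent = Agent(
    name="KYC Investigator",
    instructions="Investigate potential fraud patterns. Use the graph tool, "
                 "cite every relationship you rely on, and flag uncertainty.",
    tools=[related_parties],
)

result = Runner.run_sync(kyc_agent, "Is customer C-1001 connected to any shell companies?")
print(result.final_output)
```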
·towardsdatascience.com·
GraphRAG in Action: A Simple Agent for Know-Your-Customer Investigations | Towards Data Science
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs: a universal tool to visualize any Claude user's memory.json in beautiful interactive graphs. Transform your Claude Memory MCP data into stunning interactive visualizations to see how your AI assistant's knowledge connects and evolves over time.

Enterprise teams using Claude lack visibility into how their AI assistant accumulates and organizes institutional knowledge. Claude Memory Viz provides zero-configuration visualization that automatically finds memory files and displays 72 entities with 93 relationships in real-time force-directed layouts. Teams can filter by entity type, search across all data, and explore detailed connections through rich tooltips.

The technical implementation supports Claude's standard NDJSON memory format, automatically detecting and color-coding entity types from personality profiles to technical tools. Node size reflects connection count, while adjustable physics parameters enable optimal spacing for large knowledge graphs. Built with Cytoscape.js for performance optimization.

Built with the philosophy "Solve it once and for all," the tool works for any Claude user with zero configuration. The visualizer automatically searches common memory file locations, provides demo data fallback, and offers clear guidance when files aren't found. Integration requires just a git clone and one command.

This matters because AI memory has been invisible to users, creating trust and accountability gaps in enterprise AI deployment. When teams can visualize how their AI assistant organizes knowledge, they gain insights into decision-making patterns and can optimize their AI collaboration strategies.

👩‍💻 https://lnkd.in/e__RQh_q
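For readers who want to poke at the same data without the full viewer, a small parser can turn the NDJSON memory file into node and edge lists; the field names assume the reference MCP memory server's layout and may need adjusting for your file:

```python
import json
from pathlib import Path

def load_memory(path: str = "memory.json"):
    """Parse an NDJSON memory file into node and edge lists suitable
    for a force-directed layout (e.g. Cytoscape.js elements)."""
    nodes, edges = [], []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("type") == "entity":
            nodes.append({"id": record["name"],
                          "kind": record.get("entityType", "unknown")})
        elif record.get("type") == "relation":
            edges.append({"source": record["from"], "target": record["to"],
                          "label": record.get("relationType", "")})
    return nodes, edges

if __name__ == "__main__":
    nodes, edges = load_memory()
    print(f"{len(nodes)} entities, {len(edges)} relationships")
```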
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
·linkedin.com·
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
Book promotion, because this one is worth it: agentic AI at its best. This masterpiece was published by Salvatore Raieli and Gabriele Iuculano; it is available for orders from today, and it's already a bestseller! While many resources focus on LLMs or basic agentic workflows, what makes this book stand out is its deep dive into grounding LLMs with real-world data and action through the powerful combination of Retrieval-Augmented Generation (RAG) and Knowledge Graphs. This isn't just about building agents; it's about building AI that reasons, retrieves accurate information, and acts autonomously by leveraging structured knowledge alongside advanced LLMs. The book offers a practical roadmap, packed with concrete Python examples and real-world case studies, guiding you from concept to deployment of intelligent, robust, and hallucination-minimized AI solutions, even orchestrating multi-agent systems. Order your copy here: https://packt.link/RpzGM #AI #LLMs #KnowledgeGraphs #AIAgents #RAG #GenerativeAI #MachineLearning
·linkedin.com·
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents