GraphFaker: Instant Graphs for Prototyping, Teaching, and Beyond
I can't tell you how many times I've had a graph analytics idea, only to spend days trying to find decent data to test it on. Sound familiar?
That's why I'm excited about the talk next week by Dennis Irorere on GraphFaker - a free tool from the GraphGeeks Lab to help with the graph data problem.
Good graph data is ridiculously hard to come by. It's either locked behind privacy walls, messy beyond belief, or not really relationship-centric. I've been there, we've all been there.
Dennis will show us how to:
- Generate realistic social networks quickly
- Pull actual street network data without the headaches
- Access air travel networks, Wikipedia graphs, and more
Join us on July 29, or register for the recording.
https://lnkd.in/gBxjrWGS
Whether you're in research, prototyping new features, or teaching graph algorithms, this could shorten your workflow. And what really caught my attention is that it will let me focus on the fun part: testing ideas.
What's the difference between context engineering and ontology engineering?
We hear a lot about "context engineering" these days in AI wonderland. A lot of good things are being said, but it's worth noting what's missing.
Yes, context matters. But context without structure is narrative, not knowledge. And if AI is going to scale beyond demos and copilots into systems that reason, track memory, and interoperate across domains… then context alone isn't enough.
We need ontology engineering.
Here's the difference:
- Context engineering is about curating inputs: prompts, memory, user instructions, embeddings. It's the art of framing.
- Ontology engineering is about modeling the world: defining entities, relations, axioms, and constraints that make reasoning possible.
In other words:
Context guides attention. Ontology shapes understanding.
What's dangerous is that many teams stop at context, assuming that if you feed the right words to an LLM, you'll get truth, traceability, or decisions you can trust. This is what I call the "hallucination of control".
Ontologies provide what LLMs lack: grounding, consistency, and interoperability. But they are hard to build without the right methods, adapted from the discipline that began 20+ years ago with the Semantic Web; now it's time to work them out for the LLM era.
If you're serious about scaling AI across business processes or mission-critical systems, the real challenge is more than context: it's shared meaning. And tech alone cannot solve this.
That's why we need to put the ontology discussion in the boardroom, because integrating AI into organizations is much more complicated than just providing the right context in a prompt or a context window.
That's it for today. More tomorrow!
I'm trying to get back to journaling here every day. I hope you find something useful in what I write.
How both OWL and SHACL can be employed during the decision-making phase for AI agents when using a knowledge graph, instead of relying on an LLM that hallucinates
Thought for the day: I've been mulling over how both OWL and SHACL can be employed during the decision-making phase for AI agents when using a knowledge graph instead of relying on an LLM that hallucinates. In this way, the LLM can still be used for assessment and sensory feedback, but it augments the graph, not the other way around. OWL and SHACL serve different roles. SHACL is not just a preprocessing validator; it can play an active role in constraining, guiding, or triggering decisions, especially when integrated into AI pipelines. However, OWL is typically more central to inferencing and reasoning tasks.
SHACL can actively participate in decision-making, especially when decisions require data integrity, constraint enforcement, or trigger-based logic. In complex agents, OWL provides the inferencing engine, while SHACL acts as the constraint gatekeeper and occasionally contributes to rule-based decision-making.
For example, an AI agent processes RDF data describing an applicant's skills, degree, and experience. SHACL validates the data's structure, ensuring required fields are present and correctly formatted. OWL reasoning infers that the applicant is qualified for a technical role and matches the profile of a backend developer. SHACL is then used again to check policy compliance. With all checks passed, the applicant is shortlisted, and a follow-up email is triggered.
In AI agent decision-making, OWL and SHACL often work together in complementary ways. SHACL is commonly used as a preprocessing step to validate incoming RDF data. If the data fails validation, it's flagged or excluded, ensuring only clean, structurally sound data reaches the OWL reasoner. In this role, SHACL acts as a gatekeeper.
They can also operate in parallel or in an interleaved manner within a pipeline. As decisions evolve, SHACL shapes may be checked mid-process. Some AI agents even use SHACL as a rule engine (to trigger alerts, detect actionable patterns, or constrain reasoning paths) while OWL continues to handle more complex semantic inferences, such as class hierarchies or property logic.
Finally, SHACL can augment decision-making by confirming whether OWL-inferred actions comply with specific constraints. OWL may infer that "A is a type of B, so do X," and SHACL then determines whether doing X adheres to a policy or requirement. Because SHACL supports closed-world assumptions (which OWL does not), it plays a valuable role in enforcing policies or compliance rules during decision execution.
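To make the pattern concrete, here is a minimal sketch of the gatekeeper-then-reasoner loop using rdflib, pySHACL, and owlrl. The applicant data, ontology, and shapes are invented for illustration; a real agent would load its own graphs and policies.

```python
# pip install rdflib pyshacl owlrl
from rdflib import Graph
from pyshacl import validate
import owlrl

# Toy applicant data, ontology, and shapes (all hypothetical, inline Turtle).
DATA = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Applicant ;
    ex:hasDegree ex:ComputerScience ;
    ex:yearsExperience 5 .
"""

ONTOLOGY = """
@prefix ex: <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Applicant a owl:Class .
ex:QualifiedApplicant a owl:Class ; rdfs:subClassOf ex:Applicant .
"""

SHAPES = """
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:ApplicantShape a sh:NodeShape ;
    sh:targetClass ex:Applicant ;
    sh:property [ sh:path ex:hasDegree ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:yearsExperience ; sh:datatype xsd:integer ; sh:minInclusive 0 ] .
"""

data = Graph().parse(data=DATA, format="turtle")
shapes = Graph().parse(data=SHAPES, format="turtle")
ontology = Graph().parse(data=ONTOLOGY, format="turtle")

# 1. SHACL as gatekeeper: only structurally valid data reaches the reasoner.
conforms, _, report = validate(data, shacl_graph=shapes, ont_graph=ontology)
if not conforms:
    raise ValueError(f"Input rejected by SHACL:\n{report}")

# 2. OWL RL reasoning: materialise inferred triples (class hierarchy, etc.).
data += ontology
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(data)

# 3. SHACL again as policy check on the enriched graph before the agent acts.
conforms, _, report = validate(data, shacl_graph=shapes)
print("Policy check passed, proceed with shortlisting:", conforms)
```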
It's already the end of Sunday. I hope you all had a wonderful week. Mine was exceptionally busy, with the GUG seminar and the upcoming tutorial preparation. I usually take time for a personal…
I'm trying to build a Knowledge Graph. Our team has done experiments with the current libraries available (LlamaIndex, Microsoft's GraphRAG, LightRAG, Graphiti, etc.). From a product perspective, they seem to be missing basic, common-sense features.
Stick to a Fixed Template:
My business organizes information in a specific way. I need the system to use our predefined entities and relationships, not invent its own. The output has to be consistent and predictable every time.
Start with What We Already Know:
We already have lists of our products, departments, and key employees. The AI shouldn't have to guess this information from documents. I want to seed this data upfront so that the graph can be built on this foundation of truth.
Clean Up and Merge Duplicates:
The graph I currently get is messy. It sees "First Quarter Sales" and "Q1 Sales Report" as two completely different things. This is probably easy to fix, but I want to make sure it does not happen.
Flag When Sources Disagree:
If one chunk says our sales were $10M and another says $12M, I need the library to flag this disagreement, not just silently pick one. It also needs to show me exactly which documents the numbers came from so we can investigate.
Has anyone solved this? I'm looking for a library that gets these fundamentals right.
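For what it's worth, the requested behaviours don't need exotic machinery. Below is a library-agnostic sketch, with every name hypothetical (it is not the API of LlamaIndex, GraphRAG, LightRAG, or Graphiti), of a fixed entity-type template, seeded aliases for merging duplicates, and conflict flagging with per-source provenance. The LLM extraction step is deliberately left out; it would only be allowed to emit facts that pass these checks.

```python
# A library-agnostic sketch of the behaviours asked for above; all names are hypothetical.
from dataclasses import dataclass
from collections import defaultdict

# 1. Fixed template: the extractor may only use these entity types.
ALLOWED_TYPES = {"Product", "Department", "Employee", "Metric"}

# 2. Seed what the business already knows: aliases map to canonical names.
SEED_ALIASES = {
    "first quarter sales": "Q1 Sales",
    "q1 sales report": "Q1 Sales",
}

def canonical(name: str) -> str:
    """Merge duplicates by normalising against the seeded alias table."""
    return SEED_ALIASES.get(name.strip().lower(), name.strip())

@dataclass
class Fact:
    entity: str
    entity_type: str
    attribute: str
    value: str
    source: str  # document / chunk id, kept for provenance

class KnowledgeGraph:
    def __init__(self):
        self._facts = defaultdict(list)  # (entity, attribute) -> [Fact, ...]

    def add(self, fact: Fact):
        if fact.entity_type not in ALLOWED_TYPES:
            raise ValueError(f"Entity type {fact.entity_type!r} is outside the template")
        fact.entity = canonical(fact.entity)
        self._facts[(fact.entity, fact.attribute)].append(fact)

    def conflicts(self):
        """3. Flag disagreements instead of silently picking one value."""
        for (entity, attribute), facts in self._facts.items():
            if len({f.value for f in facts}) > 1:
                yield entity, attribute, [(f.value, f.source) for f in facts]

kg = KnowledgeGraph()
kg.add(Fact("First Quarter Sales", "Metric", "revenue", "$10M", source="report-A"))
kg.add(Fact("Q1 Sales Report", "Metric", "revenue", "$12M", source="report-B"))
for entity, attribute, versions in kg.conflicts():
    print(f"CONFLICT on {entity}.{attribute}: {versions}")
```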
Why I Wrote This Book
In the past two to three years, we've witnessed a revolution. First with ChatGPT, and now with autonomous AI agents. This is only the beginning. In the years ahead, AI will transform not only how we work but how we live. At the core of this transformation lies a single breakthrough technology: large language models (LLMs). That's why I decided to write this book.
This book explores what an LLM is, how it works, and how it develops its remarkable capabilities. It also shows how to put these capabilities into practice, like turning an LLM into the beating heart of an AI agent. Dissatisfied with the overly simplified or fragmented treatments found in many current books, I've aimed to provide both solid theoretical foundations and hands-on demonstrations. You'll learn how to build agents using LLMs, integrate technologies like retrieval-augmented generation (RAG) and knowledge graphs, and explore one of today's most fascinating frontiers: multi-agent systems. Finally, I've included a section on open research questions (areas where today's models still fall short, ethical issues, doubts, and so on), and where tomorrow's breakthroughs may lie.
Who is this book for?
Anyone curious about LLMs, how they work, and how to use them effectively. Whether you're just starting out or already have experience, this book offers both accessible explanations and practical guidance. It's for those who want to understand the theory and apply it in the real world.
Who is this book not for?
Those who dismiss AI as a passing fad or have no interest in what lies ahead. But for everyone else, this book is for you. Because AI agents are no longer speculative. They're real, and they're here.
A huge thanks to my co-author Gabriele Iuculano, and the Packt team: Gebin George, Sanjana Gupta, Ali A., Sonia Chauhan, Vignesh Raju., Malhar Deshpande
#AI #LLMs #KnowledgeGraphs #AIagents #RAG #GenerativeAI #MachineLearning #NLP #Agents #DeepLearning
What makes the "Semantic Data Product" so valid in data conversations today? Bounded Context and Right-to-Left Flow from consumers to raw materials.
Tony Seale perfectly defines the value of bounded context.
"…to sustain itself, a system must minimise its free energy - a measure of uncertainty. Minimising it equates to low internal entropy. A system achieves this by forming accurate predictions about the external env and updating its internal states accordingly, allowing for a dynamic yet stable interaction with its surroundings. Only possible on delineating a boundary between internal and external systems. Disconnected systems signal weak boundaries."
Data Products enable a way to bind context to specific business purposes or use cases. This enables data to become:
- Purpose-driven
- Accurately Discoverable
- Easily Understandable & Addressable
- Valuable as an independent entity
The Solution: The Data Product Model. A conceptual model that precisely captures the business context through an interface operable by business users or domain experts.
We have often referred to this as The Data Product Prototype, which is essentially a semantic model; it captures information on the following (a small code sketch follows the list):
- Popular Metrics the Business wants to drive
- Measures & Dimensions
- Relationships & formulas
- Further context with tags, descriptions, synonyms, & observability metrics
- Quality SLOs - or simply, the conditions necessary
- Additional policy specs contributed by Governance Stewards
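Purely as an illustration (field names are my own, not any product's spec), the prototype described above could be captured as a small, declarative structure that business users validate before any pipeline is built:

```python
# A minimal, hypothetical sketch of what a Data Product Prototype might capture
# before any engineering work starts; names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class QualitySLO:
    condition: str          # e.g. "completeness >= 99%"
    owner: str

@dataclass
class DataProductPrototype:
    name: str
    metrics: list[str]                       # business metrics to drive
    measures: list[str]
    dimensions: list[str]
    formulas: dict[str, str]                 # metric -> formula over measures
    synonyms: dict[str, list[str]]           # further business context
    slos: list[QualitySLO]
    policies: list[str] = field(default_factory=list)  # contributed by governance stewards

prototype = DataProductPrototype(
    name="quarterly_revenue",
    metrics=["Net Revenue Growth"],
    measures=["gross_sales", "returns"],
    dimensions=["region", "quarter"],
    formulas={"Net Revenue Growth": "(gross_sales - returns) vs previous quarter"},
    synonyms={"gross_sales": ["turnover", "sales amount"]},
    slos=[QualitySLO(condition="completeness >= 99%", owner="domain team")],
    policies=["PII must be masked for non-finance roles"],
)
```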
Once the Prototype is validated and given the green light, development efforts kick off. Note how all data engineering efforts (left-hand side) are not looped in until this point, saving massive cost and time.
The DE teams, who have only a partial view of the business landscape, are no longer held accountable for lacking deep business understanding. The ownership of the Data Product model is entirely with business.
DEs have a blueprint to refer to and simply map sources or source data products to the prescribed Data Product Model. Any new request comes through this prototype itself, managed by Data Product Managers in collaboration with business users, dissolving the bottlenecks of centralised data engineering teams.
At this level, necessary transformations are delivered
- that activate the SLOs,
- enable interoperability with native tools and upstream data products,
- allow reusability of pre-existing transforms in the form of Source or Aggregate data products.
#datamanagement #dataproducts
How Does Graph Theory Shape Our World? | Quanta Magazine
Maria Chudnovsky reflects on her journey in graph theory, her groundbreaking solution to the long-standing perfect graph problem, and the unexpected ways this abstract field intersects with everyday life.
A Graph-Native Workflow Application using Neo4j/Cypher | Medium
A full working Cypher script that simulates a Tendering System with multiple workflows, AI agent interactions, conversations, approvals, and more, all modeled and executed natively in a graph.
Improving Text2Cypher for Graph RAG via schema pruning | Kuzu
In this post, we describe how to improve the quality of the Cypher queries generated by Text2Cypher via graph schema pruning, viewed through the lens of context engineering.
GraphRAG in Action: A Simple Agent for Know-Your-Customer Investigations | Towards Data Science
This blog post provides a hands-on guide for AI engineers and developers on how to build an initial KYC agent prototype with the OpenAI Agents SDK. We'll explore how to equip our agent with a suite of tools (including MCP Server tools) to uncover and investigate potential fraud patterns.
Transform Claude's Hidden Memory Into Interactive Knowledge Graphs
Universal tool to visualize any Claude user's memory.json in beautiful interactive graphs. Transform your Claude Memory MCP data into stunning interactive visualizations to see how your AI assistant's knowledge connects and evolves over time.
Enterprise teams using Claude lack visibility into how their AI assistant accumulates and organizes institutional knowledge. Claude Memory Viz provides zero-configuration visualization that automatically finds memory files and displays 72 entities with 93 relationships in real-time force-directed layouts. Teams can filter by entity type, search across all data, and explore detailed connections through rich tooltips.
The technical implementation supports Claude's standard NDJSON memory format, automatically detecting and color-coding entity types from personality profiles to technical tools. Node size reflects connection count, while adjustable physics parameters enable optimal spacing for large knowledge graphs. Built with Cytoscape.js for performance optimization.
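For readers who want to poke at the raw file rather than the visualizer, here is a rough sketch of turning an MCP-style memory.json into a graph with networkx. The field names ("type", "name", "entityType", "from", "to", "relationType") are my assumption of the common MCP memory-server NDJSON format, so adjust them to whatever your file actually contains.

```python
# A rough sketch: parse an MCP-style memory.json (NDJSON) into a directed graph.
# Field names below are assumptions, not a documented Claude Memory Viz API.
import json
import networkx as nx

def load_memory_graph(path: str) -> nx.DiGraph:
    g = nx.DiGraph()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if not line.strip():
                continue
            record = json.loads(line)
            if record.get("type") == "entity":
                g.add_node(record["name"], entity_type=record.get("entityType"))
            elif record.get("type") == "relation":
                g.add_edge(record["from"], record["to"], label=record.get("relationType"))
    return g

g = load_memory_graph("memory.json")
print(f"{g.number_of_nodes()} entities, {g.number_of_edges()} relationships")
# Node size ~ connection count, as in the visualizer described above.
top = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:5]
print("Most connected:", top)
```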
Built with the philosophy "Solve it once and for all," the tool works for any Claude user with zero configuration. The visualizer automatically searches common memory file locations, provides demo data fallback, and offers clear guidance when files aren't found. Integration requires just git clone and one command execution.
This matters because AI memory has been invisible to users, creating trust and accountability gaps in enterprise AI deployment. When teams can visualize how their AI assistant organizes knowledge, they gain insights into decision-making patterns and can optimize their AI collaboration strategies.
https://lnkd.in/e__RQh_q
This is it.
This is the conversation every leadership team needs to be having right now.
"The Orchestration Graph" by WRITER product leader Matan-Paul Shetrit linked in comments is a must-read.
The primary constraint on business is no longer execution. It's supervision.
For a century, we built companies to overcome the high cost of getting things done.
We built hierarchies, departments, and complex processes, all to manage labor-intensive execution.
That era is over.
With AI agents, execution is becoming abundant, on-demand, and programmatic.
The new bottleneck is our ability to direct, govern, and orchestrate this immense new capacity.
The firm is evolving from a factory into an "operating system."
Your ORG CHART is no longer the map.
The real map is the Orchestration Graph: the dynamic, software-defined network of humans, models, and agents that actually does the work.
This isn't just a new tool or a productivity hack. It's a fundamental rewiring of the enterprise. It demands we rethink everything:
Structure: How do we manage systems, not just people?
Strategy: What work do we insource to our agentic "OS" versus outsource to models-as-a-service?
Metrics: Are we still measuring human activity, or are we measuring system throughput and intelligence?
This is WRITER's call to arms: the companies that win won't just adopt AI; they will restructure themselves around it. They will build their own Orchestration Graph, with governance and institutional memory at the core.
They will treat AI not as a feature, but as the new foundation.
At WRITER, this is the future we are building every single day โ giving companies the platform to create their own secure, governed, and intelligent orchestration layer.
The time to act is now.
Read the article. Start the conversation with your leaders. And begin rewiring your firm.
When people discuss how LLMs "reason," you'll often hear that they rely on transduction rather than abduction. It sounds technical, but the distinction matters - especially as we start wiring LLMs into systems that are supposed to think.
- Transduction is case-to-case reasoning. It doesn't build theories; it draws fuzzy connections based on resemblance. Think: "This metal conducts electricity, and that one looks similar - so maybe it does too."
- Abduction, by contrast, is about generating explanations. It's what scientists (and detectives) do: "This metal is conducting - maybe it contains free electrons. That would explain it."
The claim is that LLMs operate more like transducers - navigating high-dimensional spaces of statistical similarity, rather than forming crisp generalisations. But this isn't the whole picture. In practice, it seems to me that LLMs also perform a kind of induction - abstracting general patterns from oceans of text. They learn the shape of ideas and apply them in novel ways. That's closer to "All metals of this type have conducted in the past, so this one probably will."
Now add tools to the mix - code execution, web search, Elon Musk's tweet history - and LLMs start doing something even more interesting: program search and synthesis. It's messy, probabilistic, and not at all principled or rigorous. But it's inching toward a form of abductive reasoning.
Which brings us to a more principled approach for reasoning within an enterprise domain: the neuro-symbolic loop - a collaboration between large language models and knowledge graphs. The graph provides structure: formal semantics, ontologies, logic, and depth. The LLM brings intuition: flexible inference, linguistic creativity, and breadth. One grounds. The other leaps.
The real breakthrough could come when the grounding isn't just factual, but conceptual - when the ontology encodes clean, meaningful generalisations. That's when the LLM's leaps wouldn't just reach further - they'd rise higher, landing on novel ideas that hold up under formal scrutiny.
So where do metals fit into this new framing?
- Transduction: "This metal conducts. That one looks the same - it probably does too."
- Induction: "I've tested ten of these. All conducted. It's probably a rule."
- Abduction: "This metal is conducting. It shares properties with the 'conductive alloy' class - especially composition and crystal structure. The best explanation is a sea of free electrons."
LLMs, in isolation, are limited in their ability to perform structured abduction. But when embedded in a system that includes a formal ontology, logical reasoning, and external tools, they can begin to participate in richer forms of reasoning. These hybrid systems are still far from principled scientific reasoners - but they hint at a path forward: a more integrated and disciplined neuro-symbolic architecture that moves beyond mere pattern completion.
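As a toy illustration of that neuro-symbolic division of labour (all data and names invented), the loop can be as simple as this: the LLM proposes candidate explanations, and the symbolic side accepts only those that satisfy the ontology's constraints.

```python
# A toy sketch of the neuro-symbolic loop described above: the LLM "leaps"
# (proposes explanations), the ontology "grounds" (accepts only candidates that
# fit its formal constraints). llm_propose() is a stand-in for a real model call.

ONTOLOGY = {
    "ConductiveAlloy": {"required": {"has_free_electrons", "crystalline_structure"}},
    "Insulator": {"required": {"no_free_electrons"}},
}

OBSERVED = {"sample_42": {"conducts_electricity", "crystalline_structure", "has_free_electrons"}}

def llm_propose(entity: str) -> list[str]:
    # In a real system this would be an LLM ranking candidate explanations.
    return ["ConductiveAlloy", "Insulator"]

def grounded(entity: str, candidate: str) -> bool:
    # Symbolic check: every property the class requires must be observed.
    required = ONTOLOGY[candidate]["required"]
    return required <= OBSERVED[entity]

for candidate in llm_propose("sample_42"):
    if grounded("sample_42", candidate):
        print(f"Accepted abduction: sample_42 is best explained as a {candidate}")
    else:
        print(f"Rejected: {candidate} is not supported by the ontology + observations")
```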
S&P Global Unlocks the Future of AI-driven insights with AI-Ready Metadata on S&P Global Marketplace
When I shared our 2025 goals for the Enterprise Data Organization, one of the things I alluded to was machine-readable column-level metadata. Let's unpack what that means, and why it matters.
What: For datasets we deliver via modern cloud distribution, we now provide human- and machine-readable metadata at the column level. Each column has an immutable URL (no auth, no CAPTCHA) that hosts name/value metadata - synonyms, units of measure, descriptions, and more - in multiple human languages. It's semantic context that goes far beyond what a traditional data dictionary can convey. We can't embed it, so we link to it.
Why: Metadata is foundational to agentic, precise consumption of structured data. Our customers are investing in semantic layers, data catalogs, and knowledge graphs - and they shouldn't have to copy-paste from a PDF to get there. Use curl, Python, Bash - whatever works - to automate ingestion. (We support content negotiation and conditional GETs.)
Under the hood? It's RDF. Love it or hate it, you don't need to engage with the plumbing unless you want to.
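For example, automated ingestion could look something like the sketch below. The URL is a placeholder rather than a real S&P endpoint, and the code only exercises plain HTTP content negotiation and a conditional GET with requests and rdflib, not any S&P-specific client.

```python
# A hedged sketch of automated metadata ingestion; the URL is a placeholder.
import requests
from rdflib import Graph

COLUMN_METADATA_URL = "https://example.com/metadata/dataset/column-id"  # hypothetical

# Content negotiation: ask for RDF (Turtle here) instead of HTML.
resp = requests.get(COLUMN_METADATA_URL, headers={"Accept": "text/turtle"}, timeout=30)
resp.raise_for_status()
g = Graph().parse(data=resp.text, format="turtle")
print(f"{len(g)} triples of column-level metadata")

# Conditional GET: re-fetch only if the resource changed since last time.
etag = resp.headers.get("ETag")
if etag:
    again = requests.get(
        COLUMN_METADATA_URL,
        headers={"Accept": "text/turtle", "If-None-Match": etag},
        timeout=30,
    )
    if again.status_code == 304:
        print("Metadata unchanged; using cached copy.")
```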
To our knowledge, this hasn't been done before. This is our MVP. We're putting it out there to learn what works - and what doesn't. It's vendor-neutral, web-based, and designed to scale across:
- Breadth of datasets across S&P
- Depth of metadata
- Choice of linking venue
It took a village to make this happen. I can't name everyone without writing a book, but I want to thank our executive leadership for the trust and support to go build this.
Let us know what you think!
https://lnkd.in/gbe3NApH
Martina Cheung, Saugata Saha, Swamy Kocherlakota, Dave Ernsberger, Mark Eramo, Frank Tarsillo, Warren Breakstone, Hamish B., Erica Robeen, Laura Miller, Justine S Iverson
TigerGraph Accelerates Enterprise AI Infrastructure Innovation with Strategic Investment from Cuadrilla Capital - TigerGraph
TigerGraph secures a strategic investment from Cuadrilla Capital to fuel innovation in enterprise AI infrastructure and graph database technology, delivering advanced solutions for fraud detection, customer 360, supply chain optimization, and real-time data analytics.
metaphacts unveils metis, the new Knowledge-driven AI platform for Enterprises
Introducing metis: an enterprise AI platform from metaphactory. Get trusted, context-aware, knowledge-driven AI for actionable insights & intelligent agents.
Should ontologies be treated as organizational resources for semantic capabilities?
More and more organizations are investing in data platforms, modeling tools, and integration frameworks. But one key capability is often underused or misunderstood: ontologies as semantic infrastructure.
While databases handle facts and BI platforms handle queries, ontologies structure meaning. They define what things are, not just what data says. When treated as living organizational resources, ontologies can bring:
- Shared understanding across silos
- Reasoning and inference beyond data queries
- Semantic integration of diverse systems
- Clarity and coherence in enterprise models
But here's the challenge: ontologies don't operate in isolation. They must be positioned alongside:
- Data-oriented technologies (RDF, RDF-star, quad stores) that track facts and provenance
- Enterprise modeling tools (e.g., ArchiMate) that describe systems and views
- Exploratory approaches (like semantic cartography) that support emergence over constraint
These layers each come with their own logic: epistemic vs. ontological, structural vs. operational, contextual vs. formal.
- Building semantic capabilities requires aligning all these dimensions.
- It demands governance, tooling, and a culture of collaboration between ontologists, data managers, architects, and domain experts.
- And it opens the door to richer insight, smarter automation, and more agile knowledge flows.
With projects like ArchiCG (semantic interactive cartography), I aim to explore how we can visually navigate this landscape, not constrained by predefined viewpoints, but guided by logic, meaning, and emergent perspectives.
What do you think? Are ontologies ready to take their place as core infrastructure in your organization?
Nice piece on the comparison of vector DBs vs. knowledge graphs.
I think this becomes even more true when you start talking about temporal knowledge graphs, in which you are effectively describing temporal causality and contingent assertions.
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
Book promotion, because this one is worth it… Agentic AI at its best…
This masterpiece was published by Salvatore Raieli and Gabriele Iuculano; it is available for orders from today, and it's already a Bestseller!
While many resources focus on LLMs or basic agentic workflows, what makes this book stand out is its deep dive into grounding LLMs with real-world data and action through the powerful combination of Retrieval-Augmented Generation (RAG) and Knowledge Graphs.
This isn't just about building Agents; it's about building AI that reasons, retrieves accurate information, and acts autonomously by leveraging structured knowledge alongside advanced LLMs.
The book offers a practical roadmap, packed with concrete Python examples and real-world case studies, guiding you from concept to deployment of intelligent, robust, and hallucination-minimized AI solutions, even orchestrating multi-agent systems.
Order your copy here - https://packt.link/RpzGM
#AI #LLMs #KnowledgeGraphs #AIAgents #RAG #GenerativeAI #MachineLearning
Semantic Backbone to Business Value: How Meaning Drives Real Results | LinkedIn
Introduction: Data with meaning is powerful. But the real advantage comes when meaning leads directly to action, when your semantic backbone becomes the brain and nervous system of your organization.
Semantics in use (part 3): an interview with Saritha V.Kuriakose, VP Research Data Management at Novo Nordisk | LinkedIn
We continue our series of examples of the use of semantics and ontologies across organizations with an interview with Saritha V. Kuriakose from Novo Nordisk, talking about the pervasive and foundational use of ontologies in pharmaceutical R&D.