GraphNews

4357 bookmarks
When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development?
🧠 When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development?

Ontologies promise knowledge integration, traceability, reuse, and machine reasoning across the full engineering system lifecycle. From functional models to field failures, ontologies offer a way to encode and connect it all.

💥 However, ontologies are not a silver bullet. There are plenty of scenarios where an ontology is not just unnecessary; it might actually slow you down, confuse your team, or waste resources. So when exactly does the ontological approach become more burden than benefit?

Based on my understanding and current work in this space, 🚀 for engineering design it's important to recognise situations where adopting a semantic model is not the most effective approach:

1. When tasks are highly localised and routine. If you're just tweaking part drawings, running standard FEA simulations, or updating well-established design details, the knowledge already lives in your tools and practices. Adding an ontology might feel like installing a satellite dish to tune a local radio station.

2. When terminology is unstable or fragmented. Ontologies depend on consistent language. If every department speaks its own dialect and no one agrees on terms, you can't build shared meaning. You'll end up formalising confusion instead of clarifying it.

3. When speed matters more than structure. In prototyping labs, testing grounds, or urgent production lines, agility rules. Engineers solve problems fast, often through direct collaboration. Taking time to define formal semantics is not always practical. Sometimes the best model is a whiteboard and a sharp marker.

4. When the knowledge won't be reused. Not all projects aim for longevity or cross-team learning. If you're building something once, for one purpose, with no intention of scaling or sharing, skip the ontology. It's like building a library catalog for a single book.

5. When the infrastructure isn't there. Ontological engineering isn't magic. It needs tools, training, and people who understand the stack. If your team lacks the skills or platforms, even the best-designed ontology will gather dust in a forgotten folder.

Use the right tool for the real problem. Ontologies are powerful, but not sacred. They shine when you need to connect knowledge across domains, ensure long-term traceability, or enable intelligent automation. But they're not a requirement for every task just because they're clever. The real challenge is not whether to use ontologies, but knowing when they genuinely improve clarity, consistency, and collaboration, and when they just complicate the obvious.

🧠 Feedback and critique are welcome; this is a living conversation.

Felician Campean

#KnowledgeManagement #SystemsEngineering #Ontology #MBSE #DigitalEngineering #RiskAnalysis #AIinEngineering #OntologyEngineering #SemanticInteroperability #SystemReliability #FailureAnalysis #KnowledgeIntegration
·linkedin.com·
Foundation Models Know Enough
LLMs already contain overlapping world models. You just have to ask them right.

Ontologists reply to an LLM output, "That's not a real ontology: it's not a formal conceptualization." But that's just the No True Scotsman fallacy dressed up in OWL. Boring. Not growth-oriented. Look forward, angel.

A foundation model is a compression of human knowledge. The real problem isn't that we "lack a conceptualization". The real problem with an FM is that it contains too many. FMs contain conceptualizations, plural. Messy? Sure. But usable.

At Stardog, we're turning this latent structure into real ontologies using symbolic knowledge distillation: prompt orchestration → structure extraction → formal encoding. OWL, SHACL, and friends. Shake till mixed. Rinse. Repeat. Secret sauce simmered and reduced.

This isn't theoretical-hard. We avoid that. It's merely engineering-hard. We LTF into that! But the payoff means bootstrapping rich, new ontologies at scale: faster, cheaper, with lineage. It's the intersection of FM latent space, formal ontology, and user intent expressed via CQs. We call it the Symbolic Latent Layer (SLL). Cute, eh?

The future of enterprise AI isn't just documents. It's distilling structured symbolic knowledge from LLMs and plugging it into agents, workflows, and reasoning engines. You don't need a priesthood to get a formal ontology anymore. You need a good prompt, a smarter pipeline, and the right EKG platform.

There's a lot more to say about this, so I said it at Stardog Labs https://lnkd.in/eY5Sibed
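To make the distillation idea concrete, here is a minimal sketch in Python with rdflib (a real library). The llm_extract() stub, the example triples, and the http://example.org namespace are illustrative assumptions, not Stardog's actual pipeline:

```python
# A minimal sketch of "symbolic knowledge distillation": an LLM proposes
# candidate facts, which we encode as OWL with rdflib, ready for SHACL
# checks downstream. llm_extract() is a hypothetical stub; in practice it
# would wrap prompt orchestration against a foundation model.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology#")

def llm_extract(competency_question: str) -> list[tuple[str, str, str]]:
    """Hypothetical: prompt an FM and parse its answer into candidate triples."""
    return [("Pump", "subClassOf", "RotatingEquipment"),
            ("Pump", "hasPart", "Impeller")]

g = Graph()
g.bind("ex", EX)
for s, p, o in llm_extract("What are the parts of a pump?"):
    g.add((EX[s], RDF.type, OWL.Class))
    if p == "subClassOf":
        g.add((EX[o], RDF.type, OWL.Class))
        g.add((EX[s], RDFS.subClassOf, EX[o]))
    else:  # treat anything else as an object-property assertion
        g.add((EX[p], RDF.type, OWL.ObjectProperty))
        g.add((EX[s], EX[p], EX[o]))

print(g.serialize(format="turtle"))  # formal encoding, with lineage kept upstream
```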
·linkedin.com·
Use Graph Machine Learning to detect fraud with Amazon Neptune Analytics and GraphStorm | Amazon Web Services
Every year, businesses and consumers lose billions of dollars to fraud, with consumers reporting $12.5 billion lost to fraud in 2024, a 25% increase year over year. People who commit fraud often work together in organized fraud networks, running many different schemes that companies struggle to detect and stop. In this post, we discuss how to use Amazon Neptune Analytics, a memory-optimized graph database engine for analytics, and GraphStorm, a scalable open source graph machine learning (ML) library, to build a fraud analysis pipeline with AWS services.
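As a rough illustration of the underlying technique (not the GraphStorm or Neptune Analytics API), here is a self-contained PyTorch sketch of GNN-based fraud scoring: features propagate over transaction-graph edges so each account's prediction reflects its neighborhood. The toy graph and dimensions are assumptions:

```python
# A minimal sketch of the idea behind GNN fraud detection: fraud rings are
# connected, so aggregating neighbor features before classifying each node
# exposes coordinated behavior that per-row models miss.
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes=2):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency D^-1/2 (A+I) D^-1/2
        h = F.relu(self.w1(adj_norm @ x))   # 1-hop aggregation
        return self.w2(adj_norm @ h)        # 2-hop aggregation -> logits

# toy graph: 4 accounts, edges = shared devices/cards (illustrative)
A = torch.tensor([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=torch.float)
A_hat = A + torch.eye(4)
d = A_hat.sum(1)
adj_norm = A_hat / torch.sqrt(d[:, None] * d[None, :])

model = TinyGCN(in_dim=8, hid_dim=16)
logits = model(torch.randn(4, 8), adj_norm)  # fraud / not-fraud score per account
```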
·aws.amazon.com·
Graph is the new star schema. Change my mind.
Graph is the new star schema. Change my mind.

Why? Your agents can't be autonomous unless your structured data is a graph. It really is very simple.

1️⃣ To act autonomously, an agent must reason across structured data. Every autonomous decision, human or agent, hinges on a judgment: have I done enough? "Enough" boils down to driving the probability of success over some threshold.

2️⃣ You can't just point the agent at your structured data store. Context windows are too small. Schema sprawl is too real. If you think it works, you probably haven't tried it.

3️⃣ The agent must first retrieve, with RAG, the right tables, columns, and snippets. Decision making is a retrieval problem before it's a reasoning problem.

4️⃣ Standard RAG breaks on enterprise metadata. The corpus is too entity-rich. Semantic similarity already struggles on enterprise help articles; it won't perform on column descriptions.

5️⃣ To make structured RAG work, you need a graph. Just like unstructured RAG needed links between articles, structured RAG needs links between tables, fields, and, most importantly, meaning.

Yes, graphs are painful. But so was deep learning, until the return was undeniable. Agents need reasoning over structured data. That makes graphs non-optional. The rest is just engineering.

Let's stop modeling for reporting and start modeling for autonomy.
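A toy sketch of the idea in 5️⃣, using networkx: tables, columns, and business terms become graph nodes, and the agent retrieves a small connected subgraph instead of stuffing the whole schema into context. All catalog names are hypothetical:

```python
# A minimal sketch of "structured RAG over a metadata graph": retrieve the
# schema elements linked to a matched business term, a few hops out, and
# hand only that compact neighborhood to the agent.
import networkx as nx

g = nx.Graph()
g.add_edge("term:churn", "table:customer_events", relation="described_by")
g.add_edge("table:customer_events", "col:event_type", relation="has_column")
g.add_edge("table:customer_events", "table:customers", relation="joins_on_customer_id")
g.add_edge("table:customers", "col:signup_date", relation="has_column")

def retrieve_context(seed: str, hops: int = 2) -> list[str]:
    """Return the schema elements within `hops` of the matched business term."""
    nodes = nx.single_source_shortest_path_length(g, seed, cutoff=hops)
    return sorted(nodes)

print(retrieve_context("term:churn"))  # compact, linked context for the agent
```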
·linkedin.com·
How can you turn business questions into production-ready agentic knowledge graphs?
ā“ How can you turn business questions into production-ready agentic knowledge graphs? Join Prashanth Rao and Dennis Irorere at the Agentic AI Summit to find out. Prashanth is an AI Engineer and DevRel lead at Kùzu Inc.—the open-source graph database startup—where he blends NLP, ML, and data engineering to power agentic workflows. Dennis is a Data Engineer at Tripadvisor’s Viator Marketing Technology team and Director of Innovation at GraphGeeks, driving scalable, AI-driven graph solutions for customer growth. In ā€œAgentic Workflows for Graph RAG: Building Production-Ready Knowledge Graphs,ā€ they’ll guide you through three hands-on lessons: šŸ”¹ From Business Question to Graph Schema – Modeling your domain for downstream agents and LLMs, using live data sources like AskNews. šŸ”¹ From Unstructured Data to Agent-Ready Graphs with BAML – Writing declarative pipelines that reliably extract entities and relationships at scale. šŸ”¹ Agentic Graph RAG in Action – Completing the loop: translating NL queries into Cypher, retrieving graph data, and synthesizing responses—with fallback strategies when matches are missing. If you’re building internal tools or public-facing AI agents that rely on knowledge graphs, this workshop is for you. šŸ—“ļø Learn more & register free: https://hubs.li/Q03qHnpQ0 #AgenticAI #GraphRAG #KnowledgeGraphs #AgentWorkflows #AIEngineering #ODSC #Kuzu #Tripadvisor
·linkedin.com·
The Developer's Guide to GraphRAG
Find out how to combine a knowledge graph with RAG for GraphRAG. Provide more complete GenAI outputs.
You've built a RAG system and grounded it in your own data. Then you ask a complex question that needs to draw from multiple sources. Your heart sinks when the answers you get are vague or plain wrong.

How could this happen? Traditional vector-only RAG bases its outputs on just the words you use in your prompt. It misses out on valuable context because it pulls from different documents and data structures. Basically, it misses out on the bigger, more connected picture.

Your AI needs a mental model of your data with all its context and nuances. A knowledge graph provides just that by mapping your data as connected entities and relationships. Pair it with RAG to create a GraphRAG architecture that feeds your LLM information about dependencies, sequences, hierarchies, and deeper meaning.

Check out The Developer's Guide to GraphRAG. You'll learn how to:
- Prepare a knowledge graph for GraphRAG
- Combine a knowledge graph with native vector search
- Implement three GraphRAG retrieval patterns
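One common GraphRAG retrieval pattern, sketched with the Neo4j Python driver: a vector index supplies the entry-point chunks, then a Cypher traversal gathers the connected context that similarity search alone would miss. The index name, labels, and embed() stub are assumptions, not the guide's exact code:

```python
# A minimal sketch of "vector search + graph expansion". Requires a Neo4j 5
# vector index (db.index.vector.queryNodes) over chunk embeddings.
from neo4j import GraphDatabase

CYPHER = """
CALL db.index.vector.queryNodes('chunk_embeddings', 5, $qvec)
YIELD node AS chunk, score
MATCH (chunk)<-[:HAS_CHUNK]-(doc:Document)-[:MENTIONS]->(e:Entity)
RETURN chunk.text AS text, doc.title AS source,
       collect(DISTINCT e.name) AS related_entities, score
ORDER BY score DESC
"""

def embed(text: str) -> list[float]:
    """Hypothetical: call your embedding model; dimension must match the index."""
    return [0.0] * 384

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    records = session.run(CYPHER, qvec=embed("How do returns affect revenue?"))
    context = [r.data() for r in records]  # connected, grounded context for the LLM
```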
·neo4j.com·
Add a Semantic Layer – a smart translator that sits between your data sources and your business applications
Tired of being told that silos are gone? The real value comes from connecting them.

🔥 The myth of data silos: why they never really disappear, and how to turn them into your biggest advantage.

Even after heavy IT investment, data silos never truly go away; they simply evolve. In food production, I saw this first-hand: every system (ERP, quality, IoT, POS) stored data in its own format. Sometimes the same product ended up with different IDs across systems, batch information was fragmented, and data was dispersed in each silo.

People often say, "Break down the silos." But in reality, that's nearly impossible. Businesses change, new tools appear, acquisitions happen, teams shift, new processes and production lines are launched. Silos are part of digital life.

For years, I tried classic integrations. They helped a bit, but every change in one system caused more issues and even more integration work. I wish I had known then what I know now: stop trying to destroy silos. Start connecting them.

Here's what makes the difference: add a Semantic Layer, a smart translator that sits between your data sources and your business applications. It maps different formats and names into a common language, without changing your original systems. Put a Knowledge Graph on top and you don't just translate, you connect. Suddenly, all your data sources, even legacy silos, become part of a single network. Products, ingredients, machines, partners, and customers are all logically linked and understood across your business.

In practice, this means:
- Production uses real sales and shelf-life data.
- Sales sees live inventory, not outdated reports.
- Forecasting is based on trustworthy, aligned data.

That's the real shift: silos are not problems to kill, but assets to connect. With a Semantic Layer and a Knowledge Graph, data silos become trusted building blocks for your business intelligence. Better data, better ROI.

If you've ever spent hours reconciling reports, you'll recognise this recurring pain in companies that haven't optimised their data integration with a semantic and KG approach.

So: do you still treat silos as problems, or could they be your next competitive advantage if you connect them the right way?

Meaningfy

#DataSilos #SemanticLayer #KnowledgeGraph #BusinessData #DigitalTransformation
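A tiny rdflib sketch of what the "translator" does under the hood: two silo-specific product IDs are declared owl:sameAs one canonical concept, so queries can be expanded to every alias without changing the source systems. All URIs and IDs are invented for illustration:

```python
# A minimal sketch of semantic-layer ID reconciliation: the same physical
# product carries different IDs in the ERP and quality systems; owl:sameAs
# links map both onto one canonical concept.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDFS

ERP = Namespace("http://example.org/erp/")
QMS = Namespace("http://example.org/qms/")
CORE = Namespace("http://example.org/core/")

g = Graph()
g.add((CORE.Product_4711, RDFS.label, Literal("Organic apple juice 1L")))
g.add((ERP["MAT-0042"], OWL.sameAs, CORE.Product_4711))  # ERP material number
g.add((QMS["P-9913"], OWL.sameAs, CORE.Product_4711))    # quality-system ID

# Any query against the canonical ID can now be expanded to every silo's alias.
aliases = [s for s, _, _ in g.triples((None, OWL.sameAs, CORE.Product_4711))]
print(aliases)
```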
·linkedin.com·
Cellosaurus is now available in RDF format
Cellosaurus is now available in RDF format, with a triple store that supports SPARQL queries. If this sounds a bit abstract or unfamiliar…

1) RDF stands for Resource Description Framework. Think of RDF as a way to express knowledge using triplets: Subject – Predicate – Object. Example: HeLa (subject) – is_transformed_by (predicate) – Human papillomavirus type 18 (object). These triplets are like little facts that can be connected together to form a graph of knowledge.

2) A triple store is a database designed specifically to store and retrieve these RDF triplets. Unlike traditional databases (tables, rows), triple stores are optimized for linked data. They allow you to navigate connections between biological entities, like species, tissues, genes, diseases, etc.

3) SPARQL is a query language for RDF data. It lets you ask complex questions, such as:
- Find all cell lines with a *RAS (HRAS, NRAS, KRAS) mutation in p.Gly12
- Find all cell lines from animals belonging to the order "Carnivora"

More specifically, we now offer six new options from the Tool - API submenu:

1) SPARQL Editor (https://lnkd.in/eF2QMsYR). The SPARQL Editor is a tool designed to assist users in developing their SPARQL queries.
2) SPARQL Service (https://lnkd.in/eZ-iN7_e). The SPARQL service is the web service that accepts SPARQL queries over HTTP and returns results from the RDF dataset.
3) Cellosaurus Ontology (https://lnkd.in/eX5ExjMe). An RDF ontology is a formal, structured representation of knowledge. It explicitly defines domain-specific concepts, such as classes and properties, enabling data to be described with meaningful semantics that both humans and machines can interpret. The Cellosaurus ontology is expressed in OWL.
4) Cellosaurus Concept Hopper (https://lnkd.in/e7CH5nj4). The Concept Hopper is a tool that provides an alternative view of the Cellosaurus ontology. It focuses on a single concept at a time, either a class or a property, and shows how that concept is linked to others within the ontology, as well as how it appears in the data.
5) Cellosaurus dereferencing service (https://lnkd.in/eSATMhGb). The RDF dereferencing service is the mechanism that, given a URI, returns an RDF description of the resource identified by that URI, enabling clients to retrieve structured, machine-readable data about the resource from the web in different formats.
6) Cellosaurus RDF files download (https://lnkd.in/emuEYnMD). This allows you to download the Cellosaurus RDF files in Turtle (ttl) format.
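For readers who want to try this from code, here is a minimal Python sketch using SPARQLWrapper (a real library). The endpoint URL, prefix, and property names are placeholders, not Cellosaurus's actual schema; consult the SPARQL service documentation above for the real ones:

```python
# A minimal sketch of querying an RDF triple store with SPARQL from Python,
# in the spirit of the HeLa example above.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://sparql.example.org/cellosaurus")  # assumed URL
sparql.setQuery("""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX cvcl: <http://example.org/cellosaurus#>
SELECT ?cellLine ?label WHERE {
  ?cellLine a cvcl:CellLine ;
            cvcl:transformedBy cvcl:HPV18 ;
            rdfs:label ?label .
} LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["cellLine"]["value"], row["label"]["value"])
```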
·linkedin.com·
How do you explain the difference between Semantic Layers and Ontologies?
How do you explain the difference between Semantic Layers and Ontologies? That's the discussion I had yesterday with the CTO of a very large and well-known organization.

📊 Semantic Layers Today: The First Stepping Stone
• The semantic layer is commonly used in data analytics/BI reporting, tied to modeling fact/dimension tables and defining measures
• Data lakehouse/data cloud, transformation tools, BI tools, and semantic layer vendors exemplify this usage
• Provide descriptive metadata: definitions, calculations (e.g., revenue formulas), and human-readable labels to enhance the schema
• Serve as a first step toward better data understanding and governance
• Help align glossary terms with tables and columns, improving metadata quality and documentation
• Typically proprietary (even if expressed in YAML) and not broadly interoperable
• Enable "chat with your data" experiences over the warehouse

When organizations need to integrate diverse data sources beyond the data warehouse/lakehouse model, they hit the limits of fact/dimension modeling. This is where ontologies and knowledge graphs come in.

🌐 Ontologies & Knowledge Graphs: Scaling Beyond BI
• Represent complex relationships, hierarchies, synonyms, and taxonomies that go beyond rigid table structures
• Knowledge graphs bridge the gap from technical metadata to business metadata and ultimately to core business concepts
• Enable the integration of all types of data (structured, semi-structured, unstructured) because a graph is a common model
• Through open web standards such as RDF, OWL, and SPARQL, you get interoperability without lock-in

Strategic Role in the Enterprise
• Knowledge graphs enable the creation of an enterprise brain, connecting disparate data and semantics across all systems inside an organization
• Represent the context and meaning that LLMs lack. Our research has proven this.
• They lay the groundwork for digital twins and what-if scenario modeling, powering advanced analytics and decision-making.

💡 Key Takeaway
The semantic layer is a first step, especially for BI use cases. Most organizations will start there, which will eventually create semantic silos that are not inherently interoperable. Over time, organizations realize they need more than just local semantics for BI: they want to model real-world business assets and relationships across systems, and to define semantics once and reuse them across tools and platforms. This requires semantic interoperability, so the meaning behind data is not tied to one system. Large-scale enterprises operate across multiple systems, so interoperability is not optional; it's essential. To truly integrate and reason over enterprise data, you need ontologies and knowledge graphs with open standards. They form the foundation for enterprise-wide semantic reuse, providing the flexibility, connectivity, and context required for next-generation analytics, AI, and enterprise intelligence.
·linkedin.com·
A New Map for Product Docs
AI and knowledge graphs will transform product documentation, especially for complex, networked systems that require configuration…
·medium.com·
the Ontology Pipeline
It's been a while since I have posted about the Ontology Pipeline. With parts borrowed from library science, the Ontology Pipeline is a simple framework for building rich knowledge infrastructures. Librarians are professional stewards of knowledge and have valuable methodologies for building information and knowledge systems for human and machine information retrieval tasks.

While LinkedIn conversations seem to be wrestling with defining "what is the semantic layer", we are failing to see the root of semantics. Semantics matter because knowledge structures, not just layers, define semantics. Semantics are more than labels or concept maps. Semantics lend structure and meaning through relationships, disambiguation of concepts, definitions, and context.

The Ontology Pipeline is an iterative build process focused on ensuring data hygiene while minding domain data, information, and knowledge. I share this framework because it is how I have successfully built information and knowledge ecosystems, with or without AI.

#taxonomy #ontology #metadata #knowledgegraph #ia #ai

Some friends focused on building knowledge infrastructures: Andrew Padilla, Nagim Ashufta, Ole Olesen-Bagneux, Jérémy Ravenel, Paco Nathan, Adriano Vlad-Starrabba, Andrea Gioia
·linkedin.com·
Alice enters the magical, branchy world of Graphs and Graph Neural Networks
The first draft 'G' chapter of the geometric deep learning book is live! 🚀

Alice enters the magical, branchy world of Graphs and Graph Neural Networks 🕸️ (Large Language Models are there too!)

I've spent 7+ years studying, researching & talking about graphs. This text is my best attempt at conveying everything I've learnt 💎

You may read this chapter in the usual place (link in comments!)

Any and all feedback / thoughts / questions on the content, and/or words of encouragement for finishing this book (pretty please! 😇) are warmly welcomed!

Michael Bronstein Joan Bruna Taco Cohen
·linkedin.com·
Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine
In this position paper, "Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine", my L3S Research Center and TIB – Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek colleagues around Maria-Esther Vidal have nicely laid out some research challenges on the way to interpretable hybrid AI systems in medicine. However, I think the conceptual framework is broadly applicable way beyond medicine. For example, my former colleagues and PhD students at eccenca are working on operationalizing Neuro-Symbolic AI for Enterprise Knowledge Management with eccenca's Corporate Memory.

The paper outlines a compelling architecture for combining sub-symbolic models (e.g., deep learning) with symbolic reasoning systems to enable AI that is interpretable, robust, and aligned with human values. eccenca implements these principles at scale through its neuro-symbolic Enterprise Knowledge Graph platform, Corporate Memory, for real-world industrial settings:

1. Symbolic Foundation via Semantic Web Standards - Corporate Memory is grounded in W3C standards (RDF, RDFS, OWL, SHACL, SPARQL), enabling formal knowledge representation, inferencing, and constraint validation. This allows domain ontologies, business rules, and data governance policies to be encoded in a machine-interpretable and human-verifiable manner.

2. Integration of Sub-symbolic Components - It integrates LLMs and ML models for tasks such as schema matching, natural language interpretation, entity resolution, and ontology population. These are linked to the symbolic layer via mappings and annotations, ensuring traceability and explainability.

3. Neuro-Symbolic Interfaces for Hybrid Reasoning - Hybrid workflows where symbolic constraints (e.g., SHACL shapes) guide LLM-based data enrichment. LLMs suggest schema alignments, which are verified against ontological axioms. Graph embeddings and path-based querying power semantic search and similarity.

4. Human-in-the-loop Interactions - Domain experts interact through low-code interfaces and semantic UIs that allow inspection, validation, and refinement of both the symbolic and neural outputs, promoting human oversight and continuous improvement.

Such an approach can power industrial applications, e.g. digital thread integration in manufacturing, compliance automation in pharma and finance, and, in general, cross-domain interoperability in data mesh architectures. Corporate Memory is a practical instantiation of neuro-symbolic AI that meets industrial-grade requirements for governance, scalability, and explainability – key tenets of Human-Centric AI.

Check it out here: https://lnkd.in/evyarUsR

#NeuroSymbolicAI #HumanCentricAI #KnowledgeGraphs #EnterpriseArchitecture #ExplainableAI #SemanticWeb #LinkedData #LLM #eccenca #CorporateMemory #OntologyDrivenAI #AI4Industry
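As a small illustration of the pattern in point 3 (symbolic constraints guarding LLM-based enrichment), here is a sketch with rdflib and pyshacl, both real libraries. The shape and data are invented examples, not eccenca's actual model:

```python
# A minimal sketch: LLM-suggested facts are only accepted if they conform to
# a SHACL shape; violations are routed back for repair instead of polluting
# the knowledge graph.
from pyshacl import validate
from rdflib import Graph

shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex: <http://example.org/> .
ex:MachineShape a sh:NodeShape ;
    sh:targetClass ex:Machine ;
    sh:property [ sh:path ex:serialNumber ;
                  sh:minCount 1 ;
                  sh:datatype xsd:string ] .
""", format="turtle")

llm_output = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:Pump42 a ex:Machine .   # the LLM forgot the serial number
""", format="turtle")

conforms, _, report = validate(llm_output, shacl_graph=shapes)
print(conforms)  # False -> send back to the LLM or a human for correction
print(report)
```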
·linkedin.com·
The Great Divide: Why Ontology and Data Architecture Teams Are Solving the Same Problems with Different Languages | LinkedIn
In enterprise organisations today, two important disciplines are working in parallel universes, tackling nearly identical challenges whilst speaking completely different languages. Ontology architects and data architects are both wrestling with ETL processes, data modelling, transformations, referen
·linkedin.com·
Everyone is talking about Semantic Layers, but what is a semantic layer?
Everyone is talking about Semantic Layers, but what is a semantic layer?

Some of the hottest topics for getting more out of your agents involve knowledge graphs, vector search, semantics, and agent frameworks. A new and important area that encompasses all of these is the notion that we need a stronger semantic layer on top of our data to provide structure, definitions, discoverability, and more for our agents (human or otherwise). While many of these concepts are not new, they have had to evolve to stay relevant in today's world, and this means there is a fair bit of confusion surrounding the whole area.

Depending on your background (AI, ML, library sciences) and focus (LLM-first or knowledge-graph-first), you will likely emphasize different aspects as being key to a semantic layer. I come primarily from an AI/ML/LLM-first world, but have built and utilized knowledge graphs for most of my career. Given my background, I of course have my perspective on this; I tend to break things down to first principles, and I like to simplify. With that preamble, here is what I think makes a semantic layer.

WHAT MAKES A SEMANTIC LAYER:

🟤 Scope
🟢 You should not create a semantic layer that covers everything in the world, nor even everything in your company. You can tie semantic layers together, but focus on the job to be done.

🟤 You will need to have semantics, obviously. There are two particular types of semantics that are important to include.
🟢 Vectors: These capture semantics in a high-dimensional space so you can easily find similar concepts in your data.
🟢 Ontology (including taxonomy): Explicitly define the meaning of your data in a structured and fact-based way, including appropriate vocabulary. This complements vectors superbly.

🟤 You need to respect the data and meet it where it is.
🟢 Structured data: For most companies, data resides in data lakes of some sort, and most of it is structured. There is power in this structure, but also noise. The semantic layer needs to understand this and map it into the semantics above.
🟢 Unstructured data: Most data is unstructured and resides all over the place, often in object stores or embedded in structured tables. There is a lot of information in unstructured data that the semantic layer needs to map, and for that you need extraction, resolution, and other techniques suited to the modality of the data.

🟤 You need to index the data
🟢 You will need to index all of this to make your data discoverable and retrievable. And this needs to scale.
🟢 You need tight integration between vectors, the ontology/knowledge graph, and keywords to make this seamless.

These are 4 key components that are all needed for a true semantic layer. Thoughts?

#knowledgegraph, #semanticlayer, #agent, #rag
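To make the "tight integration" point concrete, a toy Python sketch that joins the two kinds of semantics: cosine similarity over vectors finds the nearest concept, and a small graph then supplies its explicit relationships. numpy and networkx only; all data is invented:

```python
# A minimal sketch of hybrid semantic lookup: a shared concept key joins the
# vector index (similarity) to the ontology graph (explicit relationships).
import numpy as np
import networkx as nx

# toy "vector index": concept -> embedding
vecs = {"revenue": np.array([0.90, 0.10]),
        "income":  np.array([0.85, 0.20]),
        "churn":   np.array([0.10, 0.95])}

# toy ontology: explicit, fact-based relationships
onto = nx.DiGraph()
onto.add_edge("income", "revenue", relation="synonym_of")
onto.add_edge("revenue", "finance_mart.fct_orders", relation="defined_from")

def semantic_lookup(query_vec, k=1):
    def cos(c):
        v = vecs[c]
        return float(query_vec @ v) / (np.linalg.norm(query_vec) * np.linalg.norm(v))
    hit = max(vecs, key=cos)                 # nearest concept by cosine similarity
    edges = list(onto.edges(hit, data="relation"))  # plus its graph context
    return hit, edges

print(semantic_lookup(np.array([0.88, 0.15])))
```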
·linkedin.com·