Found 147 bookmarks
how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
๐™๐™๐™ค๐™ช๐™œ๐™๐™ฉ ๐™›๐™ค๐™ง ๐™ฉ๐™๐™š ๐™™๐™–๐™ฎ: I've been mulling over how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates. In this way, the LLM can still be used for assessment and sensory feedback, but it augments the graph, not the other way around. OWL and SHACL serve different roles. SHACL is not just a preprocessing validator; it can play an active role in constraining, guiding, or triggering decisions, especially when integrated into AI pipelines. However, OWL is typically more central to inferencing and reasoning tasks. SHACL can actively participate in decision-making, especially when decisions require data integrity, constraint enforcement, or trigger-based logic. In complex agents, OWL provides the inferencing engine, while SHACL acts as the constraint gatekeeper and occasionally contributes to rule-based decision-making. For example, an AI agent processes RDF data describing an applicant's skills, degree, and experience. SHACL validates the data's structure, ensuring required fields are present and correctly formatted. OWL reasoning infers that the applicant is qualified for a technical role and matches the profile of a backend developer. SHACL is then used again to check policy compliance. With all checks passed, the applicant is shortlisted, and a follow-up email is triggered. In AI agent decision-making, OWL and SHACL often work together in complementary ways. SHACL is commonly used as a preprocessing step to validate incoming RDF data. If the data fails validation, it's flagged or excluded, ensuring only clean, structurally sound data reaches the OWL reasoner. In this role, SHACL acts as a gatekeeper. They can also operate in parallel or in an interleaved manner within a pipeline. As decisions evolve, SHACL shapes may be checked mid-process. 
Some AI agents even use SHACL as a rule engine - to trigger alerts, detect actionable patterns, or constrain reasoning paths - while OWL continues to handle more complex semantic inferences, such as class hierarchies or property logic. Finally, SHACL can augment decision-making by confirming whether OWL-inferred actions comply with specific constraints. OWL may infer that "A is a type of B, so do X," and SHACL then determines whether doing X adheres to a policy or requirement. Because SHACL supports closed-world assumptions (which OWL does not), it plays a valuable role in enforcing policies or compliance rules during decision execution.
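The applicant example above (SHACL gatekeeper, then OWL inference, then a second SHACL policy check) can be sketched in Turtle. All class and property names here (ex:Applicant, ex:QualifiedBackendDeveloper, and so on) are hypothetical illustrations, not taken from the post:

```turtle
@prefix ex:  <http://example.org/hiring#> .
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Step 1 - SHACL as gatekeeper: incoming applicant data must be complete.
ex:ApplicantShape a sh:NodeShape ;
    sh:targetClass ex:Applicant ;
    sh:property [ sh:path ex:hasDegree ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:yearsOfExperience ;
                  sh:datatype xsd:integer ; sh:minInclusive 0 ] .

# Step 2 - OWL for inference: a CS degree plus backend skills
# classifies the applicant as a qualified backend developer.
ex:QualifiedBackendDeveloper owl:equivalentClass [
    a owl:Class ;
    owl:intersectionOf ( ex:Applicant
                         [ a owl:Restriction ;
                           owl:onProperty ex:hasDegree ;
                           owl:someValuesFrom ex:ComputerScienceDegree ]
                         [ a owl:Restriction ;
                           owl:onProperty ex:hasSkill ;
                           owl:hasValue ex:BackendDevelopment ] ) ] .

# Step 3 - SHACL again, now as policy check on the inferred class.
ex:ShortlistPolicyShape a sh:NodeShape ;
    sh:targetClass ex:QualifiedBackendDeveloper ;
    sh:property [ sh:path ex:yearsOfExperience ; sh:minInclusive 2 ] .
```

Note how the policy shape targets the OWL-inferred class, which is exactly the interleaving the post describes: validation runs again mid-pipeline, after reasoning has added new class memberships.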
·linkedin.com·
What makes the "Semantic Data Product" so valid in data conversations today? 💡 Bounded Context and Right-to-Left Flow from consumers to raw materials.
What makes the "Semantic Data Product" so valid in data conversations today? 💡 Bounded Context and Right-to-Left Flow from consumers to raw materials. Tony Seale perfectly defines the value of bounded context: "…to sustain itself, a system must minimise its free energy - a measure of uncertainty. Minimising it equates to low internal entropy. A system achieves this by forming accurate predictions about the external env and updating its internal states accordingly, allowing for a dynamic yet stable interaction with its surroundings. Only possible on delineating a boundary between internal and external systems. Disconnected systems signal weak boundaries." Data Products enable a way to bind context to specific business purposes or use cases.
This enables data to become: ✅ Purpose-driven ✅ Accurately Discoverable ✅ Easily Understandable & Addressable ✅ Valuable as an independent entity. The Solution: The Data Product Model. A conceptual model that precisely captures the business context through an interface operable by business users or domain experts. We have often referred to this as The Data Product Prototype, which is essentially a semantic model and captures information on: ➡️ Popular Metrics the Business wants to drive ➡️ Measures & Dimensions ➡️ Relationships & formulas ➡️ Further context with tags, descriptions, synonyms, & observability metrics ➡️ Quality SLOs - or simply, conditions necessary ➡️ Additional policy specs contributed by Governance Stewards. Once the Prototype is validated and given a green flag, development efforts kickstart. Note how all data engineering efforts (left-hand side) are not looped in until this point, saving massive costs and time drainage. The DE teams, who only have a partial view of the business landscape, are no longer held accountable for this lack of strong business understanding. The ownership of the Data Product model is entirely with Business. 🫠 DEs have a blueprint to refer to and simply map sources or source data products to the prescribed Data Product Model. Any new request comes through this prototype itself, managed by Data Product Managers in collaboration with business users, dissolving all bottlenecks from centralised data engineering teams. At this level, necessary transformations are delivered that: 🔌 activate the SLOs 🔌 enable interoperability with native tools and upstream data products 🔌 allow reusability of pre-existing transforms in the form of Source or Aggregate data products.
#datamanagement #dataproducts
·linkedin.com·
LLMs and Neurosymbolic reasoning
When people discuss how LLMs "reason," you'll often hear that they rely on transduction rather than abduction. It sounds technical, but the distinction matters - especially as we start wiring LLMs into systems that are supposed to think. 🔵 Transduction is case-to-case reasoning. It doesn't build theories; it draws fuzzy connections based on resemblance. Think: "This metal conducts electricity, and that one looks similar - so maybe it does too." 🔵 Abduction, by contrast, is about generating explanations. It's what scientists (and detectives) do: "This metal is conducting - maybe it contains free electrons. That would explain it." The claim is that LLMs operate more like transducers - navigating high-dimensional spaces of statistical similarity, rather than forming crisp generalisations. But this isn't the whole picture. In practice, it seems to me that LLMs also perform a kind of induction - abstracting general patterns from oceans of text. They learn the shape of ideas and apply them in novel ways. That's closer to "All metals of this type have conducted in the past, so this one probably will." Now add tools to the mix - code execution, web search, Elon Musk's tweet history 😉 - and LLMs start doing something even more interesting: program search and synthesis. It's messy, probabilistic, and not at all principled or rigorous. But it's inching toward a form of abductive reasoning. Which brings us to a more principled approach for reasoning within an enterprise domain: the neuro-symbolic loop - a collaboration between large language models and knowledge graphs. The graph provides structure: formal semantics, ontologies, logic, and depth. The LLM brings intuition: flexible inference, linguistic creativity, and breadth. One grounds. The other leaps. 💡 The real breakthrough could come when the grounding isn't just factual, but conceptual - when the ontology encodes clean, meaningful generalisations.
That's when the LLM's leaps wouldn't just reach further - they'd rise higher, landing on novel ideas that hold up under formal scrutiny. 💡 So where do metals fit into this new framing? 🔵 Transduction: "This metal conducts. That one looks the same - it probably does too." 🔵 Induction: "I've tested ten of these. All conducted. It's probably a rule." 🔵 Abduction: "This metal is conducting. It shares properties with the 'conductive alloy' class - especially composition and crystal structure. The best explanation is a sea of free electrons." LLMs, in isolation, are limited in their ability to perform structured abduction. But when embedded in a system that includes a formal ontology, logical reasoning, and external tools, they can begin to participate in richer forms of reasoning. These hybrid systems are still far from principled scientific reasoners - but they hint at a path forward: a more integrated and disciplined neuro-symbolic architecture that moves beyond mere pattern completion.
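The "clean, meaningful generalisation" the post asks of an ontology can be made concrete as a tiny Turtle fragment for the metals example. The names (ex:ConductiveAlloy etc.) are invented for illustration:

```turtle
@prefix ex:   <http://example.org/materials#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The inductive rule, encoded as a generalisation:
# every conductive alloy conducts electricity.
ex:ConductiveAlloy rdfs:subClassOf ex:Metal ,
    [ a owl:Restriction ;
      owl:onProperty ex:hasProperty ;
      owl:hasValue ex:ElectricalConductivity ] .

# Abduction's candidate explanation, recorded against the class
# in terms of composition and crystal structure.
ex:ConductiveAlloy rdfs:comment
    "Metals whose composition and crystal structure yield free electrons." .

# A new sample classified into the class inherits the generalisation -
# the grounded version of the LLM's similarity-based leap.
ex:Sample42 a ex:ConductiveAlloy .
```

This is the division of labour the post argues for: the LLM proposes that Sample42 belongs to the class; the ontology supplies the generalisation that makes the conclusion hold up under scrutiny.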
·linkedin.com·
S&P Global Unlocks the Future of AI-driven insights with AI-Ready Metadata on S&P Global Marketplace
🚀 When I shared our 2025 goals for the Enterprise Data Organization, one of the things I alluded to was machine-readable column-level metadata. Let's unpack what that means - and why it matters. 🔍 What: For datasets we deliver via modern cloud distribution, we now provide human- and machine-readable metadata at the column level. Each column has an immutable URL (no auth, no CAPTCHA) that hosts name/value metadata - synonyms, units of measure, descriptions, and more - in multiple human languages. It's semantic context that goes far beyond what a traditional data dictionary can convey. We can't embed it, so we link to it. 💡 Why: Metadata is foundational to agentic, precise consumption of structured data. Our customers are investing in semantic layers, data catalogs, and knowledge graphs - and they shouldn't have to copy-paste from a PDF to get there. Use curl, Python, Bash - whatever works - to automate ingestion. (We support content negotiation and conditional GETs.) 🧠 Under the hood? It's RDF. Love it or hate it, you don't need to engage with the plumbing unless you want to. ✨ To our knowledge, this hasn't been done before. This is our MVP. We're putting it out there to learn what works - and what doesn't. It's vendor-neutral, web-based, and designed to scale across: 📊 Breadth of datasets across S&P 🧬 Depth of metadata 🔗 Choice of linking venue 🙏 It took a village to make this happen. I can't name everyone without writing a book, but I want to thank our executive leadership for the trust and support to go build this. Let us know what you think! 🔗 https://lnkd.in/gbe3NApH Martina Cheung, Saugata Saha, Swamy Kocherlakota, Dave Ernsberger, Mark Eramo, Frank Tarsillo, Warren Breakstone, Hamish B., Erica Robeen, Laura Miller, Justine S Iverson
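The post doesn't publish S&P's actual URLs or vocabulary, but column-level RDF metadata of the kind it describes is conventionally shaped along these lines. Every URI and property below is illustrative, not S&P's:

```turtle
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/meta#> .

# Hypothetical immutable URL identifying one dataset column.
<https://metadata.example.com/datasets/energy-prices/columns/spot_price>
    dct:title         "spot_price" ;
    dct:description   "Day-ahead spot price of the delivery contract."@en ;
    dct:description   "Prix spot à un jour du contrat de livraison."@fr ;
    skos:altLabel     "day-ahead price"@en ;      # synonym
    ex:unitOfMeasure  "USD per MWh" .             # unit, as name/value pair
```

Because each column is a dereferenceable URL, a catalog or knowledge graph can pull this with a plain HTTP GET, and content negotiation (as the post notes) lets the same URL serve HTML to humans and Turtle or JSON-LD to machines.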
·linkedin.com·
Should ontologies be treated as organizational resources for semantic capabilities?
💡 Should ontologies be treated as organizational resources for semantic capabilities? More and more organizations are investing in data platforms, modeling tools, and integration frameworks. But one key capability is often underused or misunderstood: ontologies as semantic infrastructure. While databases handle facts and BI platforms handle queries, ontologies structure meaning. They define what things are, not just what data says. When treated as living organizational resources, ontologies can bring: 🔹 Shared understanding across silos 🔹 Reasoning and inference beyond data queries 🔹 Semantic integration of diverse systems 🔹 Clarity and coherence in enterprise models. But here's the challenge - ontologies don't operate in isolation. They must be positioned alongside: 🔸 Data-oriented technologies (RDF, RDF-star, quad stores) that track facts and provenance 🔸 Enterprise modeling tools (e.g., ArchiMate) that describe systems and views 🔸 Exploratory approaches (like semantic cartography) that support emergence over constraint. These layers each come with their own logic - epistemic vs. ontologic, structural vs. operational, contextual vs. formal. ✅ Building semantic capabilities requires aligning all these dimensions. ✅ It demands governance, tooling, and a culture of collaboration between ontologists, data managers, architects, and domain experts. ✅ And it opens the door to richer insight, smarter automation, and more agile knowledge flows. 🔍 With projects like ArchiCG (semantic interactive cartography), I aim to explore how we can visually navigate this landscape - not constrained by predefined viewpoints, but guided by logic, meaning, and emergent perspectives. What do you think? Are ontologies ready to take their place as core infrastructure in your organization?
·linkedin.com·
Semantics in use (part 3): an interview with Saritha V. Kuriakose, VP Research Data Management at Novo Nordisk | LinkedIn
We continue our series of examples of the use of semantics and ontologies across organizations with an interview with Saritha V. Kuriakose from Novo Nordisk, talking about the pervasive and foundational use of ontologies in pharmaceutical R&D.
·linkedin.com·
how Knowledge Graphs could be used to provide context
📚 Definition number 0️⃣0️⃣0️⃣0️⃣0️⃣1️⃣0️⃣1️⃣ 🌊 It is pretty easy to see how context is making really big waves recently. Not long ago, there were announcements about the Model Context Protocol (MCP). There is even a saying that Prompt Engineers changed their job titles to Context Engineers. 😅 🔔 In my recent few posts about definitions I tried to show how Knowledge Graphs could be used to provide context, as they are built with two types of real definitions expressed in a formalised language. Next, I explained how objective and linguistic nominal definitions in Natural Language can be linked to the models of external things encoded in the formal way to increase human-machine semantic interoperability. 🔄 Quick recap: in KGs, objective definitions define objects external to the language, and linguistic definitions relate words to other expressions of that language. This is regardless of the nature of the language under consideration - formalised or natural. Objective definitions are real definitions when they uniquely specify certain objects via their characteristics - this is also regardless of the language's nature. Not all objective definitions are real definitions, and none of the linguistic definitions are real definitions. 💡 Classical objective definitions are an example of clear definitions. Another type of real definition that can be encountered in either formalised or Natural Language is the contextual definition. An example of such a definition is 'Logarithm of a number A with base B is such a number C that B to the power of C is equal to A'. Obviously, this familiar mathematical definition could be expressed in a formalised language as well. This makes Knowledge Graphs capable of providing context via contextual definitions, apart from the other types of definitions covered so far. 🤷🏼‍♂️ At the same time, another question appears.
How is it possible to keep track of all those different types of definitions and always know which one is which for a given modelled object? In my previous posts, I have shown how definitions can be linked via 'rdfs:comment' and 'skos:definition'. However, that is still pretty generic. It is possible to extend the base vocabulary provided by SKOS and add custom properties for this purpose. Quick reminder: a property in a KG corresponds to a relation between two other objects. Properties that allow adding multiple types of definitions in Natural Language can be created as instances of owl:AnnotationProperty as follows: namespace:contextualDefinition a owl:AnnotationProperty . After that, this new annotation property instance can be used in the same way as the more generic properties for linking definitions to objects in KGs. 🤓 🏄‍♂️ The above shows that getting context right can be a tricky endeavour indeed. In my next posts, I will try to describe some other types of definitions, so they can also be added to KGs. If you'd like to level up your KG in this way, please stay tuned. 🎸😎🤙🏻 #ai #knowledgegraphs #definitions
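The annotation-property pattern described above, spelled out end to end in Turtle. The `ns:` namespace and the Logarithm resource are illustrative; the logarithm definition is the one quoted in the post:

```turtle
@prefix ns:   <http://example.org/defs#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# 1. Declare a custom annotation property for contextual definitions.
ns:contextualDefinition a owl:AnnotationProperty .

# 2. Use it alongside the generic skos:definition, so the type of each
#    definition stays machine-distinguishable.
ns:Logarithm a owl:Class ;
    skos:definition "A mathematical operation inverse to exponentiation." ;
    ns:contextualDefinition
        "Logarithm of a number A with base B is such a number C that B to the power of C is equal to A." .
```

A consumer can now query for ns:contextualDefinition specifically, instead of fishing all definition kinds out of one undifferentiated skos:definition or rdfs:comment bucket.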
·linkedin.com·
Unlocking Transparency: Semantics in Ride-Hailing for Consumers | LinkedIn
by Timothy Coleman A recent Guardian report drew attention to a key issue in the ride-hailing industry, spotlighting Uberโ€™s use of sophisticated algorithms to enhance profits while prompting questions about clarity for drivers and passengers. Studies from Columbia Business School and the University
·linkedin.com·
what is a semantic layer?
There's a lot of buzz about #semanticlayers on LinkedIn these days. So what is a semantic layer? According to AtScale, "The semantic layer is a metadata and abstraction layer built on the source data (e.g. data warehouse, data lake, or data mart). The metadata is defined so that the data model gets enriched and becomes simple enough for the business user to understand." It's a metadata layer. Which can be taken a step further: a metadata layer is best implemented using metadata standards that support interoperability and extensibility. There are open standards such as the Dublin Core Metadata Initiative, and there are home-grown standards established within organizations and domains. If you want to design and build semantic layers, build from metadata standards or build a metadata standard, according to #FAIR principles (findable, accessible, interoperable, reusable). Some interesting and BRILLIANT ✨ folks to check out in the metadata domain space: Ole Olesen-Bagneux (O2) (check out his upcoming book about the #metagrid), Lisa N. Cao, Robin Fay, Jenna Jordan, Larry Swanson. Resources in comments 👇👇👇
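A semantic layer built on an open metadata standard, as the post recommends, can start as small as a few Dublin Core statements per asset. A minimal sketch, with a hypothetical warehouse table URI:

```turtle
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Descriptive metadata over one source-data asset, using Dublin Core Terms.
<http://example.org/warehouse/fact_sales>
    dct:title       "Fact table: sales transactions" ;
    dct:description "One row per order line, loaded nightly from the ERP." ;
    dct:creator     "Data Platform Team" ;
    dct:modified    "2024-01-15"^^xsd:date ;
    dct:conformsTo  <http://example.org/standards/sales-glossary-v2> .
```

Because dct: terms are a shared standard, this layer stays interoperable: any catalog, graph store, or downstream tool that understands Dublin Core can consume it without a proprietary schema, which is the FAIR argument the post is making.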
·linkedin.com·
The Question That Changes Everything: "But This Doesn't Look Like an Ontology" | LinkedIn
After publishing my article on the Missing Semantic Center, a brilliant colleague asked me a question that gets to the heart of our technology stack: "But Tavi - this doesn't look like an OWL 2 DL ontology. What's going on here?" This question highlights a profound aspect of why systems have struggl
·linkedin.com·
When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development?
🧠 When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development? Ontologies promise knowledge integration, traceability, reuse, and machine reasoning across the full engineering system lifecycle. From functional models to field failures, ontologies offer a way to encode and connect it all. 💥 However, ontologies are not a silver bullet. There are plenty of scenarios where an ontology is not just unnecessary, it might actually slow you down, confuse your team, or waste resources. So when exactly does the ontological approach become more burden than benefit? Based on my understanding and current work in this space, 🚀 for engineering design, it's important to recognise situations where adopting a semantic model is not the most effective approach: 1. When tasks are highly localised and routine. If you're just tweaking part drawings, running standard FEA simulations, or updating well-established design details, then the knowledge already lives in your tools and practices. Adding an ontology might feel like installing a satellite dish to tune a local radio station. 2. When terminology is unstable or fragmented. Ontologies depend on consistent language. If every department speaks its own dialect, and no one agrees on terms, you can't build shared meaning. You'll end up formalising confusion instead of clarifying it. 3. When speed matters more than structure. In prototyping labs, testing grounds, or urgent production lines, agility rules. Engineers solve problems fast, often through direct collaboration. Taking time to define formal semantics? Not always practical. Sometimes the best model is a whiteboard and a sharp marker. 4. When the knowledge won't be reused. Not all projects aim for longevity or cross-team learning. If you're building something once, for one purpose, with no intention of scaling or sharing, skip the ontology. It's like building a library catalog for a single book. 5.
When the infrastructure isn't there. Ontological engineering isn't magic. It needs tools, training, and people who understand the stack. If your team lacks the skills or platforms, even the best-designed ontology will gather dust in a forgotten folder. Use the right tool for the real problem: ontologies are powerful, but not sacred. They shine when you need to connect knowledge across domains, ensure long-term traceability, or enable intelligent automation. But they're not a requirement for every task just because they're clever. The real challenge is not whether to use ontologies, but knowing when they genuinely improve clarity, consistency, and collaboration, and when they just complicate the obvious. 🧠 Feedback and critique are welcome; this is a living conversation. Felician Campean #KnowledgeManagement #SystemsEngineering #Ontology #MBSE #DigitalEngineering #RiskAnalysis #AIinEngineering #OntologyEngineering #SemanticInteroperability #SystemReliability #FailureAnalysis #KnowledgeIntegration
·linkedin.com·
Foundation Models Know Enough
LLMs already contain overlapping world models. You just have to ask them right. Ontologists reply to an LLM output, "That's not a real ontology - it's not a formal conceptualization." But that's just the No True Scotsman fallacy dressed up in OWL. Boring. Not growth-oriented. Look forward, angel. A foundation model is a compression of human knowledge. The real problem isn't that we "lack a conceptualization". The real problem with an FM is that it contains too many. FMs contain conceptualizations - plural. Messy? Sure. But usable. At Stardog, we're turning this latent structure into real ontologies using symbolic knowledge distillation. Prompt orchestration → structure extraction → formal encoding. OWL, SHACL, and friends. Shake till mixed. Rinse. Repeat. Secret sauce simmered and reduced. This isn't theoretical hard. We avoid that. It's merely engineering hard. We LTF into that! But the payoff means bootstrapping rich, new ontologies at scale: faster, cheaper, with lineage. It's the intersection of FM latent space, formal ontology, and user intent expressed via CQs (competency questions). We call it the Symbolic Latent Layer (SLL). Cute, eh? The future of enterprise AI isn't just documents. It's distilling structured symbolic knowledge from LLMs and plugging it into agents, workflows, and reasoning engines. You don't need a priesthood to get a formal ontology anymore. You need a good prompt, a smarter pipeline, and the right EKG platform. There's a lot more to say about this, so I said it at Stardog Labs: https://lnkd.in/eY5Sibed
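Purely as a sketch of the pipeline's output side - the post doesn't show Stardog's actual format - a distilled fragment "with lineage" might look like a class axiom plus PROV metadata recording which distillation run and competency question produced it. All names here are invented:

```turtle
@prefix ex:   <http://example.org/distilled#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .

# A class extracted from the foundation model's latent conceptualizations...
ex:SupplyContract a owl:Class ;
    rdfs:subClassOf ex:LegalAgreement ;
    rdfs:label "Supply contract" .

# ...with lineage: which distillation activity produced it,
# and which competency question motivated it.
ex:SupplyContract prov:wasGeneratedBy [
    a prov:Activity ;
    rdfs:label "symbolic knowledge distillation run 2024-06-01" ;
    prov:used ex:CompetencyQuestion17 ] .
```

The point of carrying PROV alongside the OWL is exactly the "faster, cheaper, with lineage" claim: every bootstrapped axiom stays traceable back to the prompt-orchestration step that emitted it.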
·linkedin.com·
Add a Semantic Layer โ€“ a smart translator that sits between your data sources and your business applications
Tired of being told that silos are gone? The real value comes from connecting them. 🔄 The myth of data silos: why they never really disappear, and how to turn them into your biggest advantage. Even after heavy IT investment, data silos never truly go away, they simply evolve. In food production, I saw this first-hand: every system (ERP, quality, IoT, POS) stored data in its own format. Sometimes, the same product ended up with different IDs across systems, batch information was fragmented, and data was dispersed in each silo. People often say, "Break down the silos." But in reality, that's nearly impossible. Businesses change, new tools appear, acquisitions happen, teams shift, new processes and production lines are launched. Silos are part of digital life. For years, I tried classic integrations. They helped a bit, but every change in one system caused more issues and even more integration work. I wish I had known then what I know now: stop trying to destroy silos. Start connecting them. Here's what makes the difference: Add a Semantic Layer - a smart translator that sits between your data sources and your business applications. It maps different formats and names into a common language, without changing your original systems. Put a Knowledge Graph on top and you don't just translate - you connect. Suddenly, all your data sources, even legacy silos, become part of a single network. Products, ingredients, machines, partners, and customers are all logically linked and understood across your business. In practice, this means: - Production uses real sales and shelf-life data. - Sales sees live inventory, not outdated reports. - Forecasting is based on trustworthy, aligned data. That's the real shift: silos are not problems to kill, but assets to connect. With a Semantic Layer and a Knowledge Graph, data silos become trusted building blocks for your business intelligence. Better Data, Better ROI.
If you've ever spent hours reconciling reports, you'll recognise this recurring pain in companies that haven't optimised their data integration with a semantic and KG approach. So: do you still treat silos as problems, or could they be your next competitive advantage if you connect them the right way? Meaningfy #DataSilos #SemanticLayer #KnowledgeGraph #BusinessData #DigitalTransformation
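The "connect, don't destroy" pattern above in miniature: each silo keeps its own identifier, and the graph layer records that the identifiers denote the same product. System names and IDs are invented for illustration:

```turtle
@prefix ex:  <http://example.org/food#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# The same yoghurt product, three silos, three identifiers.
ex:ERP-MAT-10421   a ex:Product .   # ERP material number
ex:QMS-P-88        a ex:Product .   # quality system ID
ex:POS-SKU-552310  a ex:Product .   # point-of-sale SKU

# The semantic layer asserts that all three denote one thing...
ex:ERP-MAT-10421 owl:sameAs ex:QMS-P-88 , ex:POS-SKU-552310 .

# ...so batch, shelf-life, and sales data join in the graph
# without touching any of the source systems.
ex:ERP-MAT-10421  ex:batch          ex:Batch2024-117 .
ex:QMS-P-88       ex:shelfLifeDays  21 .
ex:POS-SKU-552310 ex:unitsSoldToday 412 .
```

Nothing in the ERP, quality system, or POS changes; the links live entirely in the graph, which is what makes the approach resilient to the silo churn the post describes.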
·linkedin.com·
Cellosaurus is now available in RDF format
Cellosaurus is now available in RDF format, with a triple store that supports SPARQL queries. If this sounds a bit abstract or unfamiliar… 1) RDF stands for Resource Description Framework. Think of RDF as a way to express knowledge using triples: Subject - Predicate - Object. Example: HeLa (subject) - is_transformed_by (predicate) - Human papillomavirus type 18 (object). These triples are like little facts that can be connected together to form a graph of knowledge. 2) A triple store is a database designed specifically to store and retrieve these RDF triples. Unlike traditional databases (tables, rows), triple stores are optimized for linked data. They allow you to navigate connections between biological entities, like species, tissues, genes, diseases, etc. 3) SPARQL is a query language for RDF data. It lets you ask complex questions, such as: - Find all cell lines with a *RAS (HRAS, NRAS, KRAS) mutation in p.Gly12 - Find all cell lines from animals belonging to the order Carnivora. More specifically, we now offer from the Tool - API submenu 6 new options: 1) SPARQL Editor (https://lnkd.in/eF2QMsYR). The SPARQL Editor is a tool designed to assist users in developing their SPARQL queries. 2) SPARQL Service (https://lnkd.in/eZ-iN7_e). The SPARQL service is the web service that accepts SPARQL queries over HTTP and returns results from the RDF dataset. 3) Cellosaurus Ontology (https://lnkd.in/eX5ExjMe). An RDF ontology is a formal, structured representation of knowledge. It explicitly defines domain-specific concepts - such as classes and properties - enabling data to be described with meaningful semantics that both humans and machines can interpret. The Cellosaurus ontology is expressed in OWL. 4) Cellosaurus Concept Hopper (https://lnkd.in/e7CH5nj4). The Concept Hopper is a tool that provides an alternative view of the Cellosaurus ontology.
It focuses on a single concept at a time - either a class or a property - and shows how that concept is linked to others within the ontology, as well as how it appears in the data. 5) Cellosaurus dereferencing service (https://lnkd.in/eSATMhGb). The RDF dereferencing service is the mechanism that, given a URI, returns an RDF description of the resource identified by that URI, enabling clients to retrieve structured, machine-readable data about the resource from the web in different formats. 6) Cellosaurus RDF files download (https://lnkd.in/emuEYnMD). This allows you to download the Cellosaurus RDF files in Turtle (.ttl) format.
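As a flavour of what the Carnivora query mentioned above could look like in SPARQL - note the namespace, class, and property names below are illustrative placeholders, not the actual Cellosaurus ontology terms (see the ontology link in the post for those):

```sparql
PREFIX cvx:  <http://example.org/cellosaurus#>   # illustrative namespace
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Find all cell lines derived from species in the order Carnivora.
SELECT ?cellLine ?species
WHERE {
  ?cellLine a cvx:CellLine ;
            cvx:derivedFromSpecies ?species .
  ?species  cvx:memberOfOrder ?order .
  ?order    rdfs:label "Carnivora" .
}
```

Pasted into the SPARQL Editor, a query of this shape walks the graph from cell line to species to taxonomic order - exactly the kind of cross-entity navigation the post says triple stores are optimized for.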
Cellosaurus is now available in RDF format
·linkedin.com·
How do you explain the difference between Semantic Layers and Ontologies?
How do you explain the difference between Semantic Layers and Ontologies? That's the discussion I had yesterday with the CTO of a very large and well-known organization.
📊 Semantic Layers Today: The First Stepping Stone
• The semantic layer is commonly used in data analytics/BI reporting, tied to modeling fact/dimension tables and defining measures
• Data Lakehouse/Data Cloud, transformation tools, BI tools and semantic layer vendors exemplify this usage
• Provide descriptive metadata: definitions, calculations (e.g., revenue formulas), and human-readable labels, to enhance the schema
• Serve as a first step toward better data understanding and governance
• Help align glossary terms with tables and columns, improving metadata quality and documentation
• Typically proprietary (even if expressed in YAML) and not broadly interoperable
• Enable "chat with your data" experiences over the warehouse
When organizations need to integrate diverse data sources beyond the data warehouse/lakehouse model, they hit the limits of fact/dimension modeling. This is where ontologies and knowledge graphs come in.
🌍 Ontologies & Knowledge Graphs: Scaling Beyond BI
• Represent complex relationships, hierarchies, synonyms, and taxonomies that go beyond rigid table structures
• Knowledge graphs bridge the gap from technical metadata to business metadata and ultimately to core business concepts
• Enable the integration of all types of data (structured, semi-structured, unstructured) because a graph is a common model
• Through open web standards such as RDF, OWL and SPARQL you get interoperability without lock-in
Strategic Role in the Enterprise
• Knowledge graphs enable the creation of an enterprise brain, connecting disparate data and semantics across all systems inside an organization
• Represent the context and meaning that LLMs lack. Our research has proven this.
• They lay the groundwork for digital twins and what-if scenario modeling, powering advanced analytics and decision-making.
💡 Key Takeaway
The semantic layer is a first step, especially for BI use cases. Most organizations will start with one. This will eventually create semantic silos that are not inherently interoperable. Over time, organizations realize they need more than just local semantics for BI: they want to model real-world business assets and relationships across systems, and to define semantics once and reuse them across tools and platforms. This requires semantic interoperability, so the meaning behind data is not tied to one system. Large-scale enterprises operate across multiple systems, so interoperability is not optional, it's essential. To truly integrate and reason over enterprise data, you need ontologies and knowledge graphs built on open standards. They form the foundation for enterprise-wide semantic reuse, providing the flexibility, connectivity, and context required for next-generation analytics, AI, and enterprise intelligence.
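To make "define semantics once, reuse everywhere" concrete, here is a minimal sketch of an ontology fragment in OWL, written in Turtle. Everything in the ex: namespace is a hypothetical example, not taken from any vendor's model.

```turtle
# Minimal illustrative OWL ontology fragment (hypothetical names).
@prefix ex:   <http://example.org/enterprise#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Customer a owl:Class ;
    rdfs:label   "Customer" ;
    rdfs:comment "A party that has purchased at least one product." .

ex:Account a owl:Class ;
    rdfs:label "Account" .

ex:hasAccount a owl:ObjectProperty ;
    rdfs:domain ex:Customer ;
    rdfs:range  ex:Account .
```

Because this is plain RDF/OWL, any standards-compliant tool can load and reason over it, which is exactly the "interoperability without lock-in" argument the post makes.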
·linkedin.com·
the Ontology Pipeline
It's been a while since I have posted about the Ontology Pipeline. With parts borrowed from library science, the Ontology Pipeline is a simple framework for building rich knowledge infrastructures. Librarians are professional stewards of knowledge and have valuable methodologies for building information and knowledge systems for human and machine information retrieval tasks. While LinkedIn conversations seem to be wrestling with defining "what is the semantic layer", we are failing to see the root of semantics. Semantics matter because knowledge structures, not just layers, define semantics. Semantics are more than labels or concept maps. Semantics lend structure and meaning through relationships, disambiguation of concepts, definitions and context. The Ontology Pipeline is an iterative build process focused on ensuring data hygiene while minding domain data, information and knowledge. I share this framework because it is how I have successfully built information and knowledge ecosystems, with or without AI. #taxonomy #ontology #metadata #knowledgegraph #ia #ai Some friends focused on building knowledge infrastructures: Andrew Padilla, Nagim Ashufta, Ole Olesen-Bagneux, Jérémy Ravenel, Paco Nathan, Adriano Vlad-Starrabba, Andrea Gioia
·linkedin.com·
Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine
In this position paper "Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine", my L3S Research Center and TIB – Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek colleagues around Maria-Esther Vidal have nicely laid out some research challenges on the way to interpretable hybrid AI systems in medicine. However, I think the conceptual framework is broadly applicable well beyond medicine. For example, my former colleagues and PhD students at eccenca are working on operationalizing Neuro-Symbolic AI for Enterprise Knowledge Management with eccenca's Corporate Memory. The paper outlines a compelling architecture for combining sub-symbolic models (e.g., deep learning) with symbolic reasoning systems to enable AI that is interpretable, robust, and aligned with human values. eccenca implements these principles at scale through its neuro-symbolic Enterprise Knowledge Graph platform, Corporate Memory, in real-world industrial settings:
1. Symbolic Foundation via Semantic Web Standards - Corporate Memory is grounded in W3C standards (RDF, RDFS, OWL, SHACL, SPARQL), enabling formal knowledge representation, inferencing, and constraint validation. This makes it possible to encode domain ontologies, business rules, and data governance policies in a machine-interpretable and human-verifiable manner.
2. Integration of Sub-symbolic Components - it integrates LLMs and ML models for tasks such as schema matching, natural language interpretation, entity resolution, and ontology population. These are linked to the symbolic layer via mappings and annotations, ensuring traceability and explainability.
3. Neuro-Symbolic Interfaces for Hybrid Reasoning - hybrid workflows where symbolic constraints (e.g., SHACL shapes) guide LLM-based data enrichment. LLMs suggest schema alignments, which are verified against ontological axioms. Graph embeddings and path-based querying power semantic search and similarity.
4. Human-in-the-loop Interactions - domain experts interact through low-code interfaces and semantic UIs that allow inspection, validation, and refinement of both the symbolic and neural outputs, promoting human oversight and continuous improvement.
Such an approach can power industrial applications, e.g. digital thread integration in manufacturing, compliance automation in pharma and finance, and, in general, cross-domain interoperability in data mesh architectures. Corporate Memory is a practical instantiation of neuro-symbolic AI that meets industrial-grade requirements for governance, scalability, and explainability – key tenets of Human-Centric AI. Check it out here: https://lnkd.in/evyarUsR #NeuroSymbolicAI #HumanCentricAI #KnowledgeGraphs #EnterpriseArchitecture #ExplainableAI #SemanticWeb #LinkedData #LLM #eccenca #CorporateMemory #OntologyDrivenAI #AI4Industry
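Point 3 mentions SHACL shapes gating LLM-based enrichment. A minimal sketch of such a shape, in Turtle, with hypothetical names (not taken from Corporate Memory): any LLM-proposed ex:Employee node must carry exactly one string-valued name before it is admitted to the graph.

```turtle
# Hypothetical SHACL shape: every ex:Employee must have exactly one
# string-valued ex:name. Enrichment triples failing this are rejected.
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/enterprise#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:EmployeeShape a sh:NodeShape ;
    sh:targetClass ex:Employee ;
    sh:property [
        sh:path     ex:name ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
```

A standard SHACL validator returns a conformance report per node, which is what lets symbolic constraints act as the gatekeeper for sub-symbolic suggestions.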
·linkedin.com·
The Great Divide: Why Ontology and Data Architecture Teams Are Solving the Same Problems with Different Languages | LinkedIn
In enterprise organisations today, two important disciplines are working in parallel universes, tackling nearly identical challenges whilst speaking completely different languages. Ontology architects and data architects are both wrestling with ETL processes, data modelling, transformations, referen
·linkedin.com·
Everyone is talking about Semantic Layers, but what is a semantic layer?
Everyone is talking about Semantic Layers, but what is a semantic layer? Some of the latest hot topics for getting more out of your agents are knowledge graphs, vector search, semantics, and agent frameworks. A new and important area that encompasses all of the above is the notion that we need a stronger semantic layer on top of our data to provide structure, definitions, discoverability and more for our agents (human or other). While a lot of these concepts are not new, they have had to evolve to stay relevant in today's world, and this means there is a fair bit of confusion surrounding the whole area. Depending on your background (AI, ML, Library Sciences) and focus (LLM-first or Knowledge Graph), you will likely emphasize different aspects as being key to a semantic layer. I come primarily from an AI/ML/LLM-first world, but have built and utilized knowledge graphs for most of my career. Given my background, I of course have my own perspective on this; I tend to break things down to first principles, and I like to simplify. Given this preamble, here is what I think makes a semantic layer.
WHAT MAKES A SEMANTIC LAYER:
🟤 Scope
🟢 You should not create a semantic layer that covers everything in the world, nor even everything in your company. You can tie semantic layers together, but focus on the job to be done.
🟤 You will need to have semantics, obviously. There are two particular types of semantics that are important to include.
🟢 Vectors: These encapsulate semantics in a high-dimensional space so you can easily find similar concepts in your data.
🟢 Ontology (including Taxonomy): Explicitly define the meaning of your data in a structured and fact-based way, including appropriate vocabulary. This complements vectors superbly.
🟤 You need to respect the data and meet it where it is.
🟢 Structured data: For most companies, their data resides in data lakes of some sort and most of it is structured. There is power in this structure, but also noise. The semantic layer needs to understand this and map it into the semantics above.
🟢 Unstructured data: Most data is unstructured and resides all over the place. Often it is stored in object stores, or in databases as part of structured tables. There is a lot of information in the unstructured data that the semantic layer needs to map, and for that you need extraction, resolution, and a number of other techniques depending on the modality of the data.
🟤 You need to index the data
🟢 You will need to index all of this to make your data discoverable and retrievable. And this needs to scale.
🟢 You need tight integration between vectors, ontology/knowledge graph and keywords to make this seamless.
These are the four key components that are all needed for you to have a true semantic layer. Thoughts? #knowledgegraph, #semanticlayer, #agent, #rag
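The "tight integration" point can be partially illustrated in SPARQL: one query combining a keyword filter with an ontology-typed graph pattern. Vector similarity is vendor-specific and omitted here, and every ex: name is hypothetical.

```sparql
# Hypothetical sketch: keyword + ontology constraints in a single query.
PREFIX ex:   <http://example.org/semanticlayer#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?doc
WHERE {
  ?doc a ex:Document ;
       ex:mentions ?concept ;
       rdfs:label ?label .
  ?concept a ex:Product .                             # ontology/taxonomy constraint
  FILTER ( CONTAINS(LCASE(STR(?label)), "churn") )    # keyword constraint
}
```

In practice the vector leg runs in a vector index and its hits are joined with results like these, which is why the integration between the three retrieval modes has to be seamless.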
·linkedin.com·