how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
๐™๐™๐™ค๐™ช๐™œ๐™๐™ฉ ๐™›๐™ค๐™ง ๐™ฉ๐™๐™š ๐™™๐™–๐™ฎ: I've been mulling over how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates. In this way, the LLM can still be used for assessment and sensory feedback, but it augments the graph, not the other way around. OWL and SHACL serve different roles. SHACL is not just a preprocessing validator; it can play an active role in constraining, guiding, or triggering decisions, especially when integrated into AI pipelines. However, OWL is typically more central to inferencing and reasoning tasks. SHACL can actively participate in decision-making, especially when decisions require data integrity, constraint enforcement, or trigger-based logic. In complex agents, OWL provides the inferencing engine, while SHACL acts as the constraint gatekeeper and occasionally contributes to rule-based decision-making. For example, an AI agent processes RDF data describing an applicant's skills, degree, and experience. SHACL validates the data's structure, ensuring required fields are present and correctly formatted. OWL reasoning infers that the applicant is qualified for a technical role and matches the profile of a backend developer. SHACL is then used again to check policy compliance. With all checks passed, the applicant is shortlisted, and a follow-up email is triggered. In AI agent decision-making, OWL and SHACL often work together in complementary ways. SHACL is commonly used as a preprocessing step to validate incoming RDF data. If the data fails validation, it's flagged or excluded, ensuring only clean, structurally sound data reaches the OWL reasoner. In this role, SHACL acts as a gatekeeper. They can also operate in parallel or in an interleaved manner within a pipeline. As decisions evolve, SHACL shapes may be checked mid-process. 
Some AI agents even use SHACL as a rule engineโ€”to trigger alerts, detect actionable patterns, or constrain reasoning pathsโ€”while OWL continues to handle more complex semantic inferences, such as class hierarchies or property logic. Finally, SHACL can augment decision-making by confirming whether OWL-inferred actions comply with specific constraints. OWL may infer that โ€œA is a type of B, so do X,โ€ and SHACL then determines whether doing X adheres to a policy or requirement. Because SHACL supports closed-world assumptions (which OWL does not), it plays a valuable role in enforcing policies or compliance rules during decision execution. Illustrated:
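The validate → infer → re-validate pipeline in the applicant example can be sketched in miniature. This is a toy illustration in plain Python, not real SHACL or OWL (a production system would use a SHACL engine and an OWL reasoner over RDF); every field name, shape, and rule below is invented for illustration.

```python
# Toy sketch of the SHACL-gate -> OWL-infer -> SHACL-policy pipeline.
# Plain Python stands in for both standards; names are illustrative.

applicant = {
    "name": "Ada",
    "degree": "BSc Computer Science",
    "skills": {"python", "sql", "apis"},
    "experience_years": 4,
}

def shacl_like_shape_check(data):
    """Gatekeeper step: required fields present and well-typed."""
    required = {"name", "degree", "skills", "experience_years"}
    return required <= data.keys() and isinstance(data["experience_years"], int)

def owl_like_inference(data):
    """Reasoning step: derive new class memberships from asserted facts."""
    inferred = set()
    if data["degree"].startswith("BSc") and data["experience_years"] >= 3:
        inferred.add("QualifiedTechnicalCandidate")
    if {"python", "apis"} <= data["skills"]:
        inferred.add("BackendDeveloperProfile")
    return inferred

def shacl_like_policy_check(data, inferred):
    """Second SHACL pass: does the inferred action comply with policy?"""
    return "QualifiedTechnicalCandidate" in inferred and data["experience_years"] <= 40

actions = []
if shacl_like_shape_check(applicant):
    classes = owl_like_inference(applicant)
    if "BackendDeveloperProfile" in classes and shacl_like_policy_check(applicant, classes):
        actions.append("shortlist")
        actions.append("send_follow_up_email")

print(actions)  # -> ['shortlist', 'send_follow_up_email']
```

Note how the closed-world flavour of the checks (a missing field fails the shape) differs from the open-world OWL-style step, which only ever adds inferences.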
·linkedin.com·
What makes the "Semantic Data Product" so valid in data conversations today? 💡 Bounded Context and Right-to-Left Flow from consumers to raw materials.
What makes the "Semantic Data Product" so valid in data conversations today? 💡 Bounded Context and Right-to-Left Flow from consumers to raw materials. Tony Seale perfectly defines the value of bounded context: "…to sustain itself, a system must minimise its free energy - a measure of uncertainty. Minimising it equates to low internal entropy. A system achieves this by forming accurate predictions about the external env and updating its internal states accordingly, allowing for a dynamic yet stable interaction with its surroundings. Only possible on delineating a boundary between internal and external systems. Disconnected systems signal weak boundaries." Data Products enable a way to bind context to specific business purposes or use cases.

This enables data to become:
✅ Purpose-driven
✅ Accurately Discoverable
✅ Easily Understandable & Addressable
✅ Valuable as an independent entity

The Solution: the Data Product Model. A conceptual model that precisely captures the business context through an interface operable by business users or domain experts. We have often referred to this as the Data Product Prototype, which is essentially a semantic model and captures information on:
➡️ Popular metrics the business wants to drive
➡️ Measures & dimensions
➡️ Relationships & formulas
➡️ Further context with tags, descriptions, synonyms, & observability metrics
➡️ Quality SLOs - or simply, the conditions necessary
➡️ Additional policy specs contributed by Governance Stewards

Once the Prototype is validated and given a green flag, development efforts kick off. Note how all data engineering efforts (the left-hand side) are not looped in until this point, saving massive costs and time drainage. The DE teams, who have only a partial view of the business landscape, are no longer held accountable for this lack of strong business understanding. The ownership of the Data Product model is entirely with Business. 🫠

DEs have a blueprint to refer to and simply map sources or source data products to the prescribed Data Product Model. Any new request comes through this prototype itself, managed by Data Product Managers in collaboration with business users, dissolving all bottlenecks from centralised data engineering teams. At this level, the necessary transformations are delivered that:
🔌 activate the SLOs,
🔌 enable interoperability with native tools and upstream data products,
🔌 allow reusability of pre-existing transforms in the form of Source or Aggregate data products.
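The Data Product Prototype described above is, at heart, a small structured record. Here is a minimal sketch assuming the fields the post lists (metrics, measures, dimensions, SLOs, policy specs); none of these field names come from a standard schema - they are invented for illustration.

```python
# Illustrative shape of a "Data Product Prototype" record; all field
# names are assumptions based on the post, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class DataProductPrototype:
    name: str
    metrics: list                                       # metrics the business wants to drive
    measures: list = field(default_factory=list)
    dimensions: list = field(default_factory=list)
    quality_slos: dict = field(default_factory=dict)    # e.g. {"freshness_hours": 24}
    policy_specs: list = field(default_factory=list)    # contributed by governance stewards
    tags: list = field(default_factory=list)
    validated: bool = False                             # green flag before DE work starts

proto = DataProductPrototype(
    name="weekly_revenue",
    metrics=["revenue"],
    measures=["order_total"],
    dimensions=["week", "region"],
    quality_slos={"freshness_hours": 24},
    policy_specs=["pii_masked"],
)
proto.validated = True  # only now would data engineering efforts kick off
```

The point of the sketch: everything data engineering needs is captured before any pipeline work begins, which is the "right-to-left" ordering the post argues for.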
#datamanagement #dataproducts
·linkedin.com·
LLMs and Neurosymbolic reasoning
When people discuss how LLMs "reason," you'll often hear that they rely on transduction rather than abduction. It sounds technical, but the distinction matters - especially as we start wiring LLMs into systems that are supposed to think.

🔵 Transduction is case-to-case reasoning. It doesn't build theories; it draws fuzzy connections based on resemblance. Think: "This metal conducts electricity, and that one looks similar - so maybe it does too."

🔵 Abduction, by contrast, is about generating explanations. It's what scientists (and detectives) do: "This metal is conducting - maybe it contains free electrons. That would explain it."

The claim is that LLMs operate more like transducers - navigating high-dimensional spaces of statistical similarity rather than forming crisp generalisations. But this isn't the whole picture. In practice, it seems to me that LLMs also perform a kind of induction - abstracting general patterns from oceans of text. They learn the shape of ideas and apply them in novel ways. That's closer to "All metals of this type have conducted in the past, so this one probably will."

Now add tools to the mix - code execution, web search, Elon Musk's tweet history 😉 - and LLMs start doing something even more interesting: program search and synthesis. It's messy, probabilistic, and not at all principled or rigorous. But it's inching toward a form of abductive reasoning.

Which brings us to a more principled approach for reasoning within an enterprise domain: the neuro-symbolic loop - a collaboration between large language models and knowledge graphs. The graph provides structure: formal semantics, ontologies, logic, and depth. The LLM brings intuition: flexible inference, linguistic creativity, and breadth. One grounds. The other leaps.

💡 The real breakthrough could come when the grounding isn't just factual but conceptual - when the ontology encodes clean, meaningful generalisations. That's when the LLM's leaps wouldn't just reach further - they'd rise higher, landing on novel ideas that hold up under formal scrutiny.

💡 So where do metals fit into this new framing?
🔵 Transduction: "This metal conducts. That one looks the same - it probably does too."
🔵 Induction: "I've tested ten of these. All conducted. It's probably a rule."
🔵 Abduction: "This metal is conducting. It shares properties with the 'conductive alloy' class - especially composition and crystal structure. The best explanation is a sea of free electrons."

LLMs, in isolation, are limited in their ability to perform structured abduction. But when embedded in a system that includes a formal ontology, logical reasoning, and external tools, they can begin to participate in richer forms of reasoning. These hybrid systems are still far from principled scientific reasoners - but they hint at a path forward: a more integrated and disciplined neuro-symbolic architecture that moves beyond mere pattern completion.
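The "one grounds, the other leaps" loop can be caricatured in a few lines: a stubbed "LLM" guesses a class by surface resemblance (transduction), and a symbolic layer accepts the guess only if the ontology's membership conditions actually hold. All facts, properties, and class names here are toy data, not a real model or ontology.

```python
# Toy neuro-symbolic loop: resemblance proposes, the ontology disposes.

ontology = {
    # class -> properties required for membership (the "conceptual grounding")
    "ConductiveAlloy": {"has_free_electrons"},
}

known_samples = {
    "copper": {"has_free_electrons", "metallic_luster"},
    "bronze": {"has_free_electrons"},
}

def llm_like_transduction(sample_props):
    """Leap step: guess a class from surface resemblance to known cases."""
    best = max(known_samples, key=lambda k: len(known_samples[k] & sample_props))
    return "ConductiveAlloy" if known_samples[best] & sample_props else None

def symbolic_grounding(sample_props, claimed_class):
    """Ground step: accept the leap only if the ontology's conditions hold."""
    required = ontology.get(claimed_class, set())
    return claimed_class if required <= sample_props else None

sample = {"has_free_electrons", "metallic_luster"}
grounded = symbolic_grounding(sample, llm_like_transduction(sample))

# A shiny but non-conducting sample resembles copper, yet fails grounding:
lookalike = {"metallic_luster"}
rejected = symbolic_grounding(lookalike, llm_like_transduction(lookalike))

print(grounded, rejected)  # -> ConductiveAlloy None
```

The second case is the interesting one: pure resemblance says yes, the ontology says no - which is exactly the hallucination-suppression role the post assigns to the symbolic half.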
·linkedin.com·
Should ontologies be treated as organizational resources for semantic capabilities?
💡 Should ontologies be treated as organizational resources for semantic capabilities? More and more organizations are investing in data platforms, modeling tools, and integration frameworks. But one key capability is often underused or misunderstood: ontologies as semantic infrastructure. While databases handle facts and BI platforms handle queries, ontologies structure meaning. They define what things are, not just what data says. When treated as living organizational resources, ontologies can bring:
🔹 Shared understanding across silos
🔹 Reasoning and inference beyond data queries
🔹 Semantic integration of diverse systems
🔹 Clarity and coherence in enterprise models

But here's the challenge: ontologies don't operate in isolation. They must be positioned alongside:
🔸 Data-oriented technologies (RDF, RDF-star, quad stores) that track facts and provenance
🔸 Enterprise modeling tools (e.g., ArchiMate) that describe systems and views
🔸 Exploratory approaches (like semantic cartography) that support emergence over constraint

These layers each come with their own logic: epistemic vs. ontologic, structural vs. operational, contextual vs. formal.
✅ Building semantic capabilities requires aligning all these dimensions.
✅ It demands governance, tooling, and a culture of collaboration between ontologists, data managers, architects, and domain experts.
✅ And it opens the door to richer insight, smarter automation, and more agile knowledge flows.

🔍 With projects like ArchiCG (semantic interactive cartography), I aim to explore how we can visually navigate this landscape - not constrained by predefined viewpoints, but guided by logic, meaning, and emergent perspectives. What do you think? Are ontologies ready to take their place as core infrastructure in your organization?
·linkedin.com·
The Question That Changes Everything: "But This Doesn't Look Like an Ontology" | LinkedIn
After publishing my article on the Missing Semantic Center, a brilliant colleague asked me a question that gets to the heart of our technology stack: "But Tavi - this doesn't look like an OWL 2 DL ontology. What's going on here?" This question highlights a profound aspect of why systems have struggled…
·linkedin.com·
When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development?
🧠 When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development? Ontologies promise knowledge integration, traceability, reuse, and machine reasoning across the full engineering system lifecycle. From functional models to field failures, ontologies offer a way to encode and connect it all. 💥 However, ontologies are not a silver bullet. There are plenty of scenarios where an ontology is not just unnecessary; it might actually slow you down, confuse your team, or waste resources. So when exactly does the ontological approach become more burden than benefit? Based on my understanding and current work in this space, 🚀 for engineering design it's important to recognise situations where adopting a semantic model is not the most effective approach:

1. When tasks are highly localised and routine. If you're just tweaking part drawings, running standard FEA simulations, or updating well-established design details, then the knowledge already lives in your tools and practices. Adding an ontology might feel like installing a satellite dish to tune a local radio station.

2. When terminology is unstable or fragmented. Ontologies depend on consistent language. If every department speaks its own dialect and no one agrees on terms, you can't build shared meaning. You'll end up formalising confusion instead of clarifying it.

3. When speed matters more than structure. In prototyping labs, testing grounds, or urgent production lines, agility rules. Engineers solve problems fast, often through direct collaboration. Taking time to define formal semantics? Not always practical. Sometimes the best model is a whiteboard and a sharp marker.

4. When the knowledge won't be reused. Not all projects aim for longevity or cross-team learning. If you're building something once, for one purpose, with no intention of scaling or sharing, skip the ontology. It's like building a library catalog for a single book.

5. When the infrastructure isn't there. Ontological engineering isn't magic. It needs tools, training, and people who understand the stack. If your team lacks the skills or platforms, even the best-designed ontology will gather dust in a forgotten folder.

Use the right tool for the real problem. Ontologies are powerful, but not sacred. They shine when you need to connect knowledge across domains, ensure long-term traceability, or enable intelligent automation. But they're not a requirement for every task just because they're clever. The real challenge is not whether to use ontologies, but knowing when they genuinely improve clarity, consistency, and collaboration, and when they just complicate the obvious. 🧠 Feedback and critique are welcome; this is a living conversation. Felician Campean #KnowledgeManagement #SystemsEngineering #Ontology #MBSE #DigitalEngineering #RiskAnalysis #AIinEngineering #OntologyEngineering #SemanticInteroperability #SystemReliability #FailureAnalysis #KnowledgeIntegration
·linkedin.com·
Foundation Models Know Enough
LLMs already contain overlapping world models. You just have to ask them right. Ontologists reply to an LLM output, "That's not a real ontology - it's not a formal conceptualization." But that's just the No True Scotsman fallacy dressed up in OWL. Boring. Not growth-oriented. Look forward, angel. A foundation model is a compression of human knowledge. The real problem isn't that we "lack a conceptualization". The real problem with FMs is that they contain too many. FMs contain conceptualizations - plural. Messy? Sure. But usable. At Stardog, we're turning this latent structure into real ontologies using symbolic knowledge distillation: prompt orchestration → structure extraction → formal encoding. OWL, SHACL, and friends. Shake till mixed. Rinse. Repeat. Secret sauce simmered and reduced. This isn't theoretically hard. We avoid that. It's merely engineering hard. We LTF into that! But the payoff means bootstrapping rich, new ontologies at scale: faster, cheaper, with lineage. It's the intersection of FM latent space, formal ontology, and user intent expressed via competency questions (CQs). We call it the Symbolic Latent Layer (SLL). Cute, eh? The future of enterprise AI isn't just documents. It's distilling structured symbolic knowledge from LLMs and plugging it into agents, workflows, and reasoning engines. You don't need a priesthood to get a formal ontology anymore. You need a good prompt, a smarter pipeline, and the right EKG platform. There's a lot more to say about this, so I said it at Stardog Labs: https://lnkd.in/eY5Sibed
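The prompt orchestration → structure extraction → formal encoding pipeline might look roughly like this in miniature. Here the "LLM response" is hard-coded, extraction is a pair of regexes, and the output is Turtle-like strings rather than a real OWL/SHACL serialization - purely an illustrative sketch under those assumptions, not Stardog's actual implementation.

```python
# Toy distillation pipeline: (stubbed) LLM text -> tuples -> Turtle-ish.
import re

llm_response = """\
Customer is a kind of Party.
Supplier is a kind of Party.
Order placed_by Customer.
"""

def extract_structure(text):
    """Structure extraction: pull (subject, relation, object) tuples
    out of templated sentences."""
    triples = []
    for line in text.splitlines():
        m = re.match(r"(\w+) is a kind of (\w+)\.", line)
        if m:
            triples.append((m.group(1), "subClassOf", m.group(2)))
            continue
        m = re.match(r"(\w+) (\w+) (\w+)\.", line)
        if m:
            triples.append((m.group(1), m.group(2), m.group(3)))
    return triples

def encode_turtle(triples):
    """Formal encoding: render tuples as Turtle-like statements."""
    prop = {"subClassOf": "rdfs:subClassOf"}
    return [f":{s} {prop.get(p, ':' + p)} :{o} ." for s, p, o in triples]

triples = extract_structure(llm_response)
print(encode_turtle(triples)[0])  # -> :Customer rdfs:subClassOf :Party .
```

A real system would of course orchestrate prompts against a live model and validate the emitted ontology with a reasoner and SHACL shapes; the point of the sketch is just the shape of the loop, with lineage from sentence to statement.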
·linkedin.com·
Add a Semantic Layer โ€“ a smart translator that sits between your data sources and your business applications
Tired of being told that silos are gone? The real value comes from connecting them. 🔄 The myth of data silos: why they never really disappear, and how to turn them into your biggest advantage.

Even after heavy IT investment, data silos never truly go away; they simply evolve. In food production, I saw this first-hand: every system (ERP, quality, IoT, POS) stored data in its own format. Sometimes the same product ended up with different IDs across systems, batch information was fragmented, and data was dispersed in each silo. People often say, "Break down the silos." But in reality, that's nearly impossible. Businesses change, new tools appear, acquisitions happen, teams shift, new processes and production lines are launched. Silos are part of digital life. For years, I tried classic integrations. They helped a bit, but every change in one system caused more issues and even more integration work. I wish I had known then what I know now: stop trying to destroy silos. Start connecting them.

Here's what makes the difference: add a Semantic Layer - a smart translator that sits between your data sources and your business applications. It maps different formats and names into a common language, without changing your original systems. Put a Knowledge Graph on top and you don't just translate - you connect. Suddenly, all your data sources, even legacy silos, become part of a single network. Products, ingredients, machines, partners, and customers are all logically linked and understood across your business.

In practice, this means:
- Production uses real sales and shelf-life data.
- Sales sees live inventory, not outdated reports.
- Forecasting is based on trustworthy, aligned data.

That's the real shift: silos are not problems to kill, but assets to connect. With a Semantic Layer and a Knowledge Graph, data silos become trusted building blocks for your business intelligence. Better data, better ROI. If you've ever spent hours reconciling reports, you'll recognise this recurring pain in companies that haven't optimised their data integration with a semantic and KG approach. So: do you still treat silos as problems, or could they be your next competitive advantage if you connect them the right way? Meaningfy #DataSilos #SemanticLayer #KnowledgeGraph #BusinessData #DigitalTransformation
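The "smart translator" idea reduces to a key-mapping step plus a linking step. A minimal sketch, with invented ERP/POS field names standing in for real silo schemas:

```python
# Two silo records in different formats, mapped to one shared vocabulary
# without touching the source systems. All field names are invented.

erp_record = {"MATNR": "P-1001", "MENGE": 40}            # ERP-style keys
pos_record = {"product_id": "P-1001", "units_sold": 12}  # POS-style keys

SEMANTIC_MAP = {
    "erp": {"MATNR": "product", "MENGE": "stock_units"},
    "pos": {"product_id": "product", "units_sold": "sold_units"},
}

def translate(record, source):
    """Semantic layer step: silo-specific keys -> shared business vocabulary."""
    return {SEMANTIC_MAP[source][k]: v for k, v in record.items()}

def link_by_product(*views):
    """Knowledge-graph step: connect records that describe the same thing."""
    merged = {}
    for view in views:
        merged.setdefault(view["product"], {}).update(view)
    return merged

graph = link_by_product(translate(erp_record, "erp"), translate(pos_record, "pos"))
print(graph["P-1001"])  # sales now sees stock next to units sold
```

The silos keep their own formats; only the mapping layer knows both vocabularies - which is why adding a new silo means adding one mapping, not N point-to-point integrations.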
·linkedin.com·
Cellosaurus is now available in RDF format
Cellosaurus is now available in RDF format, with a triple store that supports SPARQL queries. If this sounds a bit abstract or unfamiliar…

1) RDF stands for Resource Description Framework. Think of RDF as a way to express knowledge using triplets: Subject - Predicate - Object. Example: HeLa (subject) - is_transformed_by (predicate) - Human papillomavirus type 18 (object). These triplets are like little facts that can be connected together to form a graph of knowledge.

2) A triple store is a database designed specifically to store and retrieve these RDF triplets. Unlike traditional databases (tables, rows), triple stores are optimized for linked data. They allow you to navigate connections between biological entities, like species, tissues, genes, diseases, etc.

3) SPARQL is a query language for RDF data. It lets you ask complex questions, such as:
- Find all cell lines with a *RAS (HRAS, NRAS, KRAS) mutation in p.Gly12
- Find all cell lines from animals belonging to the order Carnivora

More specifically, we now offer six new options from the Tool - API submenu:

1) SPARQL Editor (https://lnkd.in/eF2QMsYR). The SPARQL Editor is a tool designed to assist users in developing their SPARQL queries.
2) SPARQL Service (https://lnkd.in/eZ-iN7_e). The SPARQL service is the web service that accepts SPARQL queries over HTTP and returns results from the RDF dataset.
3) Cellosaurus Ontology (https://lnkd.in/eX5ExjMe). An RDF ontology is a formal, structured representation of knowledge. It explicitly defines domain-specific concepts - such as classes and properties - enabling data to be described with meaningful semantics that both humans and machines can interpret. The Cellosaurus ontology is expressed in OWL.
4) Cellosaurus Concept Hopper (https://lnkd.in/e7CH5nj4). The Concept Hopper is a tool that provides an alternative view of the Cellosaurus ontology. It focuses on a single concept at a time - either a class or a property - and shows how that concept is linked to others within the ontology, as well as how it appears in the data.
5) Cellosaurus dereferencing service (https://lnkd.in/eSATMhGb). The RDF dereferencing service is the mechanism that, given a URI, returns an RDF description of the resource identified by that URI, enabling clients to retrieve structured, machine-readable data about the resource from the web in different formats.
6) Cellosaurus RDF files download (https://lnkd.in/emuEYnMD). This allows you to download the Cellosaurus RDF files in Turtle (ttl) format.
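The triple-and-pattern idea behind a triple store can be shown with a toy in-memory version, using `None` as a wildcard where SPARQL would use a variable. The triples below are illustrative, not actual Cellosaurus data, and real queries would of course go through the SPARQL service.

```python
# Toy triple store: facts as (subject, predicate, object) tuples, and
# a pattern match standing in for a SPARQL basic graph pattern.

triples = {
    ("HeLa", "is_transformed_by", "Human papillomavirus type 18"),
    ("HeLa", "derived_from_species", "Homo sapiens"),
    ("MDCK", "derived_from_species", "Canis familiaris"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None matches anything."""
    return {
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    }

# Roughly the SPARQL query: SELECT ?o WHERE { :HeLa :is_transformed_by ?o }
answers = {o for (_, _, o) in match(s="HeLa", p="is_transformed_by")}
print(answers)  # -> {'Human papillomavirus type 18'}
```

Real triple stores index all three positions so these pattern lookups stay fast at billions of triples - that is the "optimized for linked data" point above.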
·linkedin.com·
How do you explain the difference between Semantic Layers and Ontologies?
How do you explain the difference between Semantic Layers and Ontologies? That's the discussion I had yesterday with the CTO of a very large and well-known organization.

📊 Semantic Layers Today: The First Stepping Stone
• The semantic layer is commonly used in data analytics/BI reporting, tied to modeling fact/dimension tables and defining measures
• DataLakehouse/Data Cloud, transformation tools, BI tools, and semantic layer vendors exemplify this usage
• Provide descriptive metadata: definitions, calculations (e.g., revenue formulas), and human-readable labels, to enhance the schema
• Serve as a first step toward better data understanding and governance
• Help align glossary terms with tables and columns, improving metadata quality and documentation
• Typically proprietary (even if expressed in YAML) and not broadly interoperable
• Enable "chat with your data" experiences over the warehouse

When organizations need to integrate diverse data sources beyond the data warehouse/lakehouse model, they hit the limits of fact/dimension modeling. This is where ontologies and knowledge graphs come in.

🌐 Ontologies & Knowledge Graphs: Scaling Beyond BI
• Represent complex relationships, hierarchies, synonyms, and taxonomies that go beyond rigid table structures
• Knowledge graphs bridge the gap from technical metadata to business metadata and ultimately to core business concepts
• Enable the integration of all types of data (structured, semi-structured, unstructured) because a graph is a common model
• Through open web standards such as RDF, OWL, and SPARQL you get interoperability without lock-in

Strategic Role in the Enterprise
• Knowledge graphs enable the creation of an enterprise brain, connecting disparate data and semantics across all systems inside an organization
• Represent the context and meaning that LLMs lack. Our research has proven this.
• They lay the groundwork for digital twins and what-if scenario modeling, powering advanced analytics and decision-making.

💡 Key Takeaway
The semantic layer is a first step, especially for BI use cases. Most organizations will start with them. This will eventually create semantic silos that are not inherently interoperable. Over time, organizations realize they need more than just local semantics for BI; they want to model real-world business assets and relationships across systems, and to define semantics once and reuse them across tools and platforms. This requires semantic interoperability, so the meaning behind data is not tied to one system. Large-scale enterprises operate across multiple systems, so interoperability is not optional; it's essential. To truly integrate and reason over enterprise data, you need ontologies and knowledge graphs with open standards. They form the foundation for enterprise-wide semantic reuse, providing the flexibility, connectivity, and context required for next-generation analytics, AI, and enterprise intelligence.
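The contrast can be made concrete: a BI-style semantic-layer metric is metadata bound to specific warehouse tables, while ontology statements describe the business domain itself and can be reused from any tool. Everything below is illustrative - invented names, not any vendor's actual format.

```python
# Semantic layer: descriptive metadata tied to warehouse tables
# (what many YAML-based metric definitions boil down to).
semantic_layer_metric = {
    "name": "revenue",
    "table": "fact_orders",
    "expression": "SUM(order_total)",
    "label": "Total Revenue",
}

# Ontology: tool-independent statements about the domain itself.
ontology_triples = [
    ("Order", "rdfs:subClassOf", "BusinessTransaction"),
    ("Order", "placed_by", "Customer"),
    ("revenue", "measures", "Order"),
]

def concepts_about(term, triples):
    """Everything the ontology says involving a term - reusable anywhere,
    unlike a metric definition that only means something to one BI stack."""
    return [t for t in triples if term in t]

order_facts = concepts_about("Order", ontology_triples)
print(order_facts)
```

Notice that the metric's meaning lives in a SQL expression over one table, while the ontology relates concepts to each other - which is why the latter survives a change of warehouse or BI tool and the former generally does not.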
·linkedin.com·
The Great Divide: Why Ontology and Data Architecture Teams Are Solving the Same Problems with Different Languages | LinkedIn
In enterprise organisations today, two important disciplines are working in parallel universes, tackling nearly identical challenges whilst speaking completely different languages. Ontology architects and data architects are both wrestling with ETL processes, data modelling, transformations, referen…
·linkedin.com·
Everyone is talking about Semantic Layers, but what is a semantic layer?
Everyone is talking about Semantic Layers, but what is a semantic layer? Some of the latest hot topics for getting more out of your agents include knowledge graphs, vector search, semantics, and agent frameworks. A new and important area that encompasses all of these is the notion that we need a stronger semantic layer on top of our data to provide structure, definitions, discoverability, and more for our agents (human or otherwise). While a lot of these concepts are not new, they have had to evolve to stay relevant in today's world, and this means there is a fair bit of confusion surrounding the whole area. Depending on your background (AI, ML, library sciences) and focus (LLM-first or knowledge-graph-first), you will likely emphasize different aspects as being key to a semantic layer. I come primarily from an AI/ML/LLM-first world, but have built and utilized knowledge graphs for most of my career. Given my background, I of course have my own perspective on this; I tend to break things down to first principles, and I like to simplify. Given this preamble, here is what I think makes a semantic layer.

WHAT MAKES A SEMANTIC LAYER:
🟤 Scope
🟢 You should not create a semantic layer that covers everything in the world, nor even everything in your company. You can tie semantic layers together, but focus on the job to be done.
🟤 Semantics, obviously. There are two particular types of semantics that are important to include.
🟢 Vectors: These encapsulate semantics in a high-dimensional space so you can easily find similar concepts in your data.
🟢 Ontology (including taxonomy): Explicitly define the meaning of your data in a structured and fact-based way, including appropriate vocabulary. This complements vectors superbly.
🟤 Respect the data and meet it where it is.
🟢 Structured data: For most companies, data resides in data lakes of some sort, and most of it is structured. There is power in this structure, but also noise. The semantic layer needs to understand this and map it into the semantics above.
🟢 Unstructured data: Most data is unstructured and resides all over the place. Often it is stored in object stores, or in databases as part of structured tables. There is a lot of information in unstructured data that the semantic layer needs to map - and for that you need extraction, resolution, and a number of other techniques based on the modality of the data.
🟤 Index the data
🟢 You will need to index all of this to make your data discoverable and retrievable. And this needs to scale.
🟢 You need tight integration between vectors, the ontology/knowledge graph, and keywords to make this seamless.

These are four key components that are all needed for you to have a true semantic layer. Thoughts? #knowledgegraph, #semanticlayer, #agent, #rag
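The vectors-plus-ontology pairing above can be sketched with a toy cosine-similarity lookup beside a tiny taxonomy: the vector half finds look-alike concepts, the ontology half says what they explicitly are. The vectors and terms are made up for illustration; a real system would use learned embeddings and a proper knowledge graph.

```python
# Toy "vectors + ontology" retrieval: similarity proposes neighbours,
# the taxonomy adds explicit, fact-based meaning.
import math

vectors = {
    "invoice":  [0.9, 0.1, 0.0],
    "receipt":  [0.8, 0.2, 0.1],
    "employee": [0.0, 0.1, 0.9],
}

taxonomy = {
    "invoice": "FinancialDocument",
    "receipt": "FinancialDocument",
    "employee": "Person",
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(term):
    """Vector half: which other concept is most similar in embedding space?"""
    return max((c for c in vectors if c != term),
               key=lambda c: cosine(vectors[term], vectors[c]))

hit = nearest("invoice")
print(hit, taxonomy[hit])  # similar concept plus its explicit class
```

The index the post calls for is what ties these together at scale: one lookup structure over vectors, graph terms, and keywords, so a query can move between "looks similar" and "is a" without leaving the layer.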
·linkedin.com·
Why AI Hallucinates: The Shallow Semantics Problem | LinkedIn
By J Bittner. Part 1 in our 5-part series: From Hallucination to Reasoning - The Case for Ontology-Driven AI. Welcome to "Semantically Speaking" - a new series on what makes AI systems genuinely trustworthy, explainable, and future-proof. This is Part 1 in a 5-part journey, exploring why so many AI systems…
·linkedin.com·
Why AI Hallucinates: The Shallow Semantics Problem | LinkedIn
Want to Fix LLM Hallucination? Neurosymbolic Alone Wonโ€™t Cut It
Want to Fix LLM Hallucination? Neurosymbolic Alone Wonโ€™t Cut It
Want to Fix LLM Hallucination? Neurosymbolic Alone Won't Cut It. The Conversation's new piece makes a clear case for neurosymbolic AI — integrating symbolic logic with statistical learning — as the long-term fix for LLM hallucinations. It's a timely and necessary argument: "No matter how large a language model gets, it can't escape its fundamental lack of grounding in rules, logic, or real-world structure. Hallucination isn't a bug, it's the default." But what's crucial — and often glossed over — is that symbolic logic alone isn't enough. The real leap comes from adding formal ontologies and semantic constraints that make meaning machine-computable. OWL, the Shapes Constraint Language (SHACL), and frameworks like BFO, the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE), the Suggested Upper Merged Ontology (SUMO), and the Common Core Ontologies (CCO) don't just "represent rules" — they define what exists, what can relate, and under what conditions inference is valid. That's the difference between "decorating" a knowledge graph and engineering one that can detect, explain, and prevent hallucinations in practice. I'd go further:
• Most enterprise LLM hallucinations are just semantic errors — mislabeling, misattribution, or class confusion that only formal ontologies can prevent.
• Neurosymbolic systems only deliver if their symbolic half is grounded in ontological reality, not just handcrafted rules or taxonomies.
The upshot: we need to move beyond mere integration of symbols and neurons. We need semantic scaffolding — ontologies as infrastructure — to ensure AI isn't just fluent, but actually right. Curious if others are layering formal ontologies (BFO, DOLCE, SUMO) into their AI stacks yet? Or are we still hoping that more compute and prompt engineering will do the trick? #NeuroSymbolicAI #SemanticAI #Ontology #LLMs #AIHallucination #KnowledgeGraphs #AITrust #AIReasoning
Want to Fix LLM Hallucination? Neurosymbolic Alone Wonโ€™t Cut It
·linkedin.com·
Want to Fix LLM Hallucination? Neurosymbolic Alone Wonโ€™t Cut It
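The post above claims that most enterprise LLM hallucinations are semantic errors (mislabeling, misattribution, class confusion) that ontological constraints can catch. A toy illustration of that idea, in the spirit of OWL domain/range axioms, is checking an LLM-extracted triple against a small hand-written ontology. The class names, properties, and `check_assertion` helper are all invented for this sketch; a real pipeline would run a SHACL engine or OWL reasoner over RDF data rather than a Python dict.

```python
# Invented mini-ontology: each property declares the class its subject and
# object must belong to (analogous to OWL rdfs:domain / rdfs:range).
ONTOLOGY = {
    "employedBy": ("Person", "Organization"),
    "authorOf":   ("Person", "Document"),
}

# Invented instance data: which class each named individual belongs to.
INSTANCES = {
    "alice": "Person",
    "acme":  "Organization",
    "memo1": "Document",
}

def check_assertion(subj, prop, obj):
    """Return constraint violations for one (subject, property, object) triple."""
    violations = []
    domain, rng = ONTOLOGY[prop]
    if INSTANCES.get(subj) != domain:
        violations.append(f"{subj} is not a {domain}")
    if INSTANCES.get(obj) != rng:
        violations.append(f"{obj} is not a {rng}")
    return violations

# A plausible-but-wrong extraction: a document cannot be employed by anyone.
errors = check_assertion("memo1", "employedBy", "acme")
```

A fluent LLM can happily emit "memo1 employedBy acme"; the constraint check rejects it with an explanation, which is the "detect, explain, and prevent" behavior the post attributes to ontology-backed graphs.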
From Dictionaries to Ontologies: Bridging Human Understanding and Machine Reasoning | LinkedIn
From Dictionaries to Ontologies: Bridging Human Understanding and Machine Reasoning | LinkedIn
In the long tradition of dictionaries, the essence of meaning has always relied on two elements: a symbol (usually a word or a phrase) and a definition — an intelligible explanation composed using other known terms. This recursive practice builds a web of meanings, where each term is explained using o
·linkedin.com·
From Dictionaries to Ontologies: Bridging Human Understanding and Machine Reasoning | LinkedIn
Semantically Composable Architectures
Semantically Composable Architectures
I'm happy to share the draft of the "Semantically Composable Architectures" mini-paper. It is the culmination of approximately four years' work, which began with Coreless Architectures and has now evolved into something much bigger.
LLMs are impressive, but a real breakthrough will occur once we surpass the cognitive capabilities of a single human brain. Enabling autonomous large-scale system reverse engineering and large-scale autonomous transformation with minimal to no human involvement, while still making it understandable to humans if they choose to, is a central pillar of making truly groundbreaking changes. We hope the ideas we shared will be beneficial to humanity and advance our civilization further.
It is not final and will require some clarification and improvements, but the key concepts are present. Happy to hear your thoughts and feedback. Some of these concepts underpin the design of the Product X system.
Part of the core team + external contribution: Andrew Barsukov, Andrey Kolodnitsky, Sapta Girisa N, Keith E. Glendon, Gurpreet Sachdeva, Saurav Chandra, Mike Diachenko, Oleh Sinkevych
Semantically Composable Architectures
·linkedin.com·
Semantically Composable Architectures
Standing on Giants' Shoulders: What Happens When Formal Ontology Meets Modern Verification? 🚀 | LinkedIn
Standing on Giants' Shoulders: What Happens When Formal Ontology Meets Modern Verification? 🚀 | LinkedIn
Building on Decades of Foundational Research The formal ontology community has given us incredible foundations - Barry Smith's BFO framework, Alan Ruttenberg's CLIF axiomatizations, and Microsoft Research's Z3 theorem prover. What happens when we combine these mature technologies with modern graph d
·linkedin.com·
Standing on Giants' Shoulders: What Happens When Formal Ontology Meets Modern Verification? 🚀 | LinkedIn
Unified Foundational Ontology
Unified Foundational Ontology
On request, this is the complete slide deck I used in my course at the C-FORS summer school on Foundational Ontologies (see https://lnkd.in/e9Af5JZF) at the University of Oslo, Norway. If you want to know more, here are some papers related to the talk:
On the ontology itself:
a) for a gentle introduction to UFO: https://lnkd.in/egS5FsQ
b) to understand the UFO history and ecosystem (including OntoUML): https://lnkd.in/emCaX5pF
c) a more formal paper on the axiomatization of UFO, but also with examples (in OntoUML): https://lnkd.in/e_bUuTMa
d) focusing on UFO's theory of Types and Taxonomic Structures: https://lnkd.in/eGPXHeh
e) focusing on its Theory of Relations (including relationship reification): https://lnkd.in/eTFFRBy8 and https://lnkd.in/eMNmi7-B
f) focusing on Qualities and Modes (aspect reification): https://lnkd.in/eNXbrKrW and https://lnkd.in/eQtNC9GH
g) focusing on events and processes: https://lnkd.in/e3Z8UrCD, https://lnkd.in/ePZEaJh9, https://lnkd.in/eYnirFv6, https://lnkd.in/ev-cb7_e, https://lnkd.in/e_nTwBc7
On the tools:
a) Model Auto-repair and Constraint Learning: https://lnkd.in/esuYSU9i
b) Model Validation and Anti-Pattern Detection: https://lnkd.in/e2SxvVzS
c) Ontological Patterns and Pattern Grammars: https://lnkd.in/exMFMgpT and https://lnkd.in/eCeRtMNz
d) Multi-Level Modeling: https://lnkd.in/eVavvURk and https://lnkd.in/e8t3sMdU
e) Complexity Management: https://lnkd.in/eq3xWp-U
f) FAIR catalog of models and Pattern Mining: https://lnkd.in/eaN5d3QR and https://lnkd.in/ecjhfp8e
g) Anti-Patterns on Wikidata: https://lnkd.in/eap37SSU
h) Model Transformation/implementation: https://lnkd.in/eh93u5Hg, https://lnkd.in/e9bU_9NC, https://lnkd.in/eQtNC9GH, https://lnkd.in/esGS8ZTb
#ontology #UFO #ontologies #foundationalontology #toplevelontology #TLO Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·
Unified Foundational Ontology
Personal Knowledge Domain
Personal Knowledge Domain
๐™๐™๐™ค๐™ช๐™œ๐™๐™ฉ ๐™›๐™ค๐™ง ๐™ฉ๐™๐™š ๐˜ฟ๐™–๐™ฎ: What if we could encapsulate everything a person knowsโ€”their entire bubble of knowledge, what Iโ€™d call a Personal Knowledge Domain or better, our ๐™Ž๐™š๐™ข๐™–๐™ฃ๐™ฉ๐™ž๐™˜ ๐™Ž๐™š๐™ก๐™›, and represent it in an RDF graph? From that foundation, we could create Personal Agents that act on our behalf. Each of us would own our agent, with the ability to share or lease it for collaboration with other agents. If we could make these agents secure, continuously updatable, and interoperable, what kind of power might we unlock for the human race? Is this idea so far-fetched? It has solid grounding in knowledge representation, identity theory, and agent-based systems. It fits right in with current trends: AI assistants, the semantic web, Web3 identity, and digital twins. Yes, the technical and ethical hurdles are significant, but this could become the backbone of a future architecture for personalized AI and cooperative knowledge ecosystems. Pieces of the puzzle already exist: Tim Berners-Leeโ€™s Solid Project, digital twins for individuals, Personal AI platforms like personal.ai, Retrieval-Augmented Language Model agents (ReALM), and Web3 identity efforts such as SpruceID, architectures such as MCP and inter-agent protocols such as A2A. We see movement in human-centric knowledge graphs like FOAF and SIOC, learning analytics, personal learning environments, and LLM-graph hybrids. What we still need is a unified architecture that: * Employs RDF or similar for semantic richness * Ensures user ownership and true portability * Enables secure agent-to-agent collaboration * Supports continuous updates and trust mechanisms * Integrates with LLMs for natural, contextual reasoning These are certainly not novel notions, for example: * MyPDDL (My Personal Digital Life) and the PDS (Personal Data Store) concept from MIT and the EUโ€™s DECODE project. 
* The Human-Centric AI Group at Stanford and the Augmented Social Cognition group at PARC have also published research around lifelong personal agents and social memory systems. However, one wonders if anyone is working on combining all of the ingredients into a fully baked cake - after which we can enjoy dessert while our personal agents do our bidding. | 21 comments on LinkedIn
Personal Knowledge Domain
·linkedin.com·
Personal Knowledge Domain
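The Personal Knowledge Domain post above proposes representing a person's knowledge as an RDF graph that agents can query on their behalf. The smallest useful version of that is a set of triples with pattern matching, sketched below in plain Python. The subjects, predicates, and the `query` helper are invented for illustration; a real implementation would use RDF proper with SPARQL (e.g. via rdflib), plus the access control and agent protocols the post calls for.

```python
# A tiny Personal Knowledge Domain as an RDF-style set of
# (subject, predicate, object) triples. All names are illustrative.
TRIPLES = {
    ("me", "knowsTopic", "knowledge_graphs"),
    ("me", "knowsTopic", "python"),
    ("me", "worksWith", "agent_bob"),
    ("agent_bob", "ownedBy", "colleague"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional (s, p, o) pattern; None is a wildcard."""
    return {
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    }

# A personal agent answering "what do I know about?" from the graph,
# rather than from an LLM's parametric memory.
topics = {o for (_, _, o) in query("me", "knowsTopic")}
```

Even this toy shows the post's core move: the agent's answers are grounded in triples the owner controls and can update, with the LLM relegated to phrasing and interpretation rather than acting as the source of facts.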