Enabling Industrial AI: How Siemens and AIT Leverage TDengine and Ontop to Help TCG UNITECH Boost Productivity and Efficiency
I'm extremely excited to announce that Siemens and AIT Austrian Institute of Technology—two leaders in industrial innovation—chose TDengine as the time-series backbone for a groundbreaking project at TCG Unitech GmbH!

Here’s the magic: Imagine stitching together over a thousand time-series signals per machine with domain knowledge, and connecting it all through an intelligent semantic layer. With TDengine capturing high-frequency sensor data, PostgreSQL holding production context, and Ontopic virtualizing everything into a cohesive knowledge graph—this isn’t just data collection. It’s an orchestration that reveals hidden patterns, powers real-time anomaly and defect detection, supports traceability, and enables explainable root-cause analysis.

And none of this works without good semantics. The system understands the relationships—between sensors, machines, processes, and defects—which means both AI and humans can ask the right questions and get meaningful, actionable answers.

For me, this is the future of smart manufacturing: when data, infrastructure, and domain expertise come together, you get proactive, explainable, and scalable insights that keep factories running at peak performance.

It's a true pleasure working with Stefan B. from Siemens AG Österreich, Stephan Strommer and David Gruber from AIT, Peter Hopfgartner from Ontopic and our friends Klaus Neubauer, Herbert Kerbl, Bernhard Schmiedinger from TCG on this technical blog! We hope this will bring some good insights into how time-series data and semantics can transform the operations of modern manufacturing!

Read the full case study: https://lnkd.in/gtuf8KzU
·linkedin.com·
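A minimal sketch of how a downstream client might query a virtualized knowledge graph of this kind. It assumes a local Ontop SPARQL endpoint; the endpoint URL, namespace, and class/property names are invented for illustration and are not the schema used in the case study.

```python
# Sketch: query a virtualized knowledge graph (e.g. an Ontop SPARQL endpoint)
# that federates high-frequency sensor readings with production context.
# Endpoint URL and vocabulary below are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:8080/sparql"  # assumed local Ontop endpoint

QUERY = """
PREFIX ex: <http://example.org/manufacturing#>
SELECT ?machine ?sensor ?value ?timestamp
WHERE {
  ?machine a ex:InjectionMouldingMachine ;
           ex:hasSensor ?sensor .
  ?reading ex:observedBy ?sensor ;
           ex:hasValue ?value ;
           ex:hasTimestamp ?timestamp .
  FILTER (?value > 250.0)        # e.g. look for over-temperature readings
}
ORDER BY DESC(?timestamp)
LIMIT 20
"""

def fetch_anomalous_readings() -> None:
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["machine"]["value"], row["sensor"]["value"],
              row["value"]["value"], row["timestamp"]["value"])

if __name__ == "__main__":
    fetch_anomalous_readings()
```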
Semantic Data in Medallion Architecture: Enterprise Knowledge Graphs at Scale | LinkedIn
Building Enterprise Knowledge Graphs Within Modern Data Platforms - Version 26. By Louie Franco III, Enterprise Architect - Knowledge Graph Architect - Semantics Architect. August 3, 2025. In my previous article on Data Vault Medallion Architecture, I outlined how structured data flows through Landing, Bronze…
·linkedin.com·
Baking π and Building Better AI | LinkedIn
I've spent long, hard years learning how to talk about knowledge graphs and semantics with software engineers who have little training in linguistics. I feel quite fluent at this point, after investing huge amounts of effort into understanding statistics (I was a humanities undergrad) and into unpacking…
·linkedin.com·
how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates
𝙏𝙝𝙤𝙪𝙜𝙝𝙩 𝙛𝙤𝙧 𝙩𝙝𝙚 𝙙𝙖𝙮: I've been mulling over how both OWL and SHACL can be employed during the decision-making phase for AI Agents when using a knowledge graph instead of relying on an LLM that hallucinates. In this way, the LLM can still be used for assessment and sensory feedback, but it augments the graph, not the other way around.

OWL and SHACL serve different roles. SHACL is not just a preprocessing validator; it can play an active role in constraining, guiding, or triggering decisions, especially when integrated into AI pipelines. However, OWL is typically more central to inferencing and reasoning tasks. SHACL can actively participate in decision-making, especially when decisions require data integrity, constraint enforcement, or trigger-based logic. In complex agents, OWL provides the inferencing engine, while SHACL acts as the constraint gatekeeper and occasionally contributes to rule-based decision-making.

For example, an AI agent processes RDF data describing an applicant's skills, degree, and experience. SHACL validates the data's structure, ensuring required fields are present and correctly formatted. OWL reasoning infers that the applicant is qualified for a technical role and matches the profile of a backend developer. SHACL is then used again to check policy compliance. With all checks passed, the applicant is shortlisted, and a follow-up email is triggered.

In AI agent decision-making, OWL and SHACL often work together in complementary ways. SHACL is commonly used as a preprocessing step to validate incoming RDF data. If the data fails validation, it's flagged or excluded, ensuring only clean, structurally sound data reaches the OWL reasoner. In this role, SHACL acts as a gatekeeper. They can also operate in parallel or in an interleaved manner within a pipeline. As decisions evolve, SHACL shapes may be checked mid-process. Some AI agents even use SHACL as a rule engine—to trigger alerts, detect actionable patterns, or constrain reasoning paths—while OWL continues to handle more complex semantic inferences, such as class hierarchies or property logic.

Finally, SHACL can augment decision-making by confirming whether OWL-inferred actions comply with specific constraints. OWL may infer that “A is a type of B, so do X,” and SHACL then determines whether doing X adheres to a policy or requirement. Because SHACL supports closed-world assumptions (which OWL does not), it plays a valuable role in enforcing policies or compliance rules during decision execution. Illustrated:
·linkedin.com·
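A minimal sketch of the validate → infer → re-check loop the post describes, using rdflib, owlrl (OWL RL reasoning), and pySHACL. The applicant ontology, shapes, data, and the "shortlist" action are toy stand-ins, not a production agent.

```python
# Sketch of the SHACL -> OWL -> SHACL decision loop described above.
from rdflib import Graph
import owlrl
from pyshacl import validate

data = Graph().parse(data="""
@prefix ex: <http://example.org/hr#> .
ex:alice a ex:Applicant ;
    ex:hasDegree "BSc Computer Science" ;
    ex:hasSkill ex:Python, ex:SQL ;
    ex:yearsExperience 5 .
""", format="turtle")

ontology = Graph().parse(data="""
@prefix ex: <http://example.org/hr#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Applicant rdfs:subClassOf ex:Person .
""", format="turtle")

shapes = Graph().parse(data="""
@prefix ex: <http://example.org/hr#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
ex:ApplicantShape a sh:NodeShape ;
    sh:targetClass ex:Applicant ;
    sh:property [ sh:path ex:hasDegree ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:yearsExperience ; sh:minCount 1 ] .
""", format="turtle")

# 1. SHACL as gatekeeper: only structurally sound data proceeds.
conforms, _, report = validate(data, shacl_graph=shapes, ont_graph=ontology)
if not conforms:
    raise ValueError(f"Input rejected by SHACL:\n{report}")

# 2. OWL reasoning: materialise inferred triples (e.g. ex:alice a ex:Person).
data += ontology
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(data)

# 3. SHACL again as policy check on the enriched graph, then act.
conforms, _, report = validate(data, shacl_graph=shapes)
if conforms:
    print("All checks passed: shortlist applicant and trigger follow-up email.")
```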
What makes the "𝐒𝐞𝐦𝐚𝐧𝐭𝐢𝐜 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐝𝐮𝐜𝐭" so valid in data conversations today?💡 𝐁𝐨𝐮𝐧𝐝𝐞𝐝 𝐂𝐨𝐧𝐭𝐞𝐱𝐭 and Right-to-Left Flow from consumers to raw materials.
What makes the "𝐒𝐞𝐦𝐚𝐧𝐭𝐢𝐜 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐝𝐮𝐜𝐭" so valid in data conversations today?💡 𝐁𝐨𝐮𝐧𝐝𝐞𝐝 𝐂𝐨𝐧𝐭𝐞𝐱𝐭 and Right-to-Left Flow from consumers to raw materials.

Tony Seale perfectly defines the value of bounded context. …𝘵𝘰 𝘴𝘶𝘴𝘵𝘢𝘪𝘯 𝘪𝘵𝘴𝘦𝘭𝘧, 𝘢 𝘴𝘺𝘴𝘵𝘦𝘮 𝘮𝘶𝘴𝘵 𝘮𝘪𝘯𝘪𝘮𝘪𝘴𝘦 𝘪𝘵𝘴 𝘧𝘳𝘦𝘦 𝘦𝘯𝘦𝘳𝘨𝘺- 𝘢 𝘮𝘦𝘢𝘴𝘶𝘳𝘦 𝘰𝘧 𝘶𝘯𝘤𝘦𝘳𝘵𝘢𝘪𝘯𝘵𝘺. 𝘔𝘪𝘯𝘪𝘮𝘪𝘴𝘪𝘯𝘨 𝘪𝘵 𝘦𝘲𝘶𝘢𝘵𝘦𝘴 𝘵𝘰 𝘭𝘰𝘸 𝘪𝘯𝘵𝘦𝘳𝘯𝘢𝘭 𝘦𝘯𝘵𝘳𝘰𝘱𝘺. 𝘈 𝘴𝘺𝘴𝘵𝘦𝘮 𝘢𝘤𝘩𝘪𝘦𝘷𝘦𝘴 𝘵𝘩𝘪𝘴 𝘣𝘺 𝘧𝘰𝘳𝘮𝘪𝘯𝘨 𝘢𝘤𝘤𝘶𝘳𝘢𝘵𝘦 𝘱𝘳𝘦𝘥𝘪𝘤𝘵𝘪𝘰𝘯𝘴 𝘢𝘣𝘰𝘶𝘵 𝘵𝘩𝘦 𝘦𝘹𝘵𝘦𝘳𝘯𝘢𝘭 𝘦𝘯𝘷 𝘢𝘯𝘥 𝘶𝘱𝘥𝘢𝘵𝘪𝘯𝘨 𝘪𝘵𝘴 𝘪𝘯𝘵𝘦𝘳𝘯𝘢𝘭 𝘴𝘵𝘢𝘵𝘦𝘴 𝘢𝘤𝘤𝘰𝘳𝘥𝘪𝘯𝘨𝘭𝘺, 𝘢𝘭𝘭𝘰𝘸𝘪𝘯𝘨 𝘧𝘰𝘳 𝘢 𝘥𝘺𝘯𝘢𝘮𝘪𝘤 𝘺𝘦𝘵 𝘴𝘵𝘢𝘣𝘭𝘦 𝘪𝘯𝘵𝘦𝘳𝘢𝘤𝘵𝘪𝘰𝘯 𝘸𝘪𝘵𝘩 𝘪𝘵𝘴 𝘴𝘶𝘳𝘳𝘰𝘶𝘯𝘥𝘪𝘯𝘨𝘴. 𝘖𝘯𝘭𝘺 𝘱𝘰𝘴𝘴𝘪𝘣𝘭𝘦 𝘰𝘯 𝘥𝘦𝘭𝘪𝘯𝘦𝘢𝘵𝘪𝘯𝘨 𝘢 𝘣𝘰𝘶𝘯𝘥𝘢𝘳𝘺 𝘣𝘦𝘵𝘸𝘦𝘦𝘯 𝘪𝘯𝘵𝘦𝘳𝘯𝘢𝘭 𝘢𝘯𝘥 𝘦𝘹𝘵𝘦𝘳𝘯𝘢𝘭 𝘴𝘺𝘴𝘵𝘦𝘮𝘴. 𝘋𝘪𝘴𝘤𝘰𝘯𝘯𝘦𝘤𝘵𝘦𝘥 𝘴𝘺𝘴𝘵𝘦𝘮𝘴 𝘴𝘪𝘨𝘯𝘢𝘭 𝘸𝘦𝘢𝘬 𝘣𝘰𝘶𝘯𝘥𝘢𝘳𝘪𝘦𝘴.

Data Products enable a way to bind context to specific business purposes or use cases. This enables data to become:
✅ Purpose-driven
✅ Accurately Discoverable
✅ Easily Understandable & Addressable
✅ Valuable as an independent entity

𝐓𝐡𝐞 𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧: The Data Product Model. A conceptual model that precisely captures the business context through an interface operable by business users or domain experts. We have often referred to this as The Data Product Prototype, which is essentially a semantic model and captures information on:
➡️ Popular Metrics the Business wants to drive
➡️ Measures & Dimensions
➡️ Relationships & formulas
➡️ Further context with tags, descriptions, synonyms, & observability metrics
➡️ Quality SLOs - or simply, conditions necessary
➡️ Additional policy specs contributed by Governance Stewards

Once the Prototype is validated and given a green flag, development efforts kickstart. Note how all data engineering efforts (left-hand side) are not looped in until this point, saving massive costs and time drainage. The DE teams, who only have a partial view of the business landscape, are now no longer held accountable for this lack in strong business understanding.

𝐓𝐡𝐞 𝐨𝐰𝐧𝐞𝐫𝐬𝐡𝐢𝐩 𝐨𝐟 𝐭𝐡𝐞 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐝𝐮𝐜𝐭 𝐦𝐨𝐝𝐞𝐥 𝐢𝐬 𝐞𝐧𝐭𝐢𝐫𝐞𝐥𝐲 𝐰𝐢𝐭𝐡 𝐁𝐮𝐬𝐢𝐧𝐞𝐬𝐬. 🫠 DEs have a blueprint to refer and simply map sources or source data products to the prescribed Data Product Model. Any new request comes through this prototype itself, managed by Data Product Managers in collaboration with business users. Dissolving all bottlenecks from centralised data engineering teams.

At this level, necessary transformations are delivered,
🔌 that activate the SLOs
🔌 enable interoperability with native tools and upstream data products,
🔌 allow reusability of pre-existing transforms in the form of Source or Aggregate data products.

#datamanagement #dataproducts
·linkedin.com·
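A hedged sketch of what such a Data Product Prototype might look like as a machine-readable spec covering metrics, dimensions, SLOs, and policies. Every field name and value below is invented for illustration; the post does not prescribe a concrete format.

```python
# Illustrative sketch only: one possible shape for a "Data Product Prototype",
# the consumer-facing semantic model described above. All names are invented.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    formula: str                 # business-level formula, not a physical query
    synonyms: list[str] = field(default_factory=list)

@dataclass
class QualitySLO:
    dimension: str               # e.g. freshness, completeness
    target: str                  # e.g. "updated within 24 hours"

@dataclass
class DataProductPrototype:
    name: str
    purpose: str
    metrics: list[Metric]
    dimensions: list[str]
    slos: list[QualitySLO]
    policies: list[str]

prototype = DataProductPrototype(
    name="customer_churn_insights",
    purpose="Help retention teams spot accounts at risk of churn",
    metrics=[Metric("churn_rate", "churned_customers / active_customers",
                    synonyms=["attrition rate"])],
    dimensions=["region", "product_line", "month"],
    slos=[QualitySLO("freshness", "updated within 24 hours")],
    policies=["mask personally identifiable fields for non-privileged roles"],
)

print(prototype.metrics[0].formula)
```

The point of a spec like this is the hand-off the post describes: business owners validate the prototype first, and only then do engineering teams map sources onto it.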
LLMs and Neurosymbolic reasoning
When people discuss how LLMs "reason," you’ll often hear that they rely on transduction rather than abduction. It sounds technical, but the distinction matters - especially as we start wiring LLMs into systems that are supposed to think.

🔵 Transduction is case-to-case reasoning. It doesn’t build theories; it draws fuzzy connections based on resemblance. Think: “This metal conducts electricity, and that one looks similar - so maybe it does too.”
🔵 Abduction, by contrast, is about generating explanations. It’s what scientists (and detectives) do: “This metal is conducting - maybe it contains free electrons. That would explain it.”

The claim is that LLMs operate more like transducers - navigating high-dimensional spaces of statistical similarity, rather than forming crisp generalisations. But this isn’t the whole picture. In practice, it seems to me that LLMs also perform a kind of induction - abstracting general patterns from oceans of text. They learn the shape of ideas and apply them in novel ways. That’s closer to “All metals of this type have conducted in the past, so this one probably will.”

Now add tools to the mix - code execution, web search, Elon Musk's tweet history 😉 - and LLMs start doing something even more interesting: program search and synthesis. It's messy, probabilistic, and not at all principled or rigorous. But it’s inching toward a form of abductive reasoning.

Which brings us to a more principled approach for reasoning within an enterprise domain: the neuro-symbolic loop - a collaboration between large language models and knowledge graphs. The graph provides structure: formal semantics, ontologies, logic, and depth. The LLM brings intuition: flexible inference, linguistic creativity, and breadth. One grounds. The other leaps.

💡 The real breakthrough could come when the grounding isn’t just factual, but conceptual - when the ontology encodes clean, meaningful generalisations. That’s when the LLM’s leaps wouldn’t just reach further - they’d rise higher, landing on novel ideas that hold up under formal scrutiny. 💡

So where do metals fit into this new framing?
🔵 Transduction: “This metal conducts. That one looks the same - it probably does too.”
🔵 Induction: “I’ve tested ten of these. All conducted. It’s probably a rule.”
🔵 Abduction: “This metal is conducting. It shares properties with the ‘conductive alloy’ class - especially composition and crystal structure. The best explanation is a sea of free electrons.”

LLMs, in isolation, are limited in their ability to perform structured abduction. But when embedded in a system that includes a formal ontology, logical reasoning, and external tools, they can begin to participate in richer forms of reasoning. These hybrid systems are still far from principled scientific reasoners - but they hint at a path forward: a more integrated and disciplined neuro-symbolic architecture that moves beyond mere pattern completion.
·linkedin.com·
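A minimal sketch of the neuro-symbolic loop the post gestures at: an LLM proposes a candidate explanation and a knowledge graph grounds it. The LLM call is deliberately stubbed out rather than tied to any particular API, and the toy materials ontology is invented for illustration.

```python
# Sketch of a neuro-symbolic loop: a (stubbed) LLM proposes a hypothesis, and
# the knowledge graph checks it against a toy ontology with a SPARQL ASK query.
from rdflib import Graph

kg = Graph().parse(data="""
@prefix ex: <http://example.org/materials#> .
ex:SampleA a ex:ConductiveAlloy ;
    ex:hasProperty ex:FreeElectrons .
ex:ConductiveAlloy ex:typicalExplanation ex:FreeElectrons .
""", format="turtle")

def llm_propose_explanation(observation: str) -> str:
    # Stand-in for a real LLM call; in a live system this is the "leap" that
    # suggests a hypothesis, which the graph then grounds.
    return "FreeElectrons"

def grounded(hypothesis: str) -> bool:
    ask = f"""
    PREFIX ex: <http://example.org/materials#>
    ASK {{ ex:SampleA a ?cls .
           ?cls ex:typicalExplanation ex:{hypothesis} . }}
    """
    return bool(kg.query(ask).askAnswer)

hypothesis = llm_propose_explanation("SampleA conducts electricity")
print(f"Hypothesis '{hypothesis}' grounded by the graph: {grounded(hypothesis)}")
```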
S&P Global Unlocks the Future of AI-driven insights with AI-Ready Metadata on S&P Global Marketplace
🚀 When I shared our 2025 goals for the Enterprise Data Organization, one of the things I alluded to was machine-readable column-level metadata. Let’s unpack what that means—and why it matters.

🔍 What: For datasets we deliver via modern cloud distribution, we now provide human- and machine-readable metadata at the column level. Each column has an immutable URL (no auth, no CAPTCHA) that hosts name/value metadata - synonyms, units of measure, descriptions, and more - in multiple human languages. It’s semantic context that goes far beyond what a traditional data dictionary can convey. We can't embed it, so we link to it.

💡 Why: Metadata is foundational to agentic, precise consumption of structured data. Our customers are investing in semantic layers, data catalogs, and knowledge graphs - and they shouldn’t have to copy-paste from a PDF to get there. Use curl, Python, Bash - whatever works - to automate ingestion. (We support content negotiation and conditional GETs.)

🧠 Under the hood? It’s RDF. Love it or hate it, you don’t need to engage with the plumbing unless you want to.

✨ To our knowledge, this hasn’t been done before. This is our MVP. We’re putting it out there to learn what works - and what doesn’t. It’s vendor-neutral, web-based, and designed to scale across:
📊 Breadth of datasets across S&P
🧬 Depth of metadata
🔗 Choice of linking venue

🙏 It took a village to make this happen. I can’t name everyone without writing a book, but I want to thank our executive leadership for the trust and support to go build this. Let us know what you think!

🔗 https://lnkd.in/gbe3NApH

Martina Cheung, Saugata Saha, Swamy Kocherlakota, Dave Ernsberger, Mark Eramo, Frank Tarsillo, Warren Breakstone, Hamish B., Erica Robeen, Laura Miller, Justine S Iverson
·linkedin.com·
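A hedged sketch of the kind of automated ingestion the post invites: fetch column-level RDF metadata with content negotiation and a conditional GET. The URL is a placeholder (the real endpoints sit behind the link in the post), and requesting Turtle is an assumption; the service may offer other RDF serializations.

```python
# Sketch: ingest column-level metadata via content negotiation and a
# conditional GET. The URL below is a placeholder, not a real S&P endpoint.
import requests
from rdflib import Graph

COLUMN_METADATA_URL = "https://example.com/metadata/dataset/price_close"  # placeholder

def fetch_column_metadata(url: str, etag: str | None = None) -> Graph | None:
    headers = {"Accept": "text/turtle"}      # content negotiation: ask for RDF
    if etag:
        headers["If-None-Match"] = etag      # conditional GET: skip if unchanged
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304:
        return None                          # cached copy is still current
    resp.raise_for_status()
    g = Graph()
    g.parse(data=resp.text, format="turtle")
    return g

g = fetch_column_metadata(COLUMN_METADATA_URL)
if g is not None:
    for s, p, o in g:                        # e.g. synonyms, units, descriptions
        print(s, p, o)
```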
Should ontologies be treated as organizational resources for semantic capabilities?
💡 Should ontologies be treated as organizational resources for semantic capabilities?

More and more organizations are investing in data platforms, modeling tools, and integration frameworks. But one key capability is often underused or misunderstood: ontologies as semantic infrastructure. While databases handle facts and BI platforms handle queries, ontologies structure meaning. They define what things are, not just what data says. When treated as living organizational resources, ontologies can bring:
🔹 Shared understanding across silos
🔹 Reasoning and inference beyond data queries
🔹 Semantic integration of diverse systems
🔹 Clarity and coherence in enterprise models

But here’s the challenge — ontologies don’t operate in isolation. They must be positioned alongside:
🔸 Data-oriented technologies (RDF, RDF-star, quad stores) that track facts and provenance
🔸 Enterprise modeling tools (e.g., ArchiMate) that describe systems and views
🔸 Exploratory approaches (like semantic cartography) that support emergence over constraint

These layers each come with their own logic — epistemic vs. ontologic, structural vs. operational, contextual vs. formal.
✅ Building semantic capabilities requires aligning all these dimensions.
✅ It demands governance, tooling, and a culture of collaboration between ontologists, data managers, architects, and domain experts.
✅ And it opens the door to richer insight, smarter automation, and more agile knowledge flows.

🔍 With projects like ArchiCG (semantic interactive cartography), I aim to explore how we can visually navigate this landscape — not constrained by predefined viewpoints, but guided by logic, meaning, and emergent perspectives.

What do you think? Are ontologies ready to take their place as core infrastructure in your organization?
·linkedin.com·
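A small sketch of the "data-oriented" layer the post mentions alongside ontologies: facts tracked with provenance in named graphs (the quad-store pattern), here using rdflib's Dataset. The vocabulary, graph names, and source systems are made up for illustration.

```python
# Sketch: track facts with provenance by placing them in named graphs.
from rdflib import Dataset, URIRef, Literal, Namespace

EX = Namespace("http://example.org/enterprise#")
PROV = Namespace("http://www.w3.org/ns/prov#")

ds = Dataset()

# Each source system gets its own named graph, so every fact carries provenance.
crm_graph_id = URIRef("http://example.org/graphs/crm-export-2025-08-01")
crm_graph = ds.graph(crm_graph_id)
crm_graph.add((EX.Acme, EX.hasAccountManager, EX.JaneDoe))
crm_graph.add((EX.Acme, EX.segment, Literal("Enterprise")))

# Provenance statements about the named graph itself live in the default graph.
ds.add((crm_graph_id, PROV.wasDerivedFrom, EX.CrmSystem))

# Query across graphs while keeping track of where each fact came from.
for s, p, o, g in ds.quads((None, EX.hasAccountManager, None, None)):
    print(f"{s} {p} {o}  (from graph {g})")
```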
Semantics in use (part 3): an interview with Saritha V.Kuriakose, VP Research Data Management at Novo Nordisk | LinkedIn
We continue our series of examples of the use of semantics and ontologies across organizations with an interview with Saritha V. Kuriakose from Novo Nordisk, talking about the pervasive and foundational use of ontologies in pharmaceutical R&D.
·linkedin.com·
how Knowledge Graphs could be used to provide context
📚 Definition number 0️⃣0️⃣0️⃣0️⃣0️⃣1️⃣0️⃣1️⃣

🌊 It is pretty easy to see how context is making really big waves recently. Not long ago, there were announcements about the Model Context Protocol (MCP). There is even a saying that Prompt Engineers changed their job titles to Context Engineers. 😅

🔔 In my recent few posts about definitions I tried to show how Knowledge Graphs could be used to provide context, as they are built with two types of real definitions expressed in a formalised language. Next, I explained how objective and linguistic nominal definitions in Natural Language can be linked to the models of external things encoded in the formal way to increase human-machine semantic interoperability.

🔄 Quick recap: in KGs, objective definitions define objects external to the language and linguistic definitions relate words to other expressions of that language. This is regardless of the nature of the language under consideration - formalised or natural. Objective definitions are real definitions when they uniquely specify certain objects via their characteristics - this is also regardless of the language nature. Not all objective definitions are real definitions and none of the linguistic definitions are real definitions.

💡 Classical objective definitions are an example of clear definitions. Another type of real definitions that could be encountered either in formalised or Natural Language are contextual definitions. An example of such a definition is ‘Logarithm of a number A with base B is such a number C that B to the power of C is equal to A’. Obviously this familiar mathematical definition could be expressed in a formalised language as well. This makes Knowledge Graphs capable of providing context via contextual definitions apart from other types of definitions covered so far.

🤷🏼‍♂️ At the same time another question appears. How is it possible to keep track of all those different types of definitions and always be able to know which one is which for a given modelled object? In my previous posts, I have shown how definitions could be linked via ‘rdfs:comment’ and ‘skos:definition’. However, that is still pretty generic. It is still possible to extend the base vocabulary provided by SKOS and add custom properties for this purpose. Quick reminder: a property in a KG corresponds to a relation between two other objects. Properties that allow adding multiple types of definitions in Natural Language can be created as instances of owl:AnnotationProperty as follows:

namespace:contextualDefinition a owl:AnnotationProperty .

After that, this new annotation property instance can be used in the same way as the more generic ways of linking definitions to objects in KGs. 🤓

🏄‍♂️ The above shows that getting context right can be a tricky endeavour indeed. In my next posts, I will try to describe some other types of definitions, so they can also be added to KGs. If you'd like to level up your KG in this way, please stay tuned. 🎸😎🤙🏻

#ai #knowledgegraphs #definitions
·linkedin.com·
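A short rdflib sketch of the pattern the post describes: declaring a custom annotation property for contextual definitions and using it alongside skos:definition. The namespace IRI and the generic definition text are illustrative; the contextual definition is the logarithm example from the post.

```python
# Sketch: a custom owl:AnnotationProperty for contextual definitions, used
# alongside skos:definition. IRIs and the generic definition are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, SKOS

EX = Namespace("http://example.org/vocab#")

g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# Declare the custom annotation property (the post's namespace:contextualDefinition).
g.add((EX.contextualDefinition, RDF.type, OWL.AnnotationProperty))

# Attach both a generic and a contextual definition to a modelled object.
g.add((EX.Logarithm, RDF.type, OWL.Class))
g.add((EX.Logarithm, SKOS.definition,
       Literal("The inverse operation to exponentiation.", lang="en")))
g.add((EX.Logarithm, EX.contextualDefinition,
       Literal("Logarithm of a number A with base B is such a number C that "
               "B to the power of C is equal to A.", lang="en")))

print(g.serialize(format="turtle"))
```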
Unlocking Transparency: Semantics in Ride-Hailing for Consumers | LinkedIn
by Timothy Coleman A recent Guardian report drew attention to a key issue in the ride-hailing industry, spotlighting Uber’s use of sophisticated algorithms to enhance profits while prompting questions about clarity for drivers and passengers. Studies from Columbia Business School and the University
·linkedin.com·
what is a semantic layer?
There’s a lot of buzz about #semanticlayers on LinkedIn these days. So what is a semantic layer?

According to AtScale, “The semantic layer is a metadata and abstraction layer built on the source data (e.g., data warehouse, data lake, or data mart). The metadata is defined so that the data model gets enriched and becomes simple enough for the business user to understand.”

It’s a metadata layer. Which can be taken a step further. A metadata layer is best implemented using metadata standards that support interoperability and extensibility. There are open standards such as the Dublin Core Metadata Initiative, and there are home-grown standards established within organizations and domains. If you want to design and build semantic layers, build from metadata standards or build a metadata standard, according to #FAIR principles (findable, accessible, interoperable, reusable).

Some interesting and BRILLIANT ✨ folks to check out in the metadata domain space:
Ole Olesen-Bagneux (O2) (check out his upcoming book about the #metagrid)
Lisa N. Cao
Robin Fay
Jenna Jordan
Larry Swanson

Resources in comments 👇👇👇
·linkedin.com·
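A small sketch of grounding semantic-layer metadata in an open standard, as the post recommends: describing a warehouse column with Dublin Core terms and a SKOS synonym via rdflib. The dataset and column names are placeholders.

```python
# Sketch: semantic-layer metadata built on an open standard (Dublin Core terms),
# with a SKOS synonym for discoverability. Names below are placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, SKOS

EX = Namespace("http://example.org/warehouse/")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("skos", SKOS)

column = EX["sales_fact/net_revenue"]
g.add((column, DCTERMS.title, Literal("Net revenue")))
g.add((column, DCTERMS.description,
       Literal("Invoiced revenue net of discounts and returns, in USD.")))
g.add((column, SKOS.altLabel, Literal("net sales")))     # synonym for discovery
g.add((column, DCTERMS.isPartOf, EX["sales_fact"]))

print(g.serialize(format="turtle"))
```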