Found 282 bookmarks
Standing on Giants' Shoulders: What Happens When Formal Ontology Meets Modern Verification? 🚀 | LinkedIn
Building on Decades of Foundational Research
The formal ontology community has given us incredible foundations - Barry Smith's BFO framework, Alan Ruttenberg's CLIF axiomatizations, and Microsoft Research's Z3 theorem prover. What happens when we combine these mature technologies with modern graph d…
·linkedin.com·
Unified Foundational Ontology
On request, this is the complete slide deck I used in my course at the C-FORS summer school on Foundational Ontologies (see https://lnkd.in/e9Af5JZF) at the University of Oslo, Norway. If you want to know more, here are some papers related to the talk:

On the ontology itself:
a) for a gentle introduction to UFO: https://lnkd.in/egS5FsQ
b) to understand the UFO history and ecosystem (including OntoUML): https://lnkd.in/emCaX5pF
c) a more formal paper on the axiomatization of UFO, but also with examples (in OntoUML): https://lnkd.in/e_bUuTMa
d) focusing on UFO's theory of Types and Taxonomic Structures: https://lnkd.in/eGPXHeh
e) focusing on its Theory of Relations (including relationship reification): https://lnkd.in/eTFFRBy8 and https://lnkd.in/eMNmi7-B
f) focusing on Qualities and Modes (aspect reification): https://lnkd.in/eNXbrKrW and https://lnkd.in/eQtNC9GH
g) focusing on events and processes: https://lnkd.in/e3Z8UrCD, https://lnkd.in/ePZEaJh9, https://lnkd.in/eYnirFv6, https://lnkd.in/ev-cb7_e, https://lnkd.in/e_nTwBc7

On the tools:
a) Model Auto-repair and Constraint Learning: https://lnkd.in/esuYSU9i
b) Model Validation and Anti-Pattern Detection: https://lnkd.in/e2SxvVzS
c) Ontological Patterns and Pattern Grammars: https://lnkd.in/exMFMgpT and https://lnkd.in/eCeRtMNz
d) Multi-Level Modeling: https://lnkd.in/eVavvURk and https://lnkd.in/e8t3sMdU
e) Complexity Management: https://lnkd.in/eq3xWp-U
f) FAIR catalog of models and Pattern Mining: https://lnkd.in/eaN5d3QR and https://lnkd.in/ecjhfp8e
g) Anti-Patterns on Wikidata: https://lnkd.in/eap37SSU
h) Model Transformation/implementation: https://lnkd.in/eh93u5Hg, https://lnkd.in/e9bU_9NC, https://lnkd.in/eQtNC9GH, https://lnkd.in/esGS8ZTb

#ontology #UFO #ontologies #foundationalontology #toplevelontology #TLO
Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·
Personal Knowledge Domain
𝙏𝙝𝙤𝙪𝙜𝙝𝙩 𝙛𝙤𝙧 𝙩𝙝𝙚 𝘿𝙖𝙮: What if we could encapsulate everything a person knows—their entire bubble of knowledge, what I’d call a Personal Knowledge Domain or, better, our 𝙎𝙚𝙢𝙖𝙣𝙩𝙞𝙘 𝙎𝙚𝙡𝙛—and represent it in an RDF graph? From that foundation, we could create Personal Agents that act on our behalf. Each of us would own our agent, with the ability to share or lease it for collaboration with other agents. If we could make these agents secure, continuously updatable, and interoperable, what kind of power might we unlock for the human race?

Is this idea so far-fetched? It has solid grounding in knowledge representation, identity theory, and agent-based systems. It fits right in with current trends: AI assistants, the semantic web, Web3 identity, and digital twins. Yes, the technical and ethical hurdles are significant, but this could become the backbone of a future architecture for personalized AI and cooperative knowledge ecosystems.

Pieces of the puzzle already exist: Tim Berners-Lee’s Solid Project, digital twins for individuals, Personal AI platforms like personal.ai, Retrieval-Augmented Language Model agents (ReALM), Web3 identity efforts such as SpruceID, architectures such as MCP, and inter-agent protocols such as A2A. We see movement in human-centric knowledge graphs like FOAF and SIOC, learning analytics, personal learning environments, and LLM-graph hybrids.

What we still need is a unified architecture that:
* Employs RDF or similar for semantic richness
* Ensures user ownership and true portability
* Enables secure agent-to-agent collaboration
* Supports continuous updates and trust mechanisms
* Integrates with LLMs for natural, contextual reasoning

These are certainly not novel notions, for example:
* MyPDDL (My Personal Digital Life) and the PDS (Personal Data Store) concept from MIT and the EU’s DECODE project.
* The Human-Centric AI Group at Stanford and the Augmented Social Cognition group at PARC have also published research around lifelong personal agents and social memory systems.

However, one wonders if anyone is working on combining all of the ingredients into a fully baked cake, after which we can enjoy dessert while our personal agents do our bidding. (A minimal, illustrative RDF sketch follows this entry.)
·linkedin.com·
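As a rough illustration of the idea in the post above (not a proposed schema), here is a minimal sketch of what one fragment of such a "Semantic Self" graph could look like in RDF, written in Python with rdflib and the FOAF vocabulary the post mentions; the ex: namespace and every non-FOAF predicate below are hypothetical.

```python
# A minimal, hypothetical sketch of a "Personal Knowledge Domain" fragment in RDF.
# Uses rdflib and FOAF; the ex: namespace and its predicates are illustrative only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("https://example.org/self/")  # hypothetical personal namespace
g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

me = EX["me"]
g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Alex Example")))
g.add((me, FOAF.knows, EX["colleague42"]))           # a social link
g.add((me, EX.hasSkill, Literal("SPARQL")))          # illustrative predicate
g.add((me, EX.interestedIn, URIRef("https://example.org/topics/knowledge-graphs")))

print(g.serialize(format="turtle"))
```

A personal agent would then read, extend, and selectively share such a graph; the access control, trust, and interoperability concerns the post raises (e.g., Solid-style pods) are the hard part and are not shown here.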
knowledge infrastructure
We talk about knowledge management and systems for knowledge (like knowledge graphs) a lot these days, especially with the rising interest in #semantics, #metadata, #taxonomies and #ontologies, thanks to AI. But what makes for knowledge that is operational and actionable? Less often discussed is knowledge infrastructure. Fundamental to knowledge management and knowledge repositories, as derived from the field of library and information science, is a service-oriented approach. Knowledge infrastructure is focused on creating systems that deliver information and knowledge that is accurate and satisfies the requirements of:
⚪️ Creators: those who generate knowledge (researchers, experts, content authors, data producers)
⚪️ Products: the formal outputs of knowledge (e.g., documents, datasets, models, applications, platforms, chatbots/AI assistants)
⚪️ Distributors: systems and platforms that make knowledge available (repositories, databases, APIs)
⚪️ Disseminators: communicators and interpreters (educators, marketers, dashboards, wikis)
⚪️ Users: individuals or systems that apply the knowledge (decision-makers, AI agents, learners, stakeholders)

Let’s put this into perspective. Without supporting knowledge infrastructures, knowledge becomes a one-off, relegated to silos or single-use instances. We see this with products: when we manage knowledge as a product, we fail to cast a wider net, assuming success based on metrics that are localized to the product rather than distributed across all signals, inputs and outputs. If knowledge is not managed as infrastructure, we create anti-patterns for the business and for AI systems. A recognizable symptom of these anti-patterns is silos. I’ll be publishing an article soon about knowledge infrastructure and what it takes to build and manage a knowledge infrastructure program. #ai #ia #knowledgeinfrastructure

For reference, an excerpt from Richard E. Rubin’s MLS textbook, Foundations of Information and Library Science, is in the comments.
·linkedin.com·
The new AI-powered analytics stack is here, says Gartner’s Afraz Jaffri! A key element of that stack is an ontology-powered Semantic Layer
The new AI-powered analytics stack is here, says Gartner’s Afraz Jaffri! A key element of that stack is an ontology-powered Semantic Layer that serves as the brain for AI agents to act on knowledge of your internal data and deliver timely, accurate and hallucination-free insights! #semanticlayer #knowledgegraphs #genai #decisionintelligence
·linkedin.com·
Trends from KGC 2025
Last week I was fortunate to attend the Knowledge Graph Conference in NYC! Here are a few trends that span multiple presentations and conversations.

- AI and LLM Integration: A major focus [again this year] was how LLMs can be used to enrich knowledge graphs and how knowledge graphs, in turn, can improve LLM outputs. This included using LLMs for entity extraction, verification, inference, and query generation. Many presentations demonstrated how grounding LLMs in knowledge graphs leads to more accurate, contextual, and explainable AI responses.
- Semantic Layers and Enterprise Knowledge: There was a strong emphasis on building semantic layers that act as gateways to structured, connected enterprise data. These layers facilitate data integration, governance, and more intelligent AI agents. Decentralized semantic data products (DPROD) were discussed as a framework for internal enterprise data ecosystems.
- From Data to Knowledge: Many speakers highlighted that AI is just the “tip of the iceberg” and the true power lies in the data beneath. Converting raw data into structured, connected knowledge was seen as crucial. The hidden costs of ignoring semantics were also discussed, emphasizing the need for consistent data preparation, cleansing, and governance.
- Ontology Management and Change: Managing changes and governance in ontologies was a recurring theme. Strategies such as modularization, version control, and semantic testing were recommended. The concept of “SemOps” (Semantic Operations) was discussed, paralleling DevOps for software development.
- Practical Tools and Demos: The conference included numerous demos of tools and platforms for building, querying, and visualizing knowledge graphs. These ranged from embedded databases like KuzuDB and RDFox to conversational AI interfaces for KGs, such as those from Metaphacts and Stardog.

I especially enjoyed catching up with the Semantic Arts team (Mark Wallace, Dave McComb and Steve Case), talking Gist Ontology and SemOps. I also appreciated the detailed Neptune Q&A I had with Brian O'Keefe, the vision of Ora Lassila, and a chance meeting with Adrian Gschwend for the first time, where we connected on LinkML and Elmo as a means to help with bidirectional dataflows. I was so excited by these conversations that I planned to have two team members join me in June at the Data Centric Architecture Workshop Forum, https://www.dcaforum.com/
·linkedin.com·
Ontologies, OWL, SHACL - a baseline | LinkedIn
Ontologies, OWL, SHACL: I was going to react to a comment that came through my feed and turned it into this post as it resonated with questions I am often asked about the mentioned technologies, their uses, and their relations and, more broadly, it concerns a key architectural discussion that I've h…
·linkedin.com·
SousLesensVocables is a set of tools developed to manage thesaurus and ontology resources through SKOS, OWL and RDF standards and graph visualisation approaches
SousLesensVocables is a set of tools developed to manage thesaurus and ontology resources through SKOS, OWL and RDF standards and graph visualisation approaches. (A minimal SKOS data sketch follows this entry.)
·souslesens.github.io·
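For readers unfamiliar with the standards the blurb names, here is a minimal sketch of the kind of SKOS thesaurus data such tools manage, written in Python with rdflib. It does not use SousLesensVocables' own API; the namespace and concept names are illustrative.

```python
# A minimal sketch of SKOS thesaurus data of the kind such tools manage.
# This does not touch SousLesensVocables' API; names here are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("https://example.org/thesaurus/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

# Two concepts linked by a broader/narrower hierarchy, with multilingual labels.
g.add((EX.Ontology, RDF.type, SKOS.Concept))
g.add((EX.Ontology, SKOS.prefLabel, Literal("ontology", lang="en")))
g.add((EX.Ontology, SKOS.prefLabel, Literal("ontologie", lang="fr")))

g.add((EX.FoundationalOntology, RDF.type, SKOS.Concept))
g.add((EX.FoundationalOntology, SKOS.prefLabel, Literal("foundational ontology", lang="en")))
g.add((EX.FoundationalOntology, SKOS.broader, EX.Ontology))

print(g.serialize(format="turtle"))
```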
The new AI Risk “ontology”: A Map with No Rules
The new AI Risk “ontology” (AIRO) maps regulatory concepts from the EU AI Act, ISO/IEC 23894, and ISO 31000. But without formal constraints or ontological grounding in a top-level ontology, it reads more like a map with no rules.

At first glance, AIRO seems well-structured. It defines entities like “AI Provider,” “AI Subject,” and “Capability,” linking them to legal clauses and decision workflows. But it lacks the logical scaffolding that makes semantic models computable. There are no disjointness constraints, no domain or range restrictions, no axioms to enforce identity or prevent contradiction. For example, if “Provider” and “Subject” are just two nodes in a graph, the system has no way to infer that they must be distinct. There’s nothing stopping an implementation from assigning both roles to the same agent. That’s not an edge case; it’s a missing foundation.

This is where formal ontologies matter. Logic is not a luxury: it’s what makes it possible to validate, reason, and automate oversight. Without constraints and grounding in a TLO, semantic structures become decorative. They document language, but not the conditions that govern responsible behavior. If we want regulation that adapts with AI instead of chasing it, we need more than a vocabulary. We need logic, constraints, and ontological structure. (A minimal sketch of such axioms follows this entry.)

#AIRegulation #ResponsibleAI #SemanticGovernance #AIAudits #AIAct #Ontologies #LogicMatters
·linkedin.com·
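As a concrete illustration of the axioms the post says are missing, here is a minimal sketch in Python with rdflib showing a disjointness axiom and domain/range restrictions. The class and property names echo the post's wording; they are not AIRO's actual IRIs.

```python
# A minimal sketch of the kind of axioms the post says are missing, written with rdflib.
# Class and property names echo the post; they are NOT AIRO's actual IRIs.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("https://example.org/airo-like/")  # hypothetical namespace
g = Graph()
g.bind("owl", OWL)
g.bind("ex", EX)

# Declare the classes and make Provider and Subject disjoint: no agent can be both.
for cls in (EX.AIProvider, EX.AISubject, EX.Capability):
    g.add((cls, RDF.type, OWL.Class))
g.add((EX.AIProvider, OWL.disjointWith, EX.AISubject))

# Domain/range restrictions: only an AIProvider provides, and only Capabilities are provided.
g.add((EX.provides, RDF.type, OWL.ObjectProperty))
g.add((EX.provides, RDFS.domain, EX.AIProvider))
g.add((EX.provides, RDFS.range, EX.Capability))

print(g.serialize(format="turtle"))
```

With axioms like these loaded alongside instance data, an OWL reasoner can flag any agent typed as both AIProvider and AISubject as inconsistent, which is exactly the kind of automated check the post argues for.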
European Parliament Open Data Portal : a SHACL-powered knowledge graph - Sparna Blog
A second use case Thomas wrote for Veronika Heimsbakk’s upcoming book SHACL for the Practitioner is about Sparna’s work for the European Parliament. From validation of the data in the knowledge graph to further projects of data integration and dissemination, many different usages of SHACL specifications were explored, and more exploratory usages of SHACL are foreseen. (A generic validation sketch follows this entry.)
·blog.sparna.fr·
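The blog post does not reproduce the Parliament's actual shapes, but as a generic sketch of the validation step it describes, this is how a SHACL shapes graph is applied to a data graph with the pySHACL library; the file names and their contents are placeholders.

```python
# A generic sketch of SHACL validation with pySHACL (pip install pyshacl).
# File names are placeholders, not the European Parliament's actual graphs or shapes.
from pyshacl import validate
from rdflib import Graph

data_graph = Graph().parse("ep_data.ttl", format="turtle")      # hypothetical data file
shapes_graph = Graph().parse("ep_shapes.ttl", format="turtle")  # hypothetical shapes file

conforms, report_graph, report_text = validate(
    data_graph,
    shacl_graph=shapes_graph,
    inference="rdfs",   # optionally expand the data graph before validating
)
print("conforms:", conforms)
print(report_text)      # human-readable validation report
```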
Is developing an ontology from an LLM really feasible?
It seems the answer to whether an LLM would be able to replace the whole text-to-ontology pipeline is a resounding ‘no’. If you’re one of those who think that should be (or even is?) a ‘yes’: why, and did you do the experiments that show it’s as good as the alternatives (with the results available)? And I mean a proper ontology, not a knowledge graph with numerous duplications and contradictions and lacking constraints. For a few gentle considerations (and pointers to longer arguments) and a summary figure of the processes the LLM supposedly would be replacing, see https://lnkd.in/dG_Xsv_6
Maria Keet
·linkedin.com·
Digital evolution: Novo Nordisk’s shift to ontology-based data management - Journal of Biomedical Semantics
The amount of biomedical data is growing, and managing it is increasingly challenging. While Findable, Accessible, Interoperable and Reusable (FAIR) data principles provide guidance, their adoption has proven difficult, especially in larger enterprises like pharmaceutical companies. In this manuscript, we describe how we leverage an Ontology-Based Data Management (OBDM) strategy for digital transformation in Novo Nordisk Research & Early Development. Here, we include both our technical blueprint and our approach for organizational change management. We further discuss how such an OBDM ecosystem plays a pivotal role in the organization’s digital aspirations for data federation and discovery fuelled by artificial intelligence. Our aim for this paper is to share the lessons learned in order to foster dialogue with parties navigating similar waters while collectively advancing the efforts in the fields of data management, semantics and data-driven drug discovery.
·jbiomedsem.biomedcentral.com·
The SECI model for knowledge creation, collection, and distribution within the organization
💫 An 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗼𝗻𝘁𝗼𝗹𝗼𝗴𝘆 is just a means, not an end.
👉 Transforming 𝘁𝗮𝗰𝗶𝘁 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 into 𝗲𝘅𝗽𝗹𝗶𝗰𝗶𝘁 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 through an enterprise ontology is a self-contained exercise if not framed within a broader process of knowledge creation, collection, and distribution within the organization.
👇 The 𝗦𝗘𝗖𝗜 𝗠𝗼𝗱𝗲𝗹 effectively describes the various steps of this process, going beyond mere collection and formalization. The SECI model outlines the following four phases that must be executed iteratively and continuously to properly manage organizational knowledge:
1️⃣ 𝗦𝗼𝗰𝗶𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: In this phase, tacit knowledge is shared through direct interaction, observation, or experiences. It emphasizes the transfer of personal knowledge between individuals and fosters mutual understanding through collaboration (tacit ➡️ tacit).
2️⃣ 𝗘𝘅𝘁𝗲𝗿𝗻𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: In this phase, tacit knowledge is articulated into explicit forms, such as an enterprise ontology. It helps to codify and communicate the personal knowledge that might otherwise remain unspoken or difficult to share (tacit ➡️ explicit).
3️⃣ 𝗖𝗼𝗺𝗯𝗶𝗻𝗮𝘁𝗶𝗼𝗻: In this phase, explicit knowledge is gathered from different sources, categorized, and synthesized to form new sets of knowledge. It involves the aggregation and reorganization of existing knowledge to create more structured and accessible forms (explicit ➡️ explicit).
4️⃣ 𝗜𝗻𝘁𝗲𝗿𝗻𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: In this phase, individuals internalize explicit knowledge, turning it back into tacit knowledge through practice, experience, and learning. It emphasizes the transformation of formalized knowledge into personal, actionable knowledge (explicit ➡️ tacit).
🎯 In a world where the only constant is change, it is no longer enough for an organization to know something; what matters most is how fast it learns by creating and redistributing new knowledge internally.
🧑‍🎓 To quote Nadella, organizations and the people within them should not be 𝘒𝘯𝘰𝘸-𝘐𝘵-𝘈𝘭𝘭𝘴 but rather 𝘓𝘦𝘢𝘳𝘯-𝘐𝘵-𝘈𝘭𝘭𝘴.
#TheDataJoy #KnowledgeMesh #KnowledgeManagement #Ontologies
·linkedin.com·
What makes an ontology fail? 9 reasons
What makes an ontology fail? 9 reasons. At the inauguration of SCOR (Swiss Center for Ontological Research), I had the opportunity to speak alongside Barry…
·linkedin.com·