GraphNews

Labeled Meta Property Graphs (LMPG): A Property-Centric Approach to Graph Database Architecture
Discover how LMPG transforms graph databases by treating properties as first-class citizens rather than simple node attributes. This comprehensive technical guide explores RushDB's groundbreaking architecture that enables automatic schema evolution, property-first queries, and cross-domain analytics impossible in traditional property graphs or RDF systems.
·rushdb.com·
Simplify graph embeddings
Simplify graph embeddings ↙️↙️↙️ Developing a fast vector-indexing datastore engine 🚂 at `arrowspace` led me to define a fast way of doing graph embeddings. What I came up with is a process categorised as inductive graph embedding, i.e. inferring the embedding of an added node without retraining on the graph. `arrowspace` works similarly to Laplacian Eigenmaps, with some relevant tweaks to achieve performance, as described in https://lnkd.in/eGgeKbdM The method is a sequence of linear operations; compared to similar algorithms it uses spectral properties instead of random walks to achieve faster training speed 🚄 (how much faster will be the subject of a future blog post). Practical comparison summary:
* Inductiveness: `arrowspace` (a spectral operator on features) and GraphSAGE are inductive; DeepWalk/node2vec are typically transductive
* Online cost: applying `arrowspace`'s operator is lightweight; GraphSAGE requires model inference; node2vec/DeepWalk usually require rerunning or approximations to add nodes
* Quality: Laplacian embeddings benchmark strongly against node2vec and are competitive with deep methods (VGAE) depending on graph properties and metrics, suggesting `arrowspace`'s embeddings will be solid baselines or better for community-structured retrieval tasks
* Integration: `arrowspace` emphasizes Rust/native vector indexing with spectral augmentation, complementing external training stacks rather than replacing them
This simplifies the process compared to deep learning and random-walk approaches. Please follow for more updates. #graphembeddings #graphs #embeddings #search #algorithm
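The post keeps arrowspace's exact operator for a future write-up, so here is a minimal sketch (with illustrative function names, not arrowspace's implementation) of the general recipe it builds on: Laplacian eigenmaps at train time, plus a linear out-of-sample step that places a new node without retraining.

```python
import numpy as np

def laplacian_eigenmaps(A, dim=2):
    """Train-time embedding: bottom non-trivial eigenvectors of the
    normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)          # eigenvalues sorted ascending
    return vecs[:, 1:dim + 1]               # skip the trivial constant eigenvector

def embed_new_node(weights, neighbor_embeddings):
    """Inductive step: place an unseen node as the weighted average of its
    neighbors' embeddings -- one linear operation, no retraining."""
    w = weights / weights.sum()
    return w @ neighbor_embeddings

# Toy graph: two triangles joined by one edge.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

Y = laplacian_eigenmaps(A, dim=2)
# New node attaches to nodes 0 and 1 with equal edge weight.
y_new = embed_new_node(np.array([1.0, 1.0]), Y[[0, 1]])
print(Y.round(3), y_new.round(3))
```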
·linkedin.com·
Graphlytic Text2graph
Text2graph is an online service for transforming free text into knowledge graph form (nodes and relationships). The graph can also be exported as Cypher or Gremlin statements for quick import into your favourite database.
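For readers new to such exports, here is a hedged illustration of the kind of Cypher statements a text-to-graph export might emit, loaded with the official Neo4j Python driver; the statements and entities are invented for the example and are not Text2graph's actual output format.

```python
# pip install neo4j -- requires a running Neo4j instance to execute.
from neo4j import GraphDatabase

EXPORTED_CYPHER = [
    "MERGE (p:Person {name: 'Marie Curie'})",
    "MERGE (f:Field {name: 'Radioactivity'})",
    "MATCH (p:Person {name: 'Marie Curie'}), (f:Field {name: 'Radioactivity'}) "
    "MERGE (p)-[:RESEARCHED]->(f)",
]

def load(uri="bolt://localhost:7687", auth=("neo4j", "password")):
    """Replay the exported statements against a Neo4j database."""
    driver = GraphDatabase.driver(uri, auth=auth)
    with driver.session() as session:
        for stmt in EXPORTED_CYPHER:
            session.run(stmt)
    driver.close()
```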
·graphlytic.com·
painter-network-exploration: Construction of a large painter network with ~3000 painters using the PainterPalette dataset, connecting painters if they lived in the same place for a long enough time.
Construction of a large painter network with ~3000 painters using the PainterPalette dataset, connecting painters if they lived in the same place for a long enough time. - me9hanics/painter-network-e...
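A minimal sketch of the construction rule described above, using networkx with toy data; the records, field names, and overlap threshold are illustrative, not the repository's actual code.

```python
import itertools
import networkx as nx

# Toy stand-ins for PainterPalette-style records: (painter, place, start_year, end_year).
residencies = [
    ("Monet", "Paris", 1859, 1878),
    ("Renoir", "Paris", 1862, 1880),
    ("Monet", "Giverny", 1883, 1926),
    ("Cezanne", "Paris", 1861, 1870),
]

MIN_OVERLAP_YEARS = 5  # illustrative threshold for "long enough time"

G = nx.Graph()
for (p1, c1, s1, e1), (p2, c2, s2, e2) in itertools.combinations(residencies, 2):
    if p1 != p2 and c1 == c2:
        overlap = min(e1, e2) - max(s1, s2)  # years both lived in the same place
        if overlap >= MIN_OVERLAP_YEARS:
            G.add_edge(p1, p2, place=c1, years=overlap)

print(G.edges(data=True))  # Monet-Renoir, Monet-Cezanne, Renoir-Cezanne (all Paris)
```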
·github.com·
city2graph is a Python library that converts geospatial datasets into graphs (networks).
🚀 city2graph v0.1.6 is now live! 🚀 city2graph is a Python library that converts geospatial datasets into graphs (networks). 🔗 GitHub https://lnkd.in/gmu6bsKR What's new:
🛣️ Metapaths for heterogeneous graphs - generate node connections through a variety of relations (e.g. amenity → street → street → amenity)
🗺️ Contiguity graph - analyse spatial adjacency and neighborhood relationships with the new contiguity graph support
🔄 OD matrix - work seamlessly with origin-destination (OD) matrices for migration and mobility flow analysis
You can now install the latest version via pip and conda. For more examples, please see the documentation: https://city2graph.net/ As always, contributors are most welcome! #UrbanAnalytics #GraphAnalysis #OpenSource #DataScience #GeoSpatial #NetworkScience #UrbanPlanning #Python #SpatialAnalysis
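To make the new contiguity-graph feature concrete, here is a generic sketch of what a (rook) contiguity graph is, built with shapely and networkx; it deliberately does not attempt to reproduce city2graph's own API.

```python
import itertools
import networkx as nx
from shapely.geometry import box

# Four illustrative "parcels" on a 2x2 grid; box(minx, miny, maxx, maxy).
parcels = {
    "A": box(0, 0, 1, 1),
    "B": box(1, 0, 2, 1),
    "C": box(0, 1, 1, 2),
    "D": box(1, 1, 2, 2),
}

# Rook contiguity: two polygons are neighbors if they share a boundary
# segment, not just a corner point.
G = nx.Graph()
G.add_nodes_from(parcels)
for (n1, g1), (n2, g2) in itertools.combinations(parcels.items(), 2):
    shared = g1.intersection(g2)
    if shared.length > 0:  # a shared edge has positive length; a corner is a point
        G.add_edge(n1, n2)

print(sorted(G.edges()))  # A-B, A-C, B-D, C-D (no diagonal A-D or B-C)
```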
·linkedin.com·
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
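A minimal sketch of the paper's core idea, assuming a toy shape: read a SHACL NodeShape with rdflib and map each property shape to an HTML form field. The shape, widget mapping, and output are illustrative, not the authors' implementation.

```python
from rdflib import Graph, Namespace

SH = Namespace("http://www.w3.org/ns/shacl#")

SHAPE_TTL = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path ex:name ; sh:name "Name" ; sh:datatype xsd:string ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:age  ; sh:name "Age"  ; sh:datatype xsd:integer ] .
"""

g = Graph()
g.parse(data=SHAPE_TTL, format="turtle")

# Map each sh:property to an HTML input: xsd:string -> text, xsd:integer -> number.
WIDGETS = {"string": "text", "integer": "number"}
for prop in g.objects(predicate=SH.property):
    label = g.value(prop, SH.name)
    datatype = g.value(prop, SH.datatype)
    required = g.value(prop, SH.minCount) is not None  # sh:minCount 1 => required
    widget = WIDGETS.get(str(datatype).rsplit("#", 1)[-1], "text")
    req = " required" if required else ""
    print(f'<label>{label}<input type="{widget}"{req}></label>')
```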
·mdpi.com·
Ladybug: The Next Chapter for Embedded Graph Databases | LinkedIn
It's with deep gratitude for the amazing product the #KuzuDB team created, and a mix of necessity and excitement, that I announce the launch of Ladybug. This is a new open-source project and a community-driven fork of the popular embedded graph database.
Happy to add support for LadybugDB on G.V() - Graph Database Client & Visualization Tooling, picking up right where we left off with our KuzuDB integration.
·linkedin.com·
How to achieve logical inference performantly on huge data volumes
Lots of people talking about semantic layers. Okay, welcome to the party! The big question in our space is how to achieve logical inference performantly on huge data volumes, given the inherent problems of combinatorial explosion that search algorithms (on which inference algorithms are based) have always confronted. After all, semantic layers are about offering inference services: the services Edgar Codd envisioned DBMSes on the relational model eventually supporting in the very first paper on the relational model. So what are the leading approaches in terms of performance?
1. GPU Datalog
2. High-speed OWL reasoners like RDFox
3. Rete networks like Sparkling Logic's Rete-NT
4. High-speed FOL provers like Vampire
Let's get down to brass tacks. RDFox posts some impressive benchmarks, but they aren't exactly obsoleting GPU Datalog, and I haven't seen any good data on RDFox vs RelationalAI. If you have benchmarks on that, I'd love to see them. Rete-NT and RDFox are heavily proprietary, so understanding how the performance has been achieved is not really possible for the broader community beyond these vendors' consultants. And RDFox is now owned by Samsung, further complicating the picture. That leaves us with the open-source GPU Datalogs and high-speed FOL provers. That's what's worth studying right now in semantic layers, not engaging in dogmatic debates between the relational model, the property graph model, RDF, and "name your emerging data model." Performance has ALWAYS been the name of the game in automated theorem proving. We still struggle to handle inference on large datasets. We need to quit focusing on non-issues and work to streamline existing high-speed inference methods for business usage. GPU Datalog on CUDA seems promising. I imagine the future will bring further optimizations.
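To ground what "Datalog inference" means here, this is a minimal semi-naive fixpoint for transitive closure in plain Python; GPU Datalog engines parallelize exactly this kind of delta-join loop (the example is generic, not any vendor's engine).

```python
# Semi-naive Datalog evaluation of: path(X,Y) :- edge(X,Y).
#                                   path(X,Z) :- path(X,Y), edge(Y,Z).
# Each round joins only the *newly derived* facts (the delta) against edge,
# avoiding rework -- the loop GPU engines turn into massively parallel joins.
edge = {(1, 2), (2, 3), (3, 4), (4, 2)}

path = set(edge)        # facts from the first rule
delta = set(edge)
while delta:
    new = {(x, z) for (x, y) in delta for (y2, z) in edge if y == y2}
    delta = new - path  # keep only facts not derived before
    path |= delta

print(sorted(path))
```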
·linkedin.com·
Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
KG-R1: Why Knowledge Graph RAG Systems Are Too Expensive to Deploy (And How One Team Fixed It)
What if I told you that most knowledge graph systems require multiple AI models just to answer a single question? That's exactly the problem plaguing current KG-RAG deployments.
👉 The Cost Problem
Traditional knowledge graph retrieval systems use a pipeline approach: one model for planning, another for reasoning, a third for reviewing, and a fourth for responding. Each step burns through tokens and compute resources, making deployment prohibitively expensive for most organizations. Even worse? These systems are built for specific knowledge graphs. Change your data source, and you need to retrain everything.
👉 A Single-Agent Solution
Researchers from MIT and IBM just published KG-R1, which replaces this entire multi-model pipeline with one lightweight agent that learns through reinforcement learning. Here's the clever part: instead of hardcoding domain-specific logic, the system uses four simple, universal operations:
- Get relations from an entity
- Get entities from a relation
- Navigate forward or backward through connections
These operations work on any knowledge graph without modification.
👉 The Results Are Striking
Using just a 3B-parameter model, KG-R1:
- Matches the accuracy of much larger foundation models
- Uses 60% fewer tokens per query than existing methods
- Transfers across different knowledge graphs without retraining
- Processes queries in under 7 seconds on a single GPU
The system learned to retrieve information strategically through multi-turn interactions, optimized end-to-end rather than stage-by-stage. This matters because knowledge graphs contain some of our most valuable structured data, from scientific databases to legal documents. Making them accessible and affordable could unlock entirely new applications.
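A sketch of those universal operations as a tiny in-memory interface; the method names and data are illustrative paraphrases of the post, not KG-R1's actual action space (see the arXiv link below for that).

```python
class TripleStoreOps:
    """The schema-agnostic operations the post describes, sketched over an
    in-memory set of (head, relation, tail) triples."""

    def __init__(self, triples):
        self.triples = set(triples)

    def relations_of(self, entity):       # get relations from an entity
        return {r for h, r, t in self.triples if entity in (h, t)}

    def entities_of(self, relation):      # get entities from a relation
        return {e for h, r, t in self.triples if r == relation for e in (h, t)}

    def forward(self, entity, relation):  # navigate forward through connections
        return {t for h, r, t in self.triples if h == entity and r == relation}

    def backward(self, entity, relation): # navigate backward through connections
        return {h for h, r, t in self.triples if t == entity and r == relation}

kg = TripleStoreOps([
    ("Ada Lovelace", "field", "Mathematics"),
    ("Ada Lovelace", "collaborator", "Charles Babbage"),
    ("Charles Babbage", "field", "Mathematics"),
])
print(kg.relations_of("Ada Lovelace"))   # {'field', 'collaborator'}
print(kg.backward("Mathematics", "field"))  # both mathematicians
```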
https://arxiv.org/abs/2509.26383v1 Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
·linkedin.com·
GraphLand: Evaluating Graph Machine Learning Models on Diverse...

Recently, there has been a lot of criticism of existing popular graph ML benchmark datasets concerning such aspects as lacking practical relevance, low structural diversity that leaves most of the possible graph structure space not represented, low application domain diversity, graph structure not being beneficial for the considered tasks, and potential bugs in the data collection processes. Some of these criticisms previously appeared on this channel.

To provide the community with better benchmarks, we present GraphLand: a collection of 14 graph datasets for node property prediction coming from diverse real-world industrial applications of graph ML. What makes this benchmark stand out?

Diverse application domains: social networks, web graphs, road networks, and more. Importantly, half of the datasets feature node-level regression tasks that are currently underrepresented in graph ML benchmarks, but are often encountered in real-world applications.

Range of sizes: from thousands to millions of nodes, providing opportunities for researchers with different computational resources.

Rich node attributes that contain numerical and categorical features — these are more typical for industrial applications than textual descriptions that are standard for current benchmarks.

Different learning scenarios. For all datasets, we provide two random data splits with low and high label rate. Further, many of our networks are evolving over time, and for them we additionally provide more challenging temporal data splits and an opportunity to evaluate models in the inductive setting where only an early snapshot of the evolving network is available at train time.

We evaluated a range of models on our datasets and found that, while GNNs achieve strong performance on industrial datasets, they can sometimes be rivaled by gradient boosted decision trees, which are popular in industry, when these are provided with additional graph-based input features.
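
For readers unfamiliar with this baseline, here is an illustrative sketch of "GBDT plus graph-based input features" on a toy graph; the dataset, feature set, and split are invented for the example and do not reproduce GraphLand's evaluation protocol.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Augment each node's raw features with structural statistics,
# then train a plain GBDT on the combined feature matrix.
G = nx.karate_club_graph()
X_raw = np.random.default_rng(0).normal(size=(G.number_of_nodes(), 4))

deg = np.array([G.degree(n) for n in G.nodes()], dtype=float)
pr = np.array(list(nx.pagerank(G).values()))
clust = np.array(list(nx.clustering(G).values()))
X = np.column_stack([X_raw, deg, pr, clust])  # raw + graph-based features

y = np.array([int(G.nodes[n]["club"] == "Mr. Hi") for n in G.nodes()])
clf = GradientBoostingClassifier().fit(X[:25], y[:25])
print("holdout accuracy:", clf.score(X[25:], y[25:]))
```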

Further, we evaluated several graph foundation models (GFMs). Despite much attention being paid to GFMs recently, we found that there are currently only a few GFMs that can handle arbitrary node features (which is required for true generalization between different graphs) and that these GFMs produce very weak results on our benchmark. So it seemed like the problem of developing general-purpose graph foundation models was far from being solved, which motivated our research in this direction (see the next post).

·arxiv.org·
Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing
Knowledge graphs offer many advantages in the fields of materials science and manufacturing technology. But how can we explore knowledge graphs in a meaningful way? The current article “Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing” shows what such a solution could look like: https://lnkd.in/ehiK5php. Special thanks to Matthias Büschelberger as main author and many thanks to all co-authors Konstantinos Tsitseklis, Lukas Morand, Anastasios Zafeiropoulos, Yoav Nahshon, Symeon Papavassiliou, and Dirk Helm for the great collaboration as part of our wonderful DiMAT project. Would you like to see such a chatbot acting on a knowledge graph in action? Take a look at the video. #datamanagement #FAIRData #dataspace #ontology #knowledgegraph #AI #materials #sustainability #digitalisation #InsideMaterial Fraunhofer IWM, National Technical University of Athens
·linkedin.com·
Kuzu is no more
Kuzu is no more. The project was archived last night with one last major release. The communication has not been very clear, but I can bet Semih Salihoğlu is under a lot of pressure, and I am looking forward to hearing the full story someday. We liked the product and will fork it and continue supporting it as a way for our users to run local memory workloads on their machines. We'll not support it in production anymore, though, since we are not database developers and don't plan to be. You can only get so far without the need to grow a mighty Unix beard. Instead, we'll be going with Neo4j for larger loads and our partner Qdrant for embeddings, and extend our FalkorDB and Postgres support. It does feel a bit strange when your default DB disappears overnight. That is why cognee is database agnostic, and all features that were Kuzu-specific will be migrated in about 2 weeks. This time we were just too fast for our own good.
·linkedin.com·
Unlock GPU Power with GFQL
Rough news on #kuzu being archived - startups are hard, and Semih Salihoğlu + Prashanth Rao did so much in ways I value, tackling the same architectural principles we've been quietly working on in GFQL. For those left in the lurch for an embeddable compute-tier graph solution, #GFQL should be pretty fascinating yet also familiar (e.g. Apache Arrow-native graph queries for modern OSS ecosystems), and hopefully less stressful thanks to a sustainable governance model. Likewise, as an OSS deep-tech community, we add interesting new bits like the optional record-breaking GPU mode with NVIDIA #RAPIDSAI.
If you're new to it and seeing this: #GFQL, the graph dataframe-native query language, is increasingly how Graphistry, Inc. and our community work with graphs at the compute tier. Whether the data comes from a tabular ETL pipeline, a file, SQL, NoSQL, or a graph storage DB, GFQL makes it easy to do on-the-fly graph transforms and queries at the compute tier at sub-second speeds for graphs anywhere from 100 edges to 1,000,000,000. Currently, we support Arrow/pandas and Arrow/#nvidia #RAPIDS as the main engine modes.
While we're not marketing it much yet, GFQL is already used daily by every single Graphistry user behind the scenes, and directly by analysts & developers at banks, startups, etc. around the world. We built it because we needed an OSS compute-tier graph solution for working with modern data systems that separate storage from compute. Likewise, data is a team sport, so it is used by folks on teams who have to rapidly wrangle graphs, whether for analysis, data science, ETL, visualization, or AI. Imagine an ETL pipeline, notebook flow, or web app where data comes from files, Elasticsearch, Databricks, and Neo4j, and you need to do more on-the-fly graph work with it.
We started building what became GFQL *before* Kuzu because it solves real architectural and graph-productivity problems that have been challenging our team, our users, and the broader graph community for years now. Likewise, by going dataframe-native and GPU-mode from day 1, it's now a large part of how we approach GPU graph deep-tech investments throughout our stack, and it's a sustainably funded system. We are looking at bigger R&D and commercial support contracts with organizations needing to do subsecond billion+-scale work with us so we can build even more, faster (hit me up if that's you!), but overall, most of our users are just like ourselves, and the day-to-day is wanting an easy OSS way to wrangle graphs in our apps & notebooks. As we continue to smooth it out (e.g. we'll be adding a familiar Cypher syntax), we'll be writing about it a lot more. 4 links below: ReadTheDocs, pip install, SOTA GPU benchmarks, and the original aha moment + Russell Jurney Ben Lorica 罗瑞卡 Taurean Dyer Bradley Rees
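For intuition about the compute-tier, dataframe-native idea (and explicitly not GFQL's actual syntax), here is a generic pandas sketch of a graph hop expressed as joins; engines of this style can run equivalent plans on CPU dataframes or, via RAPIDS, GPU ones.

```python
import pandas as pd

# A 2-hop neighborhood query expressed as dataframe joins over an
# edge table -- the kind of plan a dataframe-native engine executes.
edges = pd.DataFrame({
    "src": ["a", "a", "b", "c", "d"],
    "dst": ["b", "c", "d", "d", "e"],
})

hop1 = edges[edges["src"] == "a"]                     # a -> {b, c}
hop2 = hop1.merge(edges, left_on="dst", right_on="src",
                  suffixes=("_1", "_2"))              # b, c -> {d}
print(hop2[["src_1", "dst_1", "dst_2"]])              # the 2-hop paths from 'a'
```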
·linkedin.com·
Discontinued graph database systems
Last week, the Kùzu Inc team announced that they will no longer actively support the open-source KuzuDB project. I've been a fan of KuzuDB and think its discontinuation leaves a big gap in the graph ecosystem. This is especially the case for open-source solutions: over the last few years, many open-source graph database systems were forked, relicensed, or discontinued. Currently, users looking for an OSS graph database are left to pick from:
- community editions of systems with enterprise/cloud offerings (Neo4j, Dgraph)
- variants of a heavily-forked system (ArcadeDB / YouTrackDB, HugeGraph)
- projects under non-OSI-approved licenses
- experimental systems (e.g., DuckPGQ)
I'm wondering whether this trend continues or someone steps up to maintain KuzuDB or create a new OSS system.
·linkedin.com·
For years, I considered graph databases “interesting but niche.”
For years, I considered graph databases “interesting but niche.” Relevant commercially for social networks and supply chains, and academically for biotech, maybe some knowledge management. Basically, not something most companies would ever need. I stand corrected. With AI, they're having a very big moment!
Working with graphs for the first time feels unusual but also just right. The best analogy I have is that feeling we get when we try to visualize a higher dimension when all we have ever known are three (+ time, for the purists). (Or is it just me?) Two use-cases that I have been riffing on:
* Knowledge management: For me it started as a personal project for personal knowledge management. For enterprises, this is where RAG shines. But I also wonder if there are other applications within enterprise knowledge management that we aren't thinking of yet.
* Master Data Management (MDM): Potentially a subset of the above, but explicitly about attributes and relationships that columnar databases might handle too rigidly.
I am a lifetime subscriber to relational and SQL for as long as they exist. Not saying they will go away. Graphs still feel intuitive and unusual at the same time. They are still complex to build (although companies like Neo4j simplify them really well) and difficult to traverse/interpret. I believe a stronger convergence of these two systems is coming. Graphs will augment relational before replacing it in some of these use-cases. But they have to be way more simplified first for greater adoption. Would love to hear more from graph experts and/or from those who share this feeling of “just right” for graphs. Are you seeing use-cases where graph databases are picking up? #AI #DataStrategy #Graphs #KnowledgeManagement #MDM
·linkedin.com·
Your agents NEED a semantic layer
Your agents NEED a semantic layer 🫵
Traditional RAG systems embed documents, retrieve similar chunks, and feed them to LLMs. This works for simple Q&A. It fails catastrophically for agents that need to reason across systems. Why? Because semantic similarity doesn't capture relationships. Your vector database can tell you that two documents are "about bonds." It can't tell you that Document A contains the official pricing methodology, Document B is a customer complaint referencing that methodology, and Document C is an assembly guide that superseded both. These relationships are invisible to embeddings.
What semantic layers provide:
- Entity resolution across data silos. When "John Smith" in your CRM, "J. Smith" in email, and "john.smith@company.com" in logs all map to the same person node, agents can traverse the complete context.
- Cross-domain entity linking through knowledge graphs. Products in your database connect to assembly guides, which link to customer reviews, which reference support tickets. Single-query traversal instead of application-level joins.
- Provenance-tracked derivations. Every extracted entity, inferred relationship, and generated embedding maintains lineage to source data. Critical for regulatory compliance and debugging agent behavior.
- Ontology-grounded reasoning. Financial instruments mapped to FIBO standards. Products mapped to domain taxonomies. Agents reason with structured vocabulary, not statistical word associations.
The technical implementation pattern:
- Layer 1: Unified graph database supporting vector, structured, and semi-structured data types in single queries.
- Layer 2: Entity extraction pipeline with coreference resolution and deduplication across sources.
- Layer 3: Relationship inference and cross-domain linking using both explicit identifiers and contextual signals.
- Layer 4: Separation of first-party data from derived artifacts, with clear tagging for safe regeneration.
The result: agents can traverse "Product → described_in → AssemblyGuide → improved_by → CommunityTip → authored_by → Expert" in a single graph query instead of five API calls with application-level joins (see the sketch below).
Model Context Protocol is emerging as the open standard for semantic tool modeling. Not just describing APIs, but encoding what tools do, when to use them, and how outputs compose. This enables agents to discover and reason about capabilities dynamically. The competitive moat isn't your model choice. The moat is your knowledge graph architecture and the accumulated entity relationships that took years to build.
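A minimal sketch of that single-traversal claim over an in-memory adjacency map; the schema, identifiers, and relation names are invented for illustration.

```python
# The post's example path, walked in one traversal over an in-memory
# adjacency structure instead of five separate API calls.
graph = {
    ("widget-9000", "described_in"): ["guide-12"],
    ("guide-12", "improved_by"): ["tip-7"],
    ("tip-7", "authored_by"): ["expert-ana"],
}

def traverse(start, hops):
    """Follow a chain of relations from a start node, one hop per relation."""
    frontier = [start]
    for rel in hops:
        frontier = [nxt for node in frontier
                    for nxt in graph.get((node, rel), [])]
    return frontier

# Product -> described_in -> AssemblyGuide -> improved_by -> CommunityTip
#         -> authored_by -> Expert, as a single call:
print(traverse("widget-9000", ["described_in", "improved_by", "authored_by"]))
```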
·linkedin.com·
Can LLMs Really Build Knowledge Graphs We Can Trust?
🕸️ Can LLMs Really Build Knowledge Graphs We Can Trust?
There's a growing trend: “Let's use LLMs to build knowledge graphs.” It sounds like the perfect shortcut - take unstructured data, prompt an LLM, and get a ready-to-use graph. But… are we sure those graphs are trustworthy? Before that, let's pause for a second:
💡 Why build knowledge graphs at all? Because they solve one of AI's biggest weaknesses - lack of structure and reasoning. Graphs let us connect facts, entities, and relationships in a way that's transparent, queryable, and explainable. They give context, memory, and logic - everything that raw text or embeddings alone can't provide. Yet, here's the catch when using LLMs to build them:
🔹 Short context window - LLMs can only “see” a limited amount of data at once, losing consistency across larger corpora.
🔹 Hallucinations - when context runs out or ambiguity appears, models confidently invent facts or relations that never existed.
🔹 Lack of provenance - LLM outputs don't preserve why or how a link was made. Without traceability, you can't audit or explain your graph (see the sketch below).
🔹 Temporal instability - the same prompt can yield different graphs tomorrow, because stochastic generation ≠ deterministic structure.
🔹 Scalability & cost - large-scale graph construction requires persistent context and reasoning, which LLMs weren't designed for.
Building knowledge graphs isn't just data extraction - it's engineering meaning. It demands consistency, provenance, and explainability, not just text generation. LLMs can assist in this process, but they shouldn't be the architect. The next step is finding a way to make graphs both trustworthy and instant - without compromising one for the other.
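One hedged way to make the provenance point concrete: attach source and extraction metadata to every LLM-proposed triple, as in this illustrative dataclass (all field names are assumptions, not a standard).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenancedTriple:
    """An extracted fact that keeps the 'why' attached: which document,
    which text span, and which model run produced it."""
    subject: str
    predicate: str
    obj: str
    source_doc: str     # where the claim came from
    source_span: str    # the exact sentence that supports it
    extractor: str      # model + version that proposed the link
    confidence: float   # extractor's own score, for later auditing

t = ProvenancedTriple(
    subject="Acme Corp", predicate="acquired", obj="Widget Inc",
    source_doc="press-release-2024-03.txt",
    source_span="Acme Corp announced its acquisition of Widget Inc.",
    extractor="llm-extractor-v2", confidence=0.93,
)
print(t)  # every link in the graph stays auditable back to its source
```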
·linkedin.com·
Where is GraphRAG actually working in production?
"GraphRAG chatter is louder than its footprint in production." That line from Ben Lorica's piece on Gradient Flow stopped me in my tracks: https://lnkd.in/dmC-ykAu I was reading it because of my deep interest in graph-based reasoning, and while the content is excellent, I was genuinely surprised by the assessment of GraphRAG adoption. The article suggests that a year after the initial buzz, GraphRAG remains mostly confined to graph vendors and specialists, with little traction in mainstream AI engineering teams. Here's the thing: at GraphAware, we have GraphRAG running in production: our AskTheDocs conversational interface in Hume uses this approach to help customers query documentation, and the feedback has been consistently positive. It's not an experiment—it's a production feature our users rely on daily. So I have a question for my network (yes, I know you're a bit biased—many of you are graph experts, after all 😊): Where is GraphRAG actually working in production? I'm not looking for POCs, experiments, or "we're exploring it." I want to hear about real, deployed systems serving actual users. Success stories. Production use cases. The implementations that are quietly delivering value while the tech commentary wonders if anyone is using this stuff. If you have direct or indirect experience with GraphRAG in production, I'd love to hear from you: - Drop a comment below - Send me a DM - Email me directly I want to give these cases a voice and learn from what's actually working out there. Who's building with GraphRAG beyond the buzz? #GraphRAG #KnowledgeGraphs #AI #ProductionAI #RAG
·linkedin.com·
Let's talk ontologies. They are all the rage.
Let's talk ontologies. They are all the rage. I've been drawing what I now know is a 'triple' on whiteboards for years. It's one of the standard ways I know to start to understand a business. A triple is: subject, predicate, object. I cannot overstate how useful this practice has been. Understanding how everything links together is useful, for people and AI.
I'm now stuck on what that gets stored in. I'm reading about triplestores and am unclear on the action needed. Years ago some colleagues and I used Neo4j to do this. I liked the visual interaction of the output but I'm not sure that is the best path here.
Who can help me understand how to move from whiteboarding to something more formal? Where to actually store all these triples? At what point does it become a 'knowledge graph'? Are there tools or products that help with this? Or is there a new language to learn to store it properly? (I think yes) #ontology #help
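One concrete starting point, as a hedged sketch: rdflib stores triples in-process and queries them with SPARQL (the "new language" in question); dedicated triplestores such as Apache Jena Fuseki or GraphDB play the same role at production scale. The namespace and triples below are invented for illustration.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Whiteboard triples -- subject, predicate, object -- stored formally:
g.add((EX.Alice, EX.manages, EX.Billing))
g.add((EX.Billing, EX.dependsOn, EX.Payments))

# SPARQL is the standard query language over triples:
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?team WHERE { ex:Alice ex:manages ?team . }
""")
for row in results:
    print(row.team)

g.serialize(destination="whiteboard.ttl", format="turtle")  # persist to disk
```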
·linkedin.com·