GraphNews

4794 bookmarks
Cosmograph graph visualization tool
Cosmograph graph visualization tool
Huge news for Cosmograph 🪐 While everyone was on Thanksgiving break, I was polishing up the next big Cosmograph update, which I'm finally ready to share! More than three years after the initial release, Cosmograph remains the only single-node, web-based tool capable of visualizing graphs with a million points and well over a million links, thanks to its unique GPU force layout and rendering engine, cosmos.gl. However, it also had a few major weaknesses, such as poor memory management and limited analytical capabilities. Version 2.0 of Cosmograph solves these problems by incorporating: DuckDB (the best in-memory analytics database); Mosaic (the fastest cross-filtering and visual analytics framework for the web); SQLRooms (an open-source React toolkit for human and agent collaborative analytics apps, by Ilya Boyandin) as its foundation; and the latest version of cosmos.gl (our core force simulation and rendering engine, which recently joined OpenJS) for even faster performance, more forces, and the long-awaited point-dragging functionality! What does this mean in practice? Work with larger datasets and use SQL (thanks to WebAssembly and DuckDB); much better performance (filtering, timeline, changing visual properties of the graph, etc.); open Parquet files natively; and save your graphs to the cloud and share them with the world easily. And if you work with ML embeddings and love Apple's Embedding Atlas (https://lnkd.in/gsWt6CNT), you'll love Cosmograph too, since they have a lot in common. If all of the above excites you, go check out Cosmograph's beautiful new website and share the news with the world 🙏 https://cosmograph.app
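As a rough illustration of the DuckDB-plus-Parquet workflow the update builds on (this is not Cosmograph's own API; the file and column names below are invented), an edge list stored as Parquet can be queried with SQL from Python like this:

```python
# Hedged sketch: querying a Parquet edge list with DuckDB's SQL engine.
# 'edges.parquet' and its 'source'/'target' columns are hypothetical.
import duckdb

con = duckdb.connect()  # in-memory analytics database
top_hubs = con.execute("""
    SELECT source, COUNT(*) AS out_degree
    FROM 'edges.parquet'          -- DuckDB reads Parquet files directly
    GROUP BY source
    ORDER BY out_degree DESC
    LIMIT 10
""").fetchall()
print(top_hubs)
```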
Cosmograph
·linkedin.com·
Cosmograph graph visualization tool
Graph Database Market Size, Share, Industry Report 2032
Graph Database Market Size, Share, Industry Report 2032
The global graph database market size is projected to grow from $2.85 billion in 2025 to $15.32 billion by 2032, exhibiting a CAGR of 27.1%
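A quick sanity check of the implied growth rate (seven compounding years from 2025 to 2032):

```python
# Verify the reported CAGR from the start and end market sizes (in $ billions).
cagr = (15.32 / 2.85) ** (1 / 7) - 1
print(f"{cagr:.1%}")  # ~27.2%, in line with the reported 27.1%
```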
Graph Database Market Size
·fortunebusinessinsights.com·
Graph Database Market Size, Share, Industry Report 2032
Launching The Knowledge Graph Academy
Launching The Knowledge Graph Academy
I just stepped off the stage at Connected Data London, where we talked about the black box of AI and the critical role of ontologies. We ended that talk by saying that we, as a community, have a responsibility to cut through the noise. To offer clarity. That is why, today, we're launching The Knowledge Graph Academy. For too long, education in semantic technology has tended to sit at one of two extremes: highly abstract academic theory, or tool-focused training that fails to teach the underlying principles. We are building something different, where educational rigour meets real-world practice. And I'm not doing this alone. If we are going to define the field, we need the leaders who are actually out there building it. I am incredibly proud to announce that I've teamed up with two of the sharpest minds in the industry to lead this programme with me: 🔵 Katariina Kari (Lead Ontologist): Katariina has spent years building KG teams at retail giants. She knows exactly how to capture business expertise to drive ROI. She's the master of the philosophy "a little semantics goes a long way." 🔵 Jessica Talisman (Senior KG Consultant): With 25+ years in data architecture, Jessica is a true veteran of the trenches. She's a LinkedIn Top Voice, an expert on the W3C SKOS standard, and the creator of the 'Ontology Pipeline' framework. This isn't just training; it's a shift in mindset. The Knowledge Graph Academy doesn't just teach you which syntax to use or which buttons to press; it is designed to change how you think. Whether you are a practitioner, a leader shaping AI strategy, or someone looking to pivot your career: this is your invitation. Let's turn ideas into understanding, and understanding into impact. ⭕ The Knowledge Graph Academy: https://lnkd.in/ecQBMCg3
·linkedin.com·
Launching The Knowledge Graph Academy
StrangerGraphs is a fan theory prediction engine that applies graph database analytics to the chaotic world of Stranger Things fan theories on Reddit.
StrangerGraphs is a fan theory prediction engine that applies graph database analytics to the chaotic world of Stranger Things fan theories on Reddit.
The company scraped 150,000 posts and ran community detection algorithms to identify which Stranger Things fan groups have the best track records for predictions. Theories were mapped as a graph (234k nodes and 1.5M relationships) tracking characters, plot points, and speculation, and natural language processing was then used to surface patterns across seasons. These predictions are then mapped out in a visualization for extra analysis. Top theories include ■■■ ■■■■■ ■■■■, ■■■ ■■■■■■■■ ■■ and ■■■■ ■■■■■■■■ ■■■ ■■ ■■■■. (Editor's note: these theories have been redacted to avoid any angry emails about spoilers.)
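For readers curious about the general technique, here is a minimal sketch of community detection on a tiny post/theory graph with NetworkX; it is not StrangerGraphs' actual pipeline, and the node names are invented:

```python
# Hedged sketch: build a tiny author-theory graph and find communities (Louvain).
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.Graph()
G.add_edges_from([
    ("user_a", "theory_1"), ("user_b", "theory_1"),  # hypothetical "who discusses what" edges
    ("user_b", "theory_2"), ("user_c", "theory_2"),
    ("user_d", "theory_3"),
])

for i, community in enumerate(louvain_communities(G, seed=42)):
    print(i, sorted(community))
```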
·strangergraphs.com·
StrangerGraphs is a fan theory prediction engine that applies graph database analytics to the chaotic world of Stranger Things fan theories on Reddit.
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
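A minimal usage sketch (call names as in OSMnx 1.x; check your installed version, and the place name is just an example):

```python
# Download a drivable street network, project it, and route on it with NetworkX.
import networkx as nx
import osmnx as ox

G = ox.graph_from_place("Modena, Italy", network_type="drive")  # OSM -> MultiDiGraph
G = ox.project_graph(G)  # project to a suitable local coordinate system

print(len(G.nodes), "intersections,", len(G.edges), "street segments")

# Shortest path by edge length between two arbitrary nodes.
nodes = list(G.nodes)
route = nx.shortest_path(G, nodes[0], nodes[-1], weight="length")
print("route length in nodes:", len(route))
```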
·linkedin.com·
OSMnx is a Python package that downloads any city’s street network, buildings, bike lanes, rail, or walkable paths from OpenStreetMap and instantly turns them into clean, routable NetworkX graphs with correct topology, projected coordinates, edge lengths, bearings, and travel speeds.
A series of posts about knowledge graphs and ontology design patterns
A series of posts about knowledge graphs and ontology design patterns
The first in a series of posts about knowledge graphs and ontology design patterns that I swear by. They will lead you through how, at Yale, we went from a challenge from leadership (build a system that allows discovery of cultural heritage objects across our libraries, archives and museums) to a fully functioning, easy-to-use, easy-to-maintain, extremely robust, public knowledge graph. *The 10 Design Principles to Live By* 1. Scope design through shared use cases 2. Design for international use 3. Make easy things easy, complex things possible 4. Avoid dependency on specific technologies 5. Use REST / Don't break the web / Don't fear the network 6. Design for JSON-LD, using LOD principles 7. Follow existing standards & best practices, when possible 8. Define success, not failure 9. Separate concerns, keep APIs & systems loosely coupled 10. Address concerns at the right level. You must first agree on your design principles and priorities. These are crucial because when the inevitable conflicts of opinion arise, you have a set of neutral requirements to compare the different options against. (1) The first keeps you honest: good ideas are just ideas if they don't advance your business / use cases. Keeping to scope is critical, as ontologies have a tendency to expand uncontrollably, reducing usability and maintainability. (2) Internationalization of knowledge is important because your audience and community don't just speak your language, or come from your culture. If you limit your language, you limit your potential. (3) Ensure that your in-scope edge cases aren't lost, but that in solving them you haven't made the core functionality more complicated than it needs to be. If your KG isn't usable, then it won't be used. (4) Don't build for a specific software environment, because that environment is going to change, probably before you get to production. Locking yourself in is the quickest way to obsolescence and oblivion. (5) Don't try to pack everything a consuming application might need into a single package; browsers and apps deal just fine with hundreds of HTTP requests, especially with web caches. (6) JSON-LD is the serialization to use, as devs use JSON all the time, and those devs need to build applications that consume your knowledge. Usability first! (See the sketch after this post.) (7) Standards are great... especially as there are so many of them. Don't get all tied up trying to follow a standard that isn't right, but don't reinvent the wheel unnecessarily. (8) Define the ontology/API by what it supports, rather than requiring errors for every other situation, or you'll make versioning impossible. Allow extensions to co-exist, as tomorrow they might be core. (9) Don't require a single monolith if you can avoid it. If a consuming app only needs half of the functionality, don't make it implement everything. (10) If there's a problem with the API, don't work around it in the ontology, or vice versa. Solve model problems in the model, vocabulary problems in the vocabulary, and API problems in the API.
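To make principles 2 and 6 concrete, here is a tiny, hedged JSON-LD sketch with multilingual labels; the context and field names are illustrative, not Yale's actual model:

```python
# Minimal JSON-LD record with a language map for the label (principles 2 and 6).
import json

record = {
    "@context": {
        "name": {"@id": "http://schema.org/name", "@container": "@language"},
    },
    "@id": "https://example.org/object/1",   # placeholder identifier
    "name": {
        "en": "Printed map of New Haven",
        "fr": "Carte imprimée de New Haven",
    },
}
print(json.dumps(record, indent=2, ensure_ascii=False))
```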
a series of posts about knowledge graphs and ontology design patterns
·linkedin.com·
A series of posts about knowledge graphs and ontology design patterns
Orionbelt Ontology Builder
Orionbelt Ontology Builder
After a lively conversation with Juan Sequeda and others at Connected Data London 2025 about how to get started with ontologies at business clients without relying on yet another KG platform, I have now started to roll (eh, vibe coding 🤓) my own Ontology Builder as a simple Streamlit app! Have a look and collaborate if you like. https://lnkd.in/egGZJHiP
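For a sense of how little code such a tool needs, here is a hedged, minimal Streamlit sketch of the idea (this is not the Orionbelt code; rdflib and the placeholder namespace are my assumptions):

```python
# Minimal Streamlit ontology builder sketch: add OWL classes, download as Turtle.
import streamlit as st
from rdflib import OWL, RDF, RDFS, Graph, Literal, Namespace

EX = Namespace("http://example.org/onto#")  # placeholder namespace

if "graph" not in st.session_state:
    st.session_state.graph = Graph()
    st.session_state.graph.bind("ex", EX)

st.title("Minimal Ontology Builder")
label = st.text_input("New class label")
parent = st.text_input("Parent class label (optional)")

if st.button("Add class") and label:
    g = st.session_state.graph
    cls = EX[label.replace(" ", "_")]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(label)))
    if parent:
        g.add((cls, RDFS.subClassOf, EX[parent.replace(" ", "_")]))

st.download_button("Download Turtle",
                   st.session_state.graph.serialize(format="turtle"),
                   file_name="ontology.ttl")
```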
·linkedin.com·
Orionbelt Ontology Builder
Scalable Feature Learning on Huge Knowledge Graphs for Downstream Machine Learning
Scalable Feature Learning on Huge Knowledge Graphs for Downstream Machine Learning
✨ #NeurIPS2025 paper: Scalable Feature Learning on Huge Knowledge Graphs for Downstream Machine Learning. Combining contrastive learning and message passing markedly improves the features created by embedding graphs, and scales to huge graphs. It taught us a lot about graph feature learning 👇 Graphs can represent knowledge and have scaled to huge sizes (115M entities in Wikidata). How do we distill these into good downstream features, e.g. for machine learning? The challenge is to create feature vectors, and for this graph embeddings have been invaluable. Our paper shows that message passing is a great tool for building feature vectors from graphs. As opposed to contrastive learning, message passing helps embeddings represent the large-scale structure of the graph (it gives Arnoldi-type iterations). Our approach uses contrastive learning on a core subset of entities to capture the large-scale structure. Consistent with the knowledge-graph embedding literature, this step represents relations as operators on the embedding space. It also anchors the central entities. Knowledge graphs have long-tailed entity distributions, with many weakly connected entities on which contrastive learning is under-constrained. For these, we propagate embeddings via the relation operators, in a diffusion-like step, extrapolating from the central entities. To make the algorithm very efficient, we split the graph into overlapping, highly connected blocks that fit in GPU memory. Propagation is then simple in-memory iteration, and we can embed huge graphs on a single GPU. Splitting huge knowledge graphs into sub-parts is actually hard because of the mix of very highly connected nodes and a huge long tail that is hard to reach. We introduce a procedure that allows for overlap in the blocks, which greatly relaxes the difficulty. Our approach, SEPAL, combines these elements for feature learning on large knowledge graphs. It creates feature vectors that lead to better performance on downstream tasks, and it is more scalable. Larger knowledge graphs give feature vectors that provide downstream value. We also learned that performance on link prediction, the canonical task of knowledge-graph embedding, is not a good proxy for downstream utility. We believe this is because link prediction only needs local structure, unlike downstream tasks. The paper is fully reproducible, and we hope it will unleash more progress in knowledge-graph embedding. We'll present at #NeurIPS and #EurIPS.
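A conceptual sketch of the two-stage recipe described above, purely for intuition (this is not the authors' SEPAL implementation; the names and the averaging update are my simplifications):

```python
# Stage 1 (assumed done elsewhere): contrastive KG embedding of a well-connected core,
# yielding a vector per core entity and an operator per relation.
# Stage 2 (below): propagate embeddings to long-tail entities via relation operators.
import numpy as np

def propagate_to_tail(core_emb, tail_entities, triples, relation_ops, n_iters=5):
    """core_emb: {entity: vector}; relation_ops: {relation: fn(vector) -> vector};
    triples: (head, relation, tail) edges reaching the tail entities."""
    emb = dict(core_emb)
    for _ in range(n_iters):
        for entity in tail_entities:
            messages = [relation_ops[r](emb[h])
                        for (h, r, t) in triples if t == entity and h in emb]
            if messages:  # diffusion-like update: average incoming messages
                emb[entity] = np.mean(messages, axis=0)
    return emb
```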
Scalable Feature Learning on Huge Knowledge Graphs for Downstream Machine Learning
·linkedin.com·
Scalable Feature Learning on Huge Knowledge Graphs for Downstream Machine Learning
Graph-constrained Reasoning (GCR) is a novel framework that bridges structured knowledge in knowledge graphs (KG) with unstructured reasoning in LLMs
Graph-constrained Reasoning (GCR) is a novel framework that bridges structured knowledge in knowledge graphs (KG) with unstructured reasoning in LLMs
Graph-constrained Reasoning (GCR) is a novel framework that bridges structured knowledge in knowledge graphs (KG) with unstructured reasoning in LLMs.
Graph-constrained Reasoning (GCR) is a novel framework that bridges structured knowledge in knowledge graphs (KG) with unstructured reasoning in LLMs
·linkedin.com·
Graph-constrained Reasoning (GCR) is a novel framework that bridges structured knowledge in knowledge graphs (KG) with unstructured reasoning in LLMs
Ontology Evolution: When New Knowledge Challenges Old Categories
Ontology Evolution: When New Knowledge Challenges Old Categories
# ⚙️ Ontology Evolution: When New Knowledge Challenges Old Categories

When new entities don't fit existing assumptions, your ontology must **evolve logically**, not patch reactively.

---

## 🧩 The Challenge: Going Beyond Binary Thinking

Early models often start simple:

**Appliance → ElectricityConsumer OR NonElectricAppliance**

But what happens when a *WindTurbine* appears? It **produces** more electricity than it consumes.

---

## 🔧 Step 1: Extend the Energy Role Taxonomy

To reflect the real world:

```
EnergyRole
├─ ElectricityConsumer
├─ ElectricityProducer
├─ ElectricityProsumer (both)
└─ PassiveComponent (neither)
```

Now we can classify correctly:

- 🏠 HVAC → ElectricityConsumer
- ☀️ Solar Panel → ElectricityProducer
- 🔋 Battery → ElectricityProsumer
- 🪟 Window → PassiveComponent

This simple hierarchy shift restores consistency — every new entity has a logical home.

---

## 🧠 Step 2: Add Axioms for Automated Reasoning

Instead of manual assignment, let the reasoner decide. Example rule set:

- If `producesPower > consumesPower` → ElectricityProducer
- If `consumesPower > producesPower` → ElectricityConsumer
- If both are `> 0` → ElectricityProsumer
- If both are `= 0` → PassiveComponent

💡 **Outcome:** The system adapts dynamically to new data while preserving logical harmony.

---

## ⚡ Step 3: Support Evolution, Don't Break History

When expanding ontologies:

1. Preserve backward compatibility — old data must still make sense.
2. Maintain logical consistency — define disjoint and equivalent classes clearly.
3. Enable gradual migration — version and document each model improvement.
4. Use reasoning — automate classification from quantitative features.

Evolution isn't about tearing down — it's about **strengthening the structure**.

---

## 🌍 Real-World Analogy

Think of this like upgrading an energy grid: you don't replace the whole system — you extend the schema to accommodate solar panels, batteries, and wind farms while ensuring the old consumers still work. Ontology evolution works the same way — graceful adaptation ensures **stability + intelligence**.

---

## 💬 Key Takeaway

The *WindTurbine* example shows why **ontology evolution** is essential:

- Models must expand beyond rigid assumptions.
- Axiomatic rules make adaptation automatic.
- Logic-based flexibility sustains long-term scalability.

In short: **don't model just the present — model the principles of change.**

#Ontology #KnowledgeEngineering #KnowledgeGraphs #ExplainableAI #OntologyEvolution #NeuroSymbolicAI #AITransformation #KnowledgeManagement

👉 Follow me for Knowledge Management and Neuro-Symbolic AI daily nuggets.
👉 Join my group for more insights and community discussions: [Join the Group](https://lnkd.in/d9Z8-RQd)
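A minimal sketch of that rule set in plain Python, handy for testing before encoding it as OWL axioms for a reasoner (the prosumer check is evaluated first so the rules stay disjoint; that ordering is my assumption, not part of the original post):

```python
# Classify an entity's energy role from quantitative power features.
def energy_role(produces_kw: float, consumes_kw: float) -> str:
    if produces_kw > 0 and consumes_kw > 0:
        return "ElectricityProsumer"       # e.g. a battery
    if produces_kw > consumes_kw:
        return "ElectricityProducer"       # e.g. a solar panel
    if consumes_kw > produces_kw:
        return "ElectricityConsumer"       # e.g. an HVAC unit
    return "PassiveComponent"              # e.g. a window

print(energy_role(produces_kw=0.0, consumes_kw=3.5))   # ElectricityConsumer
print(energy_role(produces_kw=5.0, consumes_kw=0.0))   # ElectricityProducer
```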
Ontology Evolution: When New Knowledge Challenges Old Categories
·linkedin.com·
Ontology Evolution: When New Knowledge Challenges Old Categories
Introducing the ONTO-TRON-5000. A personal project that allows users to build their ontologies right from their data
Introducing the ONTO-TRON-5000. A personal project that allows users to build their ontologies right from their data
Introducing the ONTO-TRON-5000. A personal project that allows users to build their ontologies right from their data! The Onto-Tron is built with the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO) as semantic frameworks for classification. The program emphasizes the importance of design patterns as best practices for ontology documentation and combines them with machine readability. Simply upload your CSV, set the semantic types of your columns, and continuously build your ontology above. The program has three options for extraction: RDF, R2RML, and Mermaid Live Editor syntax, if you would like to further develop your design pattern there. Also included is a BFO/CCO ontology viewer, allowing you to explore the hierarchy and understand how terms are used, no Protégé required. This is the alpha version, and I would love feedback as there is a growing list of features to be added. Included in the README are instructions for manual installation and Docker. Enjoy! https://lnkd.in/ehrDwVrf
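To illustrate the general CSV-to-RDF step such a tool performs (this is not ONTO-TRON-5000's code; the file name, namespace, and column-to-class mapping are invented):

```python
# Hedged sketch: map user-chosen semantic types onto CSV columns and emit RDF.
import csv
from rdflib import RDF, Graph, Literal, Namespace

EX = Namespace("http://example.org/data/")   # placeholder namespace
column_types = {"building": EX.Building, "sensor": EX.Sensor}  # user-selected types

g = Graph()
with open("assets.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        for column, cls in column_types.items():
            node = EX[f"{column}_{i}"]
            g.add((node, RDF.type, cls))
            g.add((node, EX.value, Literal(row[column])))

print(g.serialize(format="turtle"))
```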
Introducing the ONTO-TRON-5000. A personal project that allows users to build their ontologies right from their data
·linkedin.com·
Introducing the ONTO-TRON-5000. A personal project that allows users to build their ontologies right from their data