GraphNews

4343 bookmarks
Cellosaurus is now available in RDF format
Cellosaurus is now available in RDF format, with a triple store that supports SPARQL queries. If this sounds a bit abstract or unfamiliar…

1) RDF stands for Resource Description Framework. Think of RDF as a way to express knowledge using triples: Subject – Predicate – Object. Example: HeLa (subject) – is_transformed_by (predicate) – Human papillomavirus type 18 (object). These triples are little facts that can be connected together to form a graph of knowledge.

2) A triple store is a database designed specifically to store and retrieve these RDF triples. Unlike traditional databases (tables, rows), triple stores are optimized for linked data. They allow you to navigate connections between biological entities, such as species, tissues, genes, and diseases.

3) SPARQL is a query language for RDF data. It lets you ask complex questions, such as:
- Find all cell lines with a *RAS (HRAS, NRAS, KRAS) mutation in p.Gly12
- Find all cell lines from animals belonging to the order Carnivora

More specifically, we now offer six new options from the Tool - API submenu:

1) SPARQL Editor (https://lnkd.in/eF2QMsYR). The SPARQL Editor is a tool designed to assist users in developing their SPARQL queries.

2) SPARQL Service (https://lnkd.in/eZ-iN7_e). The SPARQL service is the web service that accepts SPARQL queries over HTTP and returns results from the RDF dataset.

3) Cellosaurus Ontology (https://lnkd.in/eX5ExjMe). An RDF ontology is a formal, structured representation of knowledge. It explicitly defines domain-specific concepts - such as classes and properties - enabling data to be described with meaningful semantics that both humans and machines can interpret. The Cellosaurus ontology is expressed in OWL.

4) Cellosaurus Concept Hopper (https://lnkd.in/e7CH5nj4). The Concept Hopper is a tool that provides an alternative view of the Cellosaurus ontology. It focuses on a single concept at a time - either a class or a property - and shows how that concept is linked to others within the ontology, as well as how it appears in the data.

5) Cellosaurus dereferencing service (https://lnkd.in/eSATMhGb). The RDF dereferencing service is the mechanism that, given a URI, returns an RDF description of the resource identified by that URI, enabling clients to retrieve structured, machine-readable data about the resource from the web in different formats.

6) Cellosaurus RDF files download (https://lnkd.in/emuEYnMD). This allows you to download the Cellosaurus RDF files in Turtle (ttl) format.
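The triple model and SPARQL-style pattern matching described above can be sketched in a few lines of Python. This is an illustrative toy, not the Cellosaurus service itself: the HeLa/HPV18 triple comes from the example above, but the predicate names and the other facts are simplified stand-ins.

```python
# A toy in-memory "triple store": facts stored as (subject, predicate, object) tuples.
TRIPLES = {
    ("HeLa", "is_transformed_by", "Human papillomavirus type 18"),
    ("HeLa", "derived_from_species", "Homo sapiens"),
    ("CHO-K1", "derived_from_species", "Cricetulus griseus"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None behaves like a SPARQL variable."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to the SPARQL pattern:  ?cellLine derived_from_species "Homo sapiens"
print(match(p="derived_from_species", o="Homo sapiens"))
```

A real triple store does the same kind of pattern matching, but with indexes over billions of triples and full SPARQL (joins, filters, property paths) on top.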
·linkedin.com·
How do you explain the difference between Semantic Layers and Ontologies?
How do you explain the difference between Semantic Layers and Ontologies? That’s the discussion I had yesterday with the CTO of a very large and well-known organization.

📊 Semantic Layers Today: The First Stepping Stone
• The semantic layer is commonly used in data analytics/BI reporting, tied to modeling fact/dimension tables and defining measures
• Data lakehouse/data cloud platforms, transformation tools, BI tools, and semantic layer vendors exemplify this usage
• Provide descriptive metadata: definitions, calculations (e.g., revenue formulas), and human-readable labels, to enhance the schema
• Serve as a first step toward better data understanding and governance
• Help align glossary terms with tables and columns, improving metadata quality and documentation
• Typically proprietary (even if expressed in YAML) and not broadly interoperable
• Enable “chat with your data” experiences over the warehouse

When organizations need to integrate diverse data sources beyond the data warehouse/lakehouse model, they hit the limits of fact/dimension modeling. This is where ontologies and knowledge graphs come in.

🌐 Ontologies & Knowledge Graphs: Scaling Beyond BI
• Represent complex relationships, hierarchies, synonyms, and taxonomies that go beyond rigid table structures
• Knowledge graphs bridge the gap from technical metadata to business metadata and ultimately to core business concepts
• Enable the integration of all types of data (structured, semi-structured, unstructured), because a graph is a common model
• Through open web standards such as RDF, OWL, and SPARQL you get interoperability without lock-in

Strategic Role in the Enterprise
• Knowledge graphs enable the creation of an enterprise brain, connecting disparate data and semantics across all systems inside an organization
• Represent the context and meaning that LLMs lack. Our research has proven this.
• They lay the groundwork for digital twins and what-if scenario modeling, powering advanced analytics and decision-making.

💡 Key Takeaway
The semantic layer is a first step, especially for BI use cases. Most organizations will start with them. This will eventually create semantic silos that are not inherently interoperable. Over time, organizations realize they need more than just local semantics for BI: they want to model real-world business assets and relationships across systems, and to define semantics once and reuse them across tools and platforms. This requires semantic interoperability, so the meaning behind data is not tied to one system. Large-scale enterprises operate across multiple systems, so interoperability is not optional; it’s essential. To truly integrate and reason over enterprise data, you need ontologies and knowledge graphs with open standards. They form the foundation for enterprise-wide semantic reuse, providing the flexibility, connectivity, and context required for next-generation analytics, AI, and enterprise intelligence. | 102 comments on LinkedIn
·linkedin.com·
A New Map for Product Docs
AI and knowledge graphs will transform product documentation, especially for complex, networked systems that require configuration…
·medium.com·
the Ontology Pipeline
It’s been a while since I have posted about the Ontology Pipeline. With parts borrowed from library science, the Ontology Pipeline is a simple framework for building rich knowledge infrastructures. Librarians are professional stewards of knowledge and have valuable methodologies for building information and knowledge systems for human and machine information retrieval tasks. While LinkedIn conversations seem to be wrestling with defining “what is the semantic layer”, we are failing to see the root of semantics. Semantics matter because knowledge structures, not just layers, define semantics. Semantics are more than labels or concept maps. Semantics lend structure and meaning through relationships, disambiguation of concepts, definitions, and context. The Ontology Pipeline is an iterative build process focused on ensuring data hygiene while minding domain data, information, and knowledge. I share this framework because it is how I have successfully built information and knowledge ecosystems, with or without AI. #taxonomy #ontology #metadata #knowledgegraph #ia #ai Some friends focused on building knowledge infrastructures: Andrew Padilla Nagim Ashufta Ole Olesen-Bagneux Jérémy Ravenel Paco Nathan Adriano Vlad-Starrabba Andrea Gioia | 10 comments on LinkedIn
·linkedin.com·
Alice enters the magical, branchy world of Graphs and Graph Neural Networks
The first draft 'G' chapter of the geometric deep learning book is live! 🚀 Alice enters the magical, branchy world of Graphs and Graph Neural Networks 🕸️ (Large Language Models are there too!) I've spent 7+ years studying, researching & talking about graphs -- this text is my best attempt at conveying everything I've learnt 💎 You may read this chapter in the usual place (link in comments!) Any and all feedback / thoughts / questions on the content, and/or words of encouragement for finishing this book (pretty please! 😇) are warmly welcomed! Michael Bronstein Joan Bruna Taco Cohen | 18 comments on LinkedIn
·linkedin.com·
Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine
In this position paper, "Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine", my colleagues at L3S Research Center and TIB – Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek, led by Maria-Esther Vidal, have nicely laid out some research challenges on the way to interpretable hybrid AI systems in medicine. However, I think the conceptual framework is broadly applicable well beyond medicine. For example, my former colleagues and PhD students at eccenca are working on operationalizing Neuro-Symbolic AI for Enterprise Knowledge Management with eccenca's Corporate Memory.

The paper outlines a compelling architecture for combining sub-symbolic models (e.g., deep learning) with symbolic reasoning systems to enable AI that is interpretable, robust, and aligned with human values. eccenca implements these principles at scale through its neuro-symbolic Enterprise Knowledge Graph platform, Corporate Memory, in real-world industrial settings:

1. Symbolic Foundation via Semantic Web Standards - Corporate Memory is grounded in W3C standards (RDF, RDFS, OWL, SHACL, SPARQL), enabling formal knowledge representation, inferencing, and constraint validation. This allows domain ontologies, business rules, and data governance policies to be encoded in a machine-interpretable and human-verifiable manner.

2. Integration of Sub-symbolic Components - It integrates LLMs and ML models for tasks such as schema matching, natural language interpretation, entity resolution, and ontology population. These are linked to the symbolic layer via mappings and annotations, ensuring traceability and explainability.

3. Neuro-Symbolic Interfaces for Hybrid Reasoning - Hybrid workflows where symbolic constraints (e.g., SHACL shapes) guide LLM-based data enrichment. LLMs suggest schema alignments, which are verified against ontological axioms. Graph embeddings and path-based querying power semantic search and similarity.

4. Human-in-the-loop Interactions - Domain experts interact through low-code interfaces and semantic UIs that allow inspection, validation, and refinement of both the symbolic and neural outputs, promoting human oversight and continuous improvement.

Such an approach can power industrial applications, e.g. digital thread integration in manufacturing, compliance automation in pharma and finance, and, in general, cross-domain interoperability in data mesh architectures. Corporate Memory is a practical instantiation of neuro-symbolic AI that meets industrial-grade requirements for governance, scalability, and explainability – key tenets of Human-Centric AI. Check it out here: https://lnkd.in/evyarUsR #NeuroSymbolicAI #HumanCentricAI #KnowledgeGraphs #EnterpriseArchitecture #ExplainableAI #SemanticWeb #LinkedData #LLM #eccenca #CorporateMemory #OntologyDrivenAI #AI4Industry
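The hybrid-reasoning idea in point 3, where symbolic constraints vet sub-symbolic suggestions, can be sketched roughly as follows. This is a minimal Python illustration with hypothetical facts and a hand-written range check standing in for SHACL validation; it is not eccenca's actual implementation.

```python
# Hypothetical candidate facts proposed by a sub-symbolic component (e.g. an LLM).
candidates = [
    {"subject": "plant:7", "predicate": "locatedIn", "object": "city:Leipzig"},
    {"subject": "plant:7", "predicate": "locatedIn", "object": "color:blue"},  # nonsense
]

# Symbolic layer: a tiny "ontology" declaring the allowed object type (range)
# of each predicate, playing the role a SHACL shape would play in practice.
RANGES = {"locatedIn": "city"}

def satisfies_constraints(fact):
    """Accept a fact only if its object's type prefix matches the predicate's declared range."""
    expected = RANGES.get(fact["predicate"])
    actual = fact["object"].split(":", 1)[0]
    return expected is not None and actual == expected

# Only constraint-conforming suggestions enter the knowledge graph.
accepted = [f for f in candidates if satisfies_constraints(f)]
```

The point of the pattern: the neural component proposes freely, the symbolic component decides, and every rejection is explainable by pointing at the violated constraint.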
·linkedin.com·
The Great Divide: Why Ontology and Data Architecture Teams Are Solving the Same Problems with Different Languages | LinkedIn
In enterprise organisations today, two important disciplines are working in parallel universes, tackling nearly identical challenges whilst speaking completely different languages. Ontology architects and data architects are both wrestling with ETL processes, data modelling, transformations, referen…
·linkedin.com·
Everyone is talking about Semantic Layers, but what is a semantic layer?
Everyone is talking about Semantic Layers, but what is a semantic layer? Some of the latest hot topics to get more out of your agents discuss knowledge graphs, vector search, semantics, and agent frameworks. A new and important area that encompasses the above is the notion that we need a stronger semantic layer on top of our data to provide structure, definitions, discoverability, and more for our agents (human or other). While a lot of these concepts are not new, they have had to evolve to be relevant in today's world, and this means there is a fair bit of confusion surrounding this whole area. Depending on your background (AI, ML, Library Sciences) and focus (LLM-first or Knowledge Graph), you will likely emphasize different aspects as being key to a semantic layer. I come primarily from an AI/ML/LLM-first world, but have built and utilized knowledge graphs for most of my career. Given my background, I of course have my perspective on this; I tend to break things down to first principles and I like to simplify. Given this preamble, here is what I think makes a semantic layer.

WHAT MAKES A SEMANTIC LAYER:

🟤 Scope
🟢 You should not create a semantic layer that covers everything in the world, nor even everything in your company. You can tie semantic layers together, but focus on the job to be done.

🟤 You will need to have semantics, obviously. There are two particular types of semantics that are important to include.
🟢 Vectors: These encapsulate semantics in a high-dimensional space so you can easily find similar concepts in your data.
🟢 Ontology (including Taxonomy): Explicitly defines the meaning of your data in a structured and fact-based way, including appropriate vocabulary. This complements vectors superbly.

🟤 You need to respect the data and meet it where it is.
🟢 Structured data: For most companies, their data resides in data lakes of some sort, and most of it is structured. There is power in this structure, but also noise. The semantic layer needs to understand this and map it into the semantics above.
🟢 Unstructured data: Most data is unstructured and resides all over the place. Often it is stored in object stores, or in databases as part of structured tables. However, there is a lot of information in the unstructured data that the semantic layer needs to map -- and for that you need extraction, resolution, and a number of other techniques based on the modality of the data.

🟤 You need to index the data
🟢 You will need to index all of this to make your data discoverable and retrievable. And this needs to scale.
🟢 You need tight integration between vectors, ontology/knowledge graph, and keywords to make this seamless.

These are four key components that are all needed for you to have a true semantic layer. Thoughts? #knowledgegraph, #semanticlayer, #agent, #rag | 13 comments on LinkedIn
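The interplay of the two kinds of semantics described above (vectors for similarity, ontology for explicit meaning) can be sketched in Python. The tiny two-dimensional "embeddings" and the synonym table are hand-made for illustration; a real semantic layer would use learned embeddings and a proper ontology.

```python
import math

# Toy "vector" semantics: hand-made 2-D embeddings of known business concepts.
EMBEDDINGS = {
    "quarterly revenue": [0.9, 0.1],
    "net sales":         [0.85, 0.2],
    "employee count":    [0.1, 0.9],
}

# Toy "ontology" semantics: explicit, fact-based synonym links from a controlled vocabulary.
SYNONYMS = {"turnover": "net sales"}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def lookup(term, query_vec):
    # Ontology first: resolve the term if the vocabulary explicitly defines it...
    if term in SYNONYMS:
        return SYNONYMS[term]
    # ...otherwise fall back to vector similarity over the known concepts.
    return max(EMBEDDINGS, key=lambda k: cosine(EMBEDDINGS[k], query_vec))

print(lookup("turnover", [0.8, 0.3]))  # resolved by the explicit synonym link
print(lookup("income", [0.8, 0.3]))    # resolved by nearest-vector fallback
```

The design point: explicit ontology facts give precise, auditable answers where they exist, and vectors fill the gaps for terms the vocabulary has never seen.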
·linkedin.com·
Introducing RAG-Anything: All-in-One RAG System
🚀 Introducing RAG-Anything: All-in-One RAG System! ⚡ LightRAG + Multi-Modal = RAG-Anything
🔗 Get started today: https://lnkd.in/gF3D8rnc
📦 Install: pip install raganything

No more switching between multiple tools or losing critical visual information! With RAG-Anything, you get ONE unified solution that understands your documents as completely as you do ✨

🌟 What makes RAG-Anything innovative:
- 🔄 End-to-End Multimodal Pipeline: Complete workflow from document ingestion and parsing to intelligent multimodal query answering.
- 📄 Universal Document Support: Seamless processing of PDFs, Office documents (DOC/DOCX/PPT/PPTX/XLS/XLSX), images, and diverse file formats.
- 🧠 Specialized Content Analysis: Dedicated processors for images, tables, mathematical equations, and heterogeneous content types.
- 🔗 Multimodal Knowledge Graph: Automatic entity extraction and cross-modal relationship discovery for enhanced understanding.
- ⚡ Adaptive Processing Modes: Flexible MinerU-based parsing or direct multimodal content injection workflows.
- 🎯 Hybrid Intelligent Retrieval: Advanced search capabilities spanning textual and multimodal content with contextual understanding.

💡 Well-suited for:
- 🎓 Academic research with complex documents
- 📋 Technical documentation processing
- 💼 Financial report analysis
- 🏢 Enterprise knowledge management
·linkedin.com·
Why AI Hallucinates: The Shallow Semantics Problem | LinkedIn
By J Bittner. Part 1 in our 5-part series: From Hallucination to Reasoning—The Case for Ontology-Driven AI. Welcome to “Semantically Speaking”—a new series on what makes AI systems genuinely trustworthy, explainable, and future-proof. This is Part 1 in a 5-part journey, exploring why so many AI system…
·linkedin.com·
Towards Multi-modal Graph Large Language Model
Multi-modal graphs are everywhere in the digital world. Yet the tools used to understand them haven't evolved as much as one would expect. What if the same model could handle your social network analysis, molecular discovery, AND urban planning tasks? A new paper from Tsinghua University proposes Multi-modal Graph Large Language Models (MG-LLM) - a paradigm shift in how we process complex interconnected data that combines text, images, audio, and structured relationships. Think of it as ChatGPT for graphs, but, metaphorically speaking, with eyes, ears, and structural understanding. Their key insight? Treating all graph tasks as generative problems. Instead of training separate models for node classification, link prediction, or graph reasoning, MG-LLM frames everything as transforming one multi-modal graph into another. This unified approach means the same model that predicts protein interactions could also analyze social media networks or urban traffic patterns. What makes this particularly exciting is the vision for natural language interaction with graph data. Imagine querying complex molecular structures or editing knowledge graphs using plain English, without learning specialized query languages. The challenges remain substantial - from handling the multi-granularity of data (pixels to full images) to managing multi-scale tasks (entire graph input, single node output). But if successful, this could fundamentally change the level of graph-based insights across industries that have barely scratched the surface of AI adoption. ↓ 𝐖𝐚𝐧𝐭 𝐭𝐨 𝐤𝐞𝐞𝐩 𝐮𝐩? Join my newsletter with 50k+ readers and be the first to learn about the latest AI research: llmwatch.com 💡
·linkedin.com·
Gartner 2025 AI Hype Cycle: The focus is shifting from hype to foundational innovations
Gartner 2025 AI Hype Cycle: the focus is shifting from hype to foundational innovations. Knowledge Graphs are a key part of the shift, positioned on the Slope of Enlightenment.

By Haritha Khandabattu and Birgi Tamersoy: AI investment remains strong, but focus is shifting from GenAI hype to foundational innovations like AI-ready data, AI agents, AI engineering, and ModelOps. This research helps leaders prioritize high-impact, emerging AI techniques while navigating regulatory complexity and operational scaling. As Gartner notes, generative AI capabilities are advancing at a rapid pace, and the tools that will become available over the next 2-5 years will be transformative. The rapid evolution of these technologies and techniques continues unabated, as does the corresponding hype, making this tumultuous landscape difficult to navigate. These conditions mean GenAI continues to be a top priority for the C-suite.

Weaving in another foundational concept, Systems of Intelligence, as coined by Geoffrey Moore and referenced by David Vellante and George Gilbert: Systems of Intelligence are the linchpin of modern enterprise architecture because [AI] agents are only as smart as the state of the business represented in the knowledge graph. If a platform controls that graph, it becomes the default policymaker for “why is this happening, what comes next, and what should we do?” For enterprises, there is only one feasible answer to the “who controls the graph” question: you should. To do that, start working on your enterprise knowledge graph today, if you haven't already.

And if you are looking for the place to learn, network, and share experience and knowledge, look no further 👇 Connected Data London 2025 has been announced! 20-21 November, Leonardo Royal Hotel London Tower Bridge. Join us for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech
🎟️ Ticket sales are open. Benefit from early bird prices with discounts up to 30%. 2025.connected-data.london
📋 Call for submissions is open. Check topics of interest, submission process, and evaluation criteria: https://lnkd.in/dhbAeYtq
📺 Sponsorship opportunities are available. Maximize your exposure with early onboarding. Contact us at info@connected-data.london for more.
·linkedin.com·