GraphNews

3943 bookmarks
On the benefits of using ontologies for data integration
This is the amazing Prof. Maurizio Lenzerini on the benefits of using ontologies for data integration, captured while presenting the long journey of knowledge…
·linkedin.com·
Let Your Graph Do the Talking: Encoding Structured Data for LLMs
Let your data speak! Inject structured data directly with GraphTokens and supercharge your LLM's reasoning abilities. Our exciting research is…
·linkedin.com·
Ontologies and Knowledge Graphs offer a way to connect embedding vectors to structured knowledge
Ontologies and Knowledge Graphs offer a way to connect embedding vectors to structured knowledge, enhancing their meaning and explainability. Let's delve into…
·linkedin.com·
Knowledge graphs for Information Sherpas | LinkedIn
Information developers, technical writers, and knowledge management professionals face enormous challenges that are often not clear to their "customers", and not even to their managers. In a nutshell, they put significant effort into organizing and managing enormous collections of information – in t…
·linkedin.com·
Ontologies are the backbone of the Semantic Web, bridging the gap between human and machine understanding
Ontologies are the backbone of the Semantic Web, bridging the gap between human and machine understanding. They define the concepts and relationships that…
·linkedin.com·
An integrative dynamical perspective for graph theory and the study of complex networks
Built upon the shoulders of graph theory, the field of complex networks has become a central tool for studying real systems across various fields of research. Represented as graphs, different systems can be studied using the same analysis methods, which allows for their comparison. Here, we challenge the widespread idea that graph theory is a universal analysis tool, uniformly applicable to any kind of network data. Instead, we show that many classical graph metrics (including degree, clustering coefficient and geodesic distance) arise from a common hidden propagation model: the discrete cascade. From this perspective, graph metrics are no longer regarded as combinatorial measures of the graph, but as spatio-temporal properties of the network dynamics unfolded at different temporal scales. Once graph theory is seen as a model-based (and not a purely data-driven) analysis tool, we can freely or intentionally replace the discrete cascade by other canonical propagation models and define new network metrics. This opens the opportunity to design, explicitly and transparently, dedicated analyses for different types of real networks by choosing a propagation model that matches their individual constraints. In this way, we take the stand that network topology cannot always be abstracted independently from network dynamics, but shall be studied jointly, which is key for the interpretability of the analyses. The model-based perspective proposed here serves to integrate into a common context both the classical graph analysis and the more recent network metrics defined in the literature which were, directly or indirectly, inspired by propagation phenomena on networks.
·arxiv.org·
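To make the paper's central claim concrete: on an unweighted, undirected graph, a discrete cascade seeded at a single node activates exactly that node's neighbours after one step, so degree falls out as a property of the unfolding dynamics rather than a purely combinatorial count. The sketch below is an illustrative reading of the abstract, not code from the paper.

```python
import numpy as np

# Adjacency matrix of a small undirected example graph (no self-loops).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
])

def cascade_frontiers(A, seed, steps):
    """Discrete cascade: every active node activates all of its neighbours."""
    active = np.zeros(A.shape[0], dtype=bool)
    active[seed] = True
    frontiers = []
    for _ in range(steps):
        reachable = (A @ active) > 0    # nodes with at least one active neighbour
        newly = reachable & ~active     # ...that were not active before this step
        frontiers.append(int(newly.sum()))
        active |= newly
    return frontiers

for i in range(A.shape[0]):
    first_frontier = cascade_frontiers(A, seed=i, steps=1)[0]
    assert first_frontier == int(A[i].sum())  # degree recovered from the dynamics
    print(f"node {i}: degree = {A[i].sum()}, cascade frontier at t = 1: {first_frontier}")
```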
Deep Graph Library
DGL 2.0 was released featuring GraphBolt, a new tool for streaming data loading and sampling that offers around 30% speedups in node classification and up to 400% in link prediction 🚀 Besides that, the new version includes utilities for building graph transformers and a handful of new datasets: LRGB and a recent suite of heterophilic datasets.
·dgl.ai·
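For context, the node-classification task those speedups refer to looks roughly like the minimal DGL sketch below. It uses only the long-standing dgl.graph/GraphConv API with toy data; it does not show the new GraphBolt data-loading pipeline mentioned in the release notes.

```python
import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class GCN(nn.Module):
    """Two-layer graph convolutional network producing per-node class logits."""
    def __init__(self, in_feats, hidden_feats, num_classes):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden_feats)
        self.conv2 = GraphConv(hidden_feats, num_classes)

    def forward(self, g, feat):
        h = F.relu(self.conv1(g, feat))
        return self.conv2(g, h)

# A toy 4-node graph; self-loops avoid zero-in-degree issues in GraphConv.
src, dst = torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])
g = dgl.add_self_loop(dgl.graph((src, dst), num_nodes=4))

feat = torch.randn(4, 8)             # random node features, 8 dimensions
logits = GCN(8, 16, 3)(g, feat)      # one logit vector per node
print(logits.shape)                  # torch.Size([4, 3])
```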
Using the Shapes Constraint Language for modelling regulatory requirements
Ontologies are traditionally expressed in the Web Ontology Language (OWL), which provides a syntax for expressing taxonomies with axioms regulating class membership. The semantics of OWL, based on Description Logic (DL), allows for the use of automated reasoning to check the consistency of ontologies, perform classification, and answer DL queries. However, the open world assumption of OWL, along with limitations in its expressiveness, makes OWL less suitable for modelling the rules and regulations used in public administration. In such cases, it is desirable to have closed world semantics and a rule-based engine to check compliance with regulations. In this paper we describe and discuss data model management using the Shapes Constraint Language (SHACL) for concept modelling of concrete requirements in regulation documents within the public sector. We show how complex regulations, often containing a number of alternative requirements, can be expressed as constraints, and the utility of SHACL engines in verification of instance data against the SHACL model. We discuss the benefits of modelling with SHACL, compared to OWL, and demonstrate the maintainability of the SHACL model by domain experts without prior knowledge of ontology management.
·arxiv.org·
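As a hedged illustration of the approach described in the abstract, the sketch below encodes a regulation with two alternative requirements as a SHACL sh:or constraint and checks instance data against it with rdflib and pySHACL. The ex: vocabulary (PermitApplication, nationalId, passportNumber) is invented for the example and is not taken from the paper.

```python
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .

# Regulation with alternative requirements: an application must carry
# either a national ID or a passport number.
ex:PermitApplicationShape a sh:NodeShape ;
    sh:targetClass ex:PermitApplication ;
    sh:or (
        [ sh:path ex:nationalId ;     sh:minCount 1 ]
        [ sh:path ex:passportNumber ; sh:minCount 1 ]
    ) .
"""

data_ttl = """
@prefix ex: <http://example.org/> .

ex:app1 a ex:PermitApplication ; ex:passportNumber "P1234567" .
ex:app2 a ex:PermitApplication .   # satisfies neither alternative
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)   # False: ex:app2 violates the regulation
print(report)
```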
Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment
Entity alignment, which is a prerequisite for creating a more comprehensive Knowledge Graph (KG), involves pinpointing equivalent entities across disparate KGs. Contemporary methods for entity alignment have predominantly utilized knowledge embedding models to procure entity embeddings that encapsulate various similarities: structural, relational, and attributive. These embeddings are then integrated through attention-based information fusion mechanisms. Despite this progress, effectively harnessing multifaceted information remains challenging due to inherent heterogeneity. Moreover, while Large Language Models (LLMs) have exhibited exceptional performance across diverse downstream tasks by implicitly capturing entity semantics, this implicit knowledge has yet to be exploited for entity alignment. In this study, we propose a Large Language Model-enhanced Entity Alignment framework (LLMEA), integrating structural knowledge from KGs with semantic knowledge from LLMs to enhance entity alignment. Specifically, LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across KGs and edit distances to a virtual equivalent entity. It then engages an LLM iteratively, posing multiple multiple-choice questions to draw upon the LLM's inference capability. The final prediction of the equivalent entity is derived from the LLM's output. Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models. Additional ablation studies underscore the efficacy of our proposed framework.
·arxiv.org·
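A rough sketch of the candidate-selection step the abstract describes: combine embedding similarity with edit distance, then format the top candidates as a multiple-choice question for an LLM. The entities, the random stand-in embeddings, the 0.05 score weighting and the prompt wording are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (ca != cb))  # substitution (or match)
            prev = cur
    return dp[-1]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical entities from two KGs; real embeddings would come from a
# knowledge-graph embedding model, random vectors stand in for them here.
rng = np.random.default_rng(0)
query, q_emb = "Leonardo da Vinci", rng.normal(size=8)
kg2 = {name: rng.normal(size=8) for name in
       ["Leonardo di ser Piero da Vinci", "Leonardo DiCaprio", "Vincent van Gogh"]}

# Combine embedding similarity with (negated) edit distance.
scores = {cand: cosine(q_emb, emb) - 0.05 * levenshtein(query.lower(), cand.lower())
          for cand, emb in kg2.items()}
candidates = sorted(scores, key=scores.get, reverse=True)

# The ranked candidates are then posed to an LLM as a multiple-choice question.
prompt = (f"Which of the following entities is equivalent to '{query}'?\n"
          + "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(candidates))
          + "\nAnswer with a single letter.")
print(prompt)
```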
The Intersection of Graphs and Language Models
Large language models (LLMs) have rapidly advanced, displaying impressive abilities in comprehending…
·linkedin.com·
LangGraph: Multi-Agent Workflows
Last week we highlighted LangGraph, a new package (available in both Python and JS) that better enables the creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. As part of the launch, we highlighted two simple runtimes. This post covers a second set of use cases for LangGraph: multi-agent workflows. In this blog we will cover: what "multi-agent" means, why "multi-agent" workflows are interesting, three concrete examples of using LangGraph for multi-agent workflows, two examples of third-party applications built on top of LangGraph using multi-agent workflows (GPT-Newspaper and CrewAI), and a comparison to other frameworks (Autogen and CrewAI).
·blog.langchain.dev·
🦜🕸️LangGraph | 🦜️🔗 Langchain
⚡ Building language agents as graphs ⚡
LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam, and the current interface it exposes is inspired by NetworkX. The main use is adding cycles to your LLM application. Crucially, this is NOT a DAG framework; if you want to build a DAG, you should just use the LangChain Expression Language. Cycles are important for agent-like behaviors, where you call an LLM in a loop, asking it what action to take next.
·python.langchain.com·
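A minimal sketch of that cycle-centric design: a single agent node plus a conditional edge that loops back until a stop condition holds. The state, node logic and stopping rule are stand-ins for a real LLM call, and it assumes a recent langgraph release exposing StateGraph and END.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    steps: int          # how many times the agent node has run

def agent(state: AgentState) -> AgentState:
    # In a real agent this is where you would call an LLM to pick an action.
    return {"steps": state["steps"] + 1}

def should_continue(state: AgentState) -> str:
    # The conditional edge implements the cycle: loop until a stop condition.
    return "continue" if state["steps"] < 3 else "end"

workflow = StateGraph(AgentState)
workflow.add_node("agent", agent)
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue,
                               {"continue": "agent", "end": END})

app = workflow.compile()
print(app.invoke({"steps": 0}))   # {'steps': 3} after three trips around the cycle
```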
Architecting Solid Foundations for Scalable Knowledge Graphs | LinkedIn
Whether we remember them or not, we rely directly on unexamined and often very murky foundational assumptions that permeate everything we do. These assumptions are formulated using keystone concepts – core concepts that are so crucial that mere dictionary-style definitions are not enough.
·linkedin.com·