GraphNews

3951 bookmarks
Using the Shapes Constraint Language for modelling regulatory requirements
Ontologies are traditionally expressed in the Web Ontology Language (OWL), which provides a syntax for expressing taxonomies with axioms regulating class membership. The semantics of OWL, based on Description Logic (DL), allows automated reasoning to check the consistency of ontologies, perform classification, and answer DL queries. However, the open-world assumption of OWL, along with limitations in its expressiveness, makes OWL less suitable for modelling the rules and regulations used in public administration. In such cases, it is desirable to have closed-world semantics and a rule-based engine to check compliance with regulations. In this paper we describe and discuss data model management using the Shapes Constraint Language (SHACL) for concept modelling of concrete requirements in regulation documents within the public sector. We show how complex regulations, which often contain a number of alternative requirements, can be expressed as constraints, and we demonstrate the utility of SHACL engines in verifying instance data against the SHACL model. We discuss the benefits of modelling with SHACL compared to OWL, and demonstrate the maintainability of the SHACL model by domain experts without prior knowledge of ontology management.
·arxiv.org·
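A minimal sketch of the validation workflow the abstract describes, using pySHACL as the SHACL engine. The regulation ("an application must cite either a permit number or an exemption reference") and the ex: vocabulary are invented for illustration; they are not taken from the paper. Assumes `rdflib` and `pyshacl` are installed.

```python
# One regulatory rule with two alternative requirements, expressed via sh:or
# and checked with a SHACL engine. All names here are illustrative.
from rdflib import Graph
from pyshacl import validate

SHAPES_TTL = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/reg#> .

# "An application must cite either a permit number or an exemption reference."
ex:ApplicationShape a sh:NodeShape ;
    sh:targetClass ex:Application ;
    sh:or (
        [ sh:property [ sh:path ex:permitNumber ; sh:minCount 1 ] ]
        [ sh:property [ sh:path ex:exemptionReference ; sh:minCount 1 ] ]
    ) .
"""

DATA_TTL = """
@prefix ex: <http://example.org/reg#> .

ex:app1 a ex:Application ; ex:permitNumber "P-2024-001" .   # conforms
ex:app2 a ex:Application .                                  # violates the rule
"""

shapes = Graph().parse(data=SHAPES_TTL, format="turtle")
data = Graph().parse(data=DATA_TTL, format="turtle")

# Closed-world check: the engine reports every focus node that satisfies
# neither alternative, instead of assuming the missing data might exist.
conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)      # False: ex:app2 fails both branches of the sh:or
print(report_text)
```

The sh:or list is what carries the "alternative requirements" pattern: a focus node conforms if it satisfies any one branch, and the validation report names every node that satisfies none.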
Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment
Entity alignment, which is a prerequisite for creating a more comprehensive Knowledge Graph (KG), involves pinpointing equivalent entities across disparate KGs. Contemporary methods for entity alignment have predominantly utilized knowledge embedding models to procure entity embeddings that encapsulate various similarities: structural, relational, and attributive. These embeddings are then integrated through attention-based information fusion mechanisms. Despite this progress, effectively harnessing multifaceted information remains challenging due to inherent heterogeneity. Moreover, while Large Language Models (LLMs) have exhibited exceptional performance across diverse downstream tasks by implicitly capturing entity semantics, this implicit knowledge has yet to be exploited for entity alignment. In this study, we propose a Large Language Model-enhanced Entity Alignment framework (LLMEA), integrating structural knowledge from KGs with semantic knowledge from LLMs to enhance entity alignment. Specifically, LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across KGs and edit distances to a virtual equivalent entity. It then engages an LLM iteratively, posing multiple multi-choice questions to draw upon the LLM's inference capability. The final prediction of the equivalent entity is derived from the LLM's output. Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models. Additional ablation studies underscore the efficacy of our proposed framework.
·arxiv.org·
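A hedged sketch of the candidate-selection step the LLMEA abstract outlines: rank candidates by a mix of embedding similarity and name edit distance, then phrase the top-k as a multi-choice question for an LLM. The equal 0.5/0.5 weighting, the difflib-based similarity, and the prompt wording are my assumptions, not the authors' choices.

```python
# Toy candidate ranking + multi-choice prompt, in the spirit of LLMEA.
import difflib
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(query_name, query_emb, target_entities, k=4):
    """target_entities: list of (name, embedding) pairs from the other KG."""
    scored = []
    for name, emb in target_entities:
        emb_sim = cosine(query_emb, emb)
        # difflib's ratio stands in for a normalized edit-distance similarity
        name_sim = difflib.SequenceMatcher(None, query_name, name).ratio()
        scored.append((0.5 * emb_sim + 0.5 * name_sim, name))
    return [name for _, name in sorted(scored, reverse=True)[:k]]

def multi_choice_prompt(query_name, candidates):
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(candidates))
    return (f"Which entity is equivalent to '{query_name}'?\n{options}\n"
            "Answer with a single letter.")

# Random 4-d vectors stand in for trained KG embeddings.
rng = np.random.default_rng(0)
targets = [(n, rng.normal(size=4)) for n in ["Munich", "München", "Monaco"]]
print(multi_choice_prompt("Munich",
                          rank_candidates("Munich", rng.normal(size=4), targets)))
```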
The Intersection of Graphs and Language Models
Large language models (LLMs) have rapidly advanced, displaying impressive abilities in comprehending…
·linkedin.com·
LangGraph: Multi-Agent Workflows
Last week we highlighted LangGraph, a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. As part of the launch, we highlighted two simple runtimes. This post covers a second set of use cases for LangGraph: multi-agent workflows. It explains what "multi-agent" means, why multi-agent workflows are interesting, three concrete examples of using LangGraph for multi-agent workflows, two examples of third-party applications built on top of LangGraph using multi-agent workflows (GPT-Newspaper and CrewAI), and a comparison to other frameworks (Autogen and CrewAI).
·blog.langchain.dev·
🦜🕸️LangGraph | 🦜️🔗 Langchain
⚡ Building language agents as graphs ⚡
LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam. The current interface exposed is one inspired by NetworkX. The main use is for adding cycles to your LLM application. Crucially, this is NOT a DAG framework. If you want to build a DAG, you should just use LangChain Expression Language. Cycles are important for agent-like behaviors, where you call an LLM in a loop, asking it what action to take next.
·python.langchain.com·
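A minimal sketch of the agent loop described above, written against the launch-era LangGraph API (StateGraph, conditional edges); the stub agent and tool functions stand in for real LLM and tool calls, and method names may have shifted in later releases.

```python
# An agent <-> tool cycle: the kind of graph LCEL (a DAG) cannot express.
import operator
from typing import Annotated, TypedDict

from langgraph.graph import END, StateGraph

class AgentState(TypedDict):
    # operator.add tells LangGraph to append each node's output to the list
    messages: Annotated[list, operator.add]

def agent(state: AgentState) -> AgentState:
    # A real node would call an LLM here, asking it what action to take next.
    done = any("tool result" in m for m in state["messages"])
    return {"messages": ["FINAL ANSWER" if done else "CALL TOOL"]}

def tool(state: AgentState) -> AgentState:
    return {"messages": ["tool result: 42"]}

def should_continue(state: AgentState) -> str:
    return "end" if state["messages"][-1] == "FINAL ANSWER" else "continue"

workflow = StateGraph(AgentState)
workflow.add_node("agent", agent)
workflow.add_node("tool", tool)
workflow.set_entry_point("agent")
# The conditional edge plus the tool -> agent edge form the cycle.
workflow.add_conditional_edges("agent", should_continue,
                               {"continue": "tool", "end": END})
workflow.add_edge("tool", "agent")

app = workflow.compile()
print(app.invoke({"messages": ["user question"]}))
# -> the state accumulates: question, CALL TOOL, tool result: 42, FINAL ANSWER
```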
Architecting Solid Foundations for Scalable Knowledge Graphs | LinkedIn
Whether we remember them or not, we rely directly on unexamined and often very murky foundational assumptions that permeate everything we do. These assumptions are formulated using keystone concepts – core concepts that are so crucial that mere dictionary-style definitions are not enough.
·linkedin.com·
Knowledge Graphs Achieve Superior Reasoning versus Vector Search alone for Retrieval Augmentation
As artificial intelligence permeates business…
·linkedin.com·
Semantic Random Walk for Graph Representation Learning in Attributed Graphs
So many papers about graph ML start from a graph-theoretic-ish G = (V, E) perspective and ignore the fact that in production people are working with a semantic layer atop the labels (IRIs), or with properties (aka "attributes"). This is, quite frankly, the first time I've encountered a paper whose formalisms cover both labeled property graphs (LPGs) and semantic inference.
·arxiv.org·
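For readers unfamiliar with the idea, a toy sketch of an attribute-biased ("semantic") random walk over a labeled property graph. This illustrates the general technique only, not the paper's specific algorithm; the graph, attributes, and weighting scheme are invented.

```python
# Next hop is sampled in proportion to attribute overlap with the current node.
import random

graph = {  # adjacency over a small LPG: node -> neighbors
    "alice": ["bob", "acme"],
    "bob": ["alice", "acme"],
    "acme": ["alice", "bob"],
}
attrs = {  # node -> set of label/attribute tokens
    "alice": {"Person", "engineer"},
    "bob": {"Person", "engineer"},
    "acme": {"Company"},
}

def semantic_walk(start: str, length: int, rng: random.Random) -> list[str]:
    walk = [start]
    for _ in range(length):
        here = walk[-1]
        neighbors = graph[here]
        # Weight each neighbor by shared attributes (+1 keeps weights positive)
        weights = [len(attrs[here] & attrs[n]) + 1 for n in neighbors]
        walk.append(rng.choices(neighbors, weights=weights, k=1)[0])
    return walk

print(semantic_walk("alice", 5, random.Random(0)))
```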
Graph analytics for a new kind of economic analysis: Measures of the Capital Network of the U.S. Economy
Graph analytics for a new kind of economic analysis: "Measures of the Capital Network of the U.S. Economy"
·linkedin.com·
Learning to Count Isomorphisms with Graph Neural Networks
Subgraph isomorphism counting is an important problem on graphs, as many graph-based tasks exploit recurring subgraph patterns. Classical methods usually boil down to a backtracking framework that needs to navigate a huge search space with prohibitive computational costs. Some recent studies resort to graph neural networks (GNNs) to learn a low-dimensional representation for both the query and input graphs, in order to predict the number of subgraph isomorphisms on the input graph. However, typical GNNs employ a node-centric message passing scheme that receives and aggregates messages on nodes, which is inadequate in complex structure matching for isomorphism counting. Moreover, on an input graph, the space of possible query graphs is enormous, and different parts of the input graph will be triggered to match different queries. Thus, expecting a fixed representation of the input graph to match diversely structured query graphs is unrealistic. In this paper, we propose a novel GNN called Count-GNN for subgraph isomorphism counting, to deal with the above challenges. At the edge level, given that an edge is an atomic unit of encoding graph structures, we propose an edge-centric message passing scheme, where messages on edges are propagated and aggregated based on the edge adjacency to preserve fine-grained structural information. At the graph level, we modulate the input graph representation conditioned on the query, so that the input graph can be adapted to each query individually to improve their matching. Finally, we conduct extensive experiments on a number of benchmark datasets to demonstrate the superior performance of Count-GNN.
·arxiv.org·
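A toy sketch of the edge-centric message passing idea from the abstract, not the authors' Count-GNN implementation: each directed edge carries a state vector and, per layer, aggregates the states of edges incident on its source node. The update rule, dimensions, and toy graph are assumptions.

```python
# Edge-level propagation: messages live on edges, not nodes.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (1, 3)]    # a toy directed input graph
dim = 8
rng = np.random.default_rng(0)
state = rng.normal(size=(len(edges), dim))  # one state vector per edge
W = rng.normal(size=(dim, dim)) / np.sqrt(dim)

def propagate(state: np.ndarray) -> np.ndarray:
    new_state = np.zeros_like(state)
    for i, (u, v) in enumerate(edges):
        # Incoming messages: states of edges (w, u) ending where edge i starts
        msgs = [state[j] for j, (_, x) in enumerate(edges) if x == u]
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(dim)
        new_state[i] = np.tanh(state[i] + agg @ W)  # simple nonlinear update
    return new_state

for _ in range(3):                  # three layers of edge-level propagation
    state = propagate(state)

# A graph-level readout (here a plain sum) would then be modulated by the
# query representation before regressing the isomorphism count.
print(state.sum(axis=0))
```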
Relational Deep Learning: Graph Representation Learning on Relational Databases
Much of the world's most valued data is stored in relational databases and data warehouses, where the data is organized into many tables connected by primary-foreign key relations. However, building machine learning models using this data is both challenging and time-consuming. The core problem is that no machine learning method is capable of learning on multiple tables interconnected by primary-foreign key relations. Current methods can only learn from a single table, so the data must first be manually joined and aggregated into a single training table, a process known as feature engineering. Feature engineering is slow, error-prone and leads to suboptimal models. Here we introduce an end-to-end deep representation learning approach to directly learn on data laid out across multiple tables. We name our approach Relational Deep Learning (RDL). The core idea is to view relational databases as a temporal, heterogeneous graph, with a node for each row in each table, and edges specified by primary-foreign key links. Message Passing Graph Neural Networks can then automatically learn across the graph to extract representations that leverage all input data, without any manual feature engineering. Relational Deep Learning leads to more accurate models that can be built much faster. To facilitate research in this area, we develop RelBench, a set of benchmark datasets and an implementation of Relational Deep Learning. The data covers a wide spectrum, from discussions on Stack Exchange to book reviews on the Amazon Product Catalog. Overall, we define a new research area that generalizes graph machine learning and broadens its applicability to a wide set of AI use cases.
·arxiv.org·
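A minimal sketch of the row-to-node, foreign-key-to-edge transformation the abstract describes, using networkx rather than the RelBench implementation; the users/orders tables are invented for illustration.

```python
# Every row becomes a node; every primary-foreign key reference becomes an
# edge, yielding a heterogeneous graph a message passing GNN can consume
# without manual joins or feature engineering.
import networkx as nx

users = [{"user_id": 1, "name": "Ada"}, {"user_id": 2, "name": "Grace"}]
orders = [
    {"order_id": 10, "user_id": 1, "total": 99.0},
    {"order_id": 11, "user_id": 2, "total": 15.5},
    {"order_id": 12, "user_id": 1, "total": 42.0},
]

G = nx.DiGraph()
for row in users:
    # Node type = table name; the remaining columns become node features
    G.add_node(("users", row["user_id"]), **row, table="users")
for row in orders:
    G.add_node(("orders", row["order_id"]), **row, table="orders")
    # The foreign key orders.user_id -> users.user_id becomes an edge
    G.add_edge(("orders", row["order_id"]), ("users", row["user_id"]),
               rel="orders.user_id")

print(G)                                  # 5 nodes (2 users + 3 orders), 3 edges
print(list(G.neighbors(("orders", 10))))  # -> [('users', 1)]
```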
TAG-DS
Welcome to Topology, Algebra, and Geometry in Data Science (TAG-DS). This site is intended to bring together researchers who are applying mathematical techniques to the rapidly growing field of data science. The three identified fields encompass more than 100 years of finely tuned machinery that…
·tagds.com·
Graph & Geometric ML in 2024: Where We Are and What’s Next (Part I — Theory & Architectures)
Trends and recent advancements in Graph and Geometric Deep Learning
Following the tradition from previous years, we interviewed a cohort of distinguished and prolific academic and industrial experts in an attempt to summarise the highlights of the past year and predict what is in store for 2024. The past year, 2023, was so ripe with results that we had to break this post into two parts. This is Part I, focusing on theory & new architectures.
·towardsdatascience.com·