Learning to Count Isomorphisms with Graph Neural Networks
Subgraph isomorphism counting is an important problem on graphs, as many graph-based tasks exploit recurring subgraph patterns. Classical methods usually boil down to a backtracking framework that must navigate a huge search space at prohibitive computational cost. Some recent studies resort to graph neural networks (GNNs) to learn a low-dimensional representation for both the query and input graphs, in order to predict the number of subgraph isomorphisms on the input graph. However, typical GNNs employ a node-centric message passing scheme that receives and aggregates messages on nodes, which is inadequate for the complex structure matching that isomorphism counting requires. Moreover, on an input graph, the space of possible query graphs is enormous, and different parts of the input graph will be triggered to match different queries. Thus, expecting a fixed representation of the input graph to match diversely structured query graphs is unrealistic. In this paper, we propose a novel GNN called Count-GNN for subgraph isomorphism counting to deal with the above challenges. At the edge level, given that edges are the atomic units for encoding graph structure, we propose an edge-centric message passing scheme, where messages on edges are propagated and aggregated based on edge adjacency to preserve fine-grained structural information. At the graph level, we modulate the input graph representation conditioned on the query, so that the input graph can be adapted to each query individually to improve their matching. Finally, we conduct extensive experiments on a number of benchmark datasets to demonstrate the superior performance of Count-GNN.
·arxiv.org·
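The edge-centric scheme the abstract describes can be made concrete with a small sketch. This is a minimal illustration under my own assumptions (the function and weight names are invented, not the authors' API): edges carry the states, and a directed edge (u, v) aggregates from the edges (w, u) that feed into its tail node u.

```python
# A minimal sketch (mine, not the authors' code) of edge-centric message
# passing: edges hold the states instead of nodes.
import numpy as np

def edge_centric_step(edges, edge_states, W_self, W_nbr):
    """One round of message passing over edge states."""
    # Index incoming edge states by their head node.
    by_head = {}
    for (u, v), h in zip(edges, edge_states):
        by_head.setdefault(v, []).append(h)

    new_states = []
    for (u, v), h in zip(edges, edge_states):
        preds = by_head.get(u, [])  # edges ending where this edge begins
        agg = np.mean(preds, axis=0) if preds else np.zeros_like(h)
        new_states.append(np.tanh(W_self @ h + W_nbr @ agg))
    return new_states

# Toy usage: a path a -> b -> c -> d with 4-dimensional edge states.
rng = np.random.default_rng(0)
edges = [("a", "b"), ("b", "c"), ("c", "d")]
states = [rng.normal(size=4) for _ in edges]
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
states = edge_centric_step(edges, states, W1, W2)
```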
Relational Deep Learning: Graph Representation Learning on Relational Databases
Much of the world's most valued data is stored in relational databases and data warehouses, where the data is organized into many tables connected by primary-foreign key relations. However, building machine learning models using this data is both challenging and time-consuming. The core problem is that no machine learning method is capable of learning on multiple tables interconnected by primary-foreign key relations. Current methods can only learn from a single table, so the data must first be manually joined and aggregated into a single training table, a process known as feature engineering. Feature engineering is slow and error-prone, and leads to suboptimal models. Here we introduce an end-to-end deep representation learning approach to directly learn on data laid out across multiple tables. We name our approach Relational Deep Learning (RDL). The core idea is to view relational databases as a temporal, heterogeneous graph, with a node for each row in each table, and edges specified by primary-foreign key links. Message-passing graph neural networks can then automatically learn across the graph to extract representations that leverage all input data, without any manual feature engineering. Relational Deep Learning leads to more accurate models that can be built much faster. To facilitate research in this area, we develop RelBench, a set of benchmark datasets and an implementation of Relational Deep Learning. The data covers a wide spectrum, from discussions on Stack Exchange to book reviews on the Amazon Product Catalog. Overall, we define a new research area that generalizes graph machine learning and broadens its applicability to a wide set of AI use cases.
·arxiv.org·
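The core construction is easy to sketch. Below is a minimal illustration with invented table and column names (RelBench ships real schemas): every row becomes a node, and every primary-foreign key reference becomes an edge.

```python
# A minimal sketch of the RDL graph construction: one node per row,
# one edge per foreign-key reference. Table names are illustrative.
users = [{"user_id": 1}, {"user_id": 2}]
orders = [
    {"order_id": 10, "user_id": 1, "ts": "2023-01-05"},
    {"order_id": 11, "user_id": 2, "ts": "2023-02-01"},
]

nodes = {}   # (table, primary key) -> node id
edges = []   # (src node id, dst node id, relation)

for row in users:
    nodes[("users", row["user_id"])] = len(nodes)
for row in orders:
    nodes[("orders", row["order_id"])] = len(nodes)
    # Keeping timestamps on rows/edges is what makes the graph temporal.
    edges.append((nodes[("orders", row["order_id"])],
                  nodes[("users", row["user_id"])],
                  "orders.user_id -> users.user_id"))
```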
TAG-DS
Welcome to Topology, Algebra, and Geometry in Data Science (TAG-DS). This site is intended to bring together researchers who are applying mathematical techniques to the rapidly growing field of data science. The three identified fields encompass more than 100 years of finely tuned machinery that…
·tagds.com·
Graph & Geometric ML in 2024: Where We Are and What’s Next (Part I — Theory & Architectures)
Trends and recent advancements in Graph and Geometric Deep Learning
Following the tradition from previous years, we interviewed a cohort of distinguished and prolific academic and industrial experts in an attempt to summarise the highlights of the past year and predict what is in store for 2024. 2023 was so ripe with results that we had to break this post into two parts. This is Part I, focusing on theory & new architectures.
·towardsdatascience.com·
pacoid (Paco Xander Nathan)
Python open source projects; natural language meets graph technologies; graph topological transformations; graph levels of detail (abstraction layers)
·huggingface.co·
Neural algorithmic reasoning without intermediate supervision
Neural algorithmic reasoning focuses on building models that can execute classic algorithms. It allows one to combine the advantages of neural networks, such as handling raw and noisy input data, with theoretical guarantees and strong generalization of algorithms. Assuming we have a neural network capable of solving a classic algorithmic task, we can incorporate it into a more complex pipeline and train end-to-end. For instance, if we have a neural solver aligned to the shortest path problem, it can be used as a building block for a routing system that accounts for complex and dynamically changing traffic conditions. In our work [ref1], we study algorithmic reasoners trained only from input-output pairs, in contrast to current state-of-the-art approaches that utilize the trajectory of a given algorithm. We propose several architectural modifications and demonstrate how standard contrastive learning techniques can regularize intermediate computations of the models without appealing to any predefined algorithm’s trajectory.
·research.yandex.com·
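To make the training signal concrete, here is a sketch of the idea as the abstract states it: the task loss sees only input-output pairs, while an InfoNCE-style contrastive term ties together the intermediate states of two runs on equivalent (augmented) inputs. The model interface, the augmentation, the MSE task loss, and the weight `alpha` are placeholders of mine, not the authors' architecture.

```python
# A sketch (PyTorch) of output-only supervision plus contrastive
# regularization of intermediate computations.
import torch
import torch.nn.functional as F

def contrastive(h1, h2, tau=0.1):
    """InfoNCE-style loss: row i of h1 and row i of h2 are positives."""
    z1, z2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    logits = z1 @ z2.T / tau
    labels = torch.arange(len(h1))
    return F.cross_entropy(logits, labels)

def training_step(model, x, y, augment, alpha=0.1):
    # model returns (prediction, intermediate state of shape [batch, dim])
    out1, hid1 = model(augment(x))
    out2, hid2 = model(augment(x))
    task_loss = F.mse_loss(out1, y)   # input-output supervision only
    reg = contrastive(hid1, hid2)     # no algorithm trajectory needed
    return task_loss + alpha * reg
```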
Data gauging, covariance and equivariance | Maurice Weiler
The numerical representation of data is often ambiguous. This leads to a gauge theoretic view on data, requiring covariant or equivariant neural networks which are reviewed in this blog post.
·maurice-weiler.gitlab.io·
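As a toy illustration of the constraint the post reviews (the example is mine, not taken from the post): a map f is equivariant when f(g·x) = g·f(x) for every transformation g of the data's representation. In 2D, a linear layer that commutes with all rotations must itself be a scaled rotation.

```python
# Equivariance check: f(g.x) == g.f(x) for a rotation-equivariant linear map.
import numpy as np

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

W = 0.7 * rot(0.3)          # a scaled rotation commutes with all rotations
x = np.array([1.0, 2.0])
g = rot(1.1)                # a change of the input's representation
assert np.allclose(W @ (g @ x), g @ (W @ x))  # f(g.x) == g.f(x)
```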
Neural algorithmic reasoning
In this article, we will talk about classical computation: the kind of computation typically found in an undergraduate Computer Science course on Algorithms and Data Structures [1]. Think shortest path-finding, sorting, clever ways to break problems down into simpler problems, incredible ways to organise data for efficient retrieval and updates.
·thegradient.pub·
Graph Learning Meets Artificial Intelligence
By request, here are the slides from our #neurips2023 presentation yesterday! We really enjoyed the opportunity to present the different aspects of the work…
·linkedin.com·
Co-operative Graph Neural Networks
A new message-passing paradigm where every node can choose to either ‘listen’, ‘broadcast’, ‘listen & broadcast’ or ‘isolate’.
·towardsdatascience.com·
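A minimal sketch of the gated message passing this describes, with hard, externally supplied actions (CoGNN learns them, e.g. with a Gumbel-softmax estimator, which this sketch omits): node u aggregates from neighbor v only when u listens and v broadcasts.

```python
# Action-gated message passing: 'listen', 'broadcast', 'listen+broadcast',
# or 'isolate' per node. Actions are given here, not learned.
import numpy as np

def gated_step(adj, h, actions):
    """adj: (n, n) 0/1 adjacency; h: (n, d) states; actions: length-n list."""
    listens = np.array([a in ("listen", "listen+broadcast") for a in actions])
    speaks = np.array([a in ("broadcast", "listen+broadcast") for a in actions])
    new_h = h.copy()
    for u in range(len(h)):
        if not listens[u]:
            continue  # isolating / broadcast-only nodes keep their state
        nbrs = [v for v in range(len(h)) if adj[u, v] and speaks[v]]
        if nbrs:
            new_h[u] = 0.5 * h[u] + 0.5 * h[nbrs].mean(axis=0)
    return new_h

# Toy usage on a triangle graph.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(gated_step(adj, np.eye(3), ["listen", "broadcast", "isolate"]))
```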
Scaling deep learning for materials discovery
A protocol using large-scale training of graph networks enables high-throughput discovery of novel stable structures, leading to the identification of 2.2 million crystal structures.
Novel functional materials enable fundamental breakthroughs across technological applications from clean energy to information processing. From microchips to batteries and photovoltaics, discovery of inorganic crystals has been bottlenecked by expensive trial-and-error approaches. Concurrently, deep-learning models for language, vision and biology have showcased emergent predictive capabilities with increasing data and computation. Here we show that graph networks trained at scale can reach unprecedented levels of generalization, improving the efficiency of materials discovery by an order of magnitude.
·nature.com·
Relational Deep Learning
A new research area that generalizes graph machine learning and broadens its applicability to a wide set of AI use cases.
·relbench.stanford.edu·
A Survey of Graph Meets Large Language Model: Progress and Future Directions
Graphs play a significant role in representing and analyzing complex relationships in real-world applications such as citation networks, social networks, and biological data. Recently, Large Language Models (LLMs), which have achieved tremendous success in various domains, have also been leveraged in graph-related tasks to surpass traditional Graph Neural Network (GNN) based methods and yield state-of-the-art performance. In this survey, we present a comprehensive review and analysis of existing methods that integrate LLMs with graphs. First, we propose a new taxonomy, which organizes existing methods into three categories based on the role (i.e., enhancer, predictor, and alignment component) played by LLMs in graph-related tasks. Then we systematically survey the representative methods along the three categories of the taxonomy. Finally, we discuss the remaining limitations of existing studies and highlight promising avenues for future research. The relevant papers are summarized and will be continually updated at: https://github.com/yhLeeee/Awesome-LLMs-in-Graph-tasks.
·arxiv.org·
Fast-track graph ML with GraphStorm: A new way to solve problems on enterprise-scale graphs | Amazon Web Services
We are excited to announce the open-source release of GraphStorm 0.1, a low-code enterprise graph machine learning (ML) framework to build, train, and deploy graph ML solutions on complex enterprise-scale graphs in days instead of months. With GraphStorm, you can build solutions that directly take into account the structure of relationships or interactions between billions […]
·aws.amazon.com·
VGAE-MCTS: A New Molecular Generative Model Combining the Variational Graph Auto-Encoder and Monte Carlo Tree Search
Molecular generation is crucial for advancing drug discovery, materials science, and chemical exploration. It expedites the search for new drug candidates, facilitates tailored material creation, and enhances our understanding of molecular diversity. By employing artificial intelligence techniques such as molecular generative models based on molecular graphs, researchers have tackled the challenge of identifying efficient molecules with desired properties. Here, we propose a new molecular generative model combining a graph-based deep neural network with a reinforcement learning technique. We evaluated the validity, novelty, and optimized physicochemical properties of the generated molecules. Importantly, the model explored uncharted regions of chemical space, enabling the efficient discovery and design of new molecules. This approach has considerable potential to accelerate drug discovery, materials science, and chemical research.
·pubs.acs.org·
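The combination can be sketched abstractly: MCTS searches over latent codes, and a decoder (the VGAE) turns each code into a molecule whose property score serves as the reward. Everything below — `decode`, `score`, `perturb`, the UCB constant — is a hypothetical stand-in of mine, not the paper's actual procedure.

```python
# An abstract sketch of MCTS over a generative model's latent space.
import math

class Node:
    def __init__(self, z):
        self.z, self.children = z, []
        self.visits, self.value = 0, 0.0

def ucb(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def mcts(root, decode, score, perturb, iters=100):
    for _ in range(iters):
        node, path = root, [root]
        while node.children:                       # selection
            node = max(node.children, key=lambda ch: ucb(node, ch))
            path.append(node)
        child = Node(perturb(node.z))              # expansion in latent space
        node.children.append(child)
        path.append(child)
        reward = score(decode(child.z))            # rollout via the decoder
        for n in path:                             # backpropagation
            n.visits += 1
            n.value += reward
    return max(root.children, key=lambda ch: ch.visits)

# e.g. mcts(Node(z0), decode=vgae.decode, score=qed, perturb=add_noise)
```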
Talk like a Graph: Encoding Graphs for Large Language Models
Graphs are a powerful tool for representing and analyzing complex relationships in real-world applications such as social networks, recommender systems, and computational finance. Reasoning on graphs is essential for drawing inferences about the relationships between entities in a complex system, and to identify hidden patterns and trends. Despite the remarkable progress in automated reasoning with natural text, reasoning on graphs with large language models (LLMs) remains an understudied problem. In this work, we perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered. These novel results provide valuable insight into strategies for encoding graphs as text. Using these insights we illustrate how the correct choice of encoders can boost performance on graph reasoning tasks inside LLMs by 4.8% to 61.8%, depending on the task.
·arxiv.org·
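To make the "graph encoding method" knob concrete, here is a small sketch of two text encoders in the spirit of those the paper compares; the exact templates and the name mapping are mine, not the paper's.

```python
# Two ways to serialize the same graph into an LLM prompt.
def as_edge_list(nodes, edges):
    lines = [f"G describes a graph among nodes {', '.join(map(str, nodes))}."]
    lines += [f"Node {u} is connected to node {v}." for u, v in edges]
    return "\n".join(lines)

def as_friendships(names, edges):
    # "Social network" encoding: integers become names, edges become friendships.
    return "\n".join(f"{names[u]} and {names[v]} are friends." for u, v in edges)

nodes, edges = [0, 1, 2], [(0, 1), (1, 2)]
names = {0: "Alice", 1: "Bob", 2: "Carol"}
prompt = as_edge_list(nodes, edges) + "\nQ: Is there a path from node 0 to node 2?"
```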