GraphNews

4207 bookmarks
Graph Deep Learning for Time Series Forecasting
Graph-based deep learning methods have become popular tools to process collections of correlated time series. Unlike traditional multivariate forecasting methods, neural graph-based predictors take advantage of pairwise relationships by conditioning forecasts on a (possibly dynamic) graph spanning the time series collection. The conditioning can take the form of an architectural inductive bias on the neural forecasting architecture, resulting in a family of deep learning models called spatiotemporal graph neural networks. Such relational inductive biases enable the training of global forecasting models on large time-series collections, while at the same time localizing predictions with respect to each element in the set (i.e., graph nodes) by accounting for local correlations among them (i.e., graph edges). Indeed, recent theoretical and practical advances in graph neural networks and deep learning for time series forecasting make the adoption of such processing frameworks appealing and timely. However, most studies in the literature focus on proposing variations of existing neural architectures by taking advantage of modern deep learning practices, while foundational and methodological aspects have not been subject to systematic investigation. To fill this gap, this paper introduces a comprehensive methodological framework that formalizes the forecasting problem and provides design principles for graph-based predictive models, along with methods to assess their performance. At the same time, together with an overview of the field, we provide design guidelines, recommendations, and best practices, as well as an in-depth discussion of open challenges and future research directions.
·arxiv.org·
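To make the core idea concrete, here is a minimal sketch, assuming PyTorch, of a spatiotemporal graph neural network of the kind the paper formalizes: a shared GRU encodes each series (the global model), and one round of message passing over the graph localizes the forecast to each node. The class name, dimensions, and single-layer design are illustrative, not the paper's reference architecture.

import torch
import torch.nn as nn

class SpatioTemporalGNN(nn.Module):
    # Illustrative sketch: one shared temporal encoder + one message-passing round.
    def __init__(self, n_features: int, hidden: int, horizon: int):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.message = nn.Linear(hidden, hidden)     # transform neighbor states
        self.update = nn.Linear(2 * hidden, hidden)  # combine self + neighborhood
        self.readout = nn.Linear(hidden, horizon)    # predict `horizon` future steps

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (nodes, time, features); adj: (nodes, nodes), row-normalized
        h, _ = self.encoder(x)       # temporal encoding, shared across nodes
        h = h[:, -1, :]              # last hidden state per node: (nodes, hidden)
        msg = adj @ self.message(h)  # aggregate messages along graph edges
        h = torch.relu(self.update(torch.cat([h, msg], dim=-1)))
        return self.readout(h)       # (nodes, horizon)

# Usage: 10 correlated series, 24 observed steps, 6-step-ahead forecast.
model = SpatioTemporalGNN(n_features=1, hidden=32, horizon=6)
x = torch.randn(10, 24, 1)
adj = torch.softmax(torch.randn(10, 10), dim=-1)  # stand-in for a real adjacency
y_hat = model(x, adj)                             # shape: (10, 6)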
Supply chain data analysis and visualization using Amazon Neptune and the Neptune workbench | Amazon Web Services
Many global corporations manage multiple supply chains, and they depend on those operations not only to deliver goods on time but also to respond to divergent customer and supplier needs. According to a McKinsey study, significant disruptions to production are now estimated to occur every 3.7 years on average, adding new urgency to supply chain […]
·aws.amazon.com·
TacticAI: an AI assistant for football tactics using Graph AI
"TacticAI: an AI assistant for football tactics" by Zhe W., Petar Veličković, Daniel Hennes, Nenad Tomašev, Laurel Prince, Yoram Bachrach, Romuald Elie, Kevin… | 28 comments on LinkedIn
·linkedin.com·
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over the state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT while simultaneously reducing costs by 31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
·arxiv.org·
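A minimal sketch of that abstraction, assuming plain Python: thoughts are vertices, each carrying edges to the thoughts it depends on, and aggregation merges several branches into a single vertex, which is exactly what trees cannot express. The call_llm function and the prompts are hypothetical stand-ins, not the GoT framework's API.

from dataclasses import dataclass, field

@dataclass
class Thought:
    content: str
    parents: list = field(default_factory=list)  # dependency edges into this vertex

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for any chat-completion call.
    return f"<llm output for: {prompt[:40]}...>"

def generate(prompt: str, parents: list) -> Thought:
    # A new thought is conditioned on all of its parent thoughts.
    context = "\n".join(p.content for p in parents)
    return Thought(call_llm(f"{context}\n{prompt}"), parents)

def aggregate(thoughts: list) -> Thought:
    # The move beyond Tree of Thoughts: many branches merge into one vertex.
    return generate("Combine the partial results above into one answer.", thoughts)

# e.g. sort two halves in parallel branches, then merge the two thoughts
left = generate("Sort the first half: [9, 4, 7]", [])
right = generate("Sort the second half: [2, 8, 1]", [])
merged = aggregate([left, right])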
Vector databases vs Graph databases
Graph Databases should be the better choice for Retrieval Augmented Generation (RAG)! We have seen the debate RAG vs fine-tuning, but what about Vector…
·linkedin.com·
Graph Instruction Tuning for Large Language Models
🔥 Can #LLM understand graphs directly? GraphGPT makes it possible! 📢 GraphGPT is a Graph Large Language Model that aligns Large Language Models (LLMs) with graphs…
·linkedin.com·
Vectors need Graphs!
Vectors need Graphs! Embedding vectors are a pivotal tool when using Generative AI. While vectors might initially seem an unlikely partner to graphs, their…
·linkedin.com·
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
Editor's Note: This post was written by Tomaz Bratanic from the Neo4j team. Extracting structured information from unstructured data like text is nothing new, but LLMs brought a significant shift to the field of information extraction. If before you needed a team of […]
·blog.langchain.dev·
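The post itself works through LangChain's abstractions; the underlying trick can be sketched directly against the OpenAI function-calling API, as below. The extract_graph schema and the model name are illustrative assumptions, not the post's exact code.

import json
from openai import OpenAI

# A function schema that forces the model to return graph-shaped JSON.
schema = {
    "type": "function",
    "function": {
        "name": "extract_graph",
        "description": "Extract entities and relationships from text.",
        "parameters": {
            "type": "object",
            "properties": {
                "nodes": {"type": "array", "items": {"type": "object", "properties": {
                    "id": {"type": "string"}, "type": {"type": "string"}}}},
                "relationships": {"type": "array", "items": {"type": "object", "properties": {
                    "source": {"type": "string"}, "target": {"type": "string"},
                    "type": {"type": "string"}}}},
            },
            "required": ["nodes", "relationships"],
        },
    },
}

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any function-calling-capable model works
    messages=[{"role": "user", "content": "Marie Curie won the Nobel Prize in Physics."}],
    tools=[schema],
    tool_choice={"type": "function", "function": {"name": "extract_graph"}},
)
graph = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
# graph["nodes"] and graph["relationships"] can then be written to Neo4j.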
Overcoming the "Reversal Curse" in LLMs with Ontologies
Overcoming the "Reversal Curse" in LLMs with Ontologies
Overcoming the "Reversal Curse" in LLMs with Ontologies: The "Reversal Curse" is a term coined in a recent paper to describe a particular failure of… | 108 comments on LinkedIn
Overcoming the "Reversal Curse" in LLMs with Ontologies
·linkedin.com·
Overcoming the "Reversal Curse" in LLMs with Ontologies
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
🚀 Exciting News: Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models! 📊🧠 We are thrilled to unveil our…
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
·linkedin.com·
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
Concepts is All You Need: A More Direct Path to AGI
Little demonstrable progress has been made toward AGI (Artificial General Intelligence) since the term was coined some 20 years ago. In spite of the fantastic breakthroughs in Statistical AI such as AlphaZero, ChatGPT, and Stable Diffusion, none of these projects has, or claims to have, a clear path to AGI. In order to expedite the development of AGI, it is crucial to understand and identify the core requirements of human-like intelligence as it pertains to AGI. From that, one can distill which particular development steps are necessary to achieve AGI, and which are a distraction. Such analysis highlights the need for a Cognitive AI approach rather than the currently favored statistical and generative efforts. More specifically, it identifies the central role of concepts in human-like cognition. Here we outline an architecture and development plan, together with some preliminary results, that offers a much more direct path to full Human-Level AI (HLAI)/AGI.
·arxiv.org·
Knowledge Graphs Bootcamp on the O'Reilly Learning Platform
Three months ago, I had the privilege of hosting the Knowledge Graphs Bootcamp on the O'Reilly Learning Platform, and I'm truly grateful for the overwhelming…
·linkedin.com·
Graph Hairball
Knowledge graph system logic, the "things" and "relations between things" that graph theory calls "vertices" (a.k.a. nodes, points, entit...
·digitalfinancialreporting.blogspot.com·
Graph Neural Prompting with Large Language Models
Large Language Models (LLMs) have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. In addition, how to leverage the pre-trained LLMs and avoid training a customized model from scratch remains an open question. In this work, we propose Graph Neural Prompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings.
·arxiv.org·
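A schematic sketch of that pipeline, assuming PyTorch: a GNN encodes the retrieved knowledge-graph subgraph, cross-modality pooling lets the question tokens attend over node embeddings, and a projector maps the result into the LLM's embedding space as a soft prompt. The single linear "GNN" layer, the pooling, and all dimensions are simplifications, not the paper's exact design.

import torch
import torch.nn as nn

class GraphNeuralPrompt(nn.Module):
    def __init__(self, node_dim: int, hidden: int, llm_dim: int):
        super().__init__()
        self.gnn = nn.Linear(node_dim, hidden)       # stand-in for a GNN encoder
        self.pool = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.projector = nn.Linear(hidden, llm_dim)  # domain projector into LLM space

    def forward(self, node_feats, adj, text_emb):
        # node_feats: (nodes, node_dim); adj: (nodes, nodes), normalized;
        # text_emb: (tokens, hidden) question embeddings, assumed pre-projected.
        h = torch.relu(adj @ self.gnn(node_feats))   # encode the KG subgraph
        q = text_emb.unsqueeze(0)
        pooled, _ = self.pool(q, h.unsqueeze(0), h.unsqueeze(0))  # cross-modality pooling
        return self.projector(pooled.mean(dim=1))    # one soft-prompt vector

# Usage: a 30-node subgraph prompting a frozen LLM with 4096-d embeddings.
gnp = GraphNeuralPrompt(node_dim=128, hidden=64, llm_dim=4096)
soft_prompt = gnp(torch.randn(30, 128), torch.eye(30), torch.randn(12, 64))
# soft_prompt would be prepended to the LLM's input embeddings.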
Scoping Knowledge Graphs | LinkedIn
Building knowledge graphs is supposedly a huge and terrifying project, like fighting dragons or sending humans to Mars. I hear or see it time and time again: knowledge graphs are too difficult, too time-consuming, and too expensive to build.
·linkedin.com·
Affinity-Aware Graph Networks
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data. Owing to the relatively limited number of message passing steps they perform -- and hence a smaller receptive field -- there has been significant interest in improving their expressivity by incorporating structural aspects of the underlying graph. In this paper, we explore the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting and commute times. We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has lower computational complexity, while our features are invariant to the permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, thereby allowing us to outperform relevant benchmarks for a wide variety of tasks, often with significantly fewer message passing steps. On one of the largest publicly available graph regression datasets, OGB-LSC-PCQM4Mv1, we obtain the best known single-model validation MAE at the time of writing.
·arxiv.org·
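A small worked example of the affinity measures themselves, assuming numpy and an unweighted graph: effective resistance between two nodes comes from the pseudoinverse of the graph Laplacian, and commute time is the graph volume times effective resistance.

import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # a 4-node example graph
L = np.diag(A.sum(axis=1)) - A             # combinatorial Laplacian: D - A
L_pinv = np.linalg.pinv(L)                 # Moore-Penrose pseudoinverse

def effective_resistance(u: int, v: int) -> float:
    return L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]

vol = A.sum()  # graph volume: twice the number of edges
print(effective_resistance(0, 3))        # resistance distance between nodes 0 and 3
print(vol * effective_resistance(0, 3))  # commute time between nodes 0 and 3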
Chat with the Data Benchmark: Understanding Synergies between Large Language Models and Knowledge Graphs for Enterprise Conversations
It was an honor to present the initial results of the Chat with the Data benchmark last week at the Alan Turing Institute Knowledge Graph meetup (link to…
·linkedin.com·