Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning
Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks. However, they lack up-to-date knowledge and experience hallucinations during reasoning, which can lead to incorrect reasoning processes and diminish their performance and trustworthiness. Knowledge graphs (KGs), which capture vast amounts of facts in a structured format, offer a reliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM reasoning methods only treat KGs as factual knowledge bases and overlook the importance of their structural information for reasoning. In this paper, we propose a novel method called reasoning on graphs (RoG) that synergizes LLMs with KGs to enable faithful and interpretable reasoning. Specifically, we present a planning-retrieval-reasoning framework, where RoG first generates relation paths grounded by KGs as faithful plans. These plans are then used to retrieve valid reasoning paths from the KGs for LLMs to conduct faithful reasoning. Furthermore, RoG not only distills knowledge from KGs to improve the reasoning ability of LLMs through training but also allows seamless integration with any arbitrary LLMs during inference. Extensive experiments on two benchmark KGQA datasets demonstrate that RoG achieves state-of-the-art performance on KG reasoning tasks and generates faithful and interpretable reasoning results.
·arxiv.org·
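To make the planning-retrieval-reasoning framework concrete, here is a minimal, runnable sketch on a toy KG. The `plan` and `answer` functions are stand-ins for LLM calls (assumptions for illustration, not the authors' actual interface); the retrieval step is ordinary graph traversal.

```python
# Toy KG: (head, relation) -> list of tails
kg = {
    ("Alice", "spouse_of"): ["Bob"],
    ("Bob", "born_in"): ["Paris"],
}

def plan(question):
    # LLM stub: propose relation paths grounded in the KG's relation vocabulary.
    return [["spouse_of", "born_in"]]

def retrieve(topic_entity, relation_path):
    # Instantiate a relation path into concrete entity-relation-entity chains.
    frontier = [[topic_entity]]
    for rel in relation_path:
        frontier = [p + [rel, t] for p in frontier
                    for t in kg.get((p[-1], rel), [])]
    return frontier

def answer(question, paths):
    # LLM stub: reason over the retrieved paths; here we just read off the tail.
    return paths[0][-1] if paths else "unknown"

q = "Where was Alice's spouse born?"
# Entity linking ("Alice") is assumed; RoG's planning is grounded by the KG.
paths = [p for rp in plan(q) for p in retrieve("Alice", rp)]
print(answer(q, paths))  # Paris, via Alice -spouse_of-> Bob -born_in-> Paris
```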
Working on a LangChain template that adds a custom graph conversational memory to the Neo4j Cypher chain
Working on a LangChain template that adds a custom graph conversational memory to the Neo4j Cypher chain, which uses LLMs to generate Cypher statements. This…
·linkedin.com·
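The chain classes below are real LangChain APIs (circa 2023). The post does not show how the template persists memory, so the `Session`/`Turn` Cypher here is an illustrative assumption: the point is that conversation history can live in the same Neo4j graph the chain queries.

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="pass")
# The chain uses an LLM to translate the question into a Cypher statement.
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)

def ask(session_id: str, question: str) -> str:
    answer = chain.run(question)
    # Store the turn as graph data, so later questions can retrieve history
    # with a plain Cypher MATCH instead of an in-process memory buffer.
    graph.query(
        "MERGE (s:Session {id: $sid}) "
        "CREATE (s)-[:HAS_TURN]->(:Turn {question: $q, answer: $a, ts: timestamp()})",
        {"sid": session_id, "q": question, "a": answer},
    )
    return answer
```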
Talk like a Graph: Encoding Graphs for Large Language Models
Graphs are a powerful tool for representing and analyzing complex relationships in real-world applications such as social networks, recommender systems, and computational finance. Reasoning on graphs is essential for drawing inferences about the relationships between entities in a complex system, and to identify hidden patterns and trends. Despite the remarkable progress in automated reasoning with natural text, reasoning on graphs with large language models (LLMs) remains an understudied problem. In this work, we perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered. These novel results provide valuable insight on strategies for encoding graphs as text. Using these insights we illustrate how the correct choice of encoders can boost performance on graph reasoning tasks inside LLMs by 4.8% to 61.8%, depending on the task.
·arxiv.org·
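The paper's central variable is how a graph is verbalized before it reaches the LLM. A small sketch of three encoder choices in the spirit of the paper (function names and exact phrasings are illustrative; the paper evaluates many more):

```python
edges = [(0, 1), (1, 2), (2, 0)]  # a triangle

def edge_list(edges):
    return "G has edges: " + ", ".join(f"({u}, {v})" for u, v in edges) + "."

def adjacency(edges):
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    return " ".join(f"Node {u} is connected to {sorted(vs)}."
                    for u, vs in sorted(nbrs.items()))

def friendship(edges, names=("Alice", "Bob", "Carol")):
    # "Social network" style encoding: name the nodes and verbalize edges.
    return " ".join(f"{names[u]} and {names[v]} are friends." for u, v in edges)

# Same graph, three prompts; the paper shows the choice can move task accuracy
# substantially.
prompt = friendship(edges) + " How many friends does Alice have?"
print(prompt)
```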
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
Editor's Note: This post was written by Tomaz Bratanic from the Neo4j team. Extracting structured information from unstructured data like text has been around for some time and is nothing new. However, LLMs brought a significant shift to the field of information extraction. If before you needed a team of…
·blog.langchain.dev·
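The core trick is to declare a function whose arguments *are* the graph and force the model to call it. A sketch against the 2023-era OpenAI API; the schema shape below is a minimal assumption, and the post's actual schema (node properties, allowed types, etc.) is richer.

```python
import json
import openai  # assumes OPENAI_API_KEY is set in the environment

extract_graph = {
    "name": "extract_graph",
    "description": "Extract a knowledge graph from the text.",
    "parameters": {
        "type": "object",
        "properties": {
            "nodes": {"type": "array", "items": {"type": "object", "properties": {
                "id": {"type": "string"}, "label": {"type": "string"}}}},
            "relationships": {"type": "array", "items": {"type": "object", "properties": {
                "source": {"type": "string"}, "target": {"type": "string"},
                "type": {"type": "string"}}}},
        },
        "required": ["nodes", "relationships"],
    },
}

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Marie Curie won the Nobel Prize."}],
    functions=[extract_graph],
    function_call={"name": "extract_graph"},  # force structured output
)
graph = json.loads(resp.choices[0].message.function_call.arguments)
print(graph["nodes"], graph["relationships"])
```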
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
🚀 Exciting News: Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models! 📊🧠 We are thrilled to unveil our…
·linkedin.com·
Concepts is All You Need: A More Direct Path to AGI
Little demonstrable progress has been made toward AGI (Artificial General Intelligence) since the term was coined some 20 years ago. In spite of the fantastic breakthroughs in Statistical AI such as AlphaZero, ChatGPT, and Stable Diffusion none of these projects have, or claim to have, a clear path to AGI. In order to expedite the development of AGI it is crucial to understand and identify the core requirements of human-like intelligence as it pertains to AGI. From that one can distill which particular development steps are necessary to achieve AGI, and which are a distraction. Such analysis highlights the need for a Cognitive AI approach rather than the currently favored statistical and generative efforts. More specifically it identifies the central role of concepts in human-like cognition. Here we outline an architecture and development plan, together with some preliminary results, that offers a much more direct path to full Human-Level AI (HLAI)/ AGI.
·arxiv.org·
Affinity-Aware Graph Networks
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data. Owing to the relatively limited number of message passing steps they perform -- and hence a smaller receptive field -- there has been significant interest in improving their expressivity by incorporating structural aspects of the underlying graph. In this paper, we explore the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting and commute times. We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has lower computational complexity, while our features are invariant to the permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, thereby allowing us to outperform relevant benchmarks for a wide variety of tasks, often with significantly fewer message passing steps. On one of the largest publicly available graph regression datasets, OGB-LSC-PCQM4Mv1, we obtain the best known single-model validation MAE at the time of writing.
·arxiv.org·
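The affinity measures named in the abstract are classical quantities with short closed forms. As a worked example, effective resistance and commute time via the Laplacian pseudoinverse, on a small toy graph:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],      # adjacency of a small undirected graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A        # graph Laplacian L = D - A
Lp = np.linalg.pinv(L)           # Moore-Penrose pseudoinverse

def effective_resistance(u, v):
    return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]

# Commute time is effective resistance scaled by the graph's volume:
# C(u, v) = vol(G) * R_eff(u, v), where vol(G) = sum of degrees = 2|E|.
commute = A.sum() * effective_resistance(0, 3)
print(effective_resistance(0, 3), commute)
```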
There is a lot of research on establishing mappings between different ontologies, but hardly any work on how to leverage such mappings in a query federation engine
“There is a lot of research on establishing mappings between different ontologies, but hardly any work on how to leverage such mappings in a query federation engine. With @Sijin_Cheng and @ferradest, we have embarked on changing that. Paper at @CoopIS2023 https://t.co/vF1emf9R6Z”
·twitter.com·
pg-schema schemas for property graphs
Arrived at SIGMOD in Seattle, and it's an amazing honor that our joint academic/industry work on Property Graph Schema received the Best Industry Paper award.…
·linkedin.com·
Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links
“Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links.”
·twitter.com·
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we modeled GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline with 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT(large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
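A minimal PyTorch sketch of the gated fusion mechanism the abstract describes (dimensions and names are assumptions; in the paper the two inputs are the T5 text representation and the graph-of-thoughts encoder output):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_text: torch.Tensor, h_got: torch.Tensor) -> torch.Tensor:
        # A learned, per-dimension gate decides how much of each representation
        # to keep; z in (0, 1), so the output is a convex mix.
        z = torch.sigmoid(self.gate(torch.cat([h_text, h_got], dim=-1)))
        return z * h_text + (1 - z) * h_got

fused = GatedFusion(512)(torch.randn(4, 512), torch.randn(4, 512))
print(fused.shape)  # torch.Size([4, 512])
```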
Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design. However, GNNs lack transparency as they cannot provide understandable explanations for their predictions. To address this issue, counterfactual reasoning is used. The main goal is to make minimal changes to the input graph of a GNN in order to alter its prediction. While several algorithms have been proposed for counterfactual explanations of GNNs, most of them have two main drawbacks. Firstly, they only consider edge deletions as perturbations. Secondly, the counterfactual explanation models are transductive, meaning they do not generalize to unseen data. In this study, we introduce an inductive algorithm called INDUCE, which overcomes these limitations. By conducting extensive experiments on several datasets, we demonstrate that incorporating edge additions leads to better counterfactual results compared to the existing methods. Moreover, the inductive modeling approach allows INDUCE to directly predict counterfactual perturbations without requiring instance-specific training. This results in significant computational speed improvements compared to baseline methods and enables scalable counterfactual analysis for GNNs.
·arxiv.org·
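For contrast with the inductive approach, here is the naive per-instance search the paper improves on: flipping edges (deletions *and* additions, the larger perturbation space INDUCE argues for) until the prediction changes. The degree-parity "GNN" is a stub purely to keep the sketch runnable; real methods search multi-edge budgets rather than single flips.

```python
import itertools
import networkx as nx

def predict(g: nx.Graph, node) -> int:
    return g.degree[node] % 2  # stand-in for a trained GNN's class prediction

def single_flip_counterfactual(g: nx.Graph, node):
    """Find one edge flip (delete if present, add if absent) that alters the prediction."""
    original = predict(g, node)
    for u, v in itertools.combinations(g.nodes, 2):
        flipped = g.copy()
        if flipped.has_edge(u, v):
            flipped.remove_edge(u, v)   # perturbation by deletion
        else:
            flipped.add_edge(u, v)      # perturbation by addition
        if predict(flipped, node) != original:
            return (u, v), flipped
    return None

g = nx.path_graph(4)                          # 0-1-2-3, node 1 has degree 2
flip, cf = single_flip_counterfactual(g, 1)
print(flip, sorted(cf.edges()))               # e.g. (0, 1) and the edited graph
```

Note the cost: this loop reruns the model per candidate edge and per instance, which is exactly the transductive overhead an inductive predictor of perturbations avoids.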
A Survey on Knowledge Graphs for Healthcare: Resources, Applications, and Promises
Healthcare knowledge graphs (HKGs) have emerged as a promising tool for organizing medical knowledge in a structured and interpretable way, which provides a comprehensive view of medical concepts and their relationships. However, challenges such as data heterogeneity and limited coverage remain, emphasizing the need for further research in the field of HKGs. This survey paper serves as the first comprehensive overview of HKGs. We summarize the pipeline and key techniques for HKG construction (i.e., from scratch and through integration), as well as the common utilization approaches (i.e., model-free and model-based). To provide researchers with valuable resources, we organize existing HKGs (The resource is available at https://github.com/lujiaying/Awesome-HealthCare-KnowledgeBase) based on the data types they capture and application domains, supplemented with pertinent statistical information. In the application section, we delve into the transformative impact of HKGs across various healthcare domains, spanning from fine-grained basic science research to high-level clinical decision support. Lastly, we shed light on the opportunities for creating comprehensive and accurate HKGs in the era of large language models, presenting the potential to revolutionize healthcare delivery and enhance the interpretability and reliability of clinical prediction.
·arxiv.org·
Don't make directed graphs undirected! Introducing Dir-GNN, extending any spatial MPNN to handle directed graphs
Don't make directed graphs undirected! Introducing Dir-GNN, extending any spatial MPNN to handle directed graphs. Preserving edge directionality is crucial…
·linkedin.com·
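The recipe, as the post describes it: aggregate separately over in-neighbors and out-neighbors and combine, instead of symmetrizing the adjacency matrix. A minimal dense-adjacency PyTorch sketch (names and the mixing scheme are assumptions; real implementations use sparse message passing and normalization):

```python
import torch
import torch.nn as nn

class DirGNNLayer(nn.Module):
    def __init__(self, dim: int, alpha: float = 0.5):
        super().__init__()
        self.w_in = nn.Linear(dim, dim)    # separate weights per direction
        self.w_out = nn.Linear(dim, dim)
        self.alpha = alpha                 # mixing weight between directions

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj[i, j] = 1 iff there is a directed edge i -> j
        msg_out = adj @ self.w_out(x)      # aggregate from out-neighbors
        msg_in = adj.t() @ self.w_in(x)    # aggregate from in-neighbors
        return torch.relu(self.alpha * msg_in + (1 - self.alpha) * msg_out)

x, adj = torch.randn(5, 16), (torch.rand(5, 5) < 0.3).float()
h = DirGNNLayer(16)(x, adj)
print(h.shape)  # torch.Size([5, 16])
```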
Graph ML News (May 20th)
The NeurIPS deadline has passed, so we could finally disconnect from the cluster, breathe in some fresh air, and get ready for the supplementary deadline and/or paper bidding, depending on your status.

The Workshop on Mining and Learning with Graphs (MLG) at KDD'23 accepts submissions until May 30th. This year KDD will feature both MLG and Graph Learning Benchmarks (GLB), so two more reasons to visit Long Beach and chat with fellow graph folks 😉

CS224W, one of the best graph courses from Stanford, started publishing project reports of the Winter 2023 cohort: some new articles include solving TSP with GNNs, approaching code similarity, and building a music recommendation system with GNNs. More reports will be published within the next few weeks.

Weekend reading:
- DRew: Dynamically Rewired Message Passing with Delay feat. Michael Bronstein and Francesco Di Giovanni (ICML'23)
- Random Edge Coding: One-Shot Bits-Back Coding of Large Labeled Graphs (ICML'23)
- Can Language Models Solve Graph Problems in Natural Language?
- On the Connection Between MPNN and Graph Transformer
·t.me·
GNNs work in practical biological applications
Very cool paper showing that GNNs work in practical biological applications. It takes several years of effort to produce an experimental validation of computational results. https://t.co/rmHiIFTOzn — Michael Bronstein (@mmbronstein), May 26, 2023
·twitter.com·
Vision GNN: An Image is Worth Graph of Nodes
Network architecture plays a key role in the deep learning-based computer vision system. The widely-used convolutional neural network and transformer treat the image as a grid or sequence...
·arxiv.org·
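The alternative ViG proposes is treating image patches as graph nodes. A sketch of the graph-construction step (patch split plus k-NN in feature space), over which a ViG-style backbone would then run GNN layers; the random projection stands in for a learned patch embedding, and all dimensions are illustrative:

```python
import torch

img = torch.randn(3, 224, 224)
patches = img.unfold(1, 16, 16).unfold(2, 16, 16)         # 3 x 14 x 14 x 16 x 16
feats = patches.permute(1, 2, 0, 3, 4).reshape(196, -1)   # 196 patch nodes
feats = feats @ torch.randn(feats.shape[1], 64)           # stub patch embedding

dist = torch.cdist(feats, feats)                          # pairwise distances
knn = dist.topk(k=9, largest=False).indices[:, 1:]        # 8 nearest, drop self
edges = [(i, int(j)) for i in range(196) for j in knn[i]]
print(len(edges))  # 196 * 8 directed edges over the patch graph
```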
Deep Learning with Graph-Structured Representations
Very honored to receive the ELLIS PhD Award for my thesis on Deep Learning with Graph-Structured Representations -- alongside @NagraniArsha for her work on multimodal DL (congrats!) https://t.co/FUzNA87okN pic.twitter.com/W2KrbQN7yS — Thomas Kipf (@thomaskipf), December 9, 2021
·twitter.com·
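The thesis's best-known building block is the GCN propagation rule from Kipf & Welling (2017), H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W) with Â = A + I. A few lines of numpy reproduce one layer:

```python
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(d ** -0.5)            # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # ReLU

A = np.array([[0, 1], [1, 0]], float)          # two connected nodes
H = gcn_layer(A, np.random.randn(2, 8), np.random.randn(8, 4))
print(H.shape)  # (2, 4): node features transformed and smoothed over neighbors
```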