Talk like a Graph: Encoding Graphs for Large Language Models
Talk like a Graph: Encoding Graphs for Large Language Models
Graphs are a powerful tool for representing and analyzing complex relationships in real-world applications such as social networks, recommender systems, and computational finance. Reasoning on graphs is essential for drawing inferences about the relationships between entities in a complex system and for identifying hidden patterns and trends. Despite the remarkable progress in automated reasoning with natural text, reasoning on graphs with large language models (LLMs) remains an understudied problem. In this work, we perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered. These novel results provide valuable insight into strategies for encoding graphs as text. Using these insights, we illustrate how the correct choice of encoders can boost performance on graph reasoning tasks inside LLMs by 4.8% to 61.8%, depending on the task.
·arxiv.org·
Talk like a Graph: Encoding Graphs for Large Language Models
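To make the idea of "encoding graphs as text" concrete, here is a minimal sketch of two ways a small graph could be serialized into a prompt. The encoder names and prompt wording are illustrative assumptions, not the paper's exact encoders.

```python
# Two illustrative graph-to-text encoders (assumed names and phrasing, not the paper's).
import networkx as nx

def edge_list_encoding(g: nx.Graph) -> str:
    """Describe the graph as a flat list of edges."""
    nodes = ", ".join(str(n) for n in g.nodes())
    edges = " ".join(f"Node {u} is connected to node {v}." for u, v in g.edges())
    return f"G describes a graph among nodes {nodes}. {edges}"

def adjacency_encoding(g: nx.Graph) -> str:
    """Describe the graph one node at a time, listing its neighbors."""
    lines = [
        f"Node {n} has neighbors: {', '.join(str(v) for v in g.neighbors(n)) or 'none'}."
        for n in g.nodes()
    ]
    return "G describes a graph. " + " ".join(lines)

g = nx.cycle_graph(4)
prompt = edge_list_encoding(g) + "\nQ: Is there a path from node 0 to node 2? A:"
print(prompt)  # text that would be sent to the LLM for a reachability question
```

The paper's central observation is that seemingly interchangeable choices like these can shift LLM accuracy on the same reasoning task by a large margin.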
Charting the Graphical Roadmap to Smarter AI
Charting the Graphical Roadmap to Smarter AI
Boosting LLMs with External Knowledge: The Case for Knowledge Graphs
When we wrote our post on Graph Intelligence in early 2022, our goal was to highlight techniques for deriving insights about relationships and connections from structured data using graph analytics and machine learning. We focused mainly on business intelligence and machine learning applications, showcasing how technology companies were applying graph neural networks (GNNs) in areas like recommendations and fraud detection.
·gradientflow.substack.com·
Charting the Graphical Roadmap to Smarter AI
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over the state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by 31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings LLM reasoning closer to human thinking and brain mechanisms such as recurrence, both of which form complex networks.
·arxiv.org·
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
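As a rough illustration of the core abstraction only (a toy sketch, not the authors' framework or its API), LLM thoughts can be stored as graph vertices whose edges record which earlier thoughts they depend on; the Thought class and the llm stub below are assumptions for demonstration.

```python
# Toy thought-graph: vertices are LLM outputs, edges are dependencies between them.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Thought:
    text: str
    parents: List["Thought"] = field(default_factory=list)  # incoming dependency edges

def aggregate(thoughts: List[Thought], llm: Callable[[str], str]) -> Thought:
    """Merge several partial thoughts into one, an operation a chain or tree cannot express."""
    merged = llm("Merge these partial results into one:\n" + "\n".join(t.text for t in thoughts))
    return Thought(text=merged, parents=list(thoughts))

# Example: sort two halves independently, then aggregate them (in the spirit of GoT's sorting task).
fake_llm = lambda prompt: "[1, 2, 3, 4, 5, 6]"  # stand-in for a real LLM call
left = Thought("sorted left half: [1, 3, 5]")
right = Thought("sorted right half: [2, 4, 6]")
root = aggregate([left, right], fake_llm)
```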
Graph Instruction Tuning for Large Language Models
Graph Instruction Tuning for Large Language Models
🔥 Can #LLMs understand graphs directly? GraphGPT makes it possible! 📢 GraphGPT is a Graph Large Language Model that aligns Large Language Models (LLMs) with graphs…
·linkedin.com·
Graph Instruction Tuning for Large Language Models
Vectors need Graphs!
Vectors need Graphs!
Vectors need Graphs! Embedding vectors are a pivotal tool when using Generative AI. While vectors might initially seem an unlikely partner to graphs, their…
Vectors need Graphs!
·linkedin.com·
Vectors need Graphs!
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
Editor's Note: This post was written by Tomaz Bratanic from the Neo4j team. Extracting structured information from unstructured data like text is nothing new. However, LLMs brought a significant shift to the field of information extraction. If before you needed a team of
·blog.langchain.dev·
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
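The mechanism can be sketched as follows: a JSON schema describing nodes and relationships is exposed to the model as a callable function, and the structured arguments the model returns become the graph. The sketch assumes the 2023-era openai Python SDK's function-calling interface and a simplified schema; the post itself wraps the same idea in a LangChain chain.

```python
# Hedged sketch: extract a graph via function calling (assumes an openai<1.0-style SDK,
# with OPENAI_API_KEY set in the environment; schema simplified for illustration).
import json
import openai

graph_schema = {
    "name": "extract_graph",
    "description": "Extract a knowledge graph from the input text.",
    "parameters": {
        "type": "object",
        "properties": {
            "nodes": {
                "type": "array",
                "items": {"type": "object", "properties": {
                    "id": {"type": "string"}, "type": {"type": "string"}}},
            },
            "relationships": {
                "type": "array",
                "items": {"type": "object", "properties": {
                    "source": {"type": "string"}, "target": {"type": "string"},
                    "type": {"type": "string"}}},
            },
        },
        "required": ["nodes", "relationships"],
    },
}

def extract_graph(text: str) -> dict:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Extract a knowledge graph from:\n{text}"}],
        functions=[graph_schema],
        function_call={"name": "extract_graph"},  # force the structured-output path
    )
    return json.loads(response.choices[0].message["function_call"]["arguments"])
```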
Overcoming the "Reversal Curse" in LLMs with Ontologies
Overcoming the "Reversal Curse" in LLMs with Ontologies
Overcoming the "Reversal Curse" in LLMs with Ontologies: The "Reversal Curse" is a term coined in a recent paper to describe a particular failure of…
Overcoming the "Reversal Curse" in LLMs with Ontologies
·linkedin.com·
Overcoming the "Reversal Curse" in LLMs with Ontologies
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
🚀 Exciting News: Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models! 📊🧠 We are thrilled to unveil our…
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
·linkedin.com·
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
Graph Neural Prompting with Large Language Models
Graph Neural Prompting with Large Language Models
Large Language Models (LLMs) have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. In addition, how to leverage the pre-trained LLMs and avoid training a customized model from scratch remains an open question. In this work, we propose Graph Neural Prompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings.
·arxiv.org·
Graph Neural Prompting with Large Language Models
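A very rough sketch of the plug-and-play idea, with illustrative dimensions and a stand-in linear layer instead of a real GNN (so not the paper's architecture): encode the retrieved KG subgraph, pool it, and project the result into the frozen LLM's embedding space as a handful of soft prompt tokens.

```python
# Illustrative sketch: KG subgraph -> pooled embedding -> soft prompt tokens for a frozen LLM.
import torch
import torch.nn as nn

class GraphNeuralPrompt(nn.Module):
    def __init__(self, node_dim: int = 128, llm_dim: int = 4096, n_prompt_tokens: int = 8):
        super().__init__()
        self.gnn = nn.Linear(node_dim, node_dim)   # stand-in for a real GNN encoder
        self.projector = nn.Linear(node_dim, llm_dim * n_prompt_tokens)
        self.n_prompt_tokens, self.llm_dim = n_prompt_tokens, llm_dim

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.gnn(node_feats))       # (num_nodes, node_dim)
        pooled = h.mean(dim=0)                     # crude pooling over the subgraph
        return self.projector(pooled).view(self.n_prompt_tokens, self.llm_dim)

# The returned tensor would be prepended to the LLM's input token embeddings.
prompt_embeds = GraphNeuralPrompt()(torch.randn(20, 128))
```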
Chat with the Data Benchmark: Understanding Synergies between Large Language Models and Knowledge Graphs for Enterprise Conversations
Chat with the Data Benchmark: Understanding Synergies between Large Language Models and Knowledge Graphs for Enterprise Conversations
It was an honor to present the initial results of the Chat with the Data benchmark last week at The Alan Turing Institute Knowledge Graph meetup (link to…
·linkedin.com·
Chat with the Data Benchmark: Understanding Synergies between Large Language Models and Knowledge Graphs for Enterprise Conversations
LLMs-represent-Knowledge Graphs | LinkedIn
LLMs-represent-Knowledge Graphs | LinkedIn
On August 14, 2023, the paper Natural Language is All a Graph Needs by Ruosong Ye, Caiqi Zhang, Runhui Wang, Shuyuan Xu and Yongfeng Zhang hit the arXiv streets and made quite a bang! The paper outlines a model called InstructGLM that adds further evidence that the future of graph representation learning…
·linkedin.com·
LLMs-represent-Knowledge Graphs | LinkedIn
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts. Comparing LLMs and Knowledge Graph on Factual Knowledge
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts. Comparing LLMs and Knowledge Graph on Factual Knowledge
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts … 🧠 ... Comparing LLMs and Knowledge Graph on Factual Knowledge I’m…
·linkedin.com·
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts. Comparing LLMs and Knowledge Graph on Factual Knowledge
LLMs4OL: Large Language Models for Ontology Learning
LLMs4OL: Large Language Models for Ontology Learning
We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (OL). LLMs have shown significant advancements in natural language processing, demonstrating their ability to capture complex language patterns in different knowledge domains. Our LLMs4OL paradigm investigates the following hypothesis: Can LLMs effectively apply their language pattern capturing capability to OL, which involves automatically extracting and structuring knowledge from natural language text? To test this hypothesis, we conduct a comprehensive evaluation using the zero-shot prompting method. We evaluate nine different LLM model families for three main OL tasks: term typing, taxonomy discovery, and extraction of non-taxonomic relations. Additionally, the evaluations encompass diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS.
·arxiv.org·
LLMs4OL: Large Language Models for Ontology Learning
More Graph DBs in @LangChainAI
More Graph DBs in @LangChainAI
“📈 More Graph DBs in @LangChainAI. Graphs can store structured information in a way embeddings can't capture, and we're excited to support even more of them in LangChain: HugeGraph and SPARQL. Not only can you query data, but you can also update graph data (!!!) 🧵”
More Graph DBs in @LangChainAI
·twitter.com·
More Graph DBs in @LangChainAI
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-Thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we model GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline, with gains of 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT (large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
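The gated fusion step can be sketched in a few lines of PyTorch; dimensions and naming here are illustrative assumptions, not the paper's implementation. A learned sigmoid gate decides, per feature, how much of the text representation versus the graph-of-thoughts representation to keep.

```python
# Illustrative gated fusion of a text encoding with a graph-of-thoughts encoding.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.gate = nn.Linear(2 * hidden, hidden)

    def forward(self, text_repr: torch.Tensor, got_repr: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([text_repr, got_repr], dim=-1)))  # values in (0, 1)
        return g * text_repr + (1 - g) * got_repr

fused = GatedFusion()(torch.randn(2, 10, 768), torch.randn(2, 10, 768))  # (batch, seq, hidden)
```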
LLM Ontology-prompting for Knowledge Graph Extraction
LLM Ontology-prompting for Knowledge Graph Extraction
Prompting an LLM with an ontology to drive Knowledge Graph extraction from unstructured documents
I make no apology for saying that a graph is the best organization of structured data. However, the vast majority of data is unstructured text. Therefore, data needs to be transformed from its original format using an Extract-Transform-Load (ETL) or Extract-Load-Transform (ELT) process into a Knowledge Graph format. There is no problem when the original format is structured, such as SQL tables, spreadsheets, etc., or at least semi-structured, such as tweets. However, when the source data is unstructured text, the task of ETL/ELT to a graph is far more challenging. This article shows how an LLM can be prompted with an unstructured document and asked to extract a graph corresponding to a specific ontology/schema. This is demonstrated with a Kennedy ontology in conjunction with a publicly available description of the Kennedy family tree.
·medium.com·
LLM Ontology-prompting for Knowledge Graph Extraction
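A minimal sketch of ontology-guided prompting, assuming a toy two-relation ontology rather than the article's actual Kennedy ontology: the allowed classes and relations are placed in the prompt, and the model is asked to emit only triples that conform to them.

```python
# Illustrative ontology-constrained extraction prompt plus a simple triple parser.
ONTOLOGY = """Classes: Person
Relations: hasParent(Person, Person), hasSpouse(Person, Person)"""

def build_prompt(document: str) -> str:
    return (
        "Extract triples from the document below. "
        "Use only the classes and relations defined in this ontology:\n"
        f"{ONTOLOGY}\n"
        "Return one triple per line in the form (subject, relation, object).\n\n"
        f"Document:\n{document}"
    )

def parse_triples(llm_output: str):
    # Expect lines such as "(John F. Kennedy, hasParent, Joseph P. Kennedy Sr.)"
    for line in llm_output.splitlines():
        line = line.strip().strip("()")
        if line.count(",") == 2:
            yield tuple(part.strip() for part in line.split(","))
```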
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent abilities and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolve by nature, which challenges existing methods for generating new facts and representing unseen knowledge in KGs. Therefore, it is natural to unify LLMs and KGs and simultaneously leverage their complementary advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, which leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
·arxiv.org·
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Imagine the next phase of LLM prompts: a ‘Graph of Thought’
Imagine the next phase of LLM prompts: a ‘Graph of Thought’
The way we engage with Large Language Models (LLMs) is rapidly evolving. We started with prompt engineering and progressed to combining prompts into 'Chains of…
Now, imagine the next phase: a ‘Graph of Thought’
·linkedin.com·
Imagine the next phase of LLM prompts: a ‘Graph of Thought’