Found 402 bookmarks
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
Editor's Note: This post was written by Tomaz Bratanic from the Neo4j team. Extracting structured information from unstructured data like text is nothing new. However, LLMs brought a significant shift to the field of information extraction. If before you needed a team of…
·blog.langchain.dev·
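The excerpt stops short of the post's code, but the underlying technique is to hand the LLM a function schema so it must return nodes and relationships as structured JSON. A minimal sketch of that pattern, assuming the OpenAI Python client and an illustrative `extract_graph` schema (the post itself builds on LangChain and writes the result to Neo4j):

```python
# Sketch only: extract a small knowledge graph from free text via an OpenAI
# function/tool schema. Function name and schema are illustrative, not the
# post's exact code.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GRAPH_SCHEMA = {
    "name": "extract_graph",
    "description": "Extract entities and relationships from the text.",
    "parameters": {
        "type": "object",
        "properties": {
            "nodes": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "id": {"type": "string"},
                        "type": {"type": "string"},
                    },
                    "required": ["id", "type"],
                },
            },
            "relationships": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "source": {"type": "string"},
                        "target": {"type": "string"},
                        "type": {"type": "string"},
                    },
                    "required": ["source", "target", "type"],
                },
            },
        },
        "required": ["nodes", "relationships"],
    },
}

def extract_graph(text: str) -> dict:
    """Ask the model for a graph; the tool schema constrains the output shape."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Extract a knowledge graph from:\n{text}"}],
        tools=[{"type": "function", "function": GRAPH_SCHEMA}],
        tool_choice={"type": "function", "function": {"name": "extract_graph"}},
    )
    call = response.choices[0].message.tool_calls[0]
    return json.loads(call.function.arguments)

print(extract_graph("Marie Curie won the Nobel Prize in Physics in 1903."))
```

The returned nodes and relationships can then be loaded into any graph store; the post uses Neo4j.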
Overcoming the "Reversal Curse" in LLMs with Ontologies
The "Reversal Curse" is a term coined in a recent paper to describe a particular failure of…
·linkedin.com·
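The excerpt doesn't show the post's modelling, but the general remedy is to make the inverse direction a first-class part of the schema rather than something the model must infer. A small sketch with rdflib, using owl:inverseOf plus a hand-rolled materialisation step (the property names are illustrative; the Tom Cruise example is the one the "Reversal Curse" paper uses):

```python
# Sketch: make the "reverse" direction explicit with an ontology instead of
# hoping the language model has memorised both phrasings of the same fact.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Ontology: parentOf and childOf are declared as inverse properties.
g.add((EX.parentOf, RDF.type, OWL.ObjectProperty))
g.add((EX.childOf, RDF.type, OWL.ObjectProperty))
g.add((EX.parentOf, OWL.inverseOf, EX.childOf))

# Only one direction is asserted in the data...
g.add((EX.MaryLeePfeiffer, EX.parentOf, EX.TomCruise))

# ...but materialising the inverse makes "Who is Tom Cruise's parent?"
# answerable by lookup in either direction.
for p, q in list(g.subject_objects(OWL.inverseOf)):
    for s, o in list(g.subject_objects(p)):
        g.add((o, q, s))
    for s, o in list(g.subject_objects(q)):
        g.add((o, p, s))

print(list(g.objects(EX.TomCruise, EX.childOf)))  # -> [EX.MaryLeePfeiffer]
```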
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
🚀 Exciting News: Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models! 📊🧠 We are thrilled to unveil our…
·linkedin.com·
Concepts is All You Need: A More Direct Path to AGI
Little demonstrable progress has been made toward AGI (Artificial General Intelligence) since the term was coined some 20 years ago. In spite of the fantastic breakthroughs in Statistical AI such as AlphaZero, ChatGPT, and Stable Diffusion, none of these projects has, or claims to have, a clear path to AGI. In order to expedite the development of AGI, it is crucial to understand and identify the core requirements of human-like intelligence as it pertains to AGI. From that one can distill which particular development steps are necessary to achieve AGI, and which are a distraction. Such analysis highlights the need for a Cognitive AI approach rather than the currently favored statistical and generative efforts. More specifically, it identifies the central role of concepts in human-like cognition. Here we outline an architecture and development plan, together with some preliminary results, that offers a much more direct path to full Human-Level AI (HLAI)/AGI.
·arxiv.org·
Graph Neural Prompting with Large Language Models
Large Language Models (LLMs) have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. In addition, how to leverage the pre-trained LLMs and avoid training a customized model from scratch remains an open question. In this work, we propose Graph Neural Prompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings.
·arxiv.org·
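As a rough, self-contained sketch of the plug-and-play idea in the abstract: a small graph side-network encodes a retrieved KG subgraph, pools it against the text, and projects the result into the LLM's embedding space as a soft prompt. The dense-adjacency GNN, dimensions, and layer counts below are assumptions, not the paper's exact architecture:

```python
# Illustrative Graph Neural Prompting-style module: frozen LLM + graph side-network
# whose output is prepended to the input embeddings as a soft prompt.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of mean message passing over a dense adjacency matrix."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):                  # x: (nodes, dim), adj: (nodes, nodes)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(adj @ x / deg))

class GraphNeuralPrompt(nn.Module):
    def __init__(self, node_dim=128, llm_dim=768, prompt_len=8):
        super().__init__()
        self.gnn = nn.ModuleList([SimpleGNNLayer(node_dim) for _ in range(2)])
        # Cross-modality pooling: text embeddings attend over node embeddings.
        self.pool = nn.MultiheadAttention(node_dim, num_heads=4, batch_first=True)
        self.text_proj = nn.Linear(llm_dim, node_dim)
        # Domain projector: map pooled graph features into the LLM embedding space.
        self.projector = nn.Sequential(
            nn.Linear(node_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )
        self.prompt_len = prompt_len

    def forward(self, node_feats, adj, text_embeds):
        # node_feats: (nodes, node_dim); adj: (nodes, nodes); text_embeds: (1, seq, llm_dim)
        h = node_feats
        for layer in self.gnn:
            h = layer(h, adj)
        q = self.text_proj(text_embeds)                              # (1, seq, node_dim)
        pooled, _ = self.pool(q, h.unsqueeze(0), h.unsqueeze(0))     # attend text -> nodes
        soft_prompt = self.projector(pooled[:, : self.prompt_len])   # (1, prompt_len, llm_dim)
        # The soft prompt is concatenated in front of the frozen LLM's input embeddings.
        return torch.cat([soft_prompt, text_embeds], dim=1)

gnp = GraphNeuralPrompt()
out = gnp(torch.randn(10, 128), torch.ones(10, 10), torch.randn(1, 16, 768))
print(out.shape)  # torch.Size([1, 24, 768])
```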
Chat with the Data Benchmark: Understanding Synergies between Large Language Models and Knowledge Graphs for Enterprise Conversations
It was an honor to present the initial results of the Chat with the Data benchmark last week at The Alan Turing Institute Knowledge Graph meetup (link to…
·linkedin.com·
PyGraft, a configurable #Python tool to generate both synthetic #schemas and #knowledgeGraphs easily, supporting several RDFS and OWL constructs
Happy to announce PyGraft, a configurable #Python tool to generate both synthetic #schemas and #knowledgeGraphs easily, supporting several RDFS and OWL constructs. Paper: https://t.co/p1Ei3PIhVz Code: https://t.co/ID6gU3elqK (also available on PyPI) @nicolas_hubr @mdaquin
·twitter.com·
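A hedged usage sketch based on the tweet's description (generate a configurable synthetic schema, then a KG that conforms to it); the function names follow PyGraft's documented three-step workflow but should be checked against the README on PyPI:

```python
# Hedged sketch of PyGraft usage; exact function names may differ between releases.
import pygraft

# 1. Create a JSON configuration template and tune it (number of classes,
#    relations, RDFS/OWL constructs to include, KG size, etc.).
pygraft.create_json_template()

# 2. Generate a synthetic schema from the configuration.
pygraft.generate_schema("template.json")

# 3. Generate a knowledge graph that instantiates the generated schema.
pygraft.generate_kg("template.json")
```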
LLMs-represent-Knowledge Graphs | LinkedIn
On August 14, 2023, the paper Natural Language is All a Graph Needs by Ruosong Ye, Caiqi Zhang, Runhui Wang, Shuyuan Xu and Yongfeng Zhang hit the arXiv streets and made quite a bang! The paper outlines a model called InstructGLM that adds further evidence that the future of graph representation learning…
·linkedin.com·
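The excerpt cuts off, but the gist of InstructGLM is that a node's neighbourhood is described in plain natural language and handed to an LLM instead of a specialised graph encoder. A toy serialiser to illustrate the idea (the templates are illustrative, not the paper's exact ones):

```python
# Verbalise a node and its neighbourhood so an LLM can classify the node or
# predict links from the text description alone.
import networkx as nx

def verbalize_node(g: nx.Graph, node, hops: int = 1) -> str:
    parts = [f"Node {node} has features: {g.nodes[node].get('text', 'n/a')}."]
    frontier = {node}
    for hop in range(1, hops + 1):
        frontier = {nbr for n in frontier for nbr in g.neighbors(n)} - {node}
        if frontier:
            parts.append(f"Its {hop}-hop neighbours are: {', '.join(map(str, sorted(frontier)))}.")
    return " ".join(parts)

g = nx.Graph()
g.add_node("paper_1", text="graph neural networks survey")
g.add_node("paper_2", text="attention is all you need")
g.add_edge("paper_1", "paper_2")

# The resulting sentence is dropped into an instruction prompt instead of a
# specialised GNN input format.
print(verbalize_node(g, "paper_1"))
```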
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts. Comparing LLMs and Knowledge Graph on Factual Knowledge
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts 🧠 Comparing LLMs and Knowledge Graph on Factual Knowledge. I'm…
·linkedin.com·
LLMs4OL: Large Language Models for Ontology Learning
We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (OL). LLMs have shown significant advancements in natural language processing, demonstrating their ability to capture complex language patterns in different knowledge domains. Our LLMs4OL paradigm investigates the following hypothesis: "Can LLMs effectively apply their language pattern capturing capability to OL, which involves automatically extracting and structuring knowledge from natural language text?" To test this hypothesis, we conduct a comprehensive evaluation using the zero-shot prompting method. We evaluate nine different LLM model families for three main OL tasks: term typing, taxonomy discovery, and extraction of non-taxonomic relations. Additionally, the evaluations encompass diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS.
·arxiv.org·
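A minimal sketch of the zero-shot prompting setup the abstract describes, for one of the three OL tasks (term typing); the prompt wording, model name, and example output are assumptions rather than the paper's templates:

```python
# Zero-shot term typing: ask the model for the most specific type of a term.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def term_typing(term: str, context: str = "") -> str:
    prompt = (
        "Task: ontology term typing.\n"
        "Give the most specific type (class) for the term below.\n"
        f"Term: {term}\n"
        f"Context: {context or 'none'}\n"
        "Answer with a single type name."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(term_typing("aspirin", "UMLS, biomedical domain"))  # e.g. "Pharmacologic Substance"
```

The other two tasks (taxonomy discovery, non-taxonomic relation extraction) follow the same pattern with different instructions.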
Gartner's Hype Cycle 2023
Ah, Gartner's Hype Cycle. It's always fun to see what's on the roller coaster. I think the position of Knowledge Graphs is about right - the KM community is…
·linkedin.com·
More Graph DBs in @LangChainAI
📈 More Graph DBs in @LangChainAI. Graphs can store structured information in a way embeddings can't capture, and we're excited to support even more of them in LangChain: HugeGraph and SPARQL. Not only can you query data, but you can also update graph data (!!!) 🧵
·twitter.com·
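A rough sketch of what the thread announces: natural-language questions answered by generated SPARQL over an RDF graph. Class locations follow LangChain's 2023-era API (RdfGraph, GraphSparqlQAChain) and may have moved in later releases; the data source is the example used in LangChain's documentation:

```python
# Natural-language question -> generated SPARQL -> answer, via LangChain.
from langchain.chains import GraphSparqlQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import RdfGraph

graph = RdfGraph(
    source_file="http://www.w3.org/People/Berners-Lee/card",  # any RDF source
    standard="rdf",
    local_copy="card.ttl",
)

chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)

print(chain.run("What is Tim Berners-Lee's work homepage?"))
```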
Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links
Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links.
·twitter.com·
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we model GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves a significant improvement over the strong CoT baseline, gaining 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT (large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
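A toy sketch of the gated fusion step the abstract mentions: the thought-graph representation is merged with the original encoder representation through a learned gate. Dimensions are illustrative, not the paper's configuration:

```python
# Gated fusion of a graph-of-thought representation with the text representation.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text_repr, got_repr):
        # text_repr, got_repr: (batch, seq, dim)
        g = torch.sigmoid(self.gate(torch.cat([text_repr, got_repr], dim=-1)))
        return g * text_repr + (1 - g) * got_repr  # per-feature convex combination

fuse = GatedFusion(dim=512)
fused = fuse(torch.randn(2, 32, 512), torch.randn(2, 32, 512))
print(fused.shape)  # torch.Size([2, 32, 512])
```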
LLM Ontology-prompting for Knowledge Graph Extraction
Prompting an LLM with an ontology to drive Knowledge Graph extraction from unstructured documents
I make no apology for saying that a graph is the best organization of structured data. However, the vast majority of data is unstructured text. Therefore, data needs to be transformed from its original format using an Extract-Transform-Load (ETL) or Extract-Load-Transform (ELT) process into a Knowledge Graph format. There is no problem when the original format is structured, such as SQL tables, spreadsheets, etc., or at least semi-structured, such as tweets. However, when the source data is unstructured text the task of ETL/ELT to a graph is far more challenging. This article shows how an LLM can be prompted with an unstructured document and asked to extract a graph corresponding to a specific ontology/schema. This is demonstrated with a Kennedy ontology in conjunction with a publicly available description of the Kennedy family tree.
·medium.com·
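A skeleton of the ontology-prompting pattern the article describes: put the target ontology in the prompt and ask the model to emit only triples that use its classes and properties. The ontology snippet and wording here are illustrative stand-ins; the article itself uses a Kennedy family ontology:

```python
# Prompt an LLM with an ontology and a document; ask for conforming RDF triples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

ONTOLOGY_TTL = """\
@prefix ex: <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Person a owl:Class .
ex:hasParent a owl:ObjectProperty ; rdfs:domain ex:Person ; rdfs:range ex:Person .
ex:hasSpouse a owl:ObjectProperty ; rdfs:domain ex:Person ; rdfs:range ex:Person .
"""

def extract_triples(document: str) -> str:
    prompt = (
        "You are given an ontology and a document.\n"
        "Extract RDF triples from the document, using ONLY the classes and "
        "properties defined in the ontology. Output Turtle.\n\n"
        f"Ontology:\n{ONTOLOGY_TTL}\n"
        f"Document:\n{document}\n"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

print(extract_triples("John F. Kennedy married Jacqueline Bouvier in 1953."))
```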
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
·arxiv.org·
Imagine the next phase of LLM prompts: a ‘Graph of Thought’
The way we engage with Large Language Models (LLMs) is rapidly evolving. We started with prompt engineering and progressed to combining prompts into 'Chains of Thought'. Now, imagine the next phase: a 'Graph of Thought'.
·linkedin.com·