Charting the Graphical Roadmap to Smarter AI
Boosting LLMs with External Knowledge: The Case for Knowledge Graphs. When we wrote our post on Graph Intelligence in early 2022, our goal was to highlight techniques for deriving insights about relationships and connections from structured data using graph analytics and machine learning. We focused mainly on business intelligence and machine learning applications, showcasing how technology companies were applying graph neural networks (GNNs) in areas like recommendations and fraud detection.
·gradientflow.substack.com·
TacticAI: an AI assistant for football tactics using Graph AI
"TacticAI: an AI assistant for football tactics" by Zhe W., Petar Veličković, Daniel Hennes, Nenad Tomašev, Laurel Prince, Yoram Bachrach, Romuald Elie, Kevin… | 28 comments on LinkedIn
TacticAI: an AI assistant for football tactics
·linkedin.com·
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by 31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
·arxiv.org·
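As a rough illustration of the structure the abstract describes (thoughts as vertices, dependency edges, aggregation and feedback-loop transformations), here is a minimal Python sketch. It is not the authors' released framework; the `llm` argument is a placeholder callable and the function names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    """A unit of LLM-generated information: a vertex in the thought graph."""
    text: str
    score: float = 0.0  # e.g. assigned by an LLM-based scorer
    parents: list["Thought"] = field(default_factory=list)  # dependency edges

def aggregate(thoughts: list[Thought], llm) -> Thought:
    """Merge several thoughts into one new vertex (GoT-style aggregation)."""
    prompt = ("Combine these partial solutions into a single better one:\n"
              + "\n".join(t.text for t in thoughts))
    return Thought(text=llm(prompt), parents=list(thoughts))

def refine(thought: Thought, llm) -> Thought:
    """Improve a thought via a feedback loop, keeping the edge to its source."""
    prompt = f"Point out and fix the weaknesses in this solution:\n{thought.text}"
    return Thought(text=llm(prompt), parents=[thought])
```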
Vector databases vs Graph databases
Graph Databases should be the better choice for Retrieval Augmented Generation (RAG)! We have seen the debate RAG vs fine-tuning, but what about Vector…
·linkedin.com·
Graph Instruction Tuning for Large Language Models
🔥 Let #LLM understand graphs directly? GraphGPT made it! 📢 GraphGPT is a Graph Large Language Model, which aligns Large Language Models (LLMs) with Graphs…
·linkedin.com·
Vectors need Graphs!
Vectors need Graphs! Embedding vectors are a pivotal tool when using Generative AI. While vectors might initially seem an unlikely partner to graphs, their…
·linkedin.com·
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications
Editor's Note: This post was written by Tomaz Bratanic from the Neo4j team. Extracting structured information from unstructured data like text has been around for some time and is nothing new. However, LLMs brought a significant shift to the field of information extraction. If before you needed a team of
·blog.langchain.dev·
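The post builds its graph with Neo4j and LangChain; as a minimal sketch of the underlying idea (letting the model emit structured triples via OpenAI function calling), the snippet below uses the OpenAI Python SDK's tool-calling interface. The model name and the triple schema are illustrative choices, not taken from the post.

```python
import json
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY in the environment

client = OpenAI()

extract_triples_tool = {
    "type": "function",
    "function": {
        "name": "extract_triples",
        "description": "Extract knowledge-graph triples from the text.",
        "parameters": {
            "type": "object",
            "properties": {
                "triples": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "subject": {"type": "string"},
                            "relation": {"type": "string"},
                            "object": {"type": "string"},
                        },
                        "required": ["subject", "relation", "object"],
                    },
                }
            },
            "required": ["triples"],
        },
    },
}

text = "Marie Curie won the Nobel Prize in Physics in 1903."
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": f"Extract knowledge-graph triples from: {text}"}],
    tools=[extract_triples_tool],
    tool_choice={"type": "function", "function": {"name": "extract_triples"}},
)
args = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(args["triples"])  # e.g. [{"subject": "Marie Curie", "relation": "WON", "object": "Nobel Prize in Physics"}]
```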
Overcoming the "Reversal Curse" in LLMs with Ontologies
Overcoming the "Reversal Curse" in LLMs with Ontologies
Overcoming the "Reversal Curse" in LLMs with Ontologies: The "Reversal Curse" is a term coined in a recent paper to describe a particular failure of… | 108 comments on LinkedIn
Overcoming the "Reversal Curse" in LLMs with Ontologies
·linkedin.com·
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
🚀 Exciting News: Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models! 📊🧠 We are thrilled to unveil our… | 42 comments on LinkedIn
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
·linkedin.com·
Introducing "Reasoning on Graphs (RoG)" - Unlocking Next-Level Reasoning for Large Language Models
Concepts is All You Need: A More Direct Path to AGI
Little demonstrable progress has been made toward AGI (Artificial General Intelligence) since the term was coined some 20 years ago. In spite of the fantastic breakthroughs in Statistical AI such as AlphaZero, ChatGPT, and Stable Diffusion, none of these projects have, or claim to have, a clear path to AGI. In order to expedite the development of AGI, it is crucial to understand and identify the core requirements of human-like intelligence as it pertains to AGI. From that one can distill which particular development steps are necessary to achieve AGI, and which are a distraction. Such analysis highlights the need for a Cognitive AI approach rather than the currently favored statistical and generative efforts. More specifically, it identifies the central role of concepts in human-like cognition. Here we outline an architecture and development plan, together with some preliminary results, that offers a much more direct path to full Human-Level AI (HLAI)/AGI.
·arxiv.org·
Graph Neural Prompting with Large Language Models
Large Language Models (LLMs) have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. In addition, how to leverage the pre-trained LLMs and avoid training a customized model from scratch remains an open question. In this work, we propose Graph Neural Prompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings.
·arxiv.org·
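The abstract names the main pieces of GNP (a GNN encoder, cross-modality pooling, and a domain projector feeding a frozen LLM); the toy PyTorch sketch below wires those pieces together under assumed dimensions and a deliberately simplified mean-aggregation "GNN". It is not the paper's implementation.

```python
import torch
import torch.nn as nn

class GraphNeuralPrompt(nn.Module):
    """Toy version of the GNP idea: encode a KG subgraph, pool it against the
    question text, and project the result into the LLM's embedding space as
    soft prompt tokens."""

    def __init__(self, node_dim=128, hidden_dim=256, llm_dim=768, num_prompt_tokens=4):
        super().__init__()
        self.gnn_layer = nn.Linear(node_dim, hidden_dim)  # stand-in for a real GNN layer
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.text_proj = nn.Linear(llm_dim, hidden_dim)
        self.domain_projector = nn.Linear(hidden_dim, llm_dim * num_prompt_tokens)
        self.num_prompt_tokens = num_prompt_tokens
        self.llm_dim = llm_dim

    def forward(self, node_feats, adj, text_emb):
        # node_feats: (N, node_dim), adj: (N, N) 0/1 adjacency, text_emb: (T, llm_dim)
        # 1) one round of mean-aggregation message passing (simplified graph encoder)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = torch.relu(self.gnn_layer(adj @ node_feats / deg))            # (N, hidden)
        # 2) cross-modality pooling: question tokens attend over graph nodes
        q = self.text_proj(text_emb).unsqueeze(0)                         # (1, T, hidden)
        pooled, _ = self.cross_attn(q, h.unsqueeze(0), h.unsqueeze(0))    # (1, T, hidden)
        graph_summary = pooled.mean(dim=1)                                # (1, hidden)
        # 3) domain projector: map into the LLM embedding space as soft prompt tokens
        prompt = self.domain_projector(graph_summary)
        return prompt.view(self.num_prompt_tokens, self.llm_dim)          # prepend to LLM inputs

# usage with random tensors
model = GraphNeuralPrompt()
prompt_tokens = model(torch.randn(10, 128), torch.eye(10), torch.randn(6, 768))
print(prompt_tokens.shape)  # torch.Size([4, 768])
```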
Chat with the Data Benchmark: Understanding Synergies between Large Language Models and Knowledge Graphs for Enterprise Conversations
It was an honor to present the initial results of the Chat with the Data benchmark last week at The Alan Turing Institute Knowledge Graph meetup (link to…
·linkedin.com·
PyGraft, a configurable #Python tool to generate both synthetic #schemas and #knowledgeGraphs easily, supporting several RDFS and OWL constructs
Happy to announce PyGraft, a configurable #Python tool to generate both synthetic #schemas and #knowledgeGraphs easily, supporting several RDFS and OWL constructs. Paper: https://t.co/p1Ei3PIhVz Code: https://t.co/ID6gU3elqK (also available on PyPI) @nicolas_hubr @mdaquin
·twitter.com·
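The tweet does not show PyGraft's API, so rather than guess at it, here is a small rdflib sketch of the kind of artifact such a generator produces: a toy RDFS/OWL schema plus a few conforming instance triples. For the actual generator, see the paper and code links above.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# a tiny synthetic schema: two classes, a subclass axiom, one object property
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Researcher, RDF.type, OWL.Class))
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))
g.add((EX.supervises, RDF.type, OWL.ObjectProperty))
g.add((EX.supervises, RDFS.domain, EX.Researcher))
g.add((EX.supervises, RDFS.range, EX.Person))

# a few instance triples conforming to that schema
g.add((EX.alice, RDF.type, EX.Researcher))
g.add((EX.bob, RDF.type, EX.Person))
g.add((EX.alice, EX.supervises, EX.bob))

print(g.serialize(format="turtle"))
```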
LLMs-represent-Knowledge Graphs
On August 14, 2023, the paper Natural Language is All a Graph Needs by Ruosong Ye, Caiqi Zhang, Runhui Wang, Shuyuan Xu and Yongfeng Zhang hit the arXiv streets and made quite a bang! The paper outlines a model called InstructGLM that adds further evidence that the future of graph representation lea
·linkedin.com·
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts. Comparing LLMs and Knowledge Graph on Factual Knowledge
The Memory Game: Investigating the Accuracy of AI Models in Storing and Recalling Facts … 🧠 ... Comparing LLMs and Knowledge Graph on Factual Knowledge I’m…
·linkedin.com·
LLMs4OL: Large Language Models for Ontology Learning
We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (OL). LLMs have shown significant advancements in natural language processing, demonstrating their ability to capture complex language patterns in different knowledge domains. Our LLMs4OL paradigm investigates the following hypothesis: "Can LLMs effectively apply their language pattern capturing capability to OL, which involves automatically extracting and structuring knowledge from natural language text?" To test this hypothesis, we conduct a comprehensive evaluation using the zero-shot prompting method. We evaluate nine different LLM model families for three main OL tasks: term typing, taxonomy discovery, and extraction of non-taxonomic relations. Additionally, the evaluations encompass diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS.
·arxiv.org·
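The evaluation is zero-shot prompting over three OL tasks; as a hedged sketch of what the term-typing task can look like, the snippet below builds a zero-shot prompt and scores it against two hand-made labels. The `llm` argument is a placeholder callable; the prompt wording and the tiny gold set are illustrative, not taken from the paper.

```python
# Zero-shot term typing in the spirit of LLMs4OL: ask the model for the
# ontological type of a term, with no in-context examples.
TERM_TYPING_PROMPT = (
    "You are performing ontology learning.\n"
    "Question: In WordNet, what is the lexical category (noun, verb, adjective, adverb) "
    "of the term '{term}'?\n"
    "Answer with a single word."
)

def type_term(term: str, llm) -> str:
    """`llm` is any callable that maps a prompt string to a completion string."""
    return llm(TERM_TYPING_PROMPT.format(term=term)).strip().lower()

# toy evaluation against a couple of hand-labelled terms
gold = {"run": "verb", "happiness": "noun"}

def accuracy(llm) -> float:
    hits = sum(type_term(term, llm) == label for term, label in gold.items())
    return hits / len(gold)
```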
Gartner's Hype Cycle 2023
Ah, Gartner's Hype Cycle. It's always fun to see what's on the roller coaster. I think the position of Knowledge Graphs is about right - the KM community is…
·linkedin.com·
More Graph DBs in @LangChainAI
“📈 More Graph DBs in @LangChainAI Graphs can store structured information in a way embeddings can't capture, and we're excited to support even more of them in LangChain: HugeGraph and SPARQL Not only can you query data, but you can also update graph data (!!!) 🧵”
·twitter.com·
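A sketch of the SPARQL side of that announcement, following the LangChain docs from around that time. The `RdfGraph` and `GraphSparqlQAChain` names and import paths are assumptions based on that era's docs and may have since moved into langchain-community, so verify against the current API.

```python
# Assumed 2023-era LangChain class names and import paths; check current docs.
from langchain.chains import GraphSparqlQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import RdfGraph

graph = RdfGraph(
    source_file="http://www.w3.org/People/Berners-Lee/card",  # any RDF source
    standard="rdf",
    local_copy="graph.ttl",  # where graph updates are written back
)
graph.load_schema()  # let the chain see the available classes and properties

chain = GraphSparqlQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("What is Tim Berners-Lee's work homepage?")
```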
Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links
“Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links.”
·twitter.com·
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we modeled GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline with 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT(large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
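The abstract's "gated fusion mechanism" is named but not spelled out; below is a minimal PyTorch sketch of one common form of gated fusion between the text representation and the GoT representation, with an assumed hidden size. It is not the authors' released code.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse a text representation with a graph-of-thoughts representation via a
    learned gate: fused = g * text + (1 - g) * graph, with g in (0, 1)."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text_repr: torch.Tensor, got_repr: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([text_repr, got_repr], dim=-1)))
        return g * text_repr + (1 - g) * got_repr

# usage with random tensors shaped (batch, seq, dim)
fusion = GatedFusion(dim=768)
fused = fusion(torch.randn(2, 16, 768), torch.randn(2, 16, 768))
print(fused.shape)  # torch.Size([2, 16, 768])
```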