Tree-based RAG with RAPTOR and how knowledge graphs can come to the rescue to enhance answer quality.
Long-Context models, such as Google Gemini Pro 1.5 or Large World Model, are probably changing the way we think about RAG (retrieval-augmented generation)…
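For readers who want to see the shape of the technique, below is a minimal sketch of RAPTOR-style tree construction: chunks are recursively clustered and summarized into higher-level nodes, and retrieval can then search every level at once. The `embed` and `summarize` callables are placeholders for a real embedding model and an LLM summarizer, and plain k-means stands in for the paper's soft-clustering step.

```python
# Minimal sketch of RAPTOR-style tree construction; `embed` and `summarize`
# are hypothetical stand-ins for an embedding model and an LLM summarizer.
from typing import Callable
import numpy as np
from sklearn.cluster import KMeans

def build_raptor_tree(
    chunks: list[str],
    embed: Callable[[list[str]], np.ndarray],
    summarize: Callable[[list[str]], str],
    branching: int = 4,
) -> list[list[str]]:
    """Recursively cluster and summarize chunks, one tree level per iteration."""
    levels = [chunks]
    while len(levels[-1]) > 1:
        texts = levels[-1]
        n_clusters = max(1, len(texts) // branching)
        labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(embed(texts))
        # Each cluster is summarized by the LLM into a node at the next level up.
        levels.append([
            summarize([t for t, lab in zip(texts, labels) if lab == c])
            for c in range(n_clusters)
        ])
    return levels  # retrieval can search leaf chunks and summaries together
```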
Jensen Huang in his keynote at NVIDIA GTC24 calls out three sources of data to integrate with LLMs: 1) vector databases, 2) ERP / CRM and 3) knowledge graphs
Wow, in his keynote at NVIDIA #GTC24, Jensen Huang (CEO) calls out three sources of data to integrate with LLMs: 1) vector databases, 2) ERP / CRM and 3) *knowledge graphs*.
Kurt Cagle chatbot on Knowledge Graphs, Ontology, GenAI and Data
I want to thank Jay (JieBing) Yu, PhD for his hard work in creating a Mini-Me (https://lnkd.in/g6TR543j), a virtual assistant built on his fantastic LLM work…
Kurt is one of my favorite writers, a seasoned practitioner and deep thinker in the areas of Knowledge Graphs, Ontology, GenAI and Data.
Exploring the Potential of Large Language Models in Graph Generation
Large language models (LLMs) have achieved great success in many fields, and recent work has explored LLMs for graph discriminative tasks such as node classification. However, the ability of LLMs to generate graphs remains unexplored in the literature. Graph generation requires the LLM to produce graphs with given properties, which has valuable real-world applications such as drug discovery, yet tends to be more challenging. In this paper, we propose LLM4GraphGen to explore the ability of LLMs for graph generation through systematic task designs and extensive experiments. Specifically, we propose several tailored tasks with comprehensive experiments to address key questions about LLMs' understanding of different graph structure rules, their ability to capture structural type distributions, and their use of domain knowledge for property-based graph generation. Our evaluations demonstrate that LLMs, particularly GPT-4, exhibit preliminary abilities in graph generation, including rule-based and distribution-based generation. We also observe that popular prompting methods, such as few-shot and chain-of-thought prompting, do not consistently enhance performance. Moreover, LLMs show potential for generating molecules with specific properties. These findings may serve as foundations for designing good LLM-based models for graph generation and provide valuable insights for further research.
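To make the task concrete, here is an illustrative rule-based generation probe in the spirit of the paper: ask the model for a graph satisfying a structural rule, then verify the answer programmatically. The prompt wording and the `call_llm` helper are assumptions, not LLM4GraphGen's actual setup.

```python
# Illustrative rule-based graph-generation probe; prompt text and `call_llm`
# are assumptions for the sketch, not the paper's exact protocol.
import ast
import networkx as nx

def make_prompt(n_nodes: int) -> str:
    return (
        f"Generate a connected, acyclic undirected graph (a tree) on nodes 0..{n_nodes - 1}. "
        "Answer with only a Python list of edges, e.g. [(0, 1), (1, 2)]."
    )

def check_tree(reply: str, n_nodes: int) -> bool:
    """Parse the model's edge list and verify the requested tree property."""
    edges = ast.literal_eval(reply.strip())
    g = nx.Graph(edges)
    g.add_nodes_from(range(n_nodes))  # isolated nodes count against connectivity
    return nx.is_tree(g)

# Usage (pseudo): reply = call_llm(make_prompt(6)); print(check_tree(reply, 6))
```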
Knowledge Graph Large Language Model (KG-LLM) for Link Prediction
The task of predicting multiple links within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, one increasingly tractable thanks to advancements in natural language processing (NLP) and KG embedding techniques. This paper introduces a novel methodology, the Knowledge Graph Large Language Model Framework (KG-LLM), which leverages pivotal NLP paradigms, including chain-of-thought (CoT) prompting and in-context learning (ICL), to enhance multi-hop link prediction in KGs. By converting the KG to a CoT prompt, our framework is designed to discern and learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM framework, we fine-tune three leading Large Language Models (LLMs) within it, employing both non-ICL and ICL tasks for a comprehensive evaluation. Further, we explore the framework's potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Our experimental findings show that integrating ICL and CoT not only augments the performance of our approach but also significantly boosts the models' generalization capacity, ensuring more precise predictions in unfamiliar scenarios.
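A minimal sketch of the KG-to-prompt idea: serialize a path of triples into a chain-of-thought style question about a candidate link. The phrasing below is an assumption for illustration, not the exact KG-LLM template.

```python
# Turn a KG path into a CoT-style multi-hop link-prediction prompt.
# The wording is illustrative, not the KG-LLM paper's actual template.
Triple = tuple[str, str, str]  # (head, relation, tail)

def path_to_cot_prompt(path: list[Triple], query: Triple) -> str:
    steps = [f"Step {i + 1}: {h} --{r}--> {t}." for i, (h, r, t) in enumerate(path)]
    head, relation, tail = query
    return (
        "Reason over the following knowledge-graph path step by step.\n"
        + "\n".join(steps)
        + f"\nQuestion: does the relation ({head}, {relation}, {tail}) hold? "
        "Explain your reasoning, then answer yes or no."
    )

print(path_to_cot_prompt(
    [("Marie Curie", "born_in", "Warsaw"), ("Warsaw", "capital_of", "Poland")],
    ("Marie Curie", "nationality", "Poland"),
))
```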
DeepOnto: A Python Package for Ontology Engineering with Deep Learning
Integrating deep learning techniques, particularly language models (LMs), with knowledge representation techniques like ontologies has attracted widespread attention, underscoring the need for a platform that supports both paradigms. Although packages such as the OWL API and Jena offer robust support for basic ontology processing, they lack the capability to transform the various types of information within ontologies into formats suitable for downstream deep learning-based applications. Moreover, widely-used ontology APIs are primarily Java-based, while deep learning frameworks like PyTorch and TensorFlow are mainly Python-based. To address these needs, we present DeepOnto, a Python package designed for ontology engineering with deep learning. The package encompasses a core ontology processing module founded on the widely recognised and reliable OWL API, encapsulating its fundamental features in a more "Pythonic" manner and extending its capabilities to incorporate other essential components including reasoning, verbalisation, normalisation, taxonomy, projection, and more. Building on this module, DeepOnto offers a suite of tools, resources, and algorithms that support various ontology engineering tasks, such as ontology alignment and completion, by harnessing deep learning methods, primarily pre-trained LMs. In this paper, we also demonstrate the practical utility of DeepOnto through two use cases: Digital Health Coaching at Samsung Research UK and the Bio-ML track of the Ontology Alignment Evaluation Initiative (OAEI).
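A minimal usage sketch, with the caveat that DeepOnto wraps the Java OWL API through JPype, so a local JDK is required; attribute names such as `owl_classes` are from the package documentation as I recall it and should be checked against the current release.

```python
# Minimal DeepOnto usage sketch; the file path is hypothetical and a JVM
# (local JDK) is started under the hood when an ontology is loaded.
from deeponto.onto import Ontology

onto = Ontology("path/to/ontology.owl")  # hypothetical path to an OWL file
print(len(onto.owl_classes))  # named classes, exposed as a Pythonic dict
```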
Enhancing RAG-based application accuracy by constructing and leveraging knowledge graphs
A practical guide to constructing and retrieving information from knowledge graphs in RAG applications with Neo4j and LangChain
Editor's Note: the following is a guest blog post from Tomaz Bratanic, who focuses on Graph ML and GenAI research at Neo4j, a graph database and analytics company.
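The post walks through extracting a graph from text and writing it into Neo4j; a condensed sketch of that flow with LangChain's experimental LLMGraphTransformer follows. Connection details, the model name, and the sample text are placeholders, and the package layout reflects LangChain at the time of the post, so check current imports.

```python
# Condensed sketch of LLM-driven KG construction with LangChain and Neo4j;
# credentials, model name, and sample text are placeholders.
from langchain_community.graphs import Neo4jGraph
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

# Connect to a running Neo4j instance.
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

# Let an LLM extract nodes and relationships from raw text.
transformer = LLMGraphTransformer(llm=ChatOpenAI(temperature=0, model="gpt-4-turbo"))
docs = [Document(page_content="Tomaz Bratanic works on Graph ML and GenAI research at Neo4j.")]
graph_documents = transformer.convert_to_graph_documents(docs)

# Persist the extracted graph so RAG queries can traverse it later.
graph.add_graph_documents(graph_documents)
```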
The Limitations of Cosine Similarity and the Case for Knowledge Graphs in AI
As artificial intelligence (AI) systems become increasingly integrated into…
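The core of the argument can be shown in a few lines: similarity scores collapse relational direction that a graph preserves. A toy bag-of-words example follows (dense embeddings behave similarly, just less starkly).

```python
# Toy illustration: vectors for "Alice manages Bob" and "Bob manages Alice"
# are identical under cosine similarity, while a KG keeps the edge direction.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

s1, s2 = "alice manages bob", "bob manages alice"
print(cosine(Counter(s1.split()), Counter(s2.split())))  # 1.0: direction is lost
# A knowledge graph instead stores the directed edge (:Alice)-[:MANAGES]->(:Bob).
```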
KGLM-Loop: A Bi-Directional Data Flywheel for Knowledge Graph Refinement and Hallucination Detection in Large Language Models
In the pursuit of…
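The snippet alone doesn't spell out the mechanics, but the flywheel idea can be sketched schematically: LLM-asserted facts are checked against the KG (hallucination detection), and verified novel facts flow back in (refinement). Everything below, including the `extract_triples` and `verify` hooks, is a hypothetical illustration rather than the paper's method.

```python
# Schematic sketch of a bi-directional KG/LLM flywheel; `extract_triples`
# and `verify` are hypothetical hooks, not the paper's components.
Triple = tuple[str, str, str]  # (head, relation, tail)

def flywheel_step(answer: str, kg: set[Triple], extract_triples, verify) -> list[Triple]:
    """One turn of the loop: detect unsupported claims, absorb verified new facts."""
    hallucinations = []
    for triple in extract_triples(answer):
        if triple in kg:
            continue                       # already supported by the KG
        if verify(triple):                 # e.g. trusted-source lookup or human review
            kg.add(triple)                 # refinement: new fact enters the KG
        else:
            hallucinations.append(triple)  # detection: flag the unsupported claim
    return hallucinations
```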
Telecom GenAI based Network Operations: The Integration of LLMs, GraphRAG, Reinforcement Learning, and Scoring Models
With the increasing complexity of…
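As a rough illustration of how the pieces could fit together (a schematic sketch, not the article's actual pipeline): GraphRAG-style retrieval gathers context from a network-topology graph, and a scoring model ranks candidate actions before an LLM drafts the operator response.

```python
# Schematic sketch only: graph-scoped retrieval plus a pluggable scoring
# model; the topology graph and `score` function are assumptions.
import networkx as nx

def graphrag_context(topology: nx.Graph, alarm_node: str, hops: int = 2) -> list[str]:
    """Collect network elements within `hops` of the alarming node as LLM context."""
    return list(nx.single_source_shortest_path_length(topology, alarm_node, cutoff=hops))

def rank_actions(actions: list[str], score) -> list[str]:
    """`score` stands in for a learned (e.g. RL-trained) scoring model."""
    return sorted(actions, key=score, reverse=True)
```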
Why do LangChain and Autogen use graphs? Here are the top reasons
LLM frameworks like LangChain are moving towards a graph-based approach for handling their workflows. This represents the initial steps of a much larger…
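The pattern the post describes is easy to see in miniature with LangGraph, where workflow steps are nodes and control flow is edges. The node logic below is stubbed, but the StateGraph calls follow langgraph's documented interface.

```python
# Minimal LangGraph sketch of a graph-based workflow: steps are nodes,
# control flow is edges; the node bodies are stubs for illustration.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    answer: str

def retrieve(state: State) -> dict:
    return {"answer": f"context for: {state['question']}"}

def generate(state: State) -> dict:
    return {"answer": f"answer using {state['answer']}"}

workflow = StateGraph(State)
workflow.add_node("retrieve", retrieve)
workflow.add_node("generate", generate)
workflow.set_entry_point("retrieve")
workflow.add_edge("retrieve", "generate")
workflow.add_edge("generate", END)

app = workflow.compile()
print(app.invoke({"question": "why graphs?", "answer": ""}))
```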