Found 368 bookmarks
Tree-based RAG with RAPTOR and how knowledge graphs can come to the rescue to enhance answer quality.
Long-Context models, such as Google Gemini Pro 1.5 or Large World Model, are probably changing the way we think about RAG (retrieval-augmented generation)…
·linkedin.com·
Jensen Huang in his keynote at NVIDIA GTC24 calls out three sources of data to integrate with LLMs: 1) vector databases, 2) ERP / CRM and 3) knowledge graphs
In his keynote at NVIDIA #GTC24, Jensen Huang (CEO) calls out three sources of data to integrate with LLMs: 1) vector databases, 2) ERP / CRM and 3) *knowledge graphs*
·linkedin.com·
Kurt Cagle chatbot on Knowledge Graphs, Ontology, GenAI and Data
I want to thank Jay (JieBing) Yu, PhD for his hard work in creating a Mini-Me (https://lnkd.in/g6TR543j), a virtual assistant built on his fantastic LLM work…
Kurt is one of my favorite writers, a seasoned practitioner and deep thinker in the areas of Knowledge Graphs, Ontology, GenAI and Data
·linkedin.com·
Exploring the Potential of Large Language Models in Graph Generation
Large language models (LLMs) have achieved great success in many fields, and recent works have explored LLMs for graph discriminative tasks such as node classification. However, the abilities of LLMs for graph generation remain unexplored in the literature. Graph generation requires the LLM to generate graphs with given properties, which has valuable real-world applications such as drug discovery, but tends to be more challenging. In this paper, we propose LLM4GraphGen to explore the ability of LLMs for graph generation with systematic task designs and extensive experiments. Specifically, we propose several tailored tasks with comprehensive experiments to address key questions regarding LLMs' understanding of different graph structure rules, their ability to capture structural type distributions, and their utilization of domain knowledge for property-based graph generation. Our evaluations demonstrate that LLMs, particularly GPT-4, exhibit preliminary abilities in graph generation tasks, including rule-based and distribution-based generation. We also observe that popular prompting methods, such as few-shot and chain-of-thought prompting, do not consistently enhance performance. In addition, LLMs show potential in generating molecules with specific properties. These findings may serve as foundations for designing strong LLM-based models for graph generation and provide valuable insights for further research.
·arxiv.org·
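The rule-based generation task the abstract describes can be sketched in miniature: prompt a model for a graph with a target property, then validate its output programmatically. The JSON edge-list format, the sample response, and the tree-structure check below are illustrative assumptions, not the paper's actual task design:

```python
import json

def parse_edge_list(llm_output: str):
    """Parse a model's JSON edge list, e.g. '[[0, 1], [1, 2]]', into tuples."""
    return [tuple(e) for e in json.loads(llm_output)]

def is_connected_acyclic(n_nodes: int, edges) -> bool:
    """Check a 'tree' structure rule: exactly n-1 edges and no cycle."""
    if len(edges) != n_nodes - 1:
        return False
    parent = list(range(n_nodes))      # union-find over node ids
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:                   # edge joins already-connected nodes => cycle
            return False
        parent[ru] = rv
    return True                        # n-1 edges without a cycle implies a tree

# A hypothetical model response when asked to "generate a tree on 4 nodes":
response = "[[0, 1], [1, 2], [1, 3]]"
print(is_connected_acyclic(4, parse_edge_list(response)))  # True
```

In the paper's setting the validator score over many sampled outputs would become the rule-based generation metric.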
TigerGraph CoPilot's public alpha release
🚀 Exciting News Alert! 🚀 We're over the moon to announce the launch of TigerGraph CoPilot's public alpha release! 🌟 🔗 Get ready to explore the future of…
·linkedin.com·
Knowledge Graph Large Language Model (KG-LLM) for Link Prediction
The task of predicting multiple links within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, a challenge increasingly resolvable due to advancements in natural language processing (NLP) and KG embedding techniques. This paper introduces a novel methodology, the Knowledge Graph Large Language Model Framework (KG-LLM), which leverages pivotal NLP paradigms, including chain-of-thought (CoT) prompting and in-context learning (ICL), to enhance multi-hop link prediction in KGs. By converting the KG to a CoT prompt, our framework is designed to discern and learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM Framework, we fine-tune three leading Large Language Models (LLMs) within this framework, employing both non-ICL and ICL tasks for a comprehensive evaluation. Further, we explore the framework's potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Our experimental findings show that integrating ICL and CoT not only augments the performance of our approach but also significantly boosts the models' generalization capacity, thereby ensuring more precise predictions in unfamiliar scenarios.
·arxiv.org·
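The framework's core move, converting a KG path into a chain-of-thought prompt, might look roughly like the sketch below. The triple format, step phrasing, and example entities are hypothetical; the paper's actual prompt templates will differ:

```python
def kg_path_to_cot_prompt(path, question_head, question_tail):
    """Serialize a multi-hop KG path into a chain-of-thought style prompt.

    `path` is a list of (head, relation, tail) triples forming a chain.
    """
    steps = [
        f"Step {i}: {h} --[{r}]--> {t}."
        for i, (h, r, t) in enumerate(path, start=1)
    ]
    return (
        "Given the following reasoning chain over a knowledge graph:\n"
        + "\n".join(steps) + "\n"
        + f"Question: is there a link between {question_head} and {question_tail}?\n"
        + "Answer yes or no, explaining each hop."
    )

path = [
    ("Alice", "works_at", "AcmeCorp"),
    ("AcmeCorp", "headquartered_in", "Berlin"),
]
print(kg_path_to_cot_prompt(path, "Alice", "Berlin"))
```

Prompts of this shape would then be used as fine-tuning inputs for the multi-hop link prediction task.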
DeepOnto: A Python Package for Ontology Engineering with Deep Learning
Integrating deep learning techniques, particularly language models (LMs), with knowledge representation techniques like ontologies has attracted widespread attention, raising the need for a platform that supports both paradigms. Although packages such as OWL API and Jena offer robust support for basic ontology processing features, they lack the capability to transform various types of information within ontologies into formats suitable for downstream deep learning-based applications. Moreover, widely-used ontology APIs are primarily Java-based, while deep learning frameworks like PyTorch and TensorFlow are mainly for Python programming. To address these needs, we present DeepOnto, a Python package designed for ontology engineering with deep learning. The package encompasses a core ontology processing module founded on the widely-recognised and reliable OWL API, encapsulating its fundamental features in a more "Pythonic" manner and extending its capabilities to incorporate other essential components including reasoning, verbalisation, normalisation, taxonomy, projection, and more. Building on this module, DeepOnto offers a suite of tools, resources, and algorithms that support various ontology engineering tasks, such as ontology alignment and completion, by harnessing deep learning methods, primarily pre-trained LMs. In this paper, we also demonstrate the practical utility of DeepOnto through two use cases: the Digital Health Coaching in Samsung Research UK and the Bio-ML track of the Ontology Alignment Evaluation Initiative (OAEI).
·arxiv.org·
Enhancing RAG-based application accuracy by constructing and leveraging knowledge graphs
A practical guide to constructing and retrieving information from knowledge graphs in RAG applications with Neo4j and LangChain. Editor's Note: the following is a guest blog post from Tomaz Bratanic, who focuses on Graph ML and GenAI research at Neo4j. Neo4j is a graph database and analytics company which helps…
·blog.langchain.dev·
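The construct-then-retrieve pattern the post describes can be illustrated without Neo4j or LangChain at all; this minimal in-memory stand-in (the class name and sample triples are invented for illustration) shows the shape of the pipeline:

```python
class TripleStore:
    """Minimal in-memory stand-in for a graph database in a RAG pipeline.

    The blog post uses Neo4j and LangChain; this sketch only illustrates
    the construct-then-retrieve pattern with plain Python.
    """
    def __init__(self):
        self.triples = []

    def add(self, head, relation, tail):
        """Construction step: store one extracted (head, relation, tail) fact."""
        self.triples.append((head, relation, tail))

    def neighborhood(self, entity):
        """Retrieval step: return all facts touching `entity`, verbalised
        as sentences ready to be prepended to an LLM prompt as context."""
        return [
            f"{h} {r.replace('_', ' ')} {t}"
            for h, r, t in self.triples
            if entity in (h, t)
        ]

kg = TripleStore()
kg.add("Neo4j", "is_a", "graph database")
kg.add("LangChain", "integrates_with", "Neo4j")

# Facts retrieved for an entity mentioned in a user question become
# grounding context for the generation step:
context = kg.neighborhood("Neo4j")
print(context)  # ['Neo4j is a graph database', 'LangChain integrates with Neo4j']
```

In the real pipeline the construction step is an LLM extracting triples from documents, and retrieval is a Cypher query over the Neo4j graph rather than a list scan.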
Knowledge, Data and LLMs
Today is a pretty special day. In some sense, this is the day I’ve been waiting for all my life. The day that we figure out how to make…
·medium.com·
Tony Seale Knowledge Graph Chatbot
I am thrilled to introduce a new AI Study Guide (https://lnkd.in/g4rPZVHW) dedicated to Tony Seale, another of my favorite authors, thought leaders, and…
·linkedin.com·
KGLM-Loop: A Bi-Directional Data Flywheel for Knowledge Graph Refinement and Hallucination Detection in Large Language Models
KGLM-Loop: A Bi-Directional Data Flywheel for Knowledge Graph Refinement and Hallucination Detection in Large Language Models ☀ 🌑 In the pursuit of…
·linkedin.com·
Telecom GenAI based Network Operations: The Integration of LLMs, GraphRAG, Reinforcement Learning, and Scoring Models
Telecom GenAI based Network Operations: The Integration of LLMs, GraphRAG, Reinforcement Learning, and Scoring Models With the increasing complexity of…
·linkedin.com·
Why do LangChain and Autogen use graphs? Here are the top reasons
LLM frameworks like LangChain are moving towards a graph-based approach for handling their workflows. This represents the initial steps of a much larger…
·linkedin.com·
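The "graph-based approach to workflows" the post refers to reduces to a dependency graph of steps executed in topological order, which Python's standard `graphlib` supports directly. The node names and step functions below are invented for illustration and are not LangChain or Autogen APIs:

```python
from graphlib import TopologicalSorter

# A workflow as a dependency graph: each node is a step, each edge says
# which steps must run first (the pattern graph-based frameworks build on).
workflow = {
    "retrieve": set(),
    "summarize": {"retrieve"},
    "critique": {"summarize"},
    "answer": {"summarize", "critique"},
}

# Each step takes the shared state dict and returns an updated copy.
steps = {
    "retrieve": lambda state: state | {"docs": ["doc1", "doc2"]},
    "summarize": lambda state: state | {"summary": f"{len(state['docs'])} docs"},
    "critique": lambda state: state | {"ok": True},
    "answer": lambda state: state | {"answer": state["summary"]},
}

state = {}
for node in TopologicalSorter(workflow).static_order():
    state = steps[node](state)

print(state["answer"])  # 2 docs
```

Expressing the workflow as a graph rather than a fixed chain is what lets such frameworks add branching, cycles, and parallel steps without rewriting the execution logic.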
Knowledge Engineering using Large Language Models
Knowledge engineering is a discipline that focuses on the creation and maintenance of processes that generate and apply knowledge. Traditionally, knowledge engineering approaches have focused on knowledge expressed in formal languages. The emergence of large language models and their capabilities to effectively work with natural language, in its broadest sense, raises questions about the foundations and practice of knowledge engineering. Here, we outline the potential role of LLMs in knowledge engineering, identifying two central directions: 1) creating hybrid neuro-symbolic knowledge systems; and 2) enabling knowledge engineering in natural language. Additionally, we formulate key open research questions to tackle these directions.
·arxiv.org·
Chatbot created based on the prolific writings of Mike Dillinger. This chatbot helps you better digest his posts and articles on Knowledge Graphs, Taxonomy, Ontology and their critical roles in getting LLM technology more accurate and practical
Check out this chatbot (https://lnkd.in/gv8Afk57) that I created entirely based on the prolific writings of Mike Dillinger, PhD. This chatbot helps you better digest his posts and articles on Knowledge Graphs, Taxonomy, Ontology and their critical roles in getting LLM technology more accurate and practical.
·linkedin.com·