The Era of Semantic Decoding
Recent work has demonstrated great promise in orchestrating collaborations among LLMs, human input, and various tools to address the inherent limitations of LLMs. We propose a novel perspective called semantic decoding, which frames these collaborative processes as optimization procedures in semantic space. Specifically, we conceptualize LLMs as semantic processors that manipulate meaningful pieces of information that we call semantic tokens (also known as thoughts). LLMs sit within a large pool of semantic processors that also includes humans and tools, such as search engines or code executors. Collectively, semantic processors engage in dynamic exchanges of semantic tokens to progressively construct high-utility outputs. We refer to these orchestrated interactions among semantic processors, optimizing and searching in semantic space, as semantic decoding algorithms. This concept draws a direct parallel to the well-studied problem of syntactic decoding, which involves crafting algorithms to best exploit auto-regressive language models for extracting high-utility sequences of syntactic tokens. By focusing on the semantic level and disregarding syntactic details, we gain a fresh perspective on the engineering of AI systems, enabling us to imagine systems with much greater complexity and capabilities. In this position paper, we formalize the transition from syntactic to semantic tokens as well as the analogy between syntactic and semantic decoding. Subsequently, we explore the possibilities of optimizing within the space of semantic tokens via semantic decoding algorithms. We conclude with a list of research opportunities and questions arising from this fresh perspective. The semantic decoding perspective offers a powerful abstraction for search and optimization directly in the space of meaningful concepts, with semantic tokens as the fundamental units of a new type of computation.
·arxiv.org·
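A minimal Python sketch of the abstraction the abstract describes: semantic processors proposing semantic tokens, with a greedy search keeping the highest-utility thought at each step, mirroring greedy syntactic decoding. All names here (SemanticProcessor, greedy_semantic_decode, utility) are hypothetical illustrations, not an API from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SemanticToken:
    """A meaningful unit of information (a 'thought'), not a syntactic token."""
    content: str

class SemanticProcessor:
    """Anything that maps semantic tokens to semantic tokens: an LLM, a tool, a human."""
    def step(self, context: List[SemanticToken]) -> SemanticToken:
        raise NotImplementedError

def greedy_semantic_decode(
    processors: List[SemanticProcessor],
    prompt: SemanticToken,
    utility: Callable[[SemanticToken], float],
    steps: int = 4,
) -> SemanticToken:
    """Greedy search in semantic space: at each step, keep the highest-utility
    thought proposed by any processor, then append it to the shared context."""
    context = [prompt]
    for _ in range(steps):
        candidates = [p.step(context) for p in processors]
        best = max(candidates, key=utility)
        context.append(best)
    return context[-1]
```

The same skeleton generalizes: swapping the greedy `max` for beam-style bookkeeping over multiple contexts would give a semantic analogue of beam search.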
Neurosymbolic AI for Reasoning over Knowledge Graphs: A Survey
Neurosymbolic AI is an increasingly active area of research that combines symbolic reasoning methods with deep learning to leverage their complementary benefits. As knowledge graphs are becoming a popular way to represent heterogeneous and multi-relational data, methods for reasoning on graph structures have attempted to follow this neurosymbolic paradigm. Traditionally, such approaches have utilized either rule-based inference or generated representative numerical embeddings from which patterns could be extracted. However, several recent studies have attempted to bridge this dichotomy to generate models that facilitate interpretability, maintain competitive performance, and integrate expert knowledge. Therefore, we survey methods that perform neurosymbolic reasoning tasks on knowledge graphs and propose a novel taxonomy by which we can classify them. Specifically, we propose three major categories: (1) logically-informed embedding approaches, (2) embedding approaches with logical constraints, and (3) rule learning approaches. Alongside the taxonomy, we provide a tabular overview of the approaches and links to their source code, if available, for more direct comparison. Finally, we discuss the unique characteristics and limitations of these methods, then propose several prospective directions toward which this field of research could evolve.
·arxiv.org·
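To make the survey's second category (embedding approaches with logical constraints) concrete, here is a hedged sketch of a KG-embedding loss augmented with a soft implication rule. The TransE scoring function and the specific rule form are illustrative choices on my part, not taken from the survey.

```python
import torch

def transe_score(h, r, t):
    """TransE plausibility of triple (h, r, t): higher is more plausible."""
    return -torch.norm(h + r - t, dim=-1)

def constrained_loss(h, r, t, h_neg, t_neg, rule_pairs, margin=1.0, lam=0.1):
    """Margin ranking loss plus a soft penalty whenever a rule body
    r_body(x, y) scores as more plausible than its implied head r_head(x, y)."""
    pos = transe_score(h, r, t)
    neg = transe_score(h_neg, r, t_neg)
    rank_loss = torch.relu(margin - pos + neg).mean()
    # Soft implication constraint: score(body) <= score(head)
    rule_loss = sum(
        torch.relu(transe_score(x, r_body, y) - transe_score(x, r_head, y)).mean()
        for (r_body, r_head, x, y) in rule_pairs
    )
    return rank_loss + lam * rule_loss
```

The design point is that the rule never hard-filters predictions; it only shapes the embedding geometry, which is what keeps these methods competitive while adding interpretability.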
GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding
Integrating large language models (LLMs) with knowledge graphs derived from domain-specific data represents an important advancement towards more powerful and factual reasoning. As these models grow more capable, it is crucial to enable them to perform multi-step inferences over real-world knowledge graphs while minimizing hallucination. While large language models excel at conversation and text generation, their ability to reason over domain-specialized graphs of interconnected entities remains limited. For example, can we query an LLM to identify the optimal contact in a professional network for a specific goal, based on relationships and attributes in a private database? The answer is no; such capabilities lie beyond current methods. However, this question underscores a critical technical gap that must be addressed. Many high-value applications in areas such as science, security, and e-commerce rely on proprietary knowledge graphs encoding unique structures, relationships, and logical constraints. We introduce a fine-tuning framework for developing Graph-aligned LAnguage Models (GLaM) that transforms a knowledge graph into an alternate text representation with labeled question-answer pairs. We demonstrate that grounding the models in specific graph-based knowledge expands their capacity for structure-based reasoning. Our methodology leverages the LLM's generative capabilities to create the dataset and proposes an efficient alternative to retrieval-augmented-generation-style methods.
·arxiv.org·
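A small sketch of the general recipe the GLaM abstract outlines: partition a node's neighborhood, serialize the subgraph as text, and derive labeled question-answer pairs for fine-tuning. The serialization format and helper names below are assumptions for illustration, not the paper's actual pipeline.

```python
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def neighborhood(kg: List[Triple], node: str, k: int = 20) -> List[Triple]:
    """Partition step: take a bounded slice of the triples touching `node`."""
    return [t for t in kg if node in (t[0], t[2])][:k]

def encode_subgraph(triples: List[Triple]) -> str:
    """Flatten the subgraph into plain text an LLM can be tuned on."""
    return "\n".join(f"{h} --{r}--> {t}" for h, r, t in triples)

def make_qa_pairs(triples: List[Triple]) -> List[Dict[str, str]]:
    """One simple QA pair per edge; the paper instead uses the LLM itself
    to generate the dataset, so treat this as a placeholder."""
    return [
        {"question": f"Which entity is related to {h} via '{r}'?", "answer": t}
        for h, r, t in triples
    ]

kg = [("alice", "works_with", "bob"), ("bob", "expert_in", "graph_theory")]
print(encode_subgraph(neighborhood(kg, "bob")))
print(make_qa_pairs(neighborhood(kg, "bob")))
```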
RAG, Context and Knowledge Graphs | LinkedIn
There is an interesting tug-of-war going on right now. On one side are the machine learning folks, who have been harnessing neural networks for a few years.
·linkedin.com·
Tree-based RAG with RAPTOR and how knowledge graphs can come to the rescue to enhance answer quality.
Long-context models, such as Google Gemini Pro 1.5 or Large World Model, are probably changing the way we think about RAG (retrieval-augmented generation)…
·linkedin.com·
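For readers unfamiliar with RAPTOR, here is a hedged sketch of its core loop: recursively cluster text chunks and summarize each cluster, building a tree whose upper levels serve broad questions while the leaves serve detailed ones. The `embed`, `cluster`, and `summarize` callables are stand-ins for a real embedding model, a clustering step, and an LLM call.

```python
from typing import Callable, List

def build_raptor_tree(
    chunks: List[str],
    embed: Callable[[List[str]], List[List[float]]],
    cluster: Callable[[List[List[float]]], List[List[int]]],
    summarize: Callable[[List[str]], str],
    max_levels: int = 3,
) -> List[List[str]]:
    """Returns one list of nodes per level; level 0 is the raw chunks."""
    levels = [chunks]
    while len(levels[-1]) > 1 and len(levels) <= max_levels:
        nodes = levels[-1]
        groups = cluster(embed(nodes))  # index groups, one per cluster
        summaries = [summarize([nodes[i] for i in g]) for g in groups]
        levels.append(summaries)
    return levels  # retrieval then searches across all levels at once
```

Retrieving over all levels simultaneously is what lets the approach answer both fine-grained and thematic questions, which is where the post argues knowledge graphs can further enhance answer quality.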
Jensen Huang in his keynote at NVIDIA GTC24 calls out three sources of data to integrate with LLMs: 1) vector databases, 2) ERP / CRM and 3) knowledge graphs
In his keynote at NVIDIA #GTC24, Jensen Huang (CEO) calls out three sources of data to integrate with LLMs: 1) vector databases, 2) ERP / CRM, and 3) knowledge graphs.
·linkedin.com·
Kurt Cagle chatbot on Knowledge Graphs, Ontology, GenAI and Data
I want to thank Jay (JieBing) Yu, PhD, for his hard work in creating a Mini-Me (https://lnkd.in/g6TR543j), a virtual assistant built on his fantastic LLM work…
Kurt is one of my favorite writers: a seasoned practitioner and deep thinker in the areas of Knowledge Graphs, Ontology, GenAI and Data.
·linkedin.com·
Knowledge Graph Large Language Model (KG-LLM) for Link Prediction
Predicting multiple links within knowledge graphs (KGs) remains a challenge in knowledge graph analysis, one that is increasingly tractable thanks to advances in natural language processing (NLP) and KG embedding techniques. This paper introduces a novel methodology, the Knowledge Graph Large Language Model Framework (KG-LLM), which leverages pivotal NLP paradigms, including chain-of-thought (CoT) prompting and in-context learning (ICL), to enhance multi-hop link prediction in KGs. By converting the KG into a CoT prompt, our framework is designed to discern and learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM framework, we fine-tune three leading Large Language Models (LLMs) within it, employing both non-ICL and ICL tasks for a comprehensive evaluation. Further, we explore the framework's potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Our experiments show that integrating ICL and CoT not only augments the performance of our approach but also significantly boosts the models' generalization capacity, enabling more precise predictions in unfamiliar scenarios.
·arxiv.org·
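A minimal sketch of the conversion step the KG-LLM abstract describes: turning a multi-hop KG path into a chain-of-thought-style prompt for link prediction. The prompt template and names below are assumptions for illustration; the paper defines its own.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def path_to_cot_prompt(path: List[Triple], head: str, tail: str, rel: str) -> str:
    """Serialize each hop as a natural-language fact, then pose the
    link-prediction question with a step-by-step cue."""
    steps = [f"{h} is connected to {t} by relation '{r}'." for h, r, t in path]
    facts = " ".join(steps)
    return (
        f"Given the following facts: {facts} "
        f"Question: does the relation '{rel}' hold between {head} and {tail}? "
        f"Let's think step by step."
    )

path = [("alice", "supervises", "bob"), ("bob", "authored", "paper_7")]
print(path_to_cot_prompt(path, "alice", "paper_7", "contributed_to"))
```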