Mindful-RAG: A Framework for Intent-Based and Contextually Aligned Knowledge Retrieval
Many RAG implementations fail due to insufficient focus on question intent. The Mindful-RAG approach is a framework tailored for intent-based and contextually aligned knowledge retrieval.
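A minimal sketch of what intent-conditioned retrieval can look like. This illustrates the general idea only, not Mindful-RAG's actual algorithm; `llm` and the intent labels are hypothetical stand-ins:

```python
# Sketch: intent-aware retrieval. Instead of matching the raw question,
# first identify its intent and use it to steer which passages surface.
# `llm` is a hypothetical callable (prompt -> str); swap in any LLM client.

from typing import Callable, List, Tuple

INTENTS = ["definition", "comparison", "causal", "procedural", "factual-lookup"]

def classify_intent(question: str, llm: Callable[[str], str]) -> str:
    """Ask the model to name the question's intent before retrieving."""
    prompt = (
        f"Classify the intent of this question as one of {INTENTS}.\n"
        f"Question: {question}\nIntent:"
    )
    answer = llm(prompt).strip().lower()
    return answer if answer in INTENTS else "factual-lookup"

def retrieve(question: str, intent: str,
             corpus: List[Tuple[str, str]], k: int = 3) -> List[str]:
    """Toy retrieval: prefer passages tagged with the matching intent.
    corpus is a list of (intent_tag, passage) pairs."""
    matching = [p for tag, p in corpus if tag == intent]
    rest = [p for tag, p in corpus if tag != intent]
    return (matching + rest)[:k]
```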
GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models
This is something very cool! GraphReader addresses the long-context limitations of LLMs by structuring a long text into a graph and employing an agent to explore that graph autonomously.
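A sketch of the exploration loop such an agent runs. This is a simplified illustration of the graph-exploring-agent idea, not GraphReader's actual implementation; `choose_next` stands in for the LLM policy:

```python
# Sketch of a graph-exploring agent: starting from a seed node, a policy
# (in GraphReader, an LLM) decides which neighbor to read next until it has
# gathered enough facts to answer. `choose_next` is a hypothetical stand-in.

from typing import Callable, Dict, List, Optional

def explore(graph: Dict[str, List[str]],      # node -> neighbor nodes
            notes: Dict[str, str],            # node -> its text snippet
            start: str,
            choose_next: Callable[[str, List[str], List[str]], Optional[str]],
            budget: int = 10) -> List[str]:
    """Walk the graph, collecting snippets, until the policy stops or the
    step budget (a stand-in for the context window) runs out."""
    gathered: List[str] = []
    current = start
    for _ in range(budget):
        gathered.append(notes[current])
        nxt = choose_next(current, graph.get(current, []), gathered)
        if nxt is None:        # the policy decides it can already answer
            break
        current = nxt
    return gathered
```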
GitHub - SynaLinks/HybridAGI: The Programmable Neuro-Symbolic AGI that lets you program its behavior using Graph-based Prompt Programming: for people who want AI to behave as expected
Open Research Knowledge Graph (ORKG) ASK (Assistant for Scientific Knowledge)
ORKG ASK uses vector #embeddings to find the most relevant papers and an open-source #LLM to synthesize the answer for you. Ask your research question against 76 million scientific articles: https://ask.orkg.org
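The pipeline ASK describes (embed, retrieve by similarity, synthesize) is the standard RAG recipe. A minimal sketch, assuming sentence-transformers for the embeddings; ASK's actual model and prompts are not public here, so the synthesis step is only assembled as a prompt:

```python
# Sketch of the embed -> retrieve -> synthesize pipeline ORKG ASK describes.
# Uses sentence-transformers for embeddings; the synthesis LLM is stubbed.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

papers = [
    "Knowledge graphs improve factual grounding of language models.",
    "Contrastive learning aligns text and graph embeddings.",
    "Transformers struggle with very long contexts.",
]
paper_vecs = model.encode(papers, normalize_embeddings=True)

def ask(question: str, k: int = 2) -> str:
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = paper_vecs @ q_vec          # cosine similarity (vectors normalized)
    top = np.argsort(-scores)[:k]
    context = "\n".join(papers[i] for i in top)
    # Here ASK would hand `context` + `question` to an open-source LLM.
    return f"Answer using only these abstracts:\n{context}\nQ: {question}"

print(ask("How do knowledge graphs help language models?"))
```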
Synergizing LLMs and KGs in the GenAI Landscape
Our paper "Are Large Language Models a Good Replacement of Taxonomies?" was just accepted to VLDB'2024! This finished our last stroke of study on how knowledgeable LLMs are and confirmed our recommendation for the next generation of KGs. How knowledgeable are LLMs? 1.
GraCoRe: Benchmarking Graph Comprehension and Complex Reasoning in Large Language Models
Can LLMs understand graphs? The results might surprise you. Graphs are everywhere, from social networks to biological pathways, and as AI systems become more capable, GraCoRe benchmarks how well they comprehend and reason over graph-structured data.
[2310.01061v1] Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning
Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks. However, they lack up-to-date knowledge and experience hallucinations during reasoning, which can make their reasoning unfaithful and their answers incorrect. Reasoning on Graphs (RoG) grounds the reasoning process in knowledge graphs to keep it faithful and interpretable.
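One common pattern for faithful KG-grounded reasoning is to plan a relation path and then execute it on the graph, so every hop is a verifiable edge. A simplified sketch of that idea (not the paper's full plan-retrieve-reason loop):

```python
# Sketch of faithful graph-grounded reasoning: execute a relation-path plan
# on the KG so every answer is backed by explicit edges. Simplified
# illustration of the "reasoning on graphs" idea.

from typing import List, Set, Tuple

Edge = Tuple[str, str, str]  # (head, relation, tail)

def follow_path(kg: List[Edge], start: Set[str], plan: List[str]) -> Set[str]:
    """Starting from the question entities, follow the planned relations."""
    frontier = start
    for relation in plan:
        frontier = {t for h, r, t in kg if r == relation and h in frontier}
    return frontier  # every member is reachable via explicit KG edges

kg = [("alice", "mother_of", "bob"), ("bob", "works_at", "acme")]
print(follow_path(kg, {"alice"}, ["mother_of", "works_at"]))  # {'acme'}
```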
Docs2KG: Unified Knowledge Graph Construction from Heterogeneous Documents Assisted by Large Language Models
Did you know that 80% of enterprise data resides in unstructured documents? Docs2KG uses LLMs to build a unified knowledge graph from such heterogeneous, unstructured sources.
An approach for designing learning path recommendations using GPT-4 and Knowledge Graphs
💡 How important are learning paths for gaining the skills needed to tackle real-life problems? 🔬 Researchers from the University of Siegen (Germany) and Keio University propose an approach that combines GPT-4 with knowledge graphs to design learning path recommendations.
Revolutionizing Document Reranking with G-RAG: A Graph-Based Approach
Discover how a novel graph-based reranker is transforming the way we retrieve documents: G-RAG builds a graph over the retrieved candidates and uses the connections between documents to rerank them before generation.
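The intuition behind graph-based reranking is that retrieved documents are not independent: a candidate connected to many other relevant candidates (e.g., through shared entities) deserves a boost. G-RAG learns this signal with a trained reranker; the sketch below only illustrates it with a hand-rolled connectivity bonus:

```python
# Sketch: rerank retrieved documents by combining the retriever's score with
# a bonus for connectivity to other candidates (shared entities). A
# hand-rolled illustration of the signal, not G-RAG's learned reranker.

from typing import Dict, List, Set

def rerank(scores: Dict[str, float],
           entities: Dict[str, Set[str]],
           alpha: float = 0.1) -> List[str]:
    """scores: doc_id -> retriever score; entities: doc_id -> entity set."""
    def connectivity(doc: str) -> int:
        # Count candidate documents sharing at least one entity with `doc`.
        return sum(1 for other in scores
                   if other != doc and entities[doc] & entities[other])
    return sorted(scores, key=lambda d: scores[d] + alpha * connectivity(d),
                  reverse=True)

docs = rerank(
    scores={"d1": 0.9, "d2": 0.7, "d3": 0.6},
    entities={"d1": {"RAG"}, "d2": {"RAG", "GNN"}, "d3": {"GNN"}},
)
print(docs)  # d2's two connections narrow its gap to d1
```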
Knowledge Graph-Augmented Language Models for Knowledge-Grounded Dialogue Generation
Language models have achieved impressive performance on dialogue generation tasks. However, when generating responses for a conversation that requires factual knowledge, they are far from perfect, due to an absence of mechanisms to retrieve, encode, and reflect the knowledge in the generated responses. Some knowledge-grounded dialogue generation methods tackle this problem by leveraging facts from Knowledge Graphs (KGs); however, they do not guarantee that the model utilizes a relevant piece of knowledge from the KG. To overcome this limitation, we propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with the KG. Specifically, our SURGE framework first retrieves the relevant subgraph from the KG, and then enforces consistency across facts by perturbing their word embeddings conditioned by the retrieved subgraph. Then, we utilize contrastive learning to ensure that the generated texts have high similarity to the retrieved subgraphs. We validate our SURGE framework on OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from the KG.
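A minimal sketch of SURGE's first step, subgraph retrieval: score each KG triple against the dialogue context and keep the top-k as the retrieved subgraph. The paper's retriever and its contrastive training are more involved; `embed` here is a hypothetical stand-in:

```python
# Sketch of SURGE's first step: retrieve the context-relevant subgraph by
# scoring each KG triple against the dialogue history. Simplified; the paper
# uses a learned retriever plus contrastive training. `embed` is hypothetical.

import numpy as np
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]

def retrieve_subgraph(dialogue: str,
                      kg: List[Triple],
                      embed: Callable[[str], np.ndarray],
                      k: int = 5) -> List[Triple]:
    ctx = embed(dialogue)
    def score(t: Triple) -> float:
        # Verbalize the triple and compare it to the dialogue context.
        return float(embed(" ".join(t)) @ ctx)
    return sorted(kg, key=score, reverse=True)[:k]

# The retrieved triples are then encoded into the generation context so the
# response stays grounded in exactly those facts.
```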
The Era of Semantic Decoding
Recent work demonstrated great promise in the idea of orchestrating collaborations between LLMs, human input, and various tools to address the inherent limitations of LLMs. We propose a novel perspective called semantic decoding, which frames these collaborative processes as optimization procedures in semantic space. Specifically, we conceptualize LLMs as semantic processors that manipulate meaningful pieces of information that we call semantic tokens (known thoughts). LLMs are among a large pool of other semantic processors, including humans and tools, such as search engines or code executors. Collectively, semantic processors engage in dynamic exchanges of semantic tokens to progressively construct high-utility outputs. We refer to these orchestrated interactions among semantic processors, optimizing and searching in semantic space, as semantic decoding algorithms. This concept draws a direct parallel to the well-studied problem of syntactic decoding, which involves crafting algorithms to best exploit auto-regressive language models for extracting high-utility sequences of syntactic tokens. By focusing on the semantic level and disregarding syntactic details, we gain a fresh perspective on the engineering of AI systems, enabling us to imagine systems with much greater complexity and capabilities. In this position paper, we formalize the transition from syntactic to semantic tokens as well as the analogy between syntactic and semantic decoding. Subsequently, we explore the possibilities of optimizing within the space of semantic tokens via semantic decoding algorithms. We conclude with a list of research opportunities and questions arising from this fresh perspective. The semantic decoding perspective offers a powerful abstraction for search and optimization directly in the space of meaningful concepts, with semantic tokens as the fundamental units of a new type of computation.
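The analogy to syntactic decoding invites a direct, if toy, formalization: semantic processors propose semantic tokens (thoughts), and a decoding algorithm searches over them with a utility function. A greedy sketch under those assumptions (all names here are illustrative, not the paper's notation):

```python
# Toy formalization of greedy semantic decoding: processors (LLMs, tools,
# humans) propose semantic tokens (thoughts); a utility function scores them;
# the algorithm greedily extends the highest-utility trace. Illustrative only.

from typing import Callable, List

SemanticProcessor = Callable[[List[str]], str]   # trace -> proposed thought
Utility = Callable[[List[str]], float]           # trace -> estimated utility

def greedy_semantic_decode(processors: List[SemanticProcessor],
                           utility: Utility,
                           steps: int = 5) -> List[str]:
    trace: List[str] = []
    for _ in range(steps):
        proposals = [p(trace) for p in processors]
        best = max(proposals, key=lambda t: utility(trace + [t]))
        trace.append(best)
    return trace
```

Beam search, sampling, or tree search over semantic tokens drop in here exactly as they do for syntactic decoding, which is the parallel the paper draws.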
Neurosymbolic AI for Reasoning over Knowledge Graphs: A Survey
Neurosymbolic AI is an increasingly active area of research that combines symbolic reasoning methods with deep learning to leverage their complementary benefits. As knowledge graphs are becoming a popular way to represent heterogeneous and multi-relational data, methods for reasoning on graph structures have attempted to follow this neurosymbolic paradigm. Traditionally, such approaches have utilized either rule-based inference or generated representative numerical embeddings from which patterns could be extracted. However, several recent studies have attempted to bridge this dichotomy to generate models that facilitate interpretability, maintain competitive performance, and integrate expert knowledge. Therefore, we survey methods that perform neurosymbolic reasoning tasks on knowledge graphs and propose a novel taxonomy by which we can classify them. Specifically, we propose three major categories: (1) logically-informed embedding approaches, (2) embedding approaches with logical constraints, and (3) rule learning approaches. Alongside the taxonomy, we provide a tabular overview of the approaches and links to their source code, if available, for more direct comparison. Finally, we discuss the unique characteristics and limitations of these methods, then propose several prospective directions toward which this field of research could evolve.
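Category (2), embedding approaches with logical constraints, is easy to make concrete: train ordinary KG embeddings but add a penalty whenever the embeddings violate a known rule. A minimal numpy sketch using a TransE-style score and an inverse-relation rule as the constraint (illustrative; not drawn from any single surveyed method):

```python
# Sketch of an "embedding with logical constraints" loss: TransE-style triple
# scoring plus a penalty tying a relation to its known inverse. Illustrative.

import numpy as np

rng = np.random.default_rng(0)
dim = 16
ent = {e: rng.normal(size=dim) for e in ["alice", "bob"]}
rel = {r: rng.normal(size=dim) for r in ["parent_of", "child_of"]}

def transe_score(h: str, r: str, t: str) -> float:
    # Lower is better: a true triple should satisfy h + r ~= t.
    return float(np.linalg.norm(ent[h] + rel[r] - ent[t]))

def constraint_penalty() -> float:
    # Logical rule: parent_of(x, y) <-> child_of(y, x), which under TransE
    # translates to the two relation vectors being (approximate) negatives.
    return float(np.linalg.norm(rel["parent_of"] + rel["child_of"]))

loss = transe_score("alice", "parent_of", "bob") + 0.5 * constraint_penalty()
print(loss)  # a trainer would minimize this over the full triple set
```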
GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding
Integrating large language models (LLMs) with knowledge graphs derived from domain-specific data represents an important advancement towards more powerful and factual reasoning. As these models grow more capable, it is crucial to enable them to perform multi-step inferences over real-world knowledge graphs while minimizing hallucination. While large language models excel at conversation and text generation, their ability to reason over domain-specialized graphs of interconnected entities remains limited. For example, can we query an LLM to identify the optimal contact in a professional network for a specific goal, based on relationships and attributes in a private database? The answer is no--such capabilities lie beyond current methods. However, this question underscores a critical technical gap that must be addressed. Many high-value applications in areas such as science, security, and e-commerce rely on proprietary knowledge graphs encoding unique structures, relationships, and logical constraints. We introduce a fine-tuning framework for developing Graph-aligned LAnguage Models (GLaM) that transforms a knowledge graph into an alternate text representation with labeled question-answer pairs. We demonstrate that grounding the models in specific graph-based knowledge expands the models' capacity for structure-based reasoning. Our methodology leverages the large-language model's generative capabilities to create the dataset and proposes an efficient alternative to retrieval-augmented generation-style methods.
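The core move, turning a knowledge graph into a text representation with labeled question-answer pairs, can be sketched directly: partition a node's neighborhood, verbalize each partition, and emit QA pairs for fine-tuning. The templates below are hypothetical; the paper generates its dataset with an LLM:

```python
# Sketch of GLaM's data-construction idea: verbalize a node's neighborhood
# and emit labeled QA pairs for fine-tuning. Templates are hypothetical.

from collections import defaultdict
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]

def neighborhood_qa(node: str, triples: List[Triple]) -> List[Dict[str, str]]:
    # Partition the node's outgoing edges by relation, then verbalize each.
    by_rel: Dict[str, List[str]] = defaultdict(list)
    for h, r, t in triples:
        if h == node:
            by_rel[r].append(t)
    pairs = []
    for rel, neighbors in by_rel.items():
        context = f"{node} --{rel}--> " + ", ".join(neighbors)
        pairs.append({
            "context": context,
            "question": f"Which entities does {node} have the relation '{rel}' with?",
            "answer": ", ".join(neighbors),
        })
    return pairs

kg = [("alice", "works_with", "bob"), ("alice", "works_with", "carol"),
      ("alice", "manages", "dave")]
print(neighborhood_qa("alice", kg))
```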