Found 2088 bookmarks
The Role of the Ontologist in the Age of LLMs
What do we mean when we say something is a kind of thing? I’ve been wrestling with that question a great deal of late, partly because I think the role of the ontologist transcends the application of knowledge graphs, especially as I’ve watched LLMs and Llamas become a bigger part of the discussion.
·ontologist.substack.com·
Knowledge Engineering Using Large Language Models
Knowledge engineering is a discipline that focuses on the creation and maintenance of processes that generate and apply knowledge. Traditionally, knowledge engineering approaches have focused on knowledge expressed in formal languages. The emergence of large language models and their capabilities to effectively work with natural language, in its broadest sense, raises questions about the foundations and practice of knowledge engineering. Here, we outline the potential role of LLMs in knowledge engineering, identifying two central directions: 1) creating hybrid neuro-symbolic knowledge systems; and 2) enabling knowledge engineering in natural language. Additionally, we formulate key open research questions to tackle these directions.
·drops.dagstuhl.de·
Polyhierarchy and the Dissolution of Meaning
“Everything is everything / What is meant to be, will be.” – Lauryn Hill. Polyhierarchy is “a controlled vocabulary structure in which some terms belong to more than one hierarchy…”
·informationpanopticon.blog·
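The post's definition can be made concrete with a tiny sketch: in a polyhierarchy, a term may have more than one broader term, so it sits in several hierarchies at once. The term names below are illustrative, not taken from the post.

```python
# A minimal polyhierarchical vocabulary: each term maps to its broader
# terms. "Piano" belongs to two hierarchies at once.
broader = {
    "Piano": ["Keyboard instruments", "Percussion instruments"],
    "Keyboard instruments": ["Musical instruments"],
    "Percussion instruments": ["Musical instruments"],
    "Musical instruments": [],
}

def ancestors(term, vocab):
    """All broader terms reachable from `term` (transitive closure)."""
    seen = set()
    stack = list(vocab.get(term, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(vocab.get(t, []))
    return seen

# "Piano" inherits from both parent hierarchies, which share one root.
print(sorted(ancestors("Piano", broader)))
```

Allowing multiple broader terms is exactly what makes retrieval flexible and, as the post argues, what risks diluting a term's meaning when overused.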
On to Knowledge-infused Language Models
A broad and deep body of ongoing research – hundreds of experiments! – has shown quite conclusively that knowledge graphs are essential to guide, complement, and enrich LLMs in systematic ways. The very wide variety of tests over domains and possible combinations of KGs and LLMs attests to the robustness…
·linkedin.com·
Do Similar Entities have Similar Embeddings?
Knowledge graph embedding models (KGEMs) developed for link prediction learn vector representations for graph entities, known as embeddings. A common tacit assumption is the KGE entity similarity assumption, which states that these KGEMs retain the graph's structure within their embedding space, i.e., position similar entities close to one another. This desirable property makes KGEMs widely used in downstream tasks such as recommender systems or drug repurposing. Yet, the alignment of graph similarity with embedding space similarity has rarely been formally evaluated. Typically, KGEMs are assessed based on their link prediction capabilities alone, using rank-based metrics such as Hits@K or Mean Rank. This paper challenges the prevailing assumption that entity similarity in the graph is inherently mirrored in the embedding space. Therefore, we conduct extensive experiments to measure the capability of KGEMs to cluster similar entities together, and investigate the nature of the underlying factors. Moreover, we study whether different KGEMs expose different notions of similarity. Datasets, pre-trained embeddings and code are available at: https://github.com/nicolas-hbt/similar-embeddings.
·arxiv.org·
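The "entity similarity assumption" the paper tests can be sketched in a few lines: if a KGEM preserves graph structure, similar entities should receive nearby embeddings, e.g. under cosine similarity. The vectors below are hand-picked toy values, not trained embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy "embeddings": two similar drugs vs. an unrelated entity.
embeddings = {
    "aspirin":   [0.9, 0.1, 0.2],
    "ibuprofen": [0.8, 0.2, 0.3],  # similar entity -> nearby vector
    "france":    [0.1, 0.9, 0.7],  # dissimilar entity -> distant vector
}

print(cosine(embeddings["aspirin"], embeddings["ibuprofen"]))  # high
print(cosine(embeddings["aspirin"], embeddings["france"]))     # low
```

The paper's point is precisely that this intuitive picture is rarely verified: models are usually scored on link prediction (Hits@K, Mean Rank), not on whether clusters in embedding space match similarity in the graph.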
Transforming Unstructured Text into RDF Triples with AI. | LinkedIn
Over the past few months, I've been immersed in an exciting experiment, leveraging OpenAI's advanced language models to transform unstructured text into RDF (Resource Description Framework) triples. The journey, as thrilling as it has been, is filled with ongoing challenges and learning experiences.
·linkedin.com·
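A hedged sketch of the pipeline the post describes: an LLM extracts (subject, predicate, object) tuples from free text, and a small helper serializes them as RDF N-Triples. The extraction step is stubbed out here, and the example namespace, tuples, and the naive literal-vs-resource heuristic are all illustrative assumptions, not the author's method.

```python
EX = "http://example.org/"  # assumed namespace for the sketch

def to_ntriple(s, p, o):
    """Serialize one triple as an N-Triples line.

    Naive heuristic: capitalized single-token objects become resources,
    everything else becomes a plain literal.
    """
    subj = f"<{EX}{s.replace(' ', '_')}>"
    pred = f"<{EX}{p.replace(' ', '_')}>"
    if " " not in o and o[:1].isupper():
        obj = f"<{EX}{o}>"
    else:
        obj = f'"{o}"'
    return f"{subj} {pred} {obj} ."

# Stand-in for the LLM extraction step (in practice, a model call that
# returns structured tuples).
extracted = [
    ("Marie Curie", "discovered", "Polonium"),
    ("Marie Curie", "birth year", "1867"),
]

for s, p, o in extracted:
    print(to_ntriple(s, p, o))
```

In a real pipeline the fragile parts are exactly what the post calls ongoing challenges: getting the model to emit clean tuples, and deciding which objects are entities versus literals.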
How the LDMs in knowledge graphs can complement LLMs - DataScienceCentral.com
Large language models (LLMs) fit parameters (features in data topography) to a particular dataset, such as text scraped off the web and conformed to a training set. Logical data models (LDMs), by contrast, model what becomes shared within entire systems. They bring together the data in a system with the help of various kinds of…
·datasciencecentral.com·
Knowledge Graphs: Breaking the Ice
This post talks about the nature and key characteristics of knowledge graphs. It also outlines the benefits of formal semantics and how…
·ontotext.medium.com·
Language, Graphs, and AI in Industry
Here's the video for my talk @ K1st World Symposium 2023 about the intersections of KGs and LLMs: https://lnkd.in/gugB8Yjj and also the slides, plus related…
·linkedin.com·
Knowledge Graphs - Foundations and Applications
Despite the fact that it affects our lives on a daily basis, most of us are unfamiliar with the concept of a knowledge graph. When we ask Alexa about tomorrow's weather or use Google to look up the latest news on climate change, knowledge graphs serve as the foundation of today's cutting-edge information systems. In addition, knowledge graphs have the potential to elucidate, assess, and substantiate information produced by Deep Learning models, such as ChatGPT and other large language models. Knowledge graphs have a wide range of applications, including improving search results, answering questions, providing recommendations, and developing explainable AI systems. In essence, the purpose of this course is to provide a comprehensive overview of knowledge graphs, their underlying technologies, and their significance in today's digital world.
·open.hpi.de·
knowledge graph based RAG (retrieval-augmentation) consistently improves language model accuracy, this time in biomedical questions
The evidence for the massive impact of KGs in NLQ keeps piling up - Here's one more paper that shows that knowledge graph based RAG (retrieval-augmentation)…
·linkedin.com·
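KG-based RAG, as described in the post, can be sketched in miniature: before the language model sees a biomedical question, relevant triples are retrieved from a knowledge graph and injected into the prompt. The graph contents, the keyword-overlap retriever, and the prompt format below are illustrative assumptions; the model call itself is omitted.

```python
# Toy biomedical knowledge graph as (subject, predicate, object) triples.
kg = [
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "contraindicated_with", "severe renal impairment"),
    ("insulin", "treats", "type 1 diabetes"),
]

def retrieve(question, graph):
    """Keyword-overlap retrieval: keep triples sharing a word with the question."""
    terms = set(question.lower().split())
    return [t for t in graph
            if any(w in terms for part in t for w in part.lower().split())]

def build_prompt(question, graph):
    """Prepend retrieved facts so the model answers grounded in the KG."""
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in retrieve(question, graph))
    return f"Known facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("What does metformin treat?", kg))
```

Real systems replace the keyword matcher with graph queries or embedding search, but the shape is the same: retrieved facts constrain the model's answer, which is where the accuracy gains come from.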
Beyond Transduction: A Survey on Inductive, Few Shot, and Zero Shot Link Prediction in Knowledge Graphs
Knowledge graphs (KGs) comprise entities interconnected by relations of different semantic meanings. KGs are being used in a wide range of applications. However, they inherently suffer from incompleteness, i.e. entities or facts about entities are missing. Consequently, a large body of work focuses on the completion of missing information in KGs, which is commonly referred to as link prediction (LP). This task has traditionally and extensively been studied in the transductive setting, where all entities and relations in the testing set are observed during training. Recently, several works have tackled the LP task under more challenging settings, where entities and relations in the test set may be unobserved during training, or appear in only a few facts. These works are known as inductive, few-shot, and zero-shot link prediction. In this work, we conduct a systematic review of existing works in this area. A thorough analysis leads us to point out the undesirable existence of diverging terminologies and task definitions for the aforementioned settings, which further limits the possibility of comparison between recent works. We consequently aim at dissecting each setting thoroughly, attempting to reveal its intrinsic characteristics. A unifying nomenclature is ultimately proposed to refer to each of them in a simple and consistent manner.
·arxiv.org·
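The link-prediction task the survey reviews can be made concrete with a TransE-style scorer, where a triple (h, r, t) is scored as -||h + r - t||. In the transductive setting, every test entity has a trained vector as below; the inductive, few-shot, and zero-shot settings drop exactly that assumption. The vectors here are hand-picked for illustration, not learned.

```python
import math

# Hand-picked 2-d "embeddings" for entities and one relation.
entity = {"paris": [0.9, 0.1], "france": [1.0, 0.9], "berlin": [0.2, 0.1]}
relation = {"capital_of": [0.1, 0.8]}

def transe_score(h, r, t):
    """TransE-style score: -||h + r - t||; closer to 0 means more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2
                          for hi, ri, ti in zip(entity[h], relation[r], entity[t])))

# The true triple should outscore the corrupted one.
print(transe_score("paris", "capital_of", "france"))
print(transe_score("berlin", "capital_of", "france"))
```

Transductive evaluation ranks all known entities by such scores (hence Hits@K and Mean Rank); the harder settings surveyed here must score entities whose vectors were never trained, which is why they need different methods entirely.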