Knowledge graph embedding models (KGEMs) developed for link prediction learn vector representations for graph entities, known as embeddings. A common tacit assumption is the KGE entity similarity assumption, which states that these KGEMs retain the graph's structure within their embedding space, i.e., position similar entities close to one another. This desirable property makes KGEMs widely used in downstream tasks such as recommender systems or drug repurposing. Yet, the alignment of graph similarity with embedding space similarity has rarely been formally evaluated. Typically, KGEMs are assessed solely on their link prediction capabilities, using rank-based metrics such as Hits@K or Mean Rank. This paper challenges the prevailing assumption that entity similarity in the graph is inherently mirrored in the embedding space. Therefore, we conduct extensive experiments to measure the capability of KGEMs to cluster similar entities together, and investigate the nature of the underlying factors. Moreover, we study whether different KGEMs expose a different notion of similarity. Datasets, pre-trained embeddings and code are available at: https://github.com/nicolas-hbt/similar-embeddings.
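The entity similarity assumption described in the abstract can be made concrete with a small sketch: given entity embeddings, check whether an entity's nearest neighbor by cosine similarity is an entity we would consider similar in the graph. The three-dimensional vectors and entity names below are hypothetical toy values, not output from any trained KGEM.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical toy embeddings: two drugs and a city.
embeddings = {
    "aspirin":   [0.9, 0.1, 0.0],
    "ibuprofen": [0.8, 0.2, 0.1],
    "paris":     [0.0, 0.9, 0.4],
}

def nearest_neighbor(entity):
    """Closest other entity in embedding space by cosine similarity."""
    others = (e for e in embeddings if e != entity)
    return max(others, key=lambda e: cosine(embeddings[entity], embeddings[e]))

print(nearest_neighbor("aspirin"))  # -> ibuprofen: the two drugs are closer than drug/city
```

If the assumption holds for a real KGEM, neighborhoods retrieved this way should largely agree with graph-based similarity, which is exactly what the paper's experiments put to the test.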
Data gauging, covariance and equivariance | Maurice Weiler
The numerical representation of data is often ambiguous. This leads to a gauge theoretic view on data, requiring covariant or equivariant neural networks which are reviewed in this blog post.
Knowledge Graph Embeddings as a Bridge between Symbolic and Subsymbolic AI
🌉 The Resurgence of Structure: The pendulum in AI is swinging back from purely…
High-dimensional, tabular deep learning with an auxiliary knowledge graph (Poster)
Can deep learning work on small datasets with far more features than samples, like those in biology and other scientific domains? We present PLATO: a method that achieves state-of-the-art performance on such datasets by using prior information about the domain!
In this article, we will talk about classical computation: the kind of computation typically found in an undergraduate Computer Science course on Algorithms and Data Structures [1]. Think shortest path-finding, sorting, clever ways to break problems down into simpler problems, incredible ways to organise data for efficient retrieval and updates.
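As an illustration of the "shortest path-finding" the article mentions, here is a minimal Dijkstra sketch over a small weighted graph; the graph literal and function name are illustrative, not taken from the article.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

Note how the priority queue is the "incredible way to organise data" here: it lets each settled node be found in logarithmic rather than linear time.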
Using Large Language Models and Retrieval Augmented Generation for creating ontology terms
Our manuscript on using Large Language Models and Retrieval Augmented Generation for creating ontology terms is up on arXiv! https://lnkd.in/d62JPtiH, lead…
Transforming Unstructured Text into RDF Triples with AI
Over the past few months, I've been immersed in an exciting experiment, leveraging OpenAI's advanced language models to transform unstructured text into RDF (Resource Description Framework) triples. The journey, as thrilling as it has been, is filled with ongoing challenges and learning experiences.
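The final step of such a pipeline, turning extracted (subject, predicate, object) tuples into RDF, can be sketched as plain N-Triples serialization. The tuples, the `to_ntriples` helper, and the `http://example.org/` namespace below are all hypothetical stand-ins; the post's actual OpenAI-based extraction is not reproduced here.

```python
# Hypothetical tuples, as an LLM extraction step might return them.
extracted = [
    ("Marie_Curie", "wonAward", "Nobel_Prize_in_Physics"),
    ("Marie_Curie", "bornIn", "Warsaw"),
]

BASE = "http://example.org/"  # placeholder namespace

def to_ntriples(tuples, base=BASE):
    """Serialize (subject, predicate, object) tuples as N-Triples lines."""
    lines = []
    for s, p, o in tuples:
        lines.append(f"<{base}{s}> <{base}{p}> <{base}{o}> .")
    return "\n".join(lines)

print(to_ntriples(extracted))
```

In practice the hard part is upstream, getting the model to emit clean, deduplicated tuples with stable identifiers; the serialization itself is mechanical, and a library such as rdflib would also handle literal typing and escaping.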
How the LDMs in knowledge graphs can complement LLMs - DataScienceCentral.com
Large language models (LLMs) fit parameters (features in data topography) to a particular dataset, such as text scraped off the web and conformed to a training set. Logical data models (LDMs), by contrast, model what becomes shared within entire systems. They bring together the data in a system with the help of various kinds of…
By request, here are the slides from our #neurips2023 presentation yesterday! We really enjoyed the opportunity to present the different aspects of the work…
Here's the video for my talk @ K1st World Symposium 2023 about the intersections of KGs and LLMs: https://lnkd.in/gugB8Yjj and also the slides, plus related…
Despite the fact that it affects our lives on a daily basis, most of us are unfamiliar with the concept of a knowledge graph. When we ask Alexa about tomorrow's weather or use Google to look up the latest news on climate change, knowledge graphs serve as the foundation of today's cutting-edge information systems. In addition, knowledge graphs have the potential to elucidate, assess, and substantiate information produced by Deep Learning models, such as ChatGPT and other large language models. Knowledge graphs have a wide range of applications, including improving search results, answering questions, providing recommendations, and developing explainable AI systems. In essence, the purpose of this course is to provide a comprehensive overview of knowledge graphs, their underlying technologies, and their significance in today's digital world.
knowledge graph based RAG (retrieval-augmentation) consistently improves language model accuracy, this time in biomedical questions
The evidence for the massive impact of KGs in NLQ keeps piling up - Here's one more paper that shows that knowledge graph based RAG (retrieval-augmentation)…
Introducing Skills in Microsoft Viva, a new AI-powered service to grow and manage talent | Microsoft 365 Blog
We’re excited to announce a new AI-powered Skills in Viva service that will help organizations understand workforce skills and gaps, and deliver personalized skills-based experiences.
One of our main focuses at Zazuko GmbH is to support government organizations in publishing multidimensional data in RDF format. To this end, we utilize the cube.