pg-schema: schemas for property graphs
Arrived at SIGMOD in Seattle, and it's an amazing honor that our joint academic/industry work on Property Graph Schema received the Best Industry Paper award.
·linkedin.com·
Knowledge graphs are becoming an increasing area of interest with respect to explainability and interpretability of AI
Knowledge graphs are becoming an increasing area of interest with respect to explainability and interpretability of AI. To consolidate my own research and…
·linkedin.com·
Link prediction on knowledge graphs is a losing game, IMHO
“Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links.”
·twitter.com·
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we modeled GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline with 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT (large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
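As a rough sketch of the gated fusion step named in the abstract (a minimal PyTorch version; the module name and sizes are my assumptions, not the paper's):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse the graph-of-thought representation with the original input
    representation: a learned gate decides, per feature, how much of the
    graph signal to mix in."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_input: torch.Tensor, h_got: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([h_input, h_got], dim=-1)))
        return g * h_got + (1.0 - g) * h_input

# Toy usage with batch size 2 and hidden size 768.
fusion = GatedFusion(dim=768)
fused = fusion(torch.randn(2, 768), torch.randn(2, 768))  # shape: (2, 768)
```

The gate lets the model fall back to the plain text representation wherever the graph-of-thoughts encoder adds little signal.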
LLM Ontology-prompting for Knowledge Graph Extraction
Prompting an LLM with an ontology to drive Knowledge Graph extraction from unstructured documents
I make no apology for saying that a graph is the best organization of structured data. However, the vast majority of data is unstructured text. Therefore, data needs to be transformed from its original format into a Knowledge Graph format using an Extract-Transform-Load (ETL) or Extract-Load-Transform (ELT) process. There is no problem when the original format is structured, such as SQL tables, spreadsheets, etc., or at least semi-structured, such as tweets. However, when the source data is unstructured text, the task of ETL/ELT to a graph is far more challenging. This article shows how an LLM can be prompted with an unstructured document and asked to extract a graph corresponding to a specific ontology/schema. This is demonstrated with a Kennedy ontology in conjunction with a publicly available description of the Kennedy family tree.
·medium.com·
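A minimal sketch of the prompting pattern the article describes: embed the ontology in the prompt and constrain the LLM to emit conforming triples. The ontology, prompt wording, and JSON shape below are illustrative assumptions, not the author's exact setup:

```python
import json

# Toy ontology in the spirit of the article's Kennedy example; the class
# and relation names are illustrative, not taken from the article.
ONTOLOGY = {
    "classes": ["Person"],
    "relations": [
        {"name": "hasSpouse", "domain": "Person", "range": "Person"},
        {"name": "hasChild", "domain": "Person", "range": "Person"},
    ],
}

def build_extraction_prompt(ontology: dict, document: str) -> str:
    """Embed the ontology in the prompt and ask for conforming triples only."""
    return (
        "Extract a knowledge graph from the document below.\n"
        "Use only the classes and relations in this ontology:\n"
        f"{json.dumps(ontology, indent=2)}\n"
        'Return a JSON list of {"subject", "predicate", "object"} triples.\n\n'
        f"Document:\n{document}"
    )

prompt = build_extraction_prompt(
    ONTOLOGY, "John F. Kennedy married Jacqueline Bouvier in 1953 ..."
)
# Send `prompt` to an LLM of your choice, then check each returned triple
# against the ontology before loading it into the graph.
```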
How we delimit and develop the concept of facts is the key to a deeper, more detailed understanding of knowledge graphs
How we delimit and develop the concept of facts is the key to a deeper, more detailed understanding of knowledge graphs because facts are crucial defining…
·linkedin.com·
This new paper is a wonderful story of how generative AI can be used to help curriculum designers build a Knowledge Space dependency graph
This new paper is a wonderful story of how generative AI can be used to help curriculum designers build a Knowledge Space dependency graph: Exploring the MIT…
·linkedin.com·
Ivo Velitchkov on LinkedIn: #sparql #shacl #knowledgegraphs #quality
Now apart from #SPARQL (https://lnkd.in/ey_6S_6B) there is also a #SHACL wiki: https://lnkd.in/ek9PHN9z. It is a first release with much to fix, improve and…
·linkedin.com·
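For anyone exploring SHACL for knowledge-graph quality, a self-contained validation sketch in Python (the shapes and data vocabulary are invented for illustration; pyshacl and rdflib are third-party packages, not part of the wiki):

```python
from rdflib import Graph
from pyshacl import validate  # pip install pyshacl

# Data graph: one Person with no name (example vocabulary, ex: namespace).
data = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice a ex:Person .
""", format="turtle")

# Shapes graph: every Person must have exactly one ex:name.
shapes = Graph().parse(data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/> .
    ex:PersonShape a sh:NodeShape ;
        sh:targetClass ex:Person ;
        sh:property [ sh:path ex:name ; sh:minCount 1 ; sh:maxCount 1 ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: ex:alice is missing ex:name
print(report)    # human-readable validation report
```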
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
·arxiv.org·
Imagine the next phase of LLM prompts: a ‘Graph of Thought’
The way we engage with Large Language Models (LLMs) is rapidly evolving. We started with prompt engineering and progressed to combining prompts into 'Chains of…
Now, imagine the next phase: a ‘Graph of Thought’
·linkedin.com·
Automatic Relation-aware Graph Network Proliferation
Graph neural architecture search has sparked much attention as Graph Neural Networks (GNNs) have shown powerful reasoning capability in many relational tasks. However, the currently used graph search space overemphasizes learning node features and neglects mining hierarchical relational information. Moreover, due to diverse mechanisms in the message passing, the graph search space is much larger than that of CNNs. This hinders the straightforward application of classical search strategies for exploring complicated graph search space. We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs with a relation-guided message passing mechanism. Specifically, we first devise a novel dual relation-aware graph search space that comprises both node and relation learning operations. These operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph. Second, analogous to cell proliferation, we design a network proliferation search paradigm to progressively determine the GNN architectures by iteratively performing network division and differentiation. The experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs. Codes are available at https://github.com/phython96/ARGNP.
·arxiv.org·
A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications
Pretrained Language Models (PLMs) such as BERT have revolutionized the landscape of Natural Language Processing (NLP). Inspired by their proliferation, tremendous efforts have been devoted to Pretrained Graph Models (PGMs). Owing to the powerful model architectures of PGMs, abundant knowledge from massive labeled and unlabeled graph data can be captured. The knowledge implicitly encoded in model parameters can benefit various downstream tasks and help to alleviate several fundamental issues of learning on graphs. In this paper, we provide the first comprehensive survey for PGMs. We firstly present the limitations of graph representation learning and thus introduce the motivation for graph pre-training. Then, we systematically categorize existing PGMs based on a taxonomy from four different perspectives. Next, we present the applications of PGMs in social recommendation and drug discovery. Finally, we outline several promising research directions that can serve as a guideline for future research.
·arxiv.org·
Linkless Link Prediction via Relational Distillation
Graph Neural Networks (GNNs) have shown exceptional performance in the task of link prediction. Despite their effectiveness, the high latency brought by non-trivial neighborhood data dependency limits GNNs in practical deployments. Conversely, the known efficient MLPs are much less effective than GNNs due to the lack of relational knowledge. In this work, to combine the advantages of GNNs and MLPs, we start with exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted logit-based matching and node representation-based matching. Upon observing direct KD analogs do not perform well for link prediction, we propose a relational KD framework, Linkless Link Prediction (LLP), to distill knowledge for link prediction with MLPs. Unlike simple KD methods that match independent link logits or node representations, LLP distills relational knowledge that is centered around each (anchor) node to the student MLP. Specifically, we propose rank-based matching and distribution-based matching strategies that complement each other. Extensive experiments demonstrate that LLP boosts the link prediction performance of MLPs with significant margins, and even outperforms the teacher GNNs on 7 out of 8 benchmarks. LLP also achieves a 70.68x speedup in link prediction inference compared to GNNs on the large-scale OGB dataset.
·arxiv.org·
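The rank-based matching idea fits in a few lines: for one anchor node, train the student MLP so its link scores preserve the teacher GNN's pairwise ordering over the same candidate nodes. A sketch under my own assumptions (the candidate sampling and margin value are not the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

def rank_matching_loss(student_scores: torch.Tensor,
                       teacher_scores: torch.Tensor,
                       margin: float = 0.1) -> torch.Tensor:
    """Hinge loss over candidate pairs for one anchor node: wherever the
    teacher GNN scores candidate i above candidate j, push the student
    MLP's score for i above j by at least `margin`."""
    s_diff = student_scores.unsqueeze(1) - student_scores.unsqueeze(0)
    t_diff = teacher_scores.unsqueeze(1) - teacher_scores.unsqueeze(0)
    sign = torch.sign(t_diff)  # +1/-1 for ordered pairs, 0 for ties
    mask = sign != 0           # drop ties and the diagonal
    return F.relu(margin - sign * s_diff)[mask].mean()

# Toy usage: teacher and student link scores for 8 candidates of one anchor.
loss = rank_matching_loss(torch.randn(8), torch.randn(8))
```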
Different definitions of knowledge graph lead to radically different experiments in research and to surprisingly diverse tech stacks for products
The concept of a knowledge graph has, since its (re-)introduction in 2012, come to assume a pivotal role in the development of a range of crucial…
·linkedin.com·
Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design. However, GNNs lack transparency as they cannot provide understandable explanations for their predictions. To address this issue, counterfactual reasoning is used. The main goal is to make minimal changes to the input graph of a GNN in order to alter its prediction. While several algorithms have been proposed for counterfactual explanations of GNNs, most of them have two main drawbacks. Firstly, they only consider edge deletions as perturbations. Secondly, the counterfactual explanation models are transductive, meaning they do not generalize to unseen data. In this study, we introduce an inductive algorithm called INDUCE, which overcomes these limitations. By conducting extensive experiments on several datasets, we demonstrate that incorporating edge additions leads to better counterfactual results compared to the existing methods. Moreover, the inductive modeling approach allows INDUCE to directly predict counterfactual perturbations without requiring instance-specific training. This results in significant computational speed improvements compared to baseline methods and enables scalable counterfactual analysis for GNNs.
·arxiv.org·
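For contrast with the paper's inductive approach, this is what a per-instance baseline looks like: a greedy search over edge flips, which covers both deletions and additions. The `model(x, adj)` interface returning per-node class logits is hypothetical, and this is not the INDUCE algorithm itself (INDUCE learns to predict perturbations instead of searching per instance):

```python
import torch

def greedy_counterfactual(model, adj, x, target: int, max_flips: int = 3):
    """Greedily flip the single edge (add or delete) that most reduces the
    model's confidence in its current prediction for `target`, until the
    predicted class changes or the flip budget runs out."""
    adj = adj.clone()
    pred = model(x, adj)[target].argmax()
    for _ in range(max_flips):
        best_score, best_edge = model(x, adj)[target, pred], None
        for i in range(adj.size(0)):
            for j in range(i + 1, adj.size(0)):
                adj[i, j] = adj[j, i] = 1 - adj[i, j]  # flip edge (i, j)
                score = model(x, adj)[target, pred]
                if score < best_score:
                    best_score, best_edge = score, (i, j)
                adj[i, j] = adj[j, i] = 1 - adj[i, j]  # undo
        if best_edge is None:
            return None  # no single flip reduces confidence further
        i, j = best_edge
        adj[i, j] = adj[j, i] = 1 - adj[i, j]
        if model(x, adj)[target].argmax() != pred:
            return adj  # prediction flipped: counterfactual found
    return None

# Toy stand-in for a trained GNN: per-node logits from one propagation step.
w = torch.randn(4, 2)
model = lambda x, adj: (adj @ x) @ w
a = (torch.rand(5, 5) > 0.5).float().triu(1)
cf = greedy_counterfactual(model, a + a.t(), torch.randn(5, 4), target=0)
```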
A Survey on Knowledge Graphs for Healthcare: Resources, Applications, and Promises
Healthcare knowledge graphs (HKGs) have emerged as a promising tool for organizing medical knowledge in a structured and interpretable way, which provides a comprehensive view of medical concepts and their relationships. However, challenges such as data heterogeneity and limited coverage remain, emphasizing the need for further research in the field of HKGs. This survey paper serves as the first comprehensive overview of HKGs. We summarize the pipeline and key techniques for HKG construction (i.e., from scratch and through integration), as well as the common utilization approaches (i.e., model-free and model-based). To provide researchers with valuable resources, we organize existing HKGs (The resource is available at https://github.com/lujiaying/Awesome-HealthCare-KnowledgeBase) based on the data types they capture and application domains, supplemented with pertinent statistical information. In the application section, we delve into the transformative impact of HKGs across various healthcare domains, spanning from fine-grained basic science research to high-level clinical decision support. Lastly, we shed light on the opportunities for creating comprehensive and accurate HKGs in the era of large language models, presenting the potential to revolutionize healthcare delivery and enhance the interpretability and reliability of clinical prediction.
·arxiv.org·
Direction Improves Graph Learning
How judicious use of edge direction during message passing on heterophilic graphs can yield very significant gains.
·towardsdatascience.com·
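The core trick is easy to state in code: aggregate over in-neighbors and out-neighbors with separate weight matrices instead of symmetrizing the adjacency. A simplified dense sketch (layer names are mine; real implementations use sparse message passing):

```python
import torch
import torch.nn as nn

class DirectedGraphConv(nn.Module):
    """One directional message-passing layer: incoming and outgoing edges
    are aggregated with separate weights rather than symmetrized away."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_in = nn.Linear(dim, dim)
        self.w_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj[i, j] = 1 iff there is a directed edge i -> j.
        h_in = adj.t() @ self.w_in(x)   # messages along incoming edges
        h_out = adj @ self.w_out(x)     # messages along outgoing edges
        return torch.relu(x + h_in + h_out)

# Toy usage: 5 nodes, 16 features, a random directed adjacency matrix.
layer = DirectedGraphConv(dim=16)
out = layer(torch.randn(5, 16), torch.bernoulli(torch.full((5, 5), 0.3)))
```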
Comparing ChatGPT responses using statistical similarity vs. knowledge representations in the automated text selection process
I’ve been comparing ChatGPT responses using statistical similarity vs. knowledge representations in the automated text selection process. There are token…
·linkedin.com·
Sharing SPARQL queries in Wikibase
Sharing SPARQL queries in Wikibase! Check it out: https://t.co/3FsC4xVRGl. Wikibase simplifies working with knowledge graphs by allowing users to share predefined SPARQL queries. It seamlessly integrates into the query service, making data exploration easier. #graphdatabase #data (The QA Company, @TheQACompany, May 31, 2023)
·twitter.com·
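Predefined queries against any Wikibase query service can also be run programmatically. A small sketch using the SPARQLWrapper package against the public Wikidata endpoint (the query itself is just an example):

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Any Wikibase SPARQL endpoint works; Wikidata's is used here as an example.
endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery("""
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .   # instances of "house cat"
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 5
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["itemLabel"]["value"])
```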
Companies in Multilingual Wikipedia: Articles Quality and Important Sources of Information
The scientific work of members of our Department was published in the monograph "Information Technology for Management: Approaches to Improving Business and Society", published by Springer. The research concerns the automatic assessment of the quality of Wikipedia articles and the reliability of…
·kie.ue.poznan.pl·