GraphNews

4357 bookmarks
Custom sorting
Imagine the next phase of LLM prompts: a ‘Graph of Thought’
Imagine the next phase of LLM prompts: a ‘Graph of Thought’
The way we engage with Large Language Models (LLMs) is rapidly evolving. We started with prompt engineering and progressed to combining prompts into 'Chains of…
Now, imagine the next phase: a ‘Graph of Thought’
·linkedin.com·
Imagine the next phase of LLM prompts: a ‘Graph of Thought’
Automatic Relation-aware Graph Network Proliferation
Automatic Relation-aware Graph Network Proliferation
Graph neural architecture search has sparked much attention as Graph Neural Networks (GNNs) have shown powerful reasoning capability in many relational tasks. However, the currently used graph search space overemphasizes learning node features and neglects mining hierarchical relational information. Moreover, due to diverse mechanisms in the message passing, the graph search space is much larger than that of CNNs. This hinders the straightforward application of classical search strategies for exploring complicated graph search space. We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs with a relation-guided message passing mechanism. Specifically, we first devise a novel dual relation-aware graph search space that comprises both node and relation learning operations. These operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph. Second, analogous to cell proliferation, we design a network proliferation search paradigm to progressively determine the GNN architectures by iteratively performing network division and differentiation. The experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs. Codes are available at https://github.com/phython96/ARGNP.
·arxiv.org·
Automatic Relation-aware Graph Network Proliferation
A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications
A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications
Pretrained Language Models (PLMs) such as BERT have revolutionized the landscape of Natural Language Processing (NLP). Inspired by their proliferation, tremendous efforts have been devoted to Pretrained Graph Models (PGMs). Owing to the powerful model architectures of PGMs, abundant knowledge from massive labeled and unlabeled graph data can be captured. The knowledge implicitly encoded in model parameters can benefit various downstream tasks and help to alleviate several fundamental issues of learning on graphs. In this paper, we provide the first comprehensive survey for PGMs. We firstly present the limitations of graph representation learning and thus introduce the motivation for graph pre-training. Then, we systematically categorize existing PGMs based on a taxonomy from four different perspectives. Next, we present the applications of PGMs in social recommendation and drug discovery. Finally, we outline several promising research directions that can serve as a guideline for future research.
·arxiv.org·
A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications
Linkless Link Prediction via Relational Distillation
Linkless Link Prediction via Relational Distillation
Graph Neural Networks (GNNs) have shown exceptional performance in the task of link prediction. Despite their effectiveness, the high latency brought by non-trivial neighborhood data dependency limits GNNs in practical deployments. Conversely, the known efficient MLPs are much less effective than GNNs due to the lack of relational knowledge. In this work, to combine the advantages of GNNs and MLPs, we start with exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted logit-based matching and node representation-based matching. Upon observing direct KD analogs do not perform well for link prediction, we propose a relational KD framework, Linkless Link Prediction (LLP), to distill knowledge for link prediction with MLPs. Unlike simple KD methods that match independent link logits or node representations, LLP distills relational knowledge that is centered around each (anchor) node to the student MLP. Specifically, we propose rank-based matching and distribution-based matching strategies that complement each other. Extensive experiments demonstrate that LLP boosts the link prediction performance of MLPs with significant margins, and even outperforms the teacher GNNs on 7 out of 8 benchmarks. LLP also achieves a 70.68x speedup in link prediction inference compared to GNNs on the large-scale OGB dataset.
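The rank-based matching idea in the abstract above can be sketched as a pairwise margin ranking loss around one anchor node. This is an illustrative toy (function name, margin value, and score lists are made up, not LLP's actual implementation): the student MLP is penalized whenever it disagrees with the teacher GNN's ordering of candidate link scores.

```python
def rank_distillation_loss(teacher_scores, student_scores, margin=0.1):
    """Pairwise margin ranking loss around one anchor node.

    teacher_scores / student_scores: link scores for the same candidate
    nodes, as produced by the teacher GNN and the student MLP.
    For every candidate pair the teacher ranks (i above j), the student
    pays a hinge penalty unless it also scores i above j by `margin`.
    """
    loss, pairs = 0.0, 0
    n = len(teacher_scores)
    for i in range(n):
        for j in range(n):
            if teacher_scores[i] > teacher_scores[j]:
                # hinge penalty when the student violates the teacher's order
                loss += max(0.0, margin - (student_scores[i] - student_scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

# Student already agrees with the teacher's ranking by a wide margin:
print(rank_distillation_loss([0.9, 0.2, 0.5], [0.8, 0.1, 0.4]))  # → 0.0
```

Note the loss depends only on relative order, which is why the distilled MLP can preserve the teacher's link rankings without copying its absolute logits.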
·arxiv.org·
Linkless Link Prediction via Relational Distillation
different definitions of knowledge graph lead to radically different experiments in research and to surprisingly diverse tech stacks for products
different definitions of knowledge graph lead to radically different experiments in research and to surprisingly diverse tech stacks for products
The concept of a knowledge graph has, since its (re-)introduction in 2012, come to assume a pivotal role in the development of a range of crucial…
different definitions of knowledge graph lead to radically different experiments in research and to surprisingly diverse tech stacks for products
·linkedin.com·
different definitions of knowledge graph lead to radically different experiments in research and to surprisingly diverse tech stacks for products
Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity
Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design. However, GNNs lack transparency as they cannot provide understandable explanations for their predictions. To address this issue, counterfactual reasoning is used. The main goal is to make minimal changes to the input graph of a GNN in order to alter its prediction. While several algorithms have been proposed for counterfactual explanations of GNNs, most of them have two main drawbacks. Firstly, they only consider edge deletions as perturbations. Secondly, the counterfactual explanation models are transductive, meaning they do not generalize to unseen data. In this study, we introduce an inductive algorithm called INDUCE, which overcomes these limitations. By conducting extensive experiments on several datasets, we demonstrate that incorporating edge additions leads to better counterfactual results compared to the existing methods. Moreover, the inductive modeling approach allows INDUCE to directly predict counterfactual perturbations without requiring instance-specific training. This results in significant computational speed improvements compared to baseline methods and enables scalable counterfactual analysis for GNNs.
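The abstract's key point is that allowing edge *additions*, not only deletions, enlarges the space of counterfactual explanations. A toy sketch of that search (the degree-threshold "classifier" and all names are invented stand-ins, not the INDUCE algorithm):

```python
from itertools import combinations

def predict(edges, node):
    """Stand-in 'GNN': classify a node by whether its degree is >= 2."""
    deg = sum(1 for e in edges if node in e)
    return int(deg >= 2)

def counterfactual(edges, node, nodes):
    """Find a minimal single-edge change (deletion OR addition) that
    flips the prediction for `node`."""
    base = predict(edges, node)
    candidates = [("del", e) for e in edges] + \
                 [("add", e) for e in combinations(nodes, 2) if e not in edges]
    for op, e in candidates:
        new = [x for x in edges if x != e] if op == "del" else edges + [e]
        if predict(new, node) != base:
            return op, e
    return None

# Deleting the only edge cannot flip node "a"; adding one can:
edges = [("a", "b")]
print(counterfactual(edges, "a", ["a", "b", "c"]))  # → ('add', ('a', 'c'))
```

In the deletion-only setting this instance has no counterfactual at all, which is exactly the limitation the paper addresses.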
·arxiv.org·
Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity
A Survey on Knowledge Graphs for Healthcare: Resources, Applications, and Promises
A Survey on Knowledge Graphs for Healthcare: Resources, Applications, and Promises
Healthcare knowledge graphs (HKGs) have emerged as a promising tool for organizing medical knowledge in a structured and interpretable way, which provides a comprehensive view of medical concepts and their relationships. However, challenges such as data heterogeneity and limited coverage remain, emphasizing the need for further research in the field of HKGs. This survey paper serves as the first comprehensive overview of HKGs. We summarize the pipeline and key techniques for HKG construction (i.e., from scratch and through integration), as well as the common utilization approaches (i.e., model-free and model-based). To provide researchers with valuable resources, we organize existing HKGs (The resource is available at https://github.com/lujiaying/Awesome-HealthCare-KnowledgeBase) based on the data types they capture and application domains, supplemented with pertinent statistical information. In the application section, we delve into the transformative impact of HKGs across various healthcare domains, spanning from fine-grained basic science research to high-level clinical decision support. Lastly, we shed light on the opportunities for creating comprehensive and accurate HKGs in the era of large language models, presenting the potential to revolutionize healthcare delivery and enhance the interpretability and reliability of clinical prediction.
·arxiv.org·
A Survey on Knowledge Graphs for Healthcare: Resources, Applications, and Promises
Direction Improves Graph Learning
Direction Improves Graph Learning
How the judicious use of edge direction during message passing on heterophilic graphs can yield very significant gains.
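The core trick the article describes can be sketched in a few lines (an illustrative simplification, not the article's exact method): each node aggregates its in-neighbors and out-neighbors as separate channels instead of mixing both directions, which preserves directional signal on heterophilic graphs.

```python
def directed_mp(features, edges):
    """One round of direction-aware message passing.

    features: {node: scalar feature}; edges: list of (src, dst) pairs.
    Returns, per node, (own feature, mean of in-neighbors, mean of
    out-neighbors) -- the two directional summaries stay separate.
    """
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    out = {}
    for v in features:
        in_msgs = [features[u] for (u, w) in edges if w == v]
        out_msgs = [features[w] for (u, w) in edges if u == v]
        out[v] = (features[v], mean(in_msgs), mean(out_msgs))
    return out

feats = {"a": 1.0, "b": 2.0, "c": 4.0}
print(directed_mp(feats, [("a", "b"), ("c", "b")])["b"])  # → (2.0, 2.5, 0.0)
```

An undirected variant would average all neighbors into one value; keeping the channels separate is what lets a downstream layer weight the two directions differently.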
·towardsdatascience.com·
Direction Improves Graph Learning
Comparing ChatGPT responses using statistical similarity v knowledge representations in the automated text selection process
Comparing ChatGPT responses using statistical similarity v knowledge representations in the automated text selection process
I’ve been comparing ChatGPT responses using statistical similarity v knowledge representations in the automated text selection process. There are token…
comparing ChatGPT responses using statistical similarity v knowledge representations in the automated text selection process
·linkedin.com·
Comparing ChatGPT responses using statistical similarity v knowledge representations in the automated text selection process
Sharing SPARQL queries in Wikibase
Sharing SPARQL queries in Wikibase
Sharing SPARQL queries in Wikibase! Check it out: https://t.co/3FsC4xVRGl
Wikibase simplifies working with knowledge graphs by allowing users to share predefined SPARQL queries. It seamlessly integrates into the query service, making data exploration easier. #graphdatabase #data
— The QA Company (@TheQACompany), May 31, 2023
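A predefined query is typically shared as a query-service link with the SPARQL text URL-encoded into it. A minimal sketch of building such a link (the fragment-based URL shape below is how the public Wikidata Query Service embeds queries; other Wikibase instances may use a different base URL):

```python
from urllib.parse import quote

def share_link(sparql, base="https://query.wikidata.org/#"):
    """Encode a SPARQL query into a shareable query-service URL."""
    return base + quote(sparql)

query = "SELECT ?item WHERE { ?item wdt:P31 wd:Q5 } LIMIT 5"
print(share_link(query))
```

Opening the resulting URL loads the query pre-filled in the query-service editor, which is the sharing workflow the tweet describes.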
·twitter.com·
Sharing SPARQL queries in Wikibase
Companies in Multilingual Wikipedia: Articles Quality and Important Sources of Information
Companies in Multilingual Wikipedia: Articles Quality and Important Sources of Information
The scientific work of members of our Department was published in the monograph "Information Technology for Management: Approaches to Improving Business and Society", published by Springer. The research concerns the automatic assessment of the quality of Wikipedia articles and the reliability of…
·kie.ue.poznan.pl·
Companies in Multilingual Wikipedia: Articles Quality and Important Sources of Information
Explore OntoGPT for Schema-based Knowledge Extraction
Explore OntoGPT for Schema-based Knowledge Extraction
The OntoGPT framework and SPIRES tool provide a principled approach to extract knowledge from unstructured text for integration into Knowledge Graphs (KGs), using Large Language Models such as GPT. This methodology enables handling complex relationships, ensures logical consistency, and aligns with predefined ontologies for better KG integration.
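The schema-grounding idea described above can be illustrated in miniature (the schema, relation names, and entity types below are hypothetical, and this is not the OntoGPT/SPIRES API): candidate triples proposed by an LLM are kept only when they conform to a predefined schema, which is what keeps extracted knowledge consistent enough for KG integration.

```python
# Hypothetical mini-schema: relation -> (subject type, object type)
SCHEMA = {"treats": ("Drug", "Disease"), "causes": ("Gene", "Disease")}

def filter_triples(candidates, entity_types):
    """Keep only (subject, predicate, object) triples whose predicate
    exists in the schema and whose argument types match it."""
    kept = []
    for subj, pred, obj in candidates:
        expected = SCHEMA.get(pred)
        if expected and (entity_types.get(subj), entity_types.get(obj)) == expected:
            kept.append((subj, pred, obj))
    return kept

types = {"aspirin": "Drug", "fever": "Disease", "BRCA1": "Gene"}
cands = [("aspirin", "treats", "fever"), ("fever", "treats", "aspirin")]
print(filter_triples(cands, types))  # → [('aspirin', 'treats', 'fever')]
```

The reversed triple is rejected because its argument types violate the schema, even though an unconstrained extractor might emit it.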
·apex974.com·
Explore OntoGPT for Schema-based Knowledge Extraction
StandICT.eu_Landscape of Ontologies Standards_V1.0.pdf
StandICT.eu_Landscape of Ontologies Standards_V1.0.pdf

The inclusion of 'Ontology and Graphs' in Gartner's hype cycle report signifies growing maturity and acceptance as a practical solution

Ontology adoption extends beyond managing taxonomies and glossaries, encompassing areas such as natural language processing, big data and machine learning, cyber-physical systems, FAIR data, model-based engineering & digital twins

This comprehensive survey of the Landscape of Ontologies Standards presents a curated collection of ontologies that are highly relevant to ICT domains and vertical sectors, considering their maturity, prominence, and suitability for representing linked data in the #semanticweb

·up.raindrop.io·
StandICT.eu_Landscape of Ontologies Standards_V1.0.pdf
A Landscape of Ontologies Standards (Report of TWG Ontologies) | StandICT.eu 2026
A Landscape of Ontologies Standards (Report of TWG Ontologies) | StandICT.eu 2026

The inclusion of 'Ontology and Graphs' in Gartner's hype cycle report signifies growing maturity and acceptance as a practical solution

Ontology adoption extends beyond managing taxonomies and glossaries, encompassing areas such as natural language processing, big data and machine learning, cyber-physical systems, FAIR data, model-based engineering & digital twins

This comprehensive survey of the Landscape of Ontologies Standards presents a curated collection of ontologies that are highly relevant to ICT domains and vertical sectors, considering their maturity, prominence, and suitability for representing linked data in the #semanticweb

[Pisa, Italy - 24 May 2023] - The StandICT.eu Technical Group for ICT under the European Observatory for ICT Standardisation (EUOS) has formed a special interest group comprising domain experts, ontologists, and researchers from academia and industry. Together, they have conducted a comprehensive survey of the Landscape of Ontologies Standards. The result of their months-long effort is a remarkable report, now released by the StandICT.eu 2026 community. This report presents a curated collection of ontologies that are highly relevant to ICT domains and vertical sectors, considering their maturity, prominence, and suitability for representing linked data in the semantic web.

Since its emergence in Gartner's Emerging Technologies report in 2001, ontology engineering has steadily progressed, primarily through academic efforts to support the semantic web stack. The recent inclusion of 'Ontology and Graphs' in Gartner's "hype cycle" report in 2020 signifies its growing maturity and acceptance as a practical solution for numerous ICT applications. Today, ontology adoption extends beyond managing taxonomies and glossaries, encompassing areas such as natural language processing, big data and machine learning, cyber-physical systems, FAIR data, model-based engineering, digital twins, and digital threads.
·standict.eu·
A Landscape of Ontologies Standards (Report of TWG Ontologies) | StandICT.eu 2026
Structure-inducing pre-training
Structure-inducing pre-training
Nature Machine Intelligence - Designing methods to induce explicit and deep structural constraints in latent space at the sample level is an open problem in natural language processing-derived...
·nature.com·
Structure-inducing pre-training
A Hyperparametrization Is All You Need - Building a Recommendation System for Telecommunication Packages Using Graph Neural Networks
A Hyperparametrization Is All You Need - Building a Recommendation System for Telecommunication Packages Using Graph Neural Networks
Graph Neural Networks can be used for a variety of applications but do you know what it takes to create a great recommendation system? Dive deep into the math of GNNs, implement a link prediction module and show everyone how stunning graph machine learning can be!
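The "link prediction module" the teaser mentions usually boils down to a simple decoder placed on top of GNN-produced node embeddings. A hedged sketch (embedding values and names are made up; the article's actual architecture may differ): score a candidate edge as the sigmoid of the dot product of its endpoint embeddings.

```python
import math

def link_score(z_u, z_v):
    """Score a candidate edge (u, v) as sigmoid(dot(z_u, z_v)) --
    the standard link-prediction decoder over node embeddings."""
    dot = sum(a * b for a, b in zip(z_u, z_v))
    return 1.0 / (1.0 + math.exp(-dot))

# Illustrative embeddings: similar users get similar vectors, so the
# recommender ranks the alice-bob link above alice-eve.
emb = {"alice": [1.0, 0.5], "bob": [0.8, 0.4], "eve": [-1.0, -0.5]}
print(link_score(emb["alice"], emb["bob"]) > link_score(emb["alice"], emb["eve"]))  # → True
```

For a package recommender, the top-scoring user-package edges become the recommendations; the GNN's job is learning embeddings that make this decoder accurate.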
·memgraph.com·
A Hyperparametrization Is All You Need - Building a Recommendation System for Telecommunication Packages Using Graph Neural Networks
TODA, EMS & Graphs – New Enterprise Architectural Tools For A New Age
TODA, EMS & Graphs – New Enterprise Architectural Tools For A New Age
Change & Risk Require New Enterprise Tools. As AI systems, bots (both digital and physical), AI-leveraged smart digital identities, and IoT devices invade your enterprise, the pace of change and risk increases. This brief article focuses on why you should be deploying TODA, EMS and graphs in your…
TODA, EMS & Graphs – New Enterprise Architectural Tools For A New Age
·linkedin.com·
TODA, EMS & Graphs – New Enterprise Architectural Tools For A New Age
Interoperability between data and policies is key. On the web, interoperability for pages is HTML. For data, it's RDF. For people and policies, it's Solid
Interoperability between data and policies is key. On the web, interoperability for pages is HTML. For data, it's RDF. For people and policies, it's Solid
This week I was very fortunate to be invited to attend the honorary degree ceremony of Sir Tim Berners-Lee by the London School of Economics. The title of…
Interoperability between data and policies is key. On the web, interoperability for pages is HTML. For data, it's RDF. For people and policies, it's Solid
·linkedin.com·
Interoperability between data and policies is key. On the web, interoperability for pages is HTML. For data, it's RDF. For people and policies, it's Solid