GraphNews

4503 bookmarks
Custom sorting
Data provenance with PROV-O
Data provenance with PROV-O
Data provenance is something people love in theory, but never practice... I have just rewatched an excellent appearance by Jaron Lanier from a couple of… | 52 comments on LinkedIn
Data provenance
·linkedin.com·
Data provenance with PROV-O
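PROV-O boils down to a small vocabulary of entities, activities, and relations such as prov:wasDerivedFrom. As a library-free sketch (the node names and the example.org namespace are made up for illustration), provenance triples can be recorded and a derivation chain walked like this:

```python
# PROV-O core terms as full IRIs (no RDF library needed for this sketch)
PROV = "http://www.w3.org/ns/prov#"
EX = "http://example.org/"  # hypothetical example namespace

# A cleaned dataset derived from a raw dump by a cleaning activity;
# the raw dump itself came from a sensor feed.
triples = [
    (EX + "cleaned", PROV + "wasDerivedFrom", EX + "raw"),
    (EX + "cleaned", PROV + "wasGeneratedBy", EX + "cleaning_run"),
    (EX + "cleaning_run", PROV + "used", EX + "raw"),
    (EX + "raw", PROV + "wasDerivedFrom", EX + "sensor_feed"),
]

def lineage(entity, triples):
    """Walk prov:wasDerivedFrom edges back to the original sources."""
    out = []
    frontier = [entity]
    while frontier:
        e = frontier.pop()
        for s, p, o in triples:
            if s == e and p == PROV + "wasDerivedFrom":
                out.append(o)
                frontier.append(o)
    return out

print(lineage(EX + "cleaned", triples))
# ['http://example.org/raw', 'http://example.org/sensor_feed']
```

In a real pipeline the same triples would live in an RDF store and be queried with SPARQL, but the vocabulary is exactly this small.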
Future Directions in Foundations of Graph Machine Learning
Future Directions in Foundations of Graph Machine Learning
Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across a broad spectrum of disciplines, from life to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains highly incomplete. Recent theoretical advancements primarily focus on elucidating the coarse-grained expressive power of GNNs, predominantly employing combinatorial techniques. However, these studies do not perfectly align with practice, particularly in understanding the generalization behavior of GNNs when trained with stochastic first-order optimization techniques. In this position paper, we argue that the graph machine learning community needs to shift its attention to developing a more balanced theory of graph machine learning, focusing on a more thorough understanding of the interplay of expressive power, generalization, and optimization.
·arxiv.org·
Future Directions in Foundations of Graph Machine Learning
Series A Announcement | Orbital Materials
Series A Announcement | Orbital Materials
Orbital Materials (founded by ex-DeepMind researchers) raised a $16M Series A led by Radical Ventures and Toyota Ventures. OM focuses on materials science and shed some light on LINUS, its in-house 3D foundation model for material design (apparently, an ML potential and a generative model), with the ambition to become the AlphaFold of materials science. GNNs = 💸
·orbitalmaterials.com·
Series A Announcement | Orbital Materials
MLX-graphs — mlx-graphs 0.0.3 documentation
MLX-graphs — mlx-graphs 0.0.3 documentation
Apple presented MLX-graphs: a GNN library for the MLX framework, specifically optimized for Apple Silicon. Since CPU/GPU memory is shared on M1/M2/M3 chips, you don’t have to worry about moving tensors around, and you can enjoy the massive unified memory of the latest M2/M3 machines (64 GB MBPs and Mac Minis are still much cheaper than an A100 80 GB). For starters, MLX-graphs includes GCN, GAT, GIN, GraphSAGE, and MPNN models, plus a few standard datasets.
·mlx-graphs.github.io·
MLX-graphs — mlx-graphs 0.0.3 documentation
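Independently of the MLX-specific API, the models listed above all share a neighborhood-aggregation core. A minimal, library-free sketch of one round of mean-neighbor message passing (illustrative only, not the mlx-graphs interface):

```python
def mean_aggregate(features, edges):
    """features: {node: [floats]}; edges: directed (src, dst) pairs.
    Each node's new feature is the mean of its in-neighbors' features
    (or its own feature if it has no in-neighbors)."""
    neighbors = {n: [] for n in features}
    for src, dst in edges:
        neighbors[dst].append(features[src])
    out = {}
    for n, feats in features.items():
        msgs = neighbors[n] or [feats]
        dim = len(feats)
        out[n] = [sum(m[i] for m in msgs) / len(msgs) for i in range(dim)]
    return out

# Tiny directed graph with 1-d features
features = {0: [1.0], 1: [3.0], 2: [5.0]}
edges = [(0, 1), (2, 1), (1, 2)]
print(mean_aggregate(features, edges))
# {0: [1.0], 1: [3.0], 2: [3.0]}
```

GCN, GraphSAGE, and friends differ mainly in how they normalize, transform, and combine these aggregated messages.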
Artificial Intelligence for Complex Network: Potential, Methodology and Application
Artificial Intelligence for Complex Network: Potential, Methodology and Application
Complex networks pervade various real-world systems, from the natural environment to human societies. The essence of these networks is in their ability to transition and evolve from microscopic disorder, where network topology and node dynamics intertwine, to a macroscopic order characterized by certain collective behaviors. Over the past two decades, complex network science has significantly enhanced our understanding of the statistical mechanics, structures, and dynamics underlying real-world networks. Despite these advancements, there remain considerable challenges in exploring more realistic systems and enhancing practical applications. The emergence of artificial intelligence (AI) technologies, coupled with the abundance of diverse real-world network data, has heralded a new era in complex network science research. This survey aims to systematically address the potential advantages of AI in overcoming the lingering challenges of complex network research. It endeavors to summarize the pivotal research problems and provide an exhaustive review of the corresponding methodologies and applications. Through this comprehensive survey, the first of its kind on AI for complex networks, we expect to provide valuable insights that will drive further research and advancement in this interdisciplinary field.
·arxiv.org·
Artificial Intelligence for Complex Network: Potential, Methodology and Application
Data provenance with PROV-O
Data provenance with PROV-O
Data provenance is something people love in theory, but never practice... I have just rewatched an excellent appearance by Jaron Lanier from a couple of… | 39 comments on LinkedIn
Data provenance
·linkedin.com·
Data provenance with PROV-O
Relational Harmony and a New Hope for Dimensionality
Relational Harmony and a New Hope for Dimensionality
Relational Harmony and a New Hope for Dimensionality ⛓️ In an era where data complexity escalates and the quest for meaningful technology integration… | 30 comments on LinkedIn
Relational Harmony and a New Hope for Dimensionality
·linkedin.com·
Relational Harmony and a New Hope for Dimensionality
Knowledge Graphs: Today's triples just ain't enough | LinkedIn
Knowledge Graphs: Today's triples just ain't enough | LinkedIn
Knowledge hypergraphs are garnering a lot of attention – and deservedly so. You can find my two previous posts on knowledge hypergraphs and on more adaptive conceptualizations for hypergraphs as well as Kurt Cagle's focused and more practically minded pieces on Hypergraphs and RDF, on Named Graphs (
·linkedin.com·
Knowledge Graphs: Today's triples just ain't enough | LinkedIn
Knowledge Engineering using Large Language Models
Knowledge Engineering using Large Language Models
Knowledge engineering is a discipline that focuses on the creation and maintenance of processes that generate and apply knowledge. Traditionally, knowledge engineering approaches have focused on knowledge expressed in formal languages. The emergence of large language models and their capabilities to effectively work with natural language, in its broadest sense, raises questions about the foundations and practice of knowledge engineering. Here, we outline the potential role of LLMs in knowledge engineering, identifying two central directions: 1) creating hybrid neuro-symbolic knowledge systems; and 2) enabling knowledge engineering in natural language. Additionally, we formulate key open research questions to tackle these directions.
·arxiv.org·
Knowledge Engineering using Large Language Models
Global Knowledge Graph Market by Offering (Solutions, Services), By Data Source (Structured, Unstructured, Semi-structured), Industry (BFSI, IT & ITeS, Telecom, Healthcare), Model Type, Application, Type and Region - Forecast to 2028
Global Knowledge Graph Market by Offering (Solutions, Services), By Data Source (Structured, Unstructured, Semi-structured), Industry (BFSI, IT & ITeS, Telecom, Healthcare), Model Type, Application, Type and Region - Forecast to 2028
Rapid Growth in Data Volume and Complexity
·researchandmarkets.com·
Global Knowledge Graph Market by Offering (Solutions, Services), By Data Source (Structured, Unstructured, Semi-structured), Industry (BFSI, IT & ITeS, Telecom, Healthcare), Model Type, Application, Type and Region - Forecast to 2028
Chatbot created based on the prolific writings of Mike Dillinger. This chatbot helps you better digest his posts and articles on Knowledge Graphs, Taxonomy, Ontology and their critical roles in getting LLM technology more accurate and practical
Chatbot created based on the prolific writings of Mike Dillinger. This chatbot helps you better digest his posts and articles on Knowledge Graphs, Taxonomy, Ontology and their critical roles in getting LLM technology more accurate and practical
Check out this chatbot (https://lnkd.in/gv8Afk57) that I created entirely based on the prolific writings of Mike Dillinger, PhD. This chatbot helps you better digest his posts and articles on Knowledge Graphs, Taxonomy, Ontology and their critical roles in getting LLM technology more accurate and practical
·linkedin.com·
Chatbot created based on the prolific writings of Mike Dillinger. This chatbot helps you better digest his posts and articles on Knowledge Graphs, Taxonomy, Ontology and their critical roles in getting LLM technology more accurate and practical
Language, Graphs, and AI in Industry
Language, Graphs, and AI in Industry
Over the past 5 years, news about AI has been filled with amazing research – at first focused on graph neural networks (GNNs) and more recently about large language models (LLMs). Understand that business tends to use connected data – networks, graphs – whether you’re untangling supply networks in Manufacturing, working on drug discovery for Pharma, or mitigating fraud in Finance. Starting from supplier agreements, bill of materials, internal process docs, sales contracts, etc., there’s a graph inside nearly every business process, one that is defined by language. This talk addresses how to leverage both natural language and graph technologies together for AI applications in industry. We’ll look at how LLMs get used to build and augment graphs, and conversely how graph data gets used to ground LLMs for generative AI use cases in industry – where a kind of “virtuous cycle” is emerging for feedback loops based on graph data. Our team has been engaged, on the one hand, with enterprise use cases in manufacturing. On the other hand we’ve worked as intermediaries between research teams funded by enterprise and open source projects needed by enterprise – particularly in the open source ecosystem for AI models. Also, there are caveats; this work is not simple. Translating from latest research into production-ready code is especially complex and expensive. Let’s examine caveats which other teams should understand, and look toward practical examples.
·derwen.ai·
Language, Graphs, and AI in Industry
Leveraging Structured Knowledge to Automatically Detect Hallucination in Large Language Models
Leveraging Structured Knowledge to Automatically Detect Hallucination in Large Language Models
Leveraging Structured Knowledge to Automatically Detect Hallucination in Large Language Models 🔺 🔻 Large Language Models have sparked a revolution in AI’s… | 25 comments on LinkedIn
Leveraging Structured Knowledge to Automatically Detect Hallucination in Large Language Models
·linkedin.com·
Leveraging Structured Knowledge to Automatically Detect Hallucination in Large Language Models
Extending Taxonomies to Ontologies - Enterprise Knowledge
Extending Taxonomies to Ontologies - Enterprise Knowledge
Sometimes the words “taxonomy” and “ontology” are used interchangeably, and while they are closely related, they are not the same thing. They are both considered kinds of knowledge organization systems to support information and knowledge management.
·enterprise-knowledge.com·
Extending Taxonomies to Ontologies - Enterprise Knowledge
Structural analysis and the sum of nodes' betweenness centrality in complex networks
Structural analysis and the sum of nodes' betweenness centrality in complex networks
Structural analysis in network science aims to uncover the information hidden in the topological structure of complex networks. Many methods have already been proposed for the structural analysis of complex networks to extract different kinds of structural information. In this work, the sum of nodes' betweenness centrality (SBC) is used as a new structural index to check how the structure of complex networks changes as the network grows. We build four different processes of network growth to check how structural change is manifested by the SBC. We find that when networks grow under the Barabási-Albert rule, the value of SBC for each network grows like a logarithmic function. However, when growth is guided by the Erdős-Rényi rule, the value of SBC converges to a fixed value. This means the rules that guide a network's growth are reflected in how the SBC changes during growth. In other words, in the structural analysis of complex networks, the sum of nodes' betweenness centrality can be used as an index to check what kind of rules guide the network's growth.
·arxiv.org·
Structural analysis and the sum of nodes' betweenness centrality in complex networks
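The SBC index itself is straightforward to compute on small graphs. A brute-force sketch using BFS shortest-path counts (fine for toy graphs; real work would use Brandes' algorithm or a graph library):

```python
from collections import deque

def bfs_counts(adj, s):
    """Distances and shortest-path counts from source s (unweighted graph)."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                sigma[w] = 0
                q.append(w)
            if dist[w] == dist[v] + 1:
                sigma[w] += sigma[v]
    return dist, sigma

def sbc(adj):
    """Sum of unnormalized betweenness centrality over all nodes."""
    nodes = sorted(adj)
    info = {s: bfs_counts(adj, s) for s in nodes}
    total = 0.0
    for i, s in enumerate(nodes):
        ds, ss = info[s]
        for t in nodes[i + 1:]:
            if t not in ds:
                continue  # disconnected pair
            dt, st = info[t]
            for v in nodes:
                # v is interior to an s-t shortest path iff distances add up
                if v not in (s, t) and v in ds and v in dt and ds[v] + dt[v] == ds[t]:
                    total += ss[v] * st[v] / ss[t]
    return total

# Path graph 0-1-2-3: node 1 sits on paths (0,2),(0,3); node 2 on (0,3),(1,3)
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(sbc(path))  # 4.0
```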
Reasoning Algorithmically in Graph Neural Networks
Reasoning Algorithmically in Graph Neural Networks
The development of artificial intelligence systems with advanced reasoning capabilities represents a persistent and long-standing research question. Traditionally, the primary strategy to address this challenge involved the adoption of symbolic approaches, where knowledge was explicitly represented by means of symbols and explicitly programmed rules. However, with the advent of machine learning, there has been a paradigm shift towards systems that can autonomously learn from data, requiring minimal human guidance. In light of this shift, in recent years there have been increasing interest and efforts in endowing neural networks with the ability to reason, bridging the gap between data-driven learning and logical reasoning. Within this context, Neural Algorithmic Reasoning (NAR) stands out as a promising research field, aiming to integrate the structured and rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks, typically by tasking neural models to mimic classical algorithms. In this dissertation, we provide theoretical and practical contributions to this area of research. We explore the connections between neural networks and tropical algebra, deriving powerful architectures that are aligned with algorithm execution. Furthermore, we discuss and show the ability of such neural reasoners to learn and manipulate complex algorithmic and combinatorial optimization concepts, such as the principle of strong duality. Finally, in our empirical efforts, we validate the real-world utility of NAR networks across different practical scenarios. This includes tasks as diverse as planning, large-scale edge classification, and the learning of polynomial-time approximate algorithms for NP-hard combinatorial problems. Through this exploration, we aim to showcase the potential of integrating algorithmic reasoning in machine learning models.
·arxiv.org·
Reasoning Algorithmically in Graph Neural Networks
Rethinking Hypergraphs | LinkedIn
Rethinking Hypergraphs | LinkedIn
Copyright 2024. Kurt Cagle When you look at your company, you likely see things - people, clients or customers, products, processes, roles, revenue, etc.
Rethinking Hypergraphs
·linkedin.com·
Rethinking Hypergraphs | LinkedIn
LiGNN: Graph Neural Networks at LinkedIn
LiGNN: Graph Neural Networks at LinkedIn
In this paper, we present LiGNN, a deployed large-scale Graph Neural Networks (GNNs) framework. We share our insights on developing and deploying GNNs at large scale at LinkedIn. We present a set of algorithmic improvements to the quality of GNN representation learning, including temporal graph architectures with long-term losses, effective cold-start solutions via graph densification, ID embeddings, and multi-hop neighbor sampling. We explain how we built our large-scale training on LinkedIn graphs and sped it up by 7x with adaptive sampling of neighbors, grouping and slicing of training data batches, a specialized shared-memory queue, and local gradient optimization. We summarize our deployment lessons and learnings gathered from A/B test experiments. The techniques presented in this work have contributed to approximate relative improvements of 1% in job application hearing-back rate, a 2% Ads CTR lift, 0.5% in Feed engaged daily active users, a 0.2% session lift, and a 0.1% weekly active user lift from people recommendation. We believe this work can provide practical solutions and insights for engineers interested in applying Graph Neural Networks at large scale.
·arxiv.org·
LiGNN: Graph Neural Networks at LinkedIn
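The multi-hop neighbor sampling the paper mentions can be illustrated with a toy fanout-based sampler (a hypothetical sketch, not LinkedIn's actual implementation):

```python
import random

def sample_multi_hop(adj, seed_node, fanouts, rng):
    """Sample a multi-hop neighborhood: at hop k, keep at most
    fanouts[k] random neighbors of each node in the current frontier."""
    visited = {seed_node}
    frontier = [seed_node]
    for fanout in fanouts:
        next_frontier = []
        for node in frontier:
            neighbors = adj.get(node, [])
            picked = rng.sample(neighbors, min(fanout, len(neighbors)))
            for nb in picked:
                if nb not in visited:
                    visited.add(nb)
                    next_frontier.append(nb)
        frontier = next_frontier
    return visited

# Star graph: node 0 connected to 1..9
adj = {0: list(range(1, 10)), **{i: [0] for i in range(1, 10)}}
rng = random.Random(0)
sub = sample_multi_hop(adj, 0, fanouts=[3, 2], rng=rng)
print(len(sub))  # 4: the seed plus 3 hop-1 neighbors (hop 2 only revisits node 0)
```

Capping the fanout per hop keeps the sampled subgraph size bounded regardless of how high-degree the seed's neighborhood is, which is the property that makes minibatch GNN training feasible on production-scale graphs.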