GraphNews

4336 bookmarks
Human-centered data networking with interpersonal knowledge graphs - DataScienceCentral.com
In the case of interpersonal knowledge graphs, everyone in the community can contribute to the disambiguation and enrichment of the community’s online presence, and at the same time help with findability, accessibility, interoperability and reuse (the FAIR principles). And that applies not only to someone else finding your path to research discovery, but also to your being able to retrace your own steps whenever you need to.
·datasciencecentral.com·
Finding Money Launderers Using Heterogeneous Graph Neural Networks
Current anti-money laundering (AML) systems, predominantly rule-based, exhibit notable shortcomings in efficiently and precisely detecting instances of money laundering. As a result, there has been a recent surge toward exploring alternative approaches, particularly those utilizing machine learning. Since criminals often collaborate in their money laundering endeavors, accounting for diverse types of customer relations and links becomes crucial. In line with this, the present paper introduces a graph neural network (GNN) approach to identify money laundering activities within a large heterogeneous network constructed from real-world bank transactions and business role data belonging to DNB, Norway's largest bank. Specifically, we extend the homogeneous GNN method known as the Message Passing Neural Network (MPNN) to operate effectively on a heterogeneous graph. As part of this procedure, we propose a novel method for aggregating messages across different edges of the graph. Our findings highlight the importance of using an appropriate GNN architecture when combining information in heterogeneous graphs. The performance results of our model demonstrate great potential in enhancing the quality of electronic surveillance systems employed by banks to detect instances of money laundering. To the best of our knowledge, this is the first published work applying GNN on a large real-world heterogeneous network for anti-money laundering purposes.
·arxiv.org·
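The paper's core idea is extending homogeneous message passing (MPNN) to a graph with several edge types, aggregating messages per edge type before combining them. The paper proposes its own novel aggregation method; the sketch below is only a generic illustration of per-edge-type message passing (names, shapes, and the sum-of-means combination are assumptions, not DNB's actual model):

```python
import numpy as np

def hetero_message_pass(h, edges_by_type, W):
    """One illustrative heterogeneous message-passing step.

    h: dict node -> feature vector
    edges_by_type: dict edge_type -> list of (src, dst) pairs
    W: dict edge_type -> weight matrix for messages of that type

    Each node's update is the sum, over edge types, of the mean of the
    type-specific transformed messages it receives.
    """
    out = {v: np.zeros_like(vec) for v, vec in h.items()}
    for etype, edges in edges_by_type.items():
        inbox = {}
        for src, dst in edges:
            # Transform the source feature with the edge-type-specific weights.
            inbox.setdefault(dst, []).append(W[etype] @ h[src])
        for dst, msgs in inbox.items():
            out[dst] += np.mean(msgs, axis=0)
    return out
```

In a real AML setting the edge types might distinguish transactions from business-role links, so that money flowing between accounts and shared ownership contribute separately to a node's representation.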
RedisGraph End-of-Life Announcement
Redis Inc. is phasing out RedisGraph. This blog post explains the motivation behind this decision and the implications for existing customers and community members.
·redis.com·
Hierarchical Navigable Small World (HNSW) is one of the most efficient ways to build indexes for vector databases. The idea is to build a similarity graph and traverse that graph to find the nodes that are the closest to a query vector
We have recently seen a surge in vector databases in this era of generative AI. The idea behind vector databases is to index the data with vectors that relate…
·linkedin.com·
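HNSW is hierarchical: it stacks several graph layers and descends from a sparse top layer to the full bottom layer. The sketch below shows only the core step the description above refers to, a greedy traversal of one similarity-graph layer toward the query (data structures and names are illustrative, not any library's API):

```python
import numpy as np

def greedy_search(graph, vectors, entry, query):
    """Greedy nearest-neighbor walk on a single similarity-graph layer.

    graph: dict node id -> list of neighbor ids
    vectors: dict node id -> np.ndarray embedding
    entry: starting node id
    query: np.ndarray query vector
    Returns the node where no neighbor is closer to the query.
    """
    current = entry
    current_dist = np.linalg.norm(vectors[current] - query)
    while True:
        # Move to the neighbor closest to the query, if any improves.
        best, best_dist = current, current_dist
        for nb in graph[current]:
            d = np.linalg.norm(vectors[nb] - query)
            if d < best_dist:
                best, best_dist = nb, d
        if best == current:  # local minimum: stop here
            return current
        current, current_dist = best, best_dist
```

In the full algorithm this greedy walk runs once per layer, using each layer's result as the entry point for the denser layer below, which is what makes the index logarithmic-ish in practice.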
OpenFact: Factuality Enhanced Open Knowledge Extraction | Transactions of the Association for Computational Linguistics | MIT Press
Abstract. We focus on the factuality property during the extraction of an OpenIE corpus named OpenFact, which contains more than 12 million high-quality knowledge triplets. We break down the factuality property into two important aspects—expressiveness and groundedness—and we propose a comprehensive framework to handle both aspects. To enhance expressiveness, we formulate each knowledge piece in OpenFact based on a semantic frame. We also design templates, extra constraints, and adopt human efforts so that most OpenFact triplets contain enough details. For groundedness, we require the main arguments of each triplet to contain linked Wikidata entities. A human evaluation suggests that the OpenFact triplets are much more accurate and contain denser information compared to OPIEC-Linked (Gashteovski et al., 2019), one recent high-quality OpenIE corpus grounded to Wikidata. Further experiments on knowledge base completion and knowledge base question answering show the effectiveness of OpenFact over OPIEC-Linked as supplementary knowledge to Wikidata as the major KG.
·direct.mit.edu·
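The groundedness requirement in the abstract amounts to a filter: a triplet is kept only if its main arguments link to Wikidata entities. A minimal sketch of such a check (the entities and Q-identifiers below are made up for the example; this is not OpenFact's actual pipeline):

```python
def is_grounded(triplet, wikidata_links):
    """Return True if both the subject and object arguments of a
    (subject, predicate, object) triplet are linked Wikidata entities.

    wikidata_links: dict mapping surface form -> Wikidata entity ID.
    """
    subject, _predicate, obj = triplet
    return subject in wikidata_links and obj in wikidata_links
```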
NebulaGraph v3.5.0 Release Note
NebulaGraph v3.5.0 is released, which supports full table scan without index and greatly improves FIND PATH performance.
·nebula-graph.io·
More Graph DBs in @LangChainAI
“📈 More Graph DBs in @LangChainAI Graphs can store structured information in a way embeddings can't capture, and we're excited to support even more of them in LangChain: HugeGraph and SPARQL Not only can you query data, but you can also update graph data (!!!) 🧵”
·twitter.com·
Knowledge graphs are graph-structured collections of facts. And facts are statements that define and describe subject entities in terms of predicates and their values
·linkedin.com·
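The definition above maps directly onto (subject, predicate, value) triples: a subject entity is described by the set of predicate/value pairs attached to it. A tiny self-contained illustration (the facts and figures are made up for the example):

```python
# Facts as (subject, predicate, value) triples.
facts = [
    ("Oslo", "isCapitalOf", "Norway"),
    ("Oslo", "population", 709000),
    ("Norway", "memberOf", "EFTA"),
]

def describe(subject, facts):
    """Collect all predicate/value pairs that describe a subject entity."""
    return {p: v for s, p, v in facts if s == subject}
```

Calling `describe("Oslo", facts)` gathers everything the graph states about that one entity, which is exactly the "define and describe subject entities" framing in the post.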
Neosemantics (n10s) reaches the first million all-time downloads
📢 📢 📢 Amazing milestone! 📢 📢 📢 Neosemantics (n10s) reaches the first million all-time downloads 🤯 Let's keep building Knowledge Graphs together! 💪…
·linkedin.com·
ArtGraph cluster analysis
This blog post describes how to semi-automatically extract interesting insights from an arts knowledge graph using KNIME and Neo4j.
·medium.com·
**Improved** — the BFO Classifier
A brief description of our FOIS 2023 paper entitled “A method to improve alignments between domain and foundational ontologies”, focusing on BFO-aligned ontologies.
·keet.wordpress.com·
pg-schema schemas for property graphs
Arrived at SIGMOD in Seattle, and it's an amazing honor that our joint academic/industry work on Property Graph Schema received the Best Industry Paper award…
·linkedin.com·
Knowledge graphs are attracting increasing interest with respect to the explainability and interpretability of AI.
Knowledge graphs are attracting increasing interest with respect to the explainability and interpretability of AI. To consolidate my own research and…
·linkedin.com·
Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already have. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links
“Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already have. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links.”
·twitter.com·
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we modeled GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline with 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT(large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
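The abstract's "gated fusion mechanism" combines the GoT representation with the original input representation via a learned gate. The paper's exact parameterization isn't given here, so the sketch below is a generic gated fusion (weight shapes and the convex-combination form are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_text, h_graph, W_t, W_g):
    """Generic gated fusion of a text representation with a
    graph-of-thoughts representation.

    gate  = sigmoid(W_t @ h_text + W_g @ h_graph)   # per-dimension gate in (0, 1)
    fused = gate * h_text + (1 - gate) * h_graph    # convex combination
    """
    gate = sigmoid(W_t @ h_text + W_g @ h_graph)
    return gate * h_text + (1.0 - gate) * h_graph
```

The gate lets the model decide, dimension by dimension, how much to trust the graph-structured rationale versus the raw input encoding; with zero weights the gate is 0.5 everywhere and the fusion degrades to a plain average.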