GraphNews

3943 bookmarks
RedisGraph End-of-Life Announcement
Redis Inc. is phasing out RedisGraph. This blog post explains the motivation behind this decision and the implications for existing customers and community members.
·redis.com·
Hierarchical Navigable Small World (HNSW) is one of the most efficient ways to build indexes for vector databases. The idea is to build a similarity graph and traverse that graph to find the nodes that are the closest to a query vector
We have recently seen a surge in vector databases in this era of generative AI. The idea behind vector databases is to index the data with vectors that relate…
·linkedin.com·
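The graph traversal at the heart of this idea can be sketched in a few lines. This is a single-layer simplification (real HNSW adds a hierarchy of layers and beam-style candidate lists), with an illustrative toy graph:

```python
import numpy as np

def greedy_search(vectors, neighbors, query, entry=0):
    """Walk the similarity graph greedily: from the entry node, move to
    whichever neighbor is closest to the query, and stop when no neighbor
    is closer than the current node."""
    current = entry
    current_dist = np.linalg.norm(vectors[current] - query)
    while True:
        best_dist, best = min(
            (np.linalg.norm(vectors[n] - query), n) for n in neighbors[current]
        )
        if best_dist >= current_dist:
            return current, current_dist
        current, current_dist = best, best_dist

# Toy graph: five 1-D points, each linked to its immediate neighbors.
vecs = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
node, dist = greedy_search(vecs, nbrs, np.array([3.2]))
print(node)  # 3 — the node closest to the query
```

HNSW's layered structure lets this greedy walk start coarse and refine, which is what makes the index fast at scale.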
OpenFact: Factuality Enhanced Open Knowledge Extraction | Transactions of the Association for Computational Linguistics | MIT Press
Abstract. We focus on the factuality property during the extraction of an OpenIE corpus named OpenFact, which contains more than 12 million high-quality knowledge triplets. We break down the factuality property into two important aspects, expressiveness and groundedness, and we propose a comprehensive framework to handle both aspects. To enhance expressiveness, we formulate each knowledge piece in OpenFact based on a semantic frame. We also design templates, extra constraints, and adopt human efforts so that most OpenFact triplets contain enough details. For groundedness, we require the main arguments of each triplet to contain linked Wikidata entities. A human evaluation suggests that the OpenFact triplets are much more accurate and contain denser information compared to OPIEC-Linked (Gashteovski et al., 2019), one recent high-quality OpenIE corpus grounded to Wikidata. Further experiments on knowledge base completion and knowledge base question answering show the effectiveness of OpenFact over OPIEC-Linked as supplementary knowledge to Wikidata as the major KG.
·direct.mit.edu·
NebulaGraph v3.5.0 Release Note
NebulaGraph v3.5.0 has been released, adding support for full table scans without an index and greatly improving FIND PATH performance.
·nebula-graph.io·
More Graph DBs in @LangChainAI
“📈 More Graph DBs in @LangChainAI Graphs can store structured information in a way embeddings can't capture, and we're excited to support even more of them in LangChain: HugeGraph and SPARQL Not only can you query data, but you can also update graph data (!!!) 🧵”
·twitter.com·
Knowledge graphs are graph-structured collections of facts. And facts are statements that define and describe subject entities in terms of predicates and their values
·linkedin.com·
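That definition can be made concrete with a toy triple store (the facts below are illustrative examples, not taken from the post):

```python
# A fact is a statement describing a subject entity via a predicate and its
# value; a knowledge graph is a graph-structured collection of such facts.
facts = [
    ("Douglas Adams", "occupation", "writer"),
    ("Douglas Adams", "notable_work", "The Hitchhiker's Guide to the Galaxy"),
    ("The Hitchhiker's Guide to the Galaxy", "genre", "science fiction"),
]

def describe(subject, facts):
    """Gather every predicate/value pair that describes a subject entity."""
    return {p: v for s, p, v in facts if s == subject}

print(describe("Douglas Adams", facts)["occupation"])  # writer
```

Because values can themselves be subjects of other facts, the collection forms a graph rather than a flat table.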
Neosemantics (n10s) reaches the first million all-time downloads
📢 📢 📢 Amazing milestone! 📢 📢 📢 Neosemantics (n10s) reaches the first million all-time downloads 🤯 Let's keep building Knowledge Graphs together! 💪…
·linkedin.com·
ArtGraph cluster analysis
This blog post describes how to semi-automatically extract interesting insights from an arts knowledge graph using KNIME and Neo4j.
·medium.com·
**Improved** — the BFO Classifier
A brief description of our FOIS 2023 paper, "A Method to Improve Alignments Between Domain and Foundational Ontologies", focusing on BFO-aligned ontologies.
·keet.wordpress.com·
pg-schema schemas for property graphs
Arrived at SIGMOD in Seattle, and it's an amazing honor that our joint academic/industry work on Property Graph Schema received the Best Industry Paper award…
·linkedin.com·
Knowledge graphs are becoming an area of increasing interest with respect to the explainability and interpretability of AI.
Knowledge graphs are becoming an area of increasing interest with respect to the explainability and interpretability of AI. To consolidate my own research and…
·linkedin.com·
Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links
“Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links.”
·twitter.com·
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-Thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we model GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvements over the strong CoT baseline of 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT (large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
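The gated fusion step mentioned in the abstract is a common pattern worth sketching. The parameterization below (a sigmoid gate over the concatenated representations) is the usual formulation, not necessarily the paper's exact one:

```python
import numpy as np

def gated_fusion(h_text, h_graph, W, b):
    """Per-dimension sigmoid gate blending two representations. Because the
    gate lies in (0, 1), each fused component is a convex combination of the
    corresponding text and graph components."""
    g = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([h_text, h_graph]) + b)))
    return g * h_text + (1.0 - g) * h_graph

rng = np.random.default_rng(0)
d = 4
h_text = rng.normal(size=d)   # stand-in for the input-text representation
h_graph = rng.normal(size=d)  # stand-in for the graph-of-thoughts encoding
W = rng.normal(size=(d, 2 * d))
b = np.zeros(d)
fused = gated_fusion(h_text, h_graph, W, b)
print(fused.shape)  # (4,)
```

The gate lets the model learn, dimension by dimension, how much to trust the graph encoding versus the original input.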
LLM Ontology-prompting for Knowledge Graph Extraction
Prompting an LLM with an ontology to drive Knowledge Graph extraction from unstructured documents
I make no apology for saying that a graph is the best organization of structured data. However, the vast majority of data is unstructured text. Therefore, data needs to be transformed from its original format using an Extract-Transform-Load (ETL) or Extract-Load-Transform (ELT) process into a Knowledge Graph format. There is no problem when the original format is structured, such as SQL tables, spreadsheets, etc., or at least semi-structured, such as tweets. However, when the source data is unstructured text, the task of ETL/ELT to a graph is far more challenging. This article shows how an LLM can be prompted with an unstructured document and asked to extract a graph corresponding to a specific ontology/schema. This is demonstrated with a Kennedy ontology in conjunction with a publicly available description of the Kennedy family tree.
·medium.com·
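The basic mechanics of ontology-prompting can be sketched as prompt construction: serialize the ontology's classes and properties into the instruction so the LLM can only emit triples that fit the schema. The ontology snippet and prompt wording below are illustrative, not the article's actual prompt:

```python
# Minimal, hypothetical ontology: classes plus (domain, property, range) triples.
ontology = {
    "classes": ["Person"],
    "properties": [
        ("Person", "hasParent", "Person"),
        ("Person", "hasSpouse", "Person"),
    ],
}

def build_prompt(ontology, document):
    """Serialize the ontology into an extraction instruction for an LLM."""
    props = "\n".join(f"- {s} {p} {o}" for s, p, o in ontology["properties"])
    return (
        "Extract a knowledge graph from the document below.\n"
        f"Use only these classes: {', '.join(ontology['classes'])}.\n"
        f"Use only these properties:\n{props}\n"
        "Return one triple per line as: subject | property | object.\n\n"
        f"Document:\n{document}"
    )

prompt = build_prompt(
    ontology, "Joseph P. Kennedy was the father of John F. Kennedy."
)
print(prompt.splitlines()[0])  # Extract a knowledge graph from the document below.
```

Constraining the output vocabulary this way is what turns free-form LLM extraction into triples that load cleanly into a schema-governed graph.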