GraphNews

4357 bookmarks
OpenFact: Factuality Enhanced Open Knowledge Extraction | Transactions of the Association for Computational Linguistics | MIT Press
Abstract. We focus on the factuality property during the extraction of an OpenIE corpus named OpenFact, which contains more than 12 million high-quality knowledge triplets. We break down the factuality property into two important aspects—expressiveness and groundedness—and we propose a comprehensive framework to handle both aspects. To enhance expressiveness, we formulate each knowledge piece in OpenFact based on a semantic frame. We also design templates, extra constraints, and adopt human efforts so that most OpenFact triplets contain enough details. For groundedness, we require the main arguments of each triplet to contain linked Wikidata entities. A human evaluation suggests that the OpenFact triplets are much more accurate and contain denser information compared to OPIEC-Linked (Gashteovski et al., 2019), one recent high-quality OpenIE corpus grounded to Wikidata. Further experiments on knowledge base completion and knowledge base question answering show the effectiveness of OpenFact over OPIEC-Linked as supplementary knowledge to Wikidata as the major KG.
·direct.mit.edu·
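The groundedness requirement described in the abstract can be pictured with a small sketch. This is purely illustrative: the triplet fields, the entity-linking dictionary, and the QIDs below are toy assumptions, not OpenFact's actual data format.

```python
# Hypothetical sketch of a groundedness filter: keep only triplets whose
# subject and object arguments resolve to linked Wikidata entities (QIDs).

def is_grounded(triplet, linked_entities):
    """A triplet counts as grounded if both main arguments are linked."""
    return triplet["subj"] in linked_entities and triplet["obj"] in linked_entities

linked = {"Barack Obama": "Q76", "Hawaii": "Q782"}  # toy entity-linking output
triplets = [
    {"subj": "Barack Obama", "pred": "was born in", "obj": "Hawaii"},
    {"subj": "Barack Obama", "pred": "was born in", "obj": "August"},  # unlinked object
]
grounded = [t for t in triplets if is_grounded(t, linked)]
```

Only the first triplet survives, since "August" has no linked entity — the same filtering idea OpenFact applies at corpus scale.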
NebulaGraph v3.5.0 Release Note
NebulaGraph v3.5.0 is released, which supports full table scan without index and greatly improves FIND PATH performance.
·nebula-graph.io·
More Graph DBs in @LangChainAI
“📈 More Graph DBs in @LangChainAI Graphs can store structured information in a way embeddings can't capture, and we're excited to support even more of them in LangChain: HugeGraph and SPARQL Not only can you query data, but you can also update graph data (!!!) 🧵”
·twitter.com·
Knowledge graphs are graph-structured collections of facts. And facts are statements that define and describe subject entities in terms of predicates and their values
·linkedin.com·
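The post's definition — a knowledge graph as a collection of facts, each fact a statement describing a subject via a predicate and value — fits in a few lines of code. The entities and predicates below are toy examples, not from the post.

```python
# A knowledge graph as a collection of (subject, predicate, value) facts.
facts = [
    ("Marie Curie", "occupation", "physicist"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]

def describe(subject, facts):
    """Collect the predicate/value pairs that define and describe a subject."""
    return {p: v for s, p, v in facts if s == subject}

profile = describe("Marie Curie", facts)
```

`describe` gathers everything the graph asserts about one subject — the sense in which facts "define and describe subject entities in terms of predicates and their values."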
Neosemantics (n10s) reaches the first million all-time downloads
📢 📢 📢 Amazing milestone! 📢 📢 📢 Neosemantics (n10s) reaches the first million all-time downloads 🤯 Let's keep building Knowledge Graphs together! 💪…
·linkedin.com·
ArtGraph cluster analysis
This blog post describes how to semi-automatically extract interesting insights from an art knowledge graph using KNIME and Neo4j.
·medium.com·
**Improved** — the BFO Classifier
A brief description of our FOIS 2023 paper, “A method to improve alignments between domain and foundational ontologies”, focusing on BFO-aligned ontologies.
·keet.wordpress.com·
pg-schema schemas for property graphs
Arrived at SIGMOD in Seattle, and it's an amazing honor that our joint academic/industry work on Property Graph Schema received the Best Industry Paper award.…
·linkedin.com·
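The core idea of a property-graph schema — declaring what properties each node label must carry, then checking data against the declaration — can be sketched briefly. This toy sketch is not PG-Schema's actual syntax or semantics, just the general shape of the idea.

```python
# Toy property-graph schema: required property keys and types per node label.
schema = {
    "Person": {"name": str, "born": int},
    "City": {"name": str},
}

def conforms(node, schema):
    """Check a node against the schema entry for its label."""
    spec = schema.get(node["label"])
    if spec is None:
        return False  # unknown label
    return all(k in node["props"] and isinstance(node["props"][k], t)
               for k, t in spec.items())

ok = conforms({"label": "Person", "props": {"name": "Ada", "born": 1815}}, schema)
bad = conforms({"label": "Person", "props": {"name": "Ada"}}, schema)  # missing "born"
```

A real schema language also constrains edge types, cardinalities, and optional properties; the paper covers that design space.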
Knowledge graphs are becoming an increasing area of interest with respect to the explainability and interpretability of AI.
Knowledge graphs are becoming an increasing area of interest with respect to the explainability and interpretability of AI. To consolidate my own research and…
·linkedin.com·
Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links
“Link prediction on knowledge graphs is a losing game, IMHO. Without injecting any new info, you'll only find links similar to those you already had. That's why this work is interesting: injecting external knowledge into link prediction is the only way to find truly new links.”
·twitter.com·
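The tweet's point can be seen with a toy structure-only link predictor. Below is a common-neighbors scorer (one of the simplest structural heuristics, chosen here for illustration — the tweet doesn't name a specific method): any candidate link far from existing structure scores zero, so the model can only propose links "similar" to those it already has.

```python
from itertools import combinations

# Two components: a triangle a-b-c with a pendant d, and a separate pair e-f.
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("e", "f")}
nodes = {n for e in edges for n in e}
nbrs = {n: {v for e in edges for v in e if n in e and v != n} for n in nodes}

def score(u, v):
    """Common-neighbors score: links are ranked by shared neighborhood."""
    return len(nbrs[u] & nbrs[v])

candidates = [(u, v, score(u, v)) for u, v in combinations(sorted(nodes), 2)
              if (u, v) not in edges and (v, u) not in edges]
```

Here (a, d) and (b, d) score 1, while every link bridging to the e-f component scores 0 — no amount of structural scoring will surface it. Only external knowledge (text, other graphs) could justify such a truly new link.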
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models
With the widespread use of large language models (LLMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LLMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. Similar to Multimodal-CoT, we model GoT reasoning as a two-stage framework, generating rationales first and then producing the final answer. Specifically, we employ an additional graph-of-thoughts encoder for GoT representation learning and fuse the GoT representation with the original input representation through a gated fusion mechanism. We implement a GoT reasoning model on the T5 pre-trained model and evaluate its performance on a text-only reasoning task (GSM8K) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline with 3.41% and 5.08% on the GSM8K test set with T5-base and T5-large architectures, respectively. Additionally, our model boosts accuracy from 84.91% to 91.54% using the T5-base model and from 91.68% to 92.77% using the T5-large model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Experiments have shown that GoT achieves comparable results to Multimodal-CoT (large) with over 700M parameters, despite having fewer than 250M backbone model parameters, demonstrating the effectiveness of GoT.
·arxiv.org·
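The gated fusion step the abstract mentions has a simple shape: a sigmoid gate decides, per dimension, how much of the original input representation versus the graph-of-thoughts representation to keep. The sketch below uses fixed toy weights; in GoT the gate parameters are learned, and the vectors are transformer hidden states, not two-element lists.

```python
import math

def gated_fusion(h_text, h_got, w, b):
    """g = sigmoid(w . [h_text; h_got] + b); fused = g*h_text + (1-g)*h_got."""
    z = sum(wi * xi for wi, xi in zip(w, h_text + h_got)) + b
    g = 1.0 / (1.0 + math.exp(-z))
    return [g * t + (1.0 - g) * s for t, s in zip(h_text, h_got)]

# With zero weights the gate is 0.5, so fusion is a plain average.
fused = gated_fusion([1.0, 0.0], [0.0, 1.0], [0.0, 0.0, 0.0, 0.0], 0.0)
```

Training moves the gate away from 0.5 wherever one representation is more informative than the other.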
LLM Ontology-prompting for Knowledge Graph Extraction
Prompting an LLM with an ontology to drive Knowledge Graph extraction from unstructured documents
I make no apology for saying that a graph is the best organization of structured data. However, the vast majority of data is unstructured text. Therefore, data needs to be transformed from its original format using an Extract-Transform-Load (ETL) or Extract-Load-Transform (ELT) pipeline into a Knowledge Graph format. There is no problem when the original format is structured, such as SQL tables, spreadsheets, etc, or at least semi-structured, such as tweets. However, when the source data is unstructured text the task of ETL/ELT to a graph is far more challenging. This article shows how an LLM can be prompted with an unstructured document and asked to extract a graph corresponding to a specific ontology/schema. This is demonstrated with a Kennedy ontology in conjunction with a publicly available description of the Kennedy family tree.
·medium.com·
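The approach the article describes — embedding an ontology in the prompt so the LLM only emits triples that fit it — can be sketched as prompt construction. Everything below is hypothetical: the toy ontology, relation names, and output format are illustrative stand-ins, not the article's Kennedy ontology, and the LLM call itself is left out.

```python
# Illustrative ontology-in-the-prompt construction for triple extraction.
ontology = {
    "classes": ["Person"],
    "relations": [("Person", "childOf", "Person"), ("Person", "spouseOf", "Person")],
}

def build_prompt(ontology, document):
    """Embed the allowed relation types in an extraction instruction."""
    rels = "\n".join(f"- ({s}) {p} ({o})" for s, p, o in ontology["relations"])
    return (
        "Extract subject-predicate-object triples from the document below.\n"
        f"Use only these relation types:\n{rels}\n"
        "Output one triple per line as: subject | predicate | object\n\n"
        f"Document:\n{document}"
    )

prompt = build_prompt(ontology, "John is the father of Caroline.")
# The prompt would then be sent to an LLM and its response parsed into triples.
```

Constraining the output to a fixed relation vocabulary is what keeps the extracted graph aligned with the target schema rather than free-form OpenIE.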
How we delimit and develop the concept of facts is the key to a deeper, more detailed understanding of knowledge graphs
How we delimit and develop the concept of facts is the key to a deeper, more detailed understanding of knowledge graphs because facts are crucial defining…
·linkedin.com·
This new paper is a wonderful story on how generative AI can be used to help curriculum designers build a Knowledge Space dependency graph
This new paper is a wonderful story on how generative AI can be used to help curriculum designers build a Knowledge Space dependency graph: Exploring the MIT…
·linkedin.com·
Ivo Velitchkov on LinkedIn: #sparql #shacl #knowledgegraphs #quality
Now, apart from the #SPARQL wiki (https://lnkd.in/ey_6S_6B), there is also a #SHACL wiki: https://lnkd.in/ek9PHN9z. It is a first release with much to fix, improve and…
·linkedin.com·
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
·arxiv.org·