Found 112 bookmarks
Orionbelt Ontology Builder
After a lively conversation with Juan Sequeda and others at Connected Data London 2025 about how to get started with ontologies at business clients without relying on yet another KG platform, I have now started to roll (er, vibe-code 🤓) my own Ontology Builder as a simple Streamlit app! Have a look and collaborate if you like. https://lnkd.in/egGZJHiP
·linkedin.com·
Introducing the ONTO-TRON-5000. A personal project that allows users to build their ontologies right from their data
Introducing the ONTO-TRON-5000, a personal project that allows users to build their ontologies right from their data! The ONTO-TRON is built with the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO) as semantic frameworks for classification. The program emphasizes design patterns as best practices for ontology documentation and combines them with machine readability. Simply upload your CSV, set the semantic types of your columns, and continuously build your ontology (a toy sketch of this step follows below). The program has three export options: RDF, R2RML, and Mermaid Live Editor syntax, if you would like to develop your design pattern further there. Also included is a BFO/CCO ontology viewer that lets you explore the hierarchy and understand how terms are used, no Protégé required. This is the alpha version, and I would love feedback, as there is a growing list of features to be added. The README includes instructions for manual installation and Docker. Enjoy! https://lnkd.in/ehrDwVrf
·linkedin.com·
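To make the CSV-to-ontology step concrete, here is a minimal sketch, not ONTO-TRON's actual code, of what "set the semantic types of your columns, get RDF out" can look like with rdflib. The namespace, class, and property IRIs are illustrative stand-ins for BFO/CCO terms, and people.csv is a hypothetical input.

```python
# Hypothetical sketch of typed CSV-to-RDF conversion (not ONTO-TRON code).
import csv
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/onto/")  # stand-in for real BFO/CCO IRIs

g = Graph()
with open("people.csv", newline="") as f:  # hypothetical input file
    for i, row in enumerate(csv.DictReader(f)):
        # The UI step "set the semantic types of your columns" fixes which
        # class each row instantiates and which property each column maps to.
        person = URIRef(f"http://example.org/person/{i}")
        g.add((person, RDF.type, EX.Person))
        g.add((person, EX.hasName, Literal(row["person_name"])))
        g.add((person, EX.worksFor, Literal(row["employer"])))

print(g.serialize(format="turtle"))  # RDF is one of the three export options
```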
ATOM is finally here! A scalable and fast approach that can build and continuously update temporal knowledge graphs, inspired by atomic bonds.
Alhamdulillah, ATOM is finally here! A scalable and fast approach that can build and continuously update temporal knowledge graphs (TKGs), inspired by atomic bonds. Just as matter is formed from atoms and galaxies are formed from stars, knowledge is likely to be formed from atomic knowledge graphs. Atomic knowledge graphs were born from our intention to solve a common problem in LLM-based KG construction methods: exhaustivity and stability. These methods often produce unstable KGs that change when the construction process is rerun, even without changing anything. Moreover, they fail to capture all facts in the input documents and usually overlook the temporal and dynamic aspects of real-world data.

What is the solution? Atomic facts that are temporally aware. Instead of constructing knowledge graphs from raw documents, we split them into atomic facts: self-contained, concise propositions. Temporal atomic KGs are constructed from each atomic fact. We then defined how the temporal atomic KGs are merged at the atomic level and how the temporal aspects are handled. We designed a binary merge algorithm that combines two TKGs and a parallel merge process that merges all TKGs simultaneously; the entire architecture operates in parallel. ATOM employs dual-time modeling that distinguishes observation time from validity time and has three main modules:

- Module 1 (Atomic Fact Decomposition) splits input documents observed at time t into atomic facts using LLM-based prompting, where each temporal atomic fact is a short, self-contained snippet that conveys exactly one piece of information.
- Module 2 (Atomic TKGs Construction) extracts 5-tuples in parallel from each atomic fact to construct atomic temporal KGs, embedding nodes and relations and addressing temporal resolution during extraction.
- Module 3 (Parallel Atomic Merge) employs a binary merge algorithm that merges pairs of atomic TKGs through iterative pairwise merging, in parallel, until convergence, with three resolution phases: (1) entity resolution, (2) relation name resolution, and (3) temporal resolution, which merges observation and validity time sets for relations with similar (e_s, r_p, e_o). The resulting TKG snapshot is then merged with the previous DTKG to yield the updated DTKG (see the merge sketch below).

Results: empirical evaluations demonstrate that ATOM achieves ~18% higher exhaustivity, ~17% better stability, and over 90% latency reduction compared to baseline methods (including iText2KG), demonstrating strong scalability potential for dynamic TKG construction.

Check out ATOM's architecture and code:
Preprint paper: https://lnkd.in/dsJzDaQc
Code: https://lnkd.in/drZUyisV
Website: (coming soon)
Example use cases: (coming soon)
Special thanks to the dream team: Ludovic Moncla, Khalid Benabdeslem, Rémy Cazabet, Pierre Cléau 📚📡
·linkedin.com·
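For intuition on Module 3, here is a toy sketch of iterative pairwise merging, under the simplifying assumption that a TKG is a set of (subject, relation, object, time) tuples and that merging is set union; the paper's actual entity, relation, and temporal resolution is far richer.

```python
# Toy reduction-tree merge, illustrating the parallel pairwise idea only.
from concurrent.futures import ThreadPoolExecutor

def binary_merge(g1: set, g2: set) -> set:
    # Stand-in for the three resolution phases (entity, relation name,
    # temporal): here a merge is just set union of 4-tuples.
    return g1 | g2

def parallel_merge(graphs: list[set]) -> set:
    # Merge pairs in rounds until one TKG remains.
    with ThreadPoolExecutor(max_workers=8) as pool:
        while len(graphs) > 1:
            pairs = list(zip(graphs[::2], graphs[1::2]))
            merged = list(pool.map(lambda p: binary_merge(*p), pairs))
            if len(graphs) % 2:            # odd one out joins the next round
                merged.append(graphs[-1])
            graphs = merged
    return graphs[0]

atomic_kgs = [
    {("ACME", "acquired", "Widget Co", "2024-01-03")},
    {("ACME", "renamed_to", "ACME Global", "2024-06-01")},
    {("Widget Co", "based_in", "Lyon", "2023-11-20")},
]
print(parallel_merge(atomic_kgs))
```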
A Graph RAG (Retrieval-Augmented Generation) chat application that combines OpenAI GPT with knowledge graphs stored in GraphDB
After seeing yet another Graph RAG demo using Neo4j with no ontology, I decided to show what real semantic Graph RAG looks like.

The problem with most Graph RAG demos: everyone's building Graph RAG with LPG databases (Neo4j, TigerGraph, ArangoDB, etc.) and calling it "knowledge graphs." But here's the thing: without formal ontologies, you don't have a knowledge graph, you just have a graph database. The difference?
❌ LPG: nodes and edges are just strings. No semantics. No reasoning. No standards.
✅ RDF/SPARQL: formal ontologies (RDFS/OWL) that define domain knowledge. Machine-readable semantics. W3C standards. Built-in reasoning.

So I built a real semantic Graph RAG using:
- Microsoft Agent Framework for AI orchestration
- Formal ontologies (RDFS/OWL) for knowledge representation
- Ontotext GraphDB as the RDF triple store
- SPARQL for semantic querying
- GPT-5 for ontology-aware extraction
It's all on GitHub, a simple template as boilerplate for your project.

The "Jaguar problem": what does "Yesterday I was hit by a Jaguar" really mean? It is impossible to know without concept awareness. To demonstrate why ontologies matter, I created a corpus with mixed content:
🐆 wildlife jaguars (Panthera onca)
🚗 Jaguar cars (E-Type, XK-E)
🎸 Fender Jaguar guitars
I fed this to GPT-5 along with a jaguar conservation ontology. The result? The LLM automatically extracted ONLY wildlife-related entities, filtering out cars and guitars, because it understood the semantic domain from the ontology. No post-processing. No manual cleanup. Just intelligent, concept-aware extraction. This is impossible with LPG databases because they lack formal semantic structure: labels like (:Jaguar) are just strings, so the LLM has no way to know whether you mean the animal, the car, or the guitar.

Knowledge graphs = "data for AI". LLMs don't need more data; they need structured, semantic data they can reason over. That's what formal ontologies provide:
✅ domain context
✅ class hierarchies
✅ property definitions
✅ relationship semantics
✅ reasoning rules
This transforms Graph RAG from keyword matching into true semantic retrieval (see the retrieval sketch below).

Check out the full implementation; the repo includes a complete Graph RAG implementation with Microsoft Agent Framework, a working jaguar conservation knowledge graph, and a Jupyter notebook on ontology-aware extraction from mixed-content text: https://lnkd.in/dmf5HDRm

And if you have gotten this far, you realize that most of this post was written by Cursor ... That goes for the code too. 😁

Your turn: I know this is a contentious topic. Many teams are heavily invested in LPG-based Graph RAG. What are your thoughts on RDF vs. LPG for Graph RAG? Drop a comment below! #GraphRAG #KnowledgeGraphs #SemanticWeb #RDF #SPARQL #AI #MachineLearning #LLM #Ontology #KnowledgeRepresentation #OpenSource #neo4j #graphdb #agentic-framework #ontotext #agenticai
·linkedin.com·
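As a flavor of what retrieval over typed entities means in practice, here is a hedged sketch of querying GraphDB over SPARQL so that only Panthera onca entities can ever come back. The repository name and ontology IRIs are illustrative, not taken from the linked repo.

```python
# Illustrative ontology-constrained retrieval against a GraphDB repository.
from SPARQLWrapper import JSON, SPARQLWrapper

# GraphDB's default REST endpoint scheme; repository name is made up.
sparql = SPARQLWrapper("http://localhost:7200/repositories/jaguar-kg")
sparql.setQuery("""
    PREFIX ex: <http://example.org/conservation#>
    SELECT ?jaguar ?habitat WHERE {
        ?jaguar a ex:PantheraOnca ;   # typed against the ontology, not a bare label
                ex:observedIn ?habitat .
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["jaguar"]["value"], row["habitat"]["value"])
```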
ATOM: AdapTive and OptiMized dynamic temporal knowledge graph construction using LLMs
✅ Some state-of-the-art methods for knowledge graph (KG) construction that implement incrementality build a graph from around 3k atomic facts in 4–7 hours, while ATOM achieves the same in just 20 minutes using only 8 parallel threads and a batch size of 40 for asynchronous LLM API calls (a batching sketch follows below).
❓ What's the secret behind this performance?
👉 The architecture. The parallel design.
❌ Incrementality in KG construction was key, but it significantly limits scalability, because the method must first build the KG and compare it with the previous one before moving on to the next chunk. That's why we eliminated this in iText2KG.
❓ Why is scalability so important? The short answer: real-time analytics. Fast dynamic TKG construction enables LLMs to reason over the graphs and generate responses instantly, in real time.
Discover more secrets behind this parallel architecture by reading the full paper (link in the first comment).
·linkedin.com·
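The only implementation detail quoted above is "8 parallel threads and a batch size of 40 for asynchronous LLM API calls". The sketch below shows batched asyncio.gather calls with a stubbed extractor, purely to illustrate that pattern; it is not ATOM's code.

```python
# Batched concurrent calls: each batch of 40 requests runs at once.
import asyncio

BATCH_SIZE = 40   # batch size quoted in the post

async def extract_tuples(fact: str) -> tuple:
    await asyncio.sleep(0.01)              # stand-in for an async LLM API call
    return (fact, "relation", "object")    # stand-in for an extracted 5-tuple

async def build(facts: list[str]) -> list[tuple]:
    results: list[tuple] = []
    for i in range(0, len(facts), BATCH_SIZE):
        batch = facts[i:i + BATCH_SIZE]    # fire one batch of concurrent calls
        results += await asyncio.gather(*(extract_tuples(f) for f in batch))
    return results

facts = [f"atomic fact {n}" for n in range(120)]
print(len(asyncio.run(build(facts))))      # 120, processed 40 at a time
```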
Clinical Knowledge Graph
Clinical Knowledge Graph (CKG) is a platform with a twofold objective: 1) build a graph database with experimental data and data imported from diverse biomedical databases, and 2) automate knowledge discovery over that graph.
·github.com·
Open-source Graph Explorer v2.4.0 is now released, and it includes a new SPARQL editor
Calling all Graph Explorers! 📣 I'm excited to share that open-source Graph Explorer v2.4.0 is now released, and it includes a new SPARQL editor! Release notes: https://lnkd.in/ePhwPQ5W This means that, in addition to using it as a powerful no-code exploration tool, you can now start your visualization and exploration by writing queries directly in SPARQL (Gremlin & openCypher too, for property graph workloads); a starter query follows below. This makes Graph Explorer an ideal companion for Amazon Neptune, as it supports connections via all three query languages, but you can connect to other graph databases that support these languages too. 🔹 Run it anywhere (it's open source): https://lnkd.in/ehbErxMV 🔹 Access it through the AWS console in a Neptune graph notebook: https://lnkd.in/gZ7CJT8D Special thanks go to Kris McGinnes for his efforts. #AWS #AmazonNeptune #GraphExplorer #SPARQL #Gremlin #openCypher #KnowledgeGraph #OpenSource #RDF #LPG
·linkedin.com·
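For readers new to the feature, this is the kind of what-is-in-this-graph starter query you might paste into the new SPARQL editor; here it is sent programmatically with SPARQLWrapper to a placeholder Neptune endpoint (the cluster URL is illustrative).

```python
# Counting instances per class: a common first exploration query.
from SPARQLWrapper import JSON, SPARQLWrapper

# Placeholder cluster name; Neptune serves SPARQL on port 8182 at /sparql.
sparql = SPARQLWrapper(
    "https://my-cluster.cluster-abc.us-east-1.neptune.amazonaws.com:8182/sparql"
)
sparql.setQuery("""
    SELECT ?type (COUNT(?s) AS ?n)
    WHERE { ?s a ?type }
    GROUP BY ?type ORDER BY DESC(?n) LIMIT 10
""")
sparql.setReturnFormat(JSON)
for b in sparql.query().convert()["results"]["bindings"]:
    print(b["type"]["value"], b["n"]["value"])
```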
SuperMemory is just one example of a growing ecosystem of knowledge graph systems
SuperMemory is just one example of a growing ecosystem of knowledge graph systems (Graphiti (Zep), Fast GraphRAG, TrustGraph, ...). Some are in Python, some in TypeScript with the added benefit of graph visualization. Even in Rust and Go there is a growing list of open-source graph-RAG projects. Ontology (LLM-generated in particular) seems to be having its own moment in the sun, with growing interest in RDF, OWL, SHACL, and whatnot. Whether the big players (OpenAI, Microsoft, ...) will launch something ontological remains to be seen; they will likely leave it to triple-store vendors to figure out. https://lnkd.in/e3HAiC8c #KnowledgeGraph #GraphRAG
·linkedin.com·
Building Intelligent AI Memory Systems with Cognee: A Python Development Knowledge Graph
Building AI agents that can synthesize scattered knowledge like expert developers 🧠 I have a tutorial in my 'Agents Towards Production' repo about building intelligent AI memory systems with Cognee. It solves a critical problem: developers navigate between documentation, community practices, and personal experience, but traditional approaches treat these as isolated resources. This tutorial shows how to build a unified knowledge graph that connects Python's design philosophy, real-world implementations from its creator, and your specific development patterns.

The tutorial covers 3 key capabilities (a sketch of the core flow follows below):
- Knowledge Graph Construction: building interconnected networks from Guido van Rossum's actual commits, PEP guidelines, and personal conversations
- Temporal Analysis: understanding how solutions evolved over time with time-aware queries
- Dynamic Memory Layer: inferring implicit rules and discovering non-obvious connections across knowledge domains

The cross-domain discovery is particularly impressive: it connects your validation issues from January 2024 with Guido van Rossum's actual solutions from mypy and CPython. Rather than keyword matching, it understands semantic relationships between your type-hinting challenges and historical solutions, even when the terminology differs.

Tech stack:
- Cognee for knowledge graph construction
- OpenAI GPT-4o-mini for entity extraction
- Graph algorithms for pattern recognition
- Vector embeddings for semantic search

The system uses semantic graph traversal with deep relationship understanding for contextually aware responses. It includes working Python code, a complete Jupyter notebook with interactive visualizations, and production-ready patterns, and is part of a collection of practical guides for building production-ready AI systems. Direct link to the tutorial: https://lnkd.in/eSsjwbuh Ever wish you could query all your development knowledge as one unified intelligent system? ♻️ Repost to let your network learn about this too!
·linkedin.com·
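A hedged sketch of the Cognee flow the tutorial builds on, based on Cognee's public quickstart (add, then cognify, then search); exact signatures vary by version, and the example texts are made up.

```python
# Minimal Cognee-style memory flow; check the tutorial/repo for exact APIs.
import asyncio
import cognee  # pip install cognee; an LLM API key must be configured

async def main():
    # Load scattered sources into one memory layer (illustrative texts)...
    await cognee.add("PEP 484 introduced optional type hints to Python.")
    await cognee.add("Our Jan 2024 validation bug traced back to a missing Optional annotation.")
    # ...let Cognee build the knowledge graph (entity extraction + linking)...
    await cognee.cognify()
    # ...then ask across domains; search signatures differ across versions.
    print(await cognee.search("How do type hints relate to our validation issues?"))

asyncio.run(main())
```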
Flexible-GraphRAG
Flexible GraphRAG or RAG is now flexing to the max using LlamaIndex: it supports 7 graph databases, 10 vector databases, 13 data sources, LLMs, Docling doc processing, auto-created KGs, GraphRAG, hybrid search, and AI chat (shown with Hyland products web page as the data source). Apache 2.0 open source.
Graph: Neo4j, ArcadeDB, FalkorDB, Kuzu, NebulaGraph (coming: Memgraph and Amazon Neptune)
Vector: Qdrant, Elastic, OpenSearch, Neo4j vector, Milvus (coming: Weaviate, Chroma, Pinecone, PostgreSQL + pgvector, LanceDB)
Document processing: Docling
Data sources, using LlamaIndex readers: working: web pages, Wikipedia, YouTube; untested: Google Drive, Microsoft OneDrive, S3, Azure Blob, GCS, Box, SharePoint; previous: filesystem, Alfresco, CMIS
LLMs: LlamaIndex LLMs (OpenAI, Ollama, Claude, Gemini, etc.)
React, Vue, Angular UIs, MCP server, FastAPI server
GitHub stevereiner/flexible-graphrag: https://lnkd.in/eUEeF2cN
X.com post on Flexible GraphRAG or RAG max flexing: https://lnkd.in/gHpTupAr
Integrated Semantics blog: https://lnkd.in/ehpjTV7d
·linkedin.com·
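Flexible-GraphRAG wires these pieces together behind its own API; as a rough illustration of the underlying LlamaIndex building blocks (not the project's code, and assuming an OpenAI key plus a local ./docs folder), auto-creating a KG and querying it can be as short as:

```python
# Requires: pip install llama-index, plus OPENAI_API_KEY in the environment.
from llama_index.core import PropertyGraphIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./docs").load_data()   # illustrative folder
index = PropertyGraphIndex.from_documents(documents)      # LLM auto-extracts the KG
response = index.as_query_engine().query("How do the products relate to each other?")
print(response)
```

Projects like this one earn their keep by swapping the default in-memory stores for the graph and vector databases listed above and exposing the result through UIs and an MCP server.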
Announcing the formation of a Data Façades W3C Community Group
I am excited to announce the formation of a Data Façades W3C Community Group. Façade-X, initially introduced at SEMANTICS 2021 and successfully implemented by the SPARQL Anything project, provides a simple yet powerful, homogeneous view over diverse and heterogeneous data sources (e.g., CSV, JSON, XML, and many others). With the recent v1.0.0 release of SPARQL Anything, the time was right to work on the long-term stability and widespread adoption of this approach by developing an open, vendor-neutral technology. The Façade-X concept was born to let SPARQL users query data in any structured format in plain SPARQL (see the query sketch below), so a W3C community group is the natural place to lead the specification effort. Specifications will enhance its reliability, foster innovation, and encourage vendors and projects, including graph database developers, to provide their own compatible implementations.

The primary goals of the Data Façades Community Group are to:
- Define the core specification of the Façade-X method.
- Define standard mappings: formalize the required mappings and profiles for connecting Façade-X to common data formats.
- Define the specification of the query dialect: provide a reference for the SPARQL dialect, configuration conventions (like SERVICE IRIs), and the functions/magic properties used.
- Establish governance: create a monitored, robust process for adding support for new data formats.
- Foster collaboration: build connections with relevant W3C groups (e.g., RDF & SPARQL, Data Shapes) and encourage involvement from developers, businesses, and adopters.

Join us! With Luigi Asprino, Ivo Velitchkov, Justin Dowdy, Paul Mulholland, Andy Seaborne, Ryan Shaw ...
CG: https://lnkd.in/eSxuqsvn
GitHub: https://lnkd.in/dkHGT8N3
SPARQL Anything #RDF #SPARQL #W3C #FX
·linkedin.com·
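To show what "plain SPARQL over any structured format" looks like, here is a small Façade-X query in SPARQL Anything's dialect, with an illustrative file name; the SERVICE IRI convention shown is exactly the kind of thing the CG intends to specify.

```python
# A Façade-X query in SPARQL Anything's dialect, held in a string so the
# conventions are visible. xyz: is Façade-X's namespace for data keys;
# the SERVICE IRI selects and configures the non-RDF source.
facade_x_query = """
PREFIX xyz: <http://sparql.xyz/facade-x/data/>
SELECT ?name WHERE {
  SERVICE <x-sparql-anything:location=./people.json> {
    ?person xyz:name ?name .
  }
}
"""
# Executed with the SPARQL Anything CLI or its Fuseki endpoint; people.json
# is a hypothetical input file.
print(facade_x_query)
```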
Introducing the GitLab Knowledge Graph
Today, I'd like to introduce the GitLab Knowledge Graph (GKG). This release includes a code indexing engine, written in Rust, that turns your codebase into a live, embeddable graph database for LLM RAG. You can install it with a simple one-line script, parse local repositories directly in your editor, and connect via MCP to query your workspace, over 50,000 files, in under 100 milliseconds. We also saw GKG agents scoring up to 10% higher on the SWE-bench Lite benchmark, with just a few tools and a small prompt added to opencode (an open-source coding agent). On average, we observed a 7% accuracy gain across our eval runs, and GKG agents were able to solve new tasks compared to the baseline agents. You can read more from the team's research here: https://lnkd.in/egiXXsaE. This release is just the first step: we aim for this local version to serve as the backbone of a Knowledge Graph service that lets you query the entire GitLab software development life cycle, from an issue down to a single line of code. I am incredibly proud of the work the team has done. Thank you, Michael U., Jean-Gabriel Doyon, Bohdan Parkhomchuk, Dmitry Gruzd, Omar Qunsul, and Jonathan Shobrook. You can watch Bill Staples and me present this and more in the GitLab 18.4 release here: https://lnkd.in/epvjrhqB
Try it today at: https://lnkd.in/eAypneFA
Roadmap: https://lnkd.in/eXNYQkEn
Watch more below for a complete, in-depth tutorial on what we've built:
·linkedin.com·
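GKG's Rust internals aren't shown in the post; as a conceptual toy (all file and symbol names invented), here is the kind of defines/calls graph a code index maintains, and the reverse walk behind an agent's "what does this change affect?" query.

```python
# Toy code graph: nodes are files and symbols, edges carry a relation label.
import networkx as nx

g = nx.DiGraph()
g.add_edge("src/auth.py", "login", relation="defines")
g.add_edge("src/api.py", "handle_request", relation="defines")
g.add_edge("handle_request", "login", relation="calls")

# "If login() changes, which files are affected?" Walk callers back to files.
callers = [u for u, _, d in g.in_edges("login", data=True) if d["relation"] == "calls"]
files = {u for c in callers for u, _, d in g.in_edges(c, data=True) if d["relation"] == "defines"}
print(files)  # {'src/api.py'}
```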
WebKnoGraph: THE FIRST FULLY transparent, AI-driven framework for improving SEO and site navigation through reproducible methods.
I had my first presentation at SEO Wiesn (an SEO conference at SIXT Munich) yesterday, and WOW, what an experience it has been! This is not a sales pitch, nor a product demo: we're talking about an open-source project that is rooted in science, yet applicable in practical industry scenarios, as already tested. No APIs, no vendor lock-in, no tricks. It's our duty as SEOs to produce NEW INSIGHTS, not just rewrite stuff, digest information, or promote ourselves. Big thanks go to our sponsors WordLift and Kalicube for supporting this research and believing in me and my team to deliver WebKnoGraph: the first fully transparent, AI-driven framework for improving SEO and site navigation through reproducible methods. We plan on deepening this research and iterating with additional industry and research partners. If you'd like to try this on your website, DM me. Full project repo: https://lnkd.in/d-dvHiCc. A scientific paper will follow. More pics and a detailed retrospective with the amazing crew will be shared in the upcoming days too 💙💙💙 Until then, have a sneak peek at the deck. SEO WIESN TO THE WIIIIIIN!
·linkedin.com·
ApeRAG: a production-ready RAG that combines Graph RAG, vector search, and full-text search
ApeRAG: a production-ready RAG stack that combines Graph RAG, vector search, and full-text search. Looks pretty cool. There are a lot of use cases where a knowledge graph would help a lot; I still think it is one of the most powerful ways to understand "connections" and "hierarchy" (a fusion sketch follows below). GitHub: https://lnkd.in/gdYuShgX
·linkedin.com·
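The post doesn't show ApeRAG's internals, so as a generic illustration of combining the three retrievers it names, here is reciprocal rank fusion, a common way hybrid RAG systems merge ranked result lists (the doc IDs are made up).

```python
def rrf(result_lists: list[list[str]], k: int = 60) -> list[str]:
    # Reciprocal rank fusion: each list votes 1/(k + rank) for its hits,
    # so documents that rank well in several retrievers float to the top.
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

graph_hits = ["doc3", "doc1"]       # from graph traversal
vector_hits = ["doc1", "doc2"]      # from embedding similarity
fulltext_hits = ["doc2", "doc3"]    # from BM25 / keyword match
print(rrf([graph_hits, vector_hits, fulltext_hits]))
```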
Integrating Knowledge Graphs into the Debian Ecosystem | Alexander Belikov
In an era where software systems are increasingly complex and interconnected, effectively managing the relationships between packages, maintainers, dependencies, and vulnerabilities is both a challenge and a necessity. This paper explores the integration of knowledge graphs into the Debian ecosystem as a powerful means to bring structure, semantics, and coherence to diverse sources of package-related data. By unifying information such as package metadata, security advisories, and reproducibility reports into a single graph-based representation, we enable richer visibility into the ecosystem's structure and behavior. Beyond constructing the DebKG graph, we demonstrate how it supports practical, high-impact applications — such as tracing vulnerability propagation and identifying gaps between community needs and development activity — thereby offering a foundation for smarter, data-informed decision-making within Debian.
·alexander-belikov.github.io·
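A toy version of the "tracing vulnerability propagation" use case the abstract mentions (package names and the advisory are invented): with dependency edges in a graph, the affected packages are simply the transitive reverse-dependencies of the vulnerable one.

```python
import networkx as nx

deps = nx.DiGraph()  # edge A -> B means "A depends on B"; names invented
deps.add_edges_from([("webapp", "libfoo"), ("cli-tool", "libfoo"),
                     ("libfoo", "libssl"), ("other-pkg", "libbar")])

vulnerable = "libssl"                      # package named in a hypothetical advisory
affected = nx.ancestors(deps, vulnerable)  # everything that transitively depends on it
print(affected)                            # {'libfoo', 'webapp', 'cli-tool'}
```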
A new notebook exploring Semantic Entity Resolution & Extraction using DSPy and Google's new LangExtract library.
Just released a new notebook exploring semantic entity resolution & extraction using DSPy and Google's new LangExtract library. Inspired by Russell Jurney's excellent work on semantic entity resolution, this demo follows his approach of combining:
✅ embeddings,
✅ kNN blocking,
✅ and LLM matching with DSPy (see the blocking sketch below).
On top of that, I added a general extraction layer to test-drive LangExtract, a Gemini-powered, open-source Python library for reliable structured information extraction. The goal? Detect and merge mentions of the same real-world entities across text. It's an end-to-end flow tackling one of the most persistent data challenges. Check it out, experiment with your own data, enjoy the summer, and let me know your thoughts! cc Paco Nathan, you might like this 😉 https://wor.ai/8kQ2qa
·linkedin.com·
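The blocking stage is easy to sketch (the embedding model and records are illustrative, and the DSPy LLM-match step is elided): embed the records, take each record's nearest neighbours, and send only those candidate pairs to the expensive LLM matcher.

```python
# pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import NearestNeighbors

records = ["Acme Corp, NY", "ACME Corporation, New York", "Beta LLC, Boston"]
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(records)

# kNN blocking: each record is paired only with its nearest neighbours,
# so the number of LLM comparisons stays near-linear instead of quadratic.
nn = NearestNeighbors(n_neighbors=2).fit(embeddings)   # neighbours include self
_, idx = nn.kneighbors(embeddings)
pairs = {tuple(sorted((i, j))) for i, row in enumerate(idx) for j in row if i != j}
print(pairs)  # candidate pairs; an LLM matcher decides which truly co-refer
```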
barnard59 is a toolkit to automate extract, transform and load (ETL) tasks. It allows you to generate RDF out of non-RDF data sources
Reliability in data pipelines depends on knowing what went wrong before your users do. With the new OpenTelemetry integration in our RDF ETL framework barnard59, every pipeline and API integration is now fully traceable! Errors, validation results and performance metrics are automatically collected and visualised in Grafana. Instead of hunting through logs, you immediately see where time was spent and where an error occurred. This makes RDF-based ETL pipelines far more transparent and easier to operate at scale.
·linkedin.com·
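barnard59 itself is a JavaScript toolkit, so the sketch below is a language-neutral illustration, in Python with the OpenTelemetry SDK, of what per-step spans provide: each pipeline stage gets its own timed, error-carrying span that a Grafana-backed collector can visualize, instead of leaving you hunting through logs.

```python
# pip install opentelemetry-sdk; a console exporter stands in for the
# Grafana-backed collector mentioned in the post.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("etl-pipeline")

with tracer.start_as_current_span("pipeline"):
    with tracer.start_as_current_span("extract-csv"):
        rows = [{"id": 1}]                               # stand-in for a reader step
    with tracer.start_as_current_span("transform-to-rdf"):
        triples = [(row["id"], "rdf:type", "ex:Thing") for row in rows]
    # Each span automatically records its duration and any raised exception,
    # which is where the per-step timings and error locations come from.
```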
From raw data to a knowledge graph with SynaLinks
SynaLinks is an open-source framework designed to make it easier to pair language models (LMs) with your graph technologies. Since most companies are not in a position to train their own language models from scratch, SynaLinks lets you adapt existing LMs on the market to specialized tasks.
·gdotv.com·