GraphNews

Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Query Storage, Outperforming MemGPT with 94.8% Accuracy
🎁⏳ Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Query Storage, outperforming MemGPT with 94.8% accuracy. Build Personalized AI…
·linkedin.com·
Synalinks is an open-source framework designed to streamline the creation, evaluation, training, and deployment of industry-standard Language Model (LM) applications
🎉 We're thrilled to unveil Synalinks (🧠🔗), an open-source framework designed to streamline the creation, evaluation, training, and deployment of…
·linkedin.com·
GiGL: Large-Scale Graph Neural Networks at Snapchat
Recent advances in graph machine learning (ML) with the introduction of Graph Neural Networks (GNNs) have led to a widespread interest in applying these approaches to business applications at...
·arxiv.org·
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
🏆🚣 MiniRAG introduces near-LLM-accurate RAG for small language models with just 25% of the storage, achieved via a semantic-aware heterogeneous graph…
·linkedin.com·
Announcing QLeverize: The Future of Open-Source Knowledge Graphs at Unlimited Scale
Biel/Bienne, Switzerland – February 24, 2025 – Knowledge graphs are becoming critical infrastructure for enterprises handling large-scale, interconnected data. Yet many existing solutions struggle with scalability, performance, and cost, forcing organizations into proprietary ecosystems with high operational costs.
·linkedin.com·
What makes an ontology fail? 9 reasons
What makes an ontology fail? 9 reasons. At the inauguration of SCOR (Swiss Center for Ontological Research), I had the opportunity to speak alongside Barry…
·linkedin.com·
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval. This multi-granular graph framework uses PageRank and a keyword-chunk graph to hit the best cost-quality tradeoff.

The problem: knowledge graphs are expensive (and clunky). AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
- Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
- Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.

The fix: KET-RAG's two-layer design merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system:
- Layer 1, the knowledge graph skeleton: uses PageRank to find core text chunks (like "vaccine side effects" in medical docs), then builds a sparse graph only on those chunks with LLMs, saving 80% of indexing costs.
- Layer 2, the keyword-chunk bipartite graph: links keywords (e.g., "myocarditis") to all related text snippets with no LLM needed, acting as a "fast lane" for retrieving context without expensive entity extraction.

The results: beating Microsoft's Graph-RAG for pennies. On the HotpotQA and MuSiQue benchmarks, KET-RAG retrieves 81.6% of critical information vs. Microsoft's 74.6% at 10x lower cost, boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%, and scales to terabytes of data without melting budgets. Think of it as a Tesla Model 3 outperforming a Lamborghini at a tenth of the price.

Why AI agents need this: agents aren't just chatbots; they're problem solvers for medicine, law, and customer service. KET-RAG gives them real-time multi-hop reasoning (connecting "drug A → gene B → side effect C" in milliseconds), cost-effective scalability across millions of documents, and adaptability, mixing precise knowledge graphs for critical data with keyword maps for speed.
·linkedin.com·
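To make the two-layer split concrete, here is a minimal Python sketch of the idea, not the paper's implementation: a TF-IDF KNN graph stands in for the chunk-similarity graph, PageRank selects the skeleton chunks that receive (stubbed) LLM extraction, and a keyword-to-chunk index provides the LLM-free second layer. `chunks`, `llm_extract_triples`, and the parameter values are illustrative assumptions.

```python
from collections import defaultdict

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def llm_extract_triples(text):
    # Hypothetical stand-in for an LLM-based entity/relation extractor;
    # a real system would return (head, relation, tail) triples.
    return []


def build_indexes(chunks, skeleton_fraction=0.2, knn=5):
    # Cheap KNN similarity graph over all text chunks (no LLM involved).
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(chunks)
    sim = cosine_similarity(tfidf)
    g = nx.Graph()
    g.add_nodes_from(range(len(chunks)))
    for i in range(len(chunks)):
        for j in sim[i].argsort()[::-1][1:knn + 1]:
            g.add_edge(i, int(j), weight=float(sim[i][j]))

    # Layer 1: PageRank picks the skeleton chunks that earn LLM extraction,
    # so only a small fraction pays the expensive knowledge-graph cost.
    ranked = nx.pagerank(g, weight="weight")
    k = max(1, int(len(chunks) * skeleton_fraction))
    skeleton = sorted(ranked, key=ranked.get, reverse=True)[:k]
    knowledge_graph = {i: llm_extract_triples(chunks[i]) for i in skeleton}

    # Layer 2: keyword -> chunk bipartite index, built without any LLM calls.
    keyword_index = defaultdict(set)
    for i, chunk in enumerate(chunks):
        for token in set(chunk.lower().split()):
            keyword_index[token].add(i)

    return knowledge_graph, keyword_index
```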
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic reasoning graphs + LLMs = 🤝

Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning. What if they could dynamically restructure their thought process like humans do? A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs). Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways. This is crucial for fields like scientific research or legal analysis, where problems demand non-linear, nested reasoning.

The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly. This mirrors how experts allocate mental effort, drilling into uncertainties while streamlining obvious steps. The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning. By unifying chain, tree, and graph paradigms, AGoT retains CoT's clarity, ToT's exploration, and GoT's flexibility without manual tuning.

The result? LLMs that self-adapt their reasoning depth based on problem complexity, with no architectural changes needed. For AI practitioners, AGoT's DAG structure offers a principled interface for scaling reasoning modularly.
·linkedin.com·
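The control flow is easier to see in code. Below is an illustrative sketch of the recursive expand-or-answer loop, not the paper's implementation: `llm` is a hypothetical prompt-to-text callable, the prompts are assumptions, and subtasks are chained linearly where the real framework builds richer DAG dependencies.

```python
import networkx as nx


def agot_solve(llm, task, depth=0, max_depth=3):
    # Complexity check: simple (or depth-capped) nodes are resolved directly.
    verdict = llm(f"Is this task simple enough to answer directly? Yes/No: {task}")
    if depth >= max_depth or verdict.strip().lower().startswith("yes"):
        return llm(f"Answer directly: {task}")

    # Complex nodes spawn a sub-graph of subtasks.
    subtasks = llm(f"Decompose into 2-4 ordered subtasks, one per line: {task}")
    dag = nx.DiGraph()
    for i, sub in enumerate(subtasks.splitlines()):
        dag.add_node(i, task=sub.strip())
        if i > 0:
            dag.add_edge(i - 1, i)  # linear dependencies, for brevity

    # Resolve nodes in dependency order, feeding earlier answers forward.
    answers = {}
    for node in nx.topological_sort(dag):
        context = " ".join(answers[p] for p in dag.predecessors(node))
        subtask = f"{context} {dag.nodes[node]['task']}".strip()
        answers[node] = agot_solve(llm, subtask, depth + 1, max_depth)

    return llm(f"Combine these partial answers for '{task}': {list(answers.values())}")
```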
yFiles Jupyter Graphs for SPARQL: the open-source adapter for working with RDF databases
📣 Hey Semantic Web/SPARQL/RDF/OWL/knowledge graph community: finally! We heard you! I just got this fresh from the dev kitchen: 🎉 try our free SPARQL query result visualization widget for Jupyter Notebooks!

Based on our popular generic graph visualization widget for Jupyter, this widget makes it super convenient to add beautiful graph visualizations of your SPARQL queries to your Jupyter Notebooks. Check out the example notebooks for Google Colab in the GitHub repo: https://lnkd.in/e8JP-eiM ✨

This is a pre-1.0 release but already quite capable, as it builds on the well-tested generic widget. We are looking to get your feedback on the features for the final release, so please do take a look and let me know here, or tell us on GitHub! What features are you missing? What do you like best about the widget? Let me know in the comments and I'll talk to the devs 😊

#sparql #rdf #owl #semanticweb #knowledgegraphs #visualization
GitHub - yWorks/yfiles-jupyter-graphs-for-sparql: The open-source adapter for working with RDF databases
·linkedin.com·
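For context, this is the kind of query result the widget visualizes. The sketch below uses only standard rdflib calls to run a SPARQL query in a notebook; the widget's own import and constructor are deliberately left out, since its exact API should be taken from the example notebooks in the yWorks/yfiles-jupyter-graphs-for-sparql repo.

```python
from rdflib import Graph

g = Graph()
# Any public RDF document works; this is a classic FOAF example.
g.parse("https://www.w3.org/People/Berners-Lee/card")

# The widget renders result sets like this as an interactive node-link diagram.
results = g.query("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 25
""")
for s, p, o in results:
    print(s, p, o)
```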
RDF-to-Gephi
I have never been a fan of the "bubbles and arrows" kind of graph visualization; it is generally useless. But when you can see the entire graph and tune the rendering, you start understanding its topology and structure, and ultimately you can tell a story with your graph (and that's what we all love: stories). Gephi is a graph visualization tool for telling these sorts of stories with graphs, and it has been around for 15 (20?) years.

Interestingly, while quite a number of Gephi plugins exist to load data (including from neo4j), no decent working plugin exists to load RDF data (yes, there was a "SemanticWebImport" plugin, but it looks outdated, has old documentation, and does not work with the latest version of Gephi, 0.10). This doesn't say anything good about the semantic knowledge graph community.

A few weeks ago I literally stumbled upon an old project we developed in 2017 to convert RDF graphs into the GEXF format that can be loaded into Gephi. Time for some serious cleaning, reengineering, and packaging! So here is v1.0.0 of the rebranded rdf2gephi utility! The tool runs as a command line that can read an RDF knowledge graph (from files or a SPARQL endpoint), execute a set of SPARQL queries, and turn the results into a set of nodes and edges in a GEXF file. rdf2gephi provides default queries to run a simple conversion without any parameters, but most of the time you will want to tune how your graph is turned into GEXF nodes and edges (for example, in my case, `org:Membership` entities relating `foaf:Person` instances to `org:Organization` instances are turned into edges rather than nodes, and I want to ignore some other entities).

And then what? Then you can load the GEXF file into Gephi and run a few operations to showcase your graph (see the little screencast video I recorded): run a layout algorithm, color nodes based on their rdf:type or another attribute you converted, size them according to their (in-)degree, detect clusters with a modularity algorithm, and so on, then export as SVG, PNG, or another format. Also, one of the cool features of the GEXF format is dynamic graphs, where each node and edge can be associated with a date range. You can then watch your graph evolve through time, like a movie! I hope to tell a more concrete Gephi-powered, RDF-backed graph story in a future post!
·linkedin.com·
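The core RDF-to-GEXF conversion is a short pipeline. Here is a minimal sketch of the idea using rdflib and networkx, not the rdf2gephi tool itself; the file names and the single SPARQL query are illustrative assumptions (rdf2gephi drives the same step with a tunable set of queries).

```python
import networkx as nx
from rdflib import Graph

rdf = Graph()
rdf.parse("data.ttl")  # hypothetical input; rdf2gephi also reads SPARQL endpoints

g = nx.DiGraph()
# Keep only resource-to-resource statements; a fuller conversion would map
# literals to node attributes instead of dropping them.
for s, p, o in rdf.query(
    "SELECT ?s ?p ?o WHERE { ?s ?p ?o . FILTER(isIRI(?o)) }"
):
    g.add_edge(str(s), str(o), label=str(p))

# Gephi opens this file directly: run a layout, color by attribute, export.
nx.write_gexf(g, "graph.gexf")
```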
Specifications to define data assets managed as products
📚 In recent years, several specifications have emerged to define data assets managed as products. Today, two main types of specification exist:
1️⃣ Data Contract Specification (DCS): focused on describing the data asset and its associated metadata.
2️⃣ Data Product Specification (DPS): focused on describing the data product that manages and exposes the data asset.
👉 The Open Data Contract Standard (ODCS) by Bitol is an example of the first type, while the Data Product Descriptor Specification (DPDS) by the Open Data Mesh Initiative represents the second.
🤔 But what are the key differences between these two approaches? Where do they overlap, and how can they complement each other? More broadly, are they friends, enemies, or frenemies?
🔎 I explored these questions in my latest blog post; if you're curious about the full reasoning, read the post. ❤️ I'd love to hear your thoughts!
#TheDataJoy #DataContracts #DataProducts #DataGovernance
·linkedin.com·
SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex Reasoning over Knowledge Graphs
LLMs that automatically fill knowledge gaps: too good to be true?

Large Language Models (LLMs) often stumble in logical tasks due to hallucinations, especially when relying on incomplete knowledge graphs (KGs). Current methods naively trust KGs as exhaustive truth sources, a flawed assumption in real-world domains like healthcare or finance where gaps persist. SymAgent is a new framework that approaches this problem by making KGs active collaborators, not passive databases. Its dual-module design combines symbolic logic with neural flexibility:
1. The Agent-Planner extracts implicit rules from KGs (e.g., "if drug X interacts with Y, avoid co-prescription") to decompose complex questions into structured steps.
2. The Agent-Executor dynamically pulls external data when KG triples are missing, bypassing the "static repository" limitation.

Perhaps most impressively, SymAgent's self-learning component observes failed reasoning paths to iteratively refine its strategy and flag missing KG connections, achieving 20-30% accuracy gains over raw LLMs. Equipped with SymAgent, even 7B models rival their much larger counterparts by leveraging this closed-loop system.

It would be great if LLMs were able to autonomously curate knowledge and adapt to domain shifts without costly retraining. But are we there yet? Are hybrid architectures like SymAgent the future?
·linkedin.com·
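The dual-module loop is simple to outline. Below is an illustrative sketch in the spirit of the planner/executor split described above, not the paper's code: `llm`, `kg_lookup`, and `external_search` are hypothetical stand-ins for the framework's components.

```python
def symagent_answer(llm, kg_lookup, external_search, question):
    # Agent-Planner: decompose the question into structured lookup steps,
    # guided by rules implicit in the knowledge graph schema.
    plan = llm(f"Decompose into KG lookup steps, one per line: {question}")

    facts, gaps = [], []
    for step in plan.splitlines():
        triples = kg_lookup(step.strip())
        if triples:
            facts.extend(triples)
        else:
            # Agent-Executor: the KG is incomplete here, so pull external
            # data and flag the gap for later KG curation.
            facts.extend(external_search(step.strip()))
            gaps.append(step.strip())

    answer = llm(f"Answer '{question}' using these facts: {facts}")
    return answer, gaps  # gaps feed the self-learning / KG-completion loop
```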
key ontology standards
What are the key ontology standards you should have in mind? Ontology standards are crucial for knowledge representation and reasoning in AI and data…
·linkedin.com·