Introducing the GitLab Knowledge Graph
Today, I'd like to introduce the GitLab Knowledge Graph. This release includes a code indexing engine, written in Rust, that turns your codebase into a live, embeddable graph database for LLM RAG. You can install it with a simple one-line script, parse local repositories directly in your editor, and connect via MCP to query your workspace (over 50,000 files) in under 100 milliseconds.

We also saw GKG agents score up to 10% higher on the SWE-bench Lite benchmark, with just a few tools and a small prompt added to opencode (an open-source coding agent). On average, we observed a 7% accuracy gain across our eval runs, and GKG agents solved tasks that the baseline agents could not. You can read more from the team's research here: https://lnkd.in/egiXXsaE

This release is just the first step: we aim for this local version to serve as the backbone of a Knowledge Graph service that lets you query the entire GitLab Software Development Life Cycle, from an issue down to a single line of code. I am incredibly proud of the work the team has done. Thank you, Michael U., Jean-Gabriel Doyon, Bohdan Parkhomchuk, Dmitry Gruzd, Omar Qunsul, and Jonathan Shobrook.

You can watch Bill Staples and me present this and more in the GitLab 18.4 release: https://lnkd.in/epvjrhqB
Try it today at: https://lnkd.in/eAypneFA
Roadmap: https://lnkd.in/eXNYQkEn
Watch more below for a complete, in-depth tutorial on what we've built.
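Purely as an illustration of the idea behind a code knowledge graph (files, symbols, and typed relationships an agent can traverse instead of matching text chunks), here is a toy Python sketch. The schema, node names, and query are assumptions made for this example; they are not GKG's data model or API.

```python
# Toy code knowledge graph: files and symbols as nodes, "defines"/"calls" as
# typed edges, and a traversal query an agent could run.
# The schema here is invented for illustration; it is not GKG's data model.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("src/parser.rs", kind="file")
g.add_node("src/indexer.rs", kind="file")
g.add_edge("src/parser.rs", "Parser::parse", relation="defines")
g.add_edge("src/indexer.rs", "Indexer::run", relation="defines")
g.add_edge("Indexer::run", "Parser::parse", relation="calls")

def callers_of(symbol: str):
    """Yield symbols that call `symbol`, plus the file defining each caller."""
    for caller, _, data in g.in_edges(symbol, data=True):
        if data.get("relation") != "calls":
            continue
        defining_files = [f for f, _, d in g.in_edges(caller, data=True)
                          if d.get("relation") == "defines"]
        yield caller, defining_files

print(list(callers_of("Parser::parse")))
# [('Indexer::run', ['src/indexer.rs'])]
```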
introduce the GitLab Knowledge Graph
·linkedin.com·
Introducing the GitLab Knowledge Graph
GraphSearch: An Agentic Deep‑Search Workflow for Graph Retrieval‑Augmented Generation
Why Current AI Search Falls Short When You Need Real Answers
What happens when you ask an AI system a complex question that requires connecting multiple pieces of information? Most current approaches retrieve some relevant documents, generate an answer, and call it done. But this single-pass strategy often misses critical evidence.

👉 The Problem with Shallow Retrieval
Traditional retrieval-augmented generation (RAG) systems work like a student who only skims the first few search results before writing an essay. They grab what seems relevant on the surface but miss deeper connections that would lead to better answers. When researchers tested these systems on complex multi-hop questions, they found a consistent pattern: the AI would confidently provide answers based on incomplete evidence, leading to logical gaps and missing key facts.

👉 A New Approach: Deep Searching with Dual Channels
Researchers from IDEA Research and Hong Kong University of Science and Technology developed GraphSearch, which works more like a thorough investigator than a quick searcher. The system breaks down complex questions into smaller, manageable pieces, then searches through both text documents and structured knowledge graphs. Think of it as having two different research assistants: one excellent at finding descriptive information in documents, the other skilled at tracing relationships between entities.

👉 How It Actually Works
Instead of one search-and-answer cycle, GraphSearch uses six coordinated modules:
- Query decomposition splits complex questions into atomic sub-questions
- Context refinement filters out noise from retrieved information
- Query grounding fills in missing details from previous searches
- Logic drafting organizes evidence into coherent reasoning chains
- Evidence verification checks whether the reasoning holds up
- Query expansion generates new searches to fill identified gaps
The system continues this process until it has sufficient evidence to provide a well-grounded answer (a sketch of this loop appears below).

👉 Real Performance Gains
Testing across six different question-answering benchmarks showed consistent improvements. On the MuSiQue dataset, for example, answer accuracy jumped from 35% to 51% when GraphSearch was integrated with existing graph-based systems. The approach works particularly well under constrained conditions: when you have limited computational resources for retrieval, the iterative searching strategy maintains performance better than single-pass methods.

This research points toward more reliable AI systems that can handle the kind of complex reasoning we actually need in practice.

Paper: "GraphSearch: An Agentic Deep Searching Workflow for Graph Retrieval-Augmented Generation" by Yang et al.
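To make the six-module pipeline concrete, here is a minimal Python sketch of an iterative deep-search loop in the spirit of what the post describes. The `llm` and `retrieve` helpers, prompts, and control flow are placeholders assumed for illustration; they are not the paper's actual implementation.

```python
# Minimal sketch of an iterative deep-search loop in the spirit of GraphSearch.
# `llm(prompt)` and `retrieve(query)` are assumed placeholder callables.

def graph_search(question: str, llm, retrieve, max_rounds: int = 3) -> str:
    # 1. Query decomposition: split the question into atomic sub-questions.
    subqueries = llm(f"Decompose into atomic sub-questions: {question}").splitlines()
    evidence: list[str] = []

    for _ in range(max_rounds):
        for sq in subqueries:
            # 3. Query grounding: fill in details learned from earlier evidence.
            grounded = llm(f"Rewrite '{sq}' using known facts: {evidence}")
            # Dual-channel retrieval: text chunks plus knowledge-graph triples.
            hits = retrieve(grounded)
            # 2. Context refinement: keep only passages relevant to the question.
            evidence += [h for h in hits
                         if llm(f"Relevant to '{question}'? {h}") == "yes"]

        # 4. Logic drafting: organize the evidence into a reasoning chain.
        draft = llm(f"Draft a reasoning chain for '{question}' from: {evidence}")
        # 5. Evidence verification: check whether every step is supported.
        if llm(f"Is every step supported by the evidence? {draft}") == "yes":
            break
        # 6. Query expansion: generate new searches for the identified gaps.
        subqueries = llm(f"List follow-up queries for the gaps in: {draft}").splitlines()

    return llm(f"Answer '{question}' using only: {evidence}")
```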
GraphSearch: An Agentic Deep‑Search Workflow for Graph Retrieval‑Augmented Generation
·linkedin.com·
GraphSearch: An Agentic Deep‑Search Workflow for Graph Retrieval‑Augmented Generation
Product management makes or breaks AI. The role of graph
Product management makes or breaks AI. That includes 𝐝𝐚𝐭𝐚. The role of 𝐝𝐚𝐭𝐚 𝐏𝐌 is shifting. For years, the focus was BI - dashboards, reports, warehouses. But AI demands more: context, retrieval, real time, and integration into the flow of work. Data PMs who understand AI requirements will define the next generation of enterprise success. Here’s how my team thinks about BI-ready vs AI-ready data 👇
Product management makes or breaks AI.
·linkedin.com·
Product management makes or breaks AI. The role of graph
WebKnoGraph: THE FIRST FULLY transparent, AI-driven framework for improving SEO and site navigation through reproducible methods.
I had my first presentation at SEO Wiesn (SEO conference at SIXT Munich) yesterday and WOW, what an experience it has been! This is not a sales pitch, nor a product demo: we're talking about an open-source project that is rooted in science, yet applicable in practical industry scenarios, as already tested. No APIs, no vendor lock-in, no tricks. It's our duty as SEOs to produce NEW INSIGHTS, not just rewrite stuff, digest information, or promote ourselves.

Big thanks go to our sponsors WordLift and Kalicube for supporting this research and believing in me and my team to deliver WebKnoGraph: THE FIRST FULLY transparent, AI-driven framework for improving SEO and site navigation through reproducible methods. We plan on deepening this research and iterating with additional industry and research partners. If you'd like to try this on your website, DM me.

Full project repo: https://lnkd.in/d-dvHiCc. A scientific paper will follow. More pics and a detailed retrospective with the amazing crew will be shared in the upcoming days too 💙💙💙 Until then, have a sneak peek at the deck. SEO WIESN TO THE WIIIIIIN!
WebKnoGraph: THE FIRST FULLY transparent, AI-driven framework for improving SEO and site navigation through reproducible methods.
·linkedin.com·
WebKnoGraph: THE FIRST FULLY transparent, AI-driven framework for improving SEO and site navigation through reproducible methods.
LDBC to GDC: A Landmark Shift in the Graph World — GraphGeeks
The Linked Data Benchmark Council has, for over a decade, been the quiet force behind much of the progress in graph technology. Their mission is deceptively simple: to design, maintain, and promote standard benchmarks for graph data management systems. Read about the recent meeting and the announcement.
·graphgeeks.org·
LDBC to GDC: A Landmark Shift in the Graph World — GraphGeeks
ApeRAG: a production-ready RAG that combines Graph RAG, vector search, and full-text search
ApeRAG: a production-ready RAG that combines Graph RAG, vector search, and full-text search. Looks pretty cool. There are a lot of use cases where a "knowledge graph" would help a lot. I still think this is one of the most powerful and easiest ways to understand "connections" and "hierarchy". 🔤 GitHub: https://lnkd.in/gdYuShgX
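As a rough illustration of what combining the three retrieval channels can look like (not ApeRAG's actual implementation), here is a sketch that fuses vector, full-text, and graph-neighborhood rankings with reciprocal rank fusion; the three retriever functions are assumed placeholders.

```python
# Sketch of hybrid retrieval: fuse ranked lists from vector search, full-text
# search, and graph-neighborhood expansion using reciprocal rank fusion (RRF).
# The retriever callables are placeholders, not ApeRAG's API.
from collections import defaultdict

def rrf_fuse(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def hybrid_search(query, vector_search, fulltext_search, graph_expand, top_k=10):
    vec_hits = vector_search(query)      # semantic similarity over embeddings
    text_hits = fulltext_search(query)   # exact keywords, BM25-style
    graph_hits = graph_expand(query)     # matched entities plus graph neighbors
    return rrf_fuse([vec_hits, text_hits, graph_hits])[:top_k]
```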
ApeRAG: a production-ready RAG that combines Graph RAG, vector search, and full-text search
·linkedin.com·
ApeRAG: a production-ready RAG that combines Graph RAG, vector search, and full-text search
The rise of Context Engineering
The field is evolving from Prompt Engineering, which treats context as a single, static string, to Contextual Engineering, which views context as a dynamic system of structured components (instructions, tools, memory, knowledge) orchestrated to solve complex tasks.

🔎 Nearly all innovation is a response to the primary limitation of Transformer models: the quadratic (O(n²)) computational cost of the self-attention mechanism as the context length (n) increases. The techniques for managing this challenge can be organized into three areas:

1. Context Generation & Retrieval (Sourcing Ingredients)
- Advanced Reasoning: Chain-of-Thought (CoT), Tree-of-Thoughts (ToT).
- External Knowledge: Advanced Retrieval-Augmented Generation (RAG) such as GraphRAG, which uses knowledge graphs for more structured retrieval.

2. Context Processing (Cooking the Ingredients)
- Refinement: Using the LLM to iterate on and improve its own output (Self-Refine).
- Architectural Changes: Exploring models beyond Transformers (e.g., Mamba) to escape the quadratic bottleneck.

3. Context Management (The Pantry System)
- Memory: Creating stateful interactions using hierarchical memory systems (e.g., MemGPT) that move information between the active context window and external storage (a minimal sketch of this idea follows below).
- Key distinction: RAG is stateless I/O to the world; memory is the agent's stateful internal history.

The most advanced applications integrate these pillars to create sophisticated agents, with an added layer of dynamic adaptation:
- Tool-Integrated Reasoning: Empowering LLMs to use external tools (APIs, databases, code interpreters) to interact with the real world.
- Multi-Agent Systems: Designing "organizations" of specialized LLM agents that communicate and collaborate to solve multi-faceted problems, mirroring the structure of human teams.
- Adaptive Context Optimization: Leveraging Reinforcement Learning (RL) to dynamically optimize context selection and construction for specific environments and tasks, ensuring efficient and effective performance.

Contextual Engineering is the emerging science of building robust, scalable, and stateful applications by systematically managing the flow of information to and from an LLM.
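To make the memory pillar concrete, here is a toy Python sketch of a hierarchical memory in the spirit of MemGPT: a bounded active window plus an external archive, with eviction and recall. This is an assumption-laden illustration of the concept, not MemGPT's actual design or API.

```python
# Toy hierarchical memory: a bounded "active context" plus an external archive.
# Overflowing items are evicted to the archive; recall pages matches back in.
# Illustrative sketch only, not MemGPT's implementation.
from collections import deque

class HierarchicalMemory:
    def __init__(self, window_size: int = 4):
        self.active = deque()           # what would sit inside the context window
        self.archive: list[str] = []    # external storage (here just a list)
        self.window_size = window_size

    def add(self, item: str) -> None:
        self.active.append(item)
        while len(self.active) > self.window_size:
            self.archive.append(self.active.popleft())  # evict oldest to archive

    def recall(self, keyword: str) -> list[str]:
        """Page archived items matching `keyword` back into the active window."""
        hits = [m for m in self.archive if keyword.lower() in m.lower()]
        for m in hits:
            self.add(m)
        return hits

    def context(self) -> str:
        return "\n".join(self.active)

mem = HierarchicalMemory(window_size=3)
for turn in ["user likes Rust", "project uses GraphRAG",
             "deadline is Friday", "user prefers short answers"]:
    mem.add(turn)
print(mem.recall("rust"))   # ['user likes Rust'] pulled back from the archive
```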
·linkedin.com·
The rise of Context Engineering
SHACL Practitioner pre-order
Help!! I just set the pre-order live! 😬 All of you who signed up for more information should have received an e-mail with the pre-order option by now. This is the scariest sh*t I've done in a long time. Seeing pre-orders ticking in for something I've created, together with the most amazing guest authors, is a super weird feeling. THANK YOU! ❤️

It's been quite a process. From an idea planted in 2022, to now seeing the light at the end of the tunnel. I've spent hours and hours inside TeXworks, nerding around with LaTeX and Ti𝘬Z. Numerous moments at pubs, while waiting for someone, to edit, edit, and edit. Taking vacation from work to isolate myself and write (so effective, btw!). Having a gold team of proofreaders providing super valuable feedback. Working with awesome SHACL practitioners to tell great SHACL stories to you! IT HAS BEEN GREAT FUN!

This week, I have been focusing on final touches (thank you Data Treehouse for letting me do this!!). Indexing like a hero. Soon the words, bits, and blobs will hit the printing press, and the first copies will ship on 𝐍𝐨𝐯𝐞𝐦𝐛𝐞𝐫 3𝐫𝐝. If you want to pre-order, head to https://lnkd.in/dER72USX for more information. All pre-orders will get a tiny SHACL surprise inside their book. 😇

Btw: the final product will probably not look like this; I got good help from our mutual friend ChatGPT, but I know it will be yellow at least. 💛
·linkedin.com·
SHACL Practitioner pre-order
Semantic Quality Is the Missing Risk Control in Financial AI and GraphRAG | LinkedIn
by Timothy Coleman and J Bittner. Picture this: an AI system confidently delivers a financial report, but it misclassifies $100M in assets as liabilities. Errors of this kind are already appearing in financial AI systems, and the stakes only grow as organizations adopt Retrieval-Augmented Generation.
·linkedin.com·
Semantic Quality Is the Missing Risk Control in Financial AI and GraphRAG | LinkedIn
Building a structured knowledge graph from Yelp data and training Graph Neural Networks to reason through connections
Everyone's talking about LLMs. I went a different direction 🧠

While everyone's building RAG systems with document chunking and vector search, I got curious about something else after Prof Alsayed Algergawy and his assistant Vishvapalsinhji Parmar's Knowledge Graphs seminar. What if the problem isn't just retrieval, but how we structure knowledge itself? 🤔

Traditional RAG's limitation: chop documents into chunks, embed them, and hope semantic search finds the right pieces. But what happens when you need to connect information across chunks? Or when relationships matter more than text similarity? 📄➡️❓

My approach: instead of chunking, I built a structured knowledge graph from Yelp data (220K+ entities, 555K+ relationships) and trained Graph Neural Networks to reason through connections. 🕸️ The attached visualization shows exactly why this works: information naturally exists as interconnected webs, not isolated chunks. 👇🏻

The difference in action:
⚡ Traditional RAG: "Find similar text about Italian restaurants" 🔍
⚡ My system: "Traverse user→review→business→category→location→hours and explain why" 🗺️

Result: 94% AUC-ROC with explainable reasoning paths. Ask "Find family-friendly Italian restaurants in Philadelphia open Sunday" and get answers that show exactly how the AI connected reviews mentioning kids, atmosphere ratings, location data, and business hours. 🎯

Why this matters: while others optimize chunking strategies, maybe we should question whether chunking is the right approach at all. Sometimes the breakthrough isn't better embeddings; it's fundamentally rethinking how we represent knowledge. 💡

Check my script here 🔗: https://lnkd.in/dwNcS5uM

The journey from that seminar to building this alternative has been incredibly rewarding. Excited to continue exploring how structured knowledge can transform AI systems beyond what traditional approaches achieve. ✨

#AI #MachineLearning #RAG #KnowledgeGraphs #GraphNeuralNetworks #NLP #DataScience
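As a rough sketch of the kind of multi-hop traversal described above (review→business→category→location→hours), here is a toy example over a typed graph in networkx. The schema and data are invented for illustration and are not the author's actual pipeline, which additionally trains a GNN over the full graph.

```python
# Toy multi-relational graph in the spirit of the Yelp example: answer
# "family-friendly Italian restaurants in Philadelphia open Sunday" by
# traversing typed edges instead of matching text chunks.
# Schema and data are invented for illustration only.
import networkx as nx

g = nx.MultiDiGraph()
g.add_edge("Trattoria Roma", "Italian", relation="in_category")
g.add_edge("Trattoria Roma", "Philadelphia", relation="located_in")
g.add_edge("Trattoria Roma", "Sunday", relation="open_on")
g.add_edge("review_42", "Trattoria Roma", relation="reviews")
g.add_node("review_42", text="Great with kids, relaxed atmosphere")

def matches(business: str) -> bool:
    # Outgoing typed edges: category, location, opening day.
    rels = {(d["relation"], v) for _, v, d in g.out_edges(business, data=True)}
    in_scope = {("in_category", "Italian"),
                ("located_in", "Philadelphia"),
                ("open_on", "Sunday")} <= rels
    # Family-friendliness comes from reviews pointing at the business.
    family = any("kids" in g.nodes[r].get("text", "").lower()
                 for r, _, d in g.in_edges(business, data=True)
                 if d["relation"] == "reviews")
    return in_scope and family

print([b for b in ["Trattoria Roma"] if matches(b)])   # ['Trattoria Roma']
```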
·linkedin.com·
Building a structured knowledge graph from Yelp data and training Graph Neural Networks to reason through connections
1...2...3 Knowledge Graph!!!
1...2...3 Knowledge Graph!!! 🧠 Build Knowledge Graphs in Minutes: No Code, All Logic

What if deploying a knowledge graph was as easy as loading a CSV? With Knowledge Mapper, it is. We've built a no-code intelligent assistant that empowers business users, not just data scientists, to create and deploy knowledge graphs using real enterprise data and ontologies.

🔍 What Knowledge Mapper does:
- Opens any ontology
- Reads structured data (CSV, SQL)
- Guides users through a graphical mapping process
- Deploys to Amazon Neptune or GraphDB

💬 Powered by LLMs, it:
- Assists in mapping with natural language
- Translates business questions into SPARQL (see the sketch below)
- Visualizes queries graphically
- Suggests additional queries for testing and exploration

🌐 Whether you're working with Semantic Web technologies, implementing FAIR data principles, or trying to bridge enterprise architecture with business logic, Knowledge Mapper is designed for you. It connects to multiple SQL and RDF backends, making it a versatile tool for data governance, semantic integration, and ontology-driven architectures.

⚡ Our 3-step process (Load, Map, Deploy) lets you build and launch a knowledge graph in under 10 minutes.

📊 Want to see how semantic technologies can drive real business value? Ready to make FAIR data actionable across your organization? Let's make knowledge graphs a business asset, not just a technical artifact.

#SemanticWeb #KnowledgeGraph #FAIRData #Ontologies #DataGovernance #EnterpriseArchitecture #NoCode #LLM #SPARQL #GraphDB #AmazonNeptune #LinkedData #AI #KnowledgeMapper
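As an illustration of the "business question to SPARQL" step (not Knowledge Mapper's actual output), here is a minimal sketch. The endpoint URL, `ex:` prefix, and property names are assumptions made for this example; a real deployment would use the ontology loaded into the tool and the target Neptune or GraphDB repository.

```python
# Sketch of the "business question -> SPARQL -> graph store" step.
# Endpoint URL, prefix, and property names are assumed for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

# Business question: "Which customers placed orders over 10,000 EUR in 2024?"
query = """
PREFIX ex: <http://example.org/ontology#>
SELECT ?customerName (SUM(?amount) AS ?total)
WHERE {
  ?order a ex:Order ;
         ex:placedBy ?customer ;
         ex:amount ?amount ;
         ex:orderDate ?date .
  ?customer ex:name ?customerName .
  FILTER (?amount > 10000 && YEAR(?date) = 2024)
}
GROUP BY ?customerName
"""

# Hypothetical local GraphDB repository; swap in your own endpoint.
sparql = SPARQLWrapper("http://localhost:7200/repositories/knowledge-graph")
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["customerName"]["value"], row["total"]["value"])
```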
1...2...3 Knowledge Graph!!!
·linkedin.com·
1...2...3 Knowledge Graph!!!
Who is going to win the battle of the knowledge graph? RDF* or LPGs or XBRL?
Who is going to win the battle of the knowledge graph? RDF* or LPGs or XBRL? Is XBRL even in the running? Those are actually the wrong questions to be asking. The right question is, "What useful things can I do with these interesting technologies?" Another good question is, "How will things change because of knowledge graphs?" This is what I am going to do: Universal Global Open Standard for Digital Accounting and Audit Working Papers https://lnkd.in/gtUhquRt
Who is going to win the battle of the knowledge graph? RDF* or LPGs or XBRL?
·linkedin.com·
Who is going to win the battle of the knowledge graph? RDF* or LPGs or XBRL?
A simple one pager on LLMs, Knowledge Graphs, Ontologies (what is) | LinkedIn
This is a very simple post, but if you are confused about LLMs, Knowledge Graphs, and Ontologies, if you have questions like "what is a knowledge graph?", "can LLMs do it all?" or "do we still need ontologies?", I hope this post can bring some simple but fundamental orientation. Warning: this is not a tre
·linkedin.com·
A simple one pager on LLMs, Knowledge Graphs, Ontologies (what is) | LinkedIn