GraphNews

Building a Biomedical GraphRAG: When Knowledge Graphs Meet Vector Search

I built a RAG system for biomedical research that uses both vector search and knowledge graphs.

Turns out, you need both.

Vector databases, such as Qdrant, are excellent at handling semantic similarity, but they struggle with relationship queries.

𝐓𝐡𝐞 𝐢𝐬𝐬𝐮𝐞: Author networks, citations, and institutional collaborations aren't semantic similarities. They're structured relationships that don't live in embeddings.

𝐓𝐡𝐞 𝐡𝐲𝐛𝐫𝐢𝐝 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡

I combined Qdrant for semantic retrieval with Neo4j for relationship queries, using OpenAI's tool-calling to orchestrate between them.

The workflow:

1️⃣ User asks a question
2️⃣ Qdrant retrieves semantically relevant papers
3️⃣ The LLM analyzes the query and decides which graph enrichment tools to call
4️⃣ Neo4j returns structured relationship data
5️⃣ Both sources combine into one answer
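The post doesn’t include code, but a minimal sketch of that workflow, assuming Qdrant for semantic retrieval, Neo4j for graph queries, and OpenAI tool-calling for orchestration, might look like the following. The collection name, payload field, tool definition, and model names are illustrative assumptions, not the author’s actual implementation.

```python
# Sketch only: hybrid retrieval with Qdrant (semantic) + Neo4j (graph) + OpenAI tool-calling.
# The "papers" collection, "abstract" payload field, and find_collaborators tool are hypothetical.
import json

from openai import OpenAI
from qdrant_client import QdrantClient
from neo4j import GraphDatabase

llm = OpenAI()
qdrant = QdrantClient("localhost", port=6333)
graph = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

TOOLS = [{
    "type": "function",
    "function": {
        "name": "find_collaborators",
        "description": "Return co-authors of a researcher with shared-paper counts.",
        "parameters": {
            "type": "object",
            "properties": {"author": {"type": "string"}},
            "required": ["author"],
        },
    },
}]

def find_collaborators(author: str) -> list[dict]:
    # Pre-built, trusted Cypher: the LLM chooses to call this tool, it never writes Cypher.
    cypher = (
        "MATCH (a:Author {name: $author})-[:AUTHORED]->(:Paper)<-[:AUTHORED]-(c:Author) "
        "WHERE c <> a "
        "RETURN c.name AS collaborator, count(*) AS papers ORDER BY papers DESC LIMIT 10"
    )
    records, _, _ = graph.execute_query(cypher, author=author)
    return [r.data() for r in records]

def answer(question: str) -> str:
    # Steps 1-2: embed the question and pull semantically similar papers from Qdrant.
    vec = llm.embeddings.create(model="text-embedding-3-small", input=question).data[0].embedding
    hits = qdrant.search(collection_name="papers", query_vector=vec, limit=5)
    context = "\n\n".join(h.payload["abstract"] for h in hits)

    # Step 3: let the LLM decide whether graph enrichment is needed.
    messages = [{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}]
    first = llm.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
    msg = first.choices[0].message
    if not msg.tool_calls:
        return msg.content

    # Steps 4-5: run the selected graph tools and combine both sources into one answer.
    messages.append(msg)
    for call in msg.tool_calls:
        if call.function.name == "find_collaborators":
            result = find_collaborators(**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
    final = llm.chat.completions.create(model="gpt-4o", messages=messages)
    return final.choices[0].message.content
```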

Run against the hybrid system, the same query returns four specific collaborators with paper counts, plus the relevant research context.

𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 𝐧𝐨𝐭𝐞𝐬

  • I initially tried having the LLM generate Cypher queries directly, but tool-calling worked much better: the LLM decides which pre-built tool to call, the tools themselves contain reliable Cypher queries, and LLMs are not yet good enough at generating Cypher on their own (see the sketch after these notes).

  • For domains with complex relationships, such as biomedical research, legal documents, and enterprise knowledge, combining vector search with knowledge graphs gives you capabilities neither has alone.
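To make the first note concrete, here is one way the pre-built tools could be organized: each tool name maps to a fixed, parameterized Cypher query, so the LLM only ever picks a tool and fills in its arguments. The tool names, labels, and queries below are hypothetical, not taken from the post.

```python
# Sketch: pre-built graph tools as fixed, parameterized Cypher queries.
# Tool names, node labels, and relationship types are invented for illustration.
from neo4j import GraphDatabase

graph = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

GRAPH_TOOLS = {
    "find_collaborators": (
        "MATCH (a:Author {name: $author})-[:AUTHORED]->(:Paper)<-[:AUTHORED]-(c:Author) "
        "WHERE c <> a "
        "RETURN c.name AS collaborator, count(*) AS papers ORDER BY papers DESC"
    ),
    "papers_citing": (
        "MATCH (p:Paper {title: $title})<-[:CITES]-(citing:Paper) "
        "RETURN citing.title AS title, citing.year AS year ORDER BY year DESC"
    ),
    "institution_collaborations": (
        "MATCH (i:Institution {name: $institution})<-[:AFFILIATED_WITH]-(:Author)"
        "-[:AUTHORED]->(:Paper)<-[:AUTHORED]-(:Author)-[:AFFILIATED_WITH]->(other:Institution) "
        "WHERE other <> i "
        "RETURN other.name AS partner, count(*) AS joint_papers ORDER BY joint_papers DESC"
    ),
}

def run_graph_tool(name: str, **params) -> list[dict]:
    """Execute the trusted Cypher behind whichever tool the LLM selected via tool-calling."""
    records, _, _ = graph.execute_query(GRAPH_TOOLS[name], **params)
    return [r.data() for r in records]

# e.g. run_graph_tool("find_collaborators", author="Jane Doe")
```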

https://www.linkedin.com/posts/activity-7397237155716063232-0pku/

·aiechoes.substack.com·
The point of semantic modeling is to capture all of the detail, all of the knowledge, all of the information that becomes available to us
The point of semantic modeling is to capture all of the detail, all of the knowledge, all of the information that becomes available to us. Here is an example of an extensive semantic model: an ontology plus taxonomies for greater depth. This is quite a comprehensive semantic model when you consider that it’s supported by nearly 100 sets of definitions and descriptions. The model describes knowledge of many different terms, along with an understanding of how those terms are defined, described, and interrelated.

When it becomes difficult to absorb all at once, view it in layers:
- Begin with the simple knowledge graph: understand the nodes and the edges, the illustration of things and the relationships among them.
- Then view the property graph to understand the facts that can be known about each thing and each relationship.
- Finally, extend it to include taxonomies to see classes and subclasses.

Another approach to layering might begin with the knowledge graph showing things and relationships, then add entity taxonomies to understand classes and subclasses of entities, and finally extend it to see properties and property taxonomies.

Don’t shy away from large or complex models! Simply plan to manage that detail and complexity by layering and segmenting the diagram. This provides the ability to look at subsets of the model without losing the comprehensive view of enterprise semantics.

Graphic sourced from the ‘Architecture and Design for Data Interoperability’ course by Dave Wells. https://lnkd.in/gtqThWdX
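The post describes these layers in terms of a diagram; as a toy illustration in code (the entities, properties, and classes below are invented, not taken from the course graphic), the same three layers can be read off a small RDF graph:

```python
# Toy illustration of the three layers described above, using rdflib.
# The entities, properties, and classes here are invented for the example.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Layer 1 -- knowledge graph: things and the relationships among them
g.add((EX.AcmeCorp, EX.supplies, EX.WidgetLine))

# Layer 2 -- property graph view: facts that can be known about each thing
g.add((EX.AcmeCorp, EX.headquarteredIn, Literal("Rotterdam")))
g.add((EX.WidgetLine, EX.launchYear, Literal(2021)))

# Layer 3 -- taxonomies: classes and subclasses that the things belong to
g.add((EX.Supplier, RDFS.subClassOf, EX.Organization))
g.add((EX.AcmeCorp, RDF.type, EX.Supplier))

print(g.serialize(format="turtle"))
```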
·linkedin.com·
Most agentic systems hardcode their capabilities. This does not scale. Ontologies as executable metadata for the four core agent capabilities can solve this.
·linkedin.com·
Evaluate GraphDBs (the RAG angle)
As I’ve been diving deep into Graph RAG, one of my colleagues asked me to compare different graph databases. That got me thinking — it’s…
·medium.com·
Visualizing Knowledge Graphs
A practical guide to visualizing and exploring knowledge graphs (RDF/OWL and property graphs) with yFiles: predicate-aware analysis, schema vs. instance views, appropriate layouts, semantic styling, and interaction patterns like predicate filters and progressive disclosure.
Visualizing Knowledge Graphs: A Comprehensive Guide
·yfiles.com·