GraphNews

4343 bookmarks
Government Funding Graph RAG
Graph visualisation for UK Research and Innovation (UKRI) funding, using NetworkX, PyVis, and LlamaIndex graph retrieval-augmented generation (RAG)
·towardsdatascience.com·
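Before handing a funding graph to NetworkX or PyVis for visualisation, the data boils down to a bipartite edge list. A minimal sketch with invented funder/project names (not UKRI data, and not the article's actual code):

```python
from collections import defaultdict

# Hypothetical funder -> project award edges (illustrative only, not UKRI data)
awards = [
    ("EPSRC", "Quantum Sensors"),
    ("EPSRC", "Graph ML"),
    ("AHRC", "Digital Archives"),
    ("EPSRC", "Robotics Lab"),
]

# Undirected adjacency: funders and projects both become nodes
graph = defaultdict(set)
for funder, project in awards:
    graph[funder].add(project)
    graph[project].add(funder)

# Degree = number of connections; a crude proxy for hub nodes in the visualisation
degree = {node: len(neighbours) for node, neighbours in graph.items()}
top = max(degree, key=degree.get)
print(top, degree[top])  # EPSRC 3
```

From here, the same edge list feeds directly into `networkx.Graph(awards)` and a PyVis `Network` for the interactive view the article builds.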
Knowledge graphs to teach LLMs how to reason like doctors
Knowledge graphs to teach LLMs how to reason like doctors! Many medical LLMs can give you the right answer, but not the right reasoning, which is a problem for clinical trust. MedReason is the first factually-guided dataset for teaching LLMs clinical Chain-of-Thought (CoT) reasoning using medical knowledge graphs.
1. Created 32,682 clinically validated QA explanations by linking symptoms, findings, and diagnoses through PrimeKG.
2. Generated CoT reasoning paths using GPT-4o, but retained only those that produced correct answers during post-hoc verification.
3. Validated with physicians across 7 specialties, with expert preference for MedReason's reasoning in 80–100% of cases.
4. Enabled interpretable, step-by-step answers, such as linking difficulty walking to medulloblastoma via ataxia, preserving clinical fidelity throughout.
A couple of thoughts:
• Introducing dynamic KG updates (e.g., weekly ingests of new clinical trial data) could keep reasoning current with evolving medical knowledge.
• Could integrating visual KGs derived from DICOM metadata also help coherent reasoning across text and imaging inputs? We don't use DICOM metadata enough, frankly.
• Adding adversarial probing (such as edge-case clinical scenarios) and continuous alignment checks against updated evidence-based guidelines might further improve model performance.
Here's the awesome work: https://lnkd.in/g42-PKMG
Congrats to Juncheng Wu, Wenlong Deng, Xiaoxiao Li, Yuyin Zhou and co!
I post my takes on the latest developments in health AI; connect with me to stay updated! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
·linkedin.com·
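MedReason's core move, grounding a chain of thought in a symptom-to-diagnosis path through a medical KG, can be sketched as a breadth-first search over triples. The edges below are illustrative stand-ins, not PrimeKG content:

```python
from collections import deque

# Toy medical KG as (head, relation, tail) triples -- illustrative, not PrimeKG
triples = [
    ("difficulty walking", "manifestation_of", "ataxia"),
    ("ataxia", "associated_with", "medulloblastoma"),
    ("ataxia", "associated_with", "alcohol intoxication"),
]

adjacency = {}
for head, rel, tail in triples:
    adjacency.setdefault(head, []).append((rel, tail))

def reasoning_path(symptom, diagnosis):
    """BFS from symptom to diagnosis, returning the labeled relation path."""
    queue = deque([(symptom, [symptom])])
    seen = {symptom}
    while queue:
        node, path = queue.popleft()
        if node == diagnosis:
            return path
        for rel, nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"--{rel}-->", nxt]))
    return None

path = reasoning_path("difficulty walking", "medulloblastoma")
print(path)
```

The recovered path mirrors the post's example: difficulty walking → ataxia → medulloblastoma, which a generator like GPT-4o then verbalizes into a verified CoT explanation.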
The DITA Graph RAG project automates building your content corpus's knowledge graph (KG) from structured documents
Ready for a "WOW!" moment? The DITA Graph RAG project automates building your content corpus's knowledge graph (KG) from structured documents. What few know is that there are (literally) hundreds of ways to use a KG over your structured documents beyond RAG retrieval for AI, so I've compiled a compendium of 150 DITA graph queries: what each does, the SPARQL queries themselves, and the business value of each. These 150 are only a sampling. Try doing THAT with the likes of Markdown, AsciiDoc, reST, and other presentation-oriented document formats! 90 packed pages! https://lnkd.in/eY2kHcBe
·linkedin.com·
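A SPARQL query over a DITA knowledge graph reduces to triple-pattern matching. A minimal stand-in matcher (toy topic/link triples invented for illustration, not taken from the compendium's actual queries):

```python
# Toy DITA-style triples: topic types and cross-topic links.
# Illustrative only -- not from the DITA Graph RAG compendium.
triples = [
    ("topic:install", "rdf:type", "dita:Task"),
    ("topic:overview", "rdf:type", "dita:Concept"),
    ("topic:install", "dita:links-to", "topic:overview"),
]

def match(pattern, data):
    """Match one triple pattern; None plays the role of a SPARQL variable."""
    return [t for t in data
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Roughly: SELECT ?t WHERE { ?t rdf:type dita:Task }
tasks = [s for s, _, _ in match((None, "rdf:type", "dita:Task"), triples)]
print(tasks)
```

A real deployment would load the generated KG into a triple store and run the compendium's SPARQL directly; the pattern-as-tuple-with-wildcards idea is the same.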
Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
🔎 Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
I recently dug into the NodeRAG paper (https://lnkd.in/gwaJHP94), and it was eye-opening not just for how it performed, but for what it revealed about the evolution of RAG (Retrieval-Augmented Generation) systems. Some key takeaways for me:
👉 NaiveRAG is stronger than you think. Brute-force retrieval using simple vector search sometimes beats graph-based methods, especially when graph structures are too coarse or noisy.
👉 GraphRAG was an important step, but not the final answer. While it introduced knowledge graphs and community-based retrieval, GraphRAG sometimes underperformed NaiveRAG because its communities could be too coarse, leading to irrelevant retrieval.
👉 LightRAG reduced token cost, but at the expense of accuracy. By retrieving just 1-hop neighbors instead of traversing globally, LightRAG made retrieval cheaper but often missed important multi-hop reasoning paths, losing precision.
👉 NodeRAG shows what mature RAG looks like. NodeRAG redesigned the graph structure itself: instead of homogeneous graphs, it uses heterogeneous graphs with fine-grained semantic units, entities, relationships, and high-level summaries, all as nodes. It combines dual search (exact match + semantic search) with shallow Personalized PageRank to precisely retrieve the most relevant context. The result?
🚀 Highest accuracy across multi-hop and open-ended benchmarks
🚀 Lowest token retrieval (i.e., lower inference costs)
🚀 Faster indexing and querying
🧠 Key takeaway: in the RAG world, it's no longer about retrieving more; it's about retrieving better. Fine-grained, explainable, efficient retrieval will define the next generation of RAG systems. If you're working on RAG architectures, NodeRAG's design principles are well worth studying! Would love to hear how others are thinking about the future of RAG systems.
#RAG #KnowledgeGraphs #AI #LLM #NodeRAG #GraphRAG #LightRAG #MachineLearning #GenAI
·linkedin.com·
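The shallow Personalized PageRank step described above can be illustrated with a few rounds of power iteration over a toy heterogeneous graph, biasing the random walk's restart toward the query's seed nodes. The graph, node naming, and damping value here are illustrative, not the paper's:

```python
# Toy heterogeneous graph: entities, semantic units, and summaries all as nodes.
edges = {
    "entity:insulin": ["unit:dosing", "summary:diabetes"],
    "unit:dosing": ["entity:insulin", "entity:metformin"],
    "summary:diabetes": ["entity:insulin", "entity:metformin"],
    "entity:metformin": ["unit:dosing", "summary:diabetes"],
}

def personalized_pagerank(seeds, damping=0.85, iters=20):
    """Power iteration where the random walk restarts only at seed nodes."""
    nodes = list(edges)
    pref = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(pref)
    for _ in range(iters):
        nxt = {n: (1 - damping) * pref[n] for n in nodes}  # restart mass
        for n, nbrs in edges.items():
            share = damping * rank[n] / len(nbrs)  # spread rank along edges
            for m in nbrs:
                nxt[m] += share
        rank = nxt
    return rank

rank = personalized_pagerank(seeds={"entity:insulin"})
best = max(rank, key=rank.get)
print(best)  # the seed entity dominates the ranking
```

Nodes near the query's exact-match/semantic-search seeds accumulate rank, which is how NodeRAG narrows a large heterogeneous graph down to a small, relevant retrieval context.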
visualize graphs inside Kùzu
📣 Byte #21: For those of you who want to visualize your graphs inside Jupyter notebooks, we have an exciting development! We recently released an integration with yWorks, who extended their yFiles Jupyter Graphs widget to support Kuzu databases!
✅ Once a Kuzu graph is created, we can instantiate the yFiles Jupyter KuzuGraphWidget and use the `show_cypher` method to display a subgraph using regular Cypher queries.
✅ There are numerous custom layouts in the yFiles widget (tree, hierarchical, orthogonal, etc.). Give them a try! Here's an example of the tree layout, which is great for visualizing data like this with rich tree structures. In this example, we can see the two-degree mentors of Christian Christiansen, a Nobel laureate.
✅ You can customize the appearance of the nodes in the widget through the `add_node_configuration` method. This way, you can display what you're looking for as you iterate through your graph-building process.
✅ The Kuzu-yFiles integration is open source, and you can begin using it right away for your own interactive visualizations. Give it a try and share it with fellow graph enthusiasts!
pip install yfiles-jupyter-graphs-for-kuzu
Docs page: https://lnkd.in/g97uSKRe
GitHub repo: https://lnkd.in/gjA6ZjiF
·linkedin.com·
RDF-specific functionality for VS Code
A little peek into our development of RDF-specific functionality for VS Code:
1️⃣ Autocompletion and hover help for RDF vocabularies. Some are stored within the VS Code plugin; the rest are queried from LOV, giving IntelliSense for the most prominent ontologies.
2️⃣ We can use the ontology of the vocabularies to show when something is not typed correctly.
3️⃣ SHACL has a SHACL meta-model. As we built a SHACL engine into VS Code, we can use this meta-model to hint when something is not done correctly (e.g., a string as part of a datatype).
We plan to release the plugin to the marketplace in due course (we are still building more functionality). To not take too much credit: https://lnkd.in/eFB2wKdz delivers important features like most of the syntax highlighting and auto-import of prefixes.
·linkedin.com·
if you believe that LLMs need graphs to reason, you are right and now you have evidence: Claude answers questions by building and traversing a graph
To all the knowledge graph enthusiasts who've felt for a while that "graphs are the way to go" when it comes to enabling "intelligence": it was interesting to read Anthropic's "Tracing the thoughts of a large language model". If you believe that LLMs need graphs to reason, you are right, and now you have evidence: Claude answers questions by building and traversing a graph (in latent space) before translating it back to language: https://lnkd.in/eWFWwfN4
·linkedin.com·
European Parliament Open Data Portal: a SHACL-powered knowledge graph - Sparna Blog
A second use case Thomas wrote for Veronika Heimsbakk's upcoming book, SHACL for the Practitioner, is about Sparna's work for the European Parliament. From validation of the data in the knowledge graph to further projects of data integration and dissemination, many different usages of SHACL specifications were explored, and more exploratory usages of SHACL are foreseen!
·blog.sparna.fr·
What if your LLM is… a graph?
What if your LLM is… a graph? A few days ago, Petar Veličković from Google DeepMind gave one of the most interesting and thought-provoking talks I've seen in a while, "Large Language Models as Graph Neural Networks". Once you start seeing an LLM as a graph neural network, many structural oddities suddenly fall into place. For instance, OpenAI currently recommends putting the instructions at the top of a long prompt. Why? Because of the geometry of attention graphs, LLMs are counter-intuitively biased in favor of the first tokens: those tokens travel continuously through each generation step, are internally repeated a lot, and end up "over-squashing" the later ones. Models use a variety of internal transforms like softmax to moderate this bias and better weight the distribution, but this is a late patch that cannot fix the deeper attention deficiencies, even more so for long contexts. The most interesting aspect of the talk from an applied perspective: graph/geometric representations directly affect accuracy and robustness. As generated sequences grow and chain complex reasoning steps, you cannot build a solid expert system when attention graphs have single points of failure, or at least not without surfacing this information and providing more detailed accuracy metrics. I believe LLM explainability research is largely underexploited right now, despite reportedly being a key component of LLM devops in big labs. If anything, this is literal "prompt engineering": seeing models as nearly physical structures under stress and providing the right feedback loops to make them more reliable.
·linkedin.com·
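The first-token bias attributed to attention geometry has a simple structural core: under causal masking, position 0 can be attended to at every generation step, while later positions participate in ever fewer attention edges. A toy count of incoming edges (a deliberate simplification of the real mechanism, which also involves softmax weights and layer depth) makes that concrete:

```python
# Under a causal mask, position j may attend to positions 0..j.
# Count how many attention edges point INTO each position:
n = 8  # toy sequence length
incoming = [sum(1 for j in range(n) if j >= i) for i in range(n)]
print(incoming)  # [8, 7, 6, 5, 4, 3, 2, 1] -- position 0 is in every step's graph
```

Position 0 receives n potential edges and the last position only one, which is one structural reason instructions placed early keep being re-read while late tokens risk being "over-squashed".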
Spanner Graph: Graph databases reimagined
In case you missed the Spanner Graph session at Google Cloud Next '25, the recording is now available:
• Introduction to the graph space at 00:00 (https://lnkd.in/gsBFuDbt)
• Spanner Graph overview at 07:24 (https://lnkd.in/ggxrzFrU)
• How Snapchat builds its Identity Graph at 20:32 (https://lnkd.in/gFauYj-9)
• Quick demo of a recommendation engine at 26:27 (https://lnkd.in/gvH4AbRF)
• Recent launches at 32:00 (https://lnkd.in/gyCPq97t)
• Vision: a unified Google Cloud graph solution with BigQuery Graph at 35:09 (https://lnkd.in/gRdbSMeu)
I hope you like it! You can get started with Spanner Graph today: https://lnkd.in/gkwbGFbS
Pratibha Suryadevara, Spoorthi Ravi, Sailesh Krishnamurthy, Andi Gutmans, Christopher Taylor, Girish Baliga, Tomas Talius, Candice Chen, Yun Zhang, Weidong Yang, Matthieu Besozzi, Giulia Rotondo, Leo Meyerovich, Thomas Cook, Arthur Bigeard
#googlecloud #googlecloudnext25 #graphdatabases #spannergraph
·linkedin.com·
The Dataverse Project: 750K FAIR Datasets and a Living Knowledge Graph
"I'm Ukrainian and I'm wearing a suit, so no complaints about me from the Oval Office" - that's the start of my lecture about building Artificial Intelligence with Croissant ML in the Dataverse data platform, for the Bio x AI Hackathon kick-off event in Berlin. https://lnkd.in/ePYHCfJt
* 750,000+ FAIR datasets across the world, driving innovation across the whole data landscape.
* A knowledge graph with 50M+ triples.
* AI-ready metadata exports.
* Qdrant as vector storage; Google, Meta, and Mistral AI as LLM providers.
* Adrian Gschwend's Qlever as the fastest triple store for Dataverse knowledge graphs.
Multilingual, machine-readable, queryable scientific data at scale. If you're interested, you can also apply for the 2-month #BioAgentHack online hackathon:
• $125K+ prizes
• Mentorship from biotech and AI leaders
• Build alongside top open-science researchers & devs
More info: https://lnkd.in/eGhvaKdH
·linkedin.com·
Google Cloud & Neo4j: Teaming Up at the Intersection of Knowledge Graphs, Agents, MCP, and Natural Language Interfaces - Graph Database & Analytics
We're thrilled to announce new Text2Cypher models and Google's MCP Toolbox for Databases, the result of a collaboration between Google Cloud and Neo4j.
·neo4j.com·
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
👉 Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: *how* you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving 1 in 5 more problems correctly just by adjusting how you present data.
👉 What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs. It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet X condition?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.
👉 Key Insights
1. Format matters more than assumed:
   - Structured JSON and edge lists performed best overall, but results varied by task.
   - For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat: replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, suggesting models rely on context, not memorized knowledge.
3. Token efficiency:
   - Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
   - But concise ≠ always better: structured formats improved accuracy for tasks requiring grouped data.
4. Models struggle with directionality: counting outgoing edges (e.g., "Which countries does France border?") is easier than counting incoming ones ("Which countries border France?"), likely due to formatting biases.
👉 Practical Takeaways
- Optimize for your task: use JSON for aggregation, edge lists for centrality.
- Test your model: the best format depends on the LLM; Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: masking real names minimally impacts performance, which is useful for sensitive data.
The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.
Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
·linkedin.com·
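The textualization trade-off the benchmark measures is easy to reproduce: the same toy triples rendered as an edge list versus entity-grouped JSON differ sharply in length. (The paper counts real tokenizer tokens; character length below is a rough stand-in, and the facts are invented, not from the benchmark's datasets.)

```python
import json
from collections import defaultdict

# Toy KG facts -- illustrative, not from the KG-LLM-Bench datasets
triples = [
    ("France", "borders", "Spain"),
    ("France", "borders", "Belgium"),
    ("Spain", "borders", "Portugal"),
]

# Format 1: edge list (compact; repeated entity names highlight hubs)
edge_list = "\n".join(f"{h} {r} {t}" for h, r, t in triples)

# Format 2: JSON grouped by entity (verbose, but aggregation-friendly)
grouped = defaultdict(lambda: defaultdict(list))
for h, r, t in triples:
    grouped[h][r].append(t)
as_json = json.dumps(grouped, indent=2)

print(len(edge_list), len(as_json))  # the edge list is the shorter prompt
```

Which rendering wins depends on the task, exactly as the insights above describe: the compact edge list leaves more context for reasoning, while the grouped JSON hands the model pre-aggregated structure.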
Towards Mechanistic Interpretability of Graph Transformers via Attention Graphs
Our first attempt at mechanistic interpretability of Transformers from the perspective of network science and graph theory! Check out our preprint: arxiv.org/abs/2502.12352
A wonderful collaboration with superstar MPhil students Batu El and Deepro Choudhury, as well as Pietro Lio', as part of the Geometric Deep Learning class last year at the University of Cambridge Department of Computer Science and Technology.
We were motivated by Demis Hassabis calling AlphaFold and other AI systems for scientific discovery "engineering artifacts". We need new tools to interpret the underlying mechanisms and advance our scientific understanding. Graph Transformers are a good place to start. The key ideas are:
- Attention across multiple heads and layers can be seen as a heterogeneous, dynamically evolving graph.
- Attention graphs are complex systems that represent information flow in Transformers.
- We can use network science to extract mechanistic insights from them!
More to come on the network science perspective on understanding LLMs!
·linkedin.com·
A comprehensive large-scale biomedical knowledge graph for AI-powered data-driven biomedical research
🚀 Thrilled to share our latest work published in Nature Machine Intelligence!
📄 "A comprehensive large-scale biomedical knowledge graph for AI-powered data-driven biomedical research"
In this study, we constructed iKraph, one of the most comprehensive biomedical knowledge graphs to date, using a human-level information extraction pipeline that won both the LitCoin NLP Challenge and the BioCreative Challenge. iKraph integrates insights from over 34 million PubMed abstracts and 40 public databases, enabling unprecedented scale and precision in automated knowledge discovery (AKD).
💡 What sets our work apart? We developed a causal knowledge graph and a probabilistic semantic reasoning (PSR) algorithm to infer indirect entity relationships, such as drug-disease relationships. This time-aware framework allowed us to retrospectively and prospectively validate drug repurposing and drug target predictions, something rarely done in prior work.
✅ For COVID-19, we predicted hundreds of drug candidates in real time, one-third of which were later supported by clinical trials or publications.
✅ For cystic fibrosis, we demonstrated that our predictions were often validated up to a decade later, suggesting our method could significantly accelerate the drug discovery pipeline.
✅ Across diverse diseases and common drugs, we achieved benchmark-setting recall and positive predictive rates, pushing the boundaries of what's possible in drug repurposing.
We believe this study sets a new frontier in biomedical discovery and demonstrates the power of structured knowledge and interpretability in real-world applications.
📚 Read the full paper: https://lnkd.in/egYgbYT4
📌 Access the platform: https://lnkd.in/ecxwHBK7
📂 Access the data and code: https://lnkd.in/eBp2GEnH
LitCoin NLP Challenge: https://lnkd.in/e-cBc6eR
Kudos to our incredible team and collaborators who made this possible!
#DrugDiscovery #AI #KnowledgeGraph #Bioinformatics #MachineLearning #NatureMachineIntelligence #DrugRepurposing #LLM #BiomedicalAI #NLP #COVID19 #Insilicom #NIH #NCI #NSF #ARPA-H
·linkedin.com·
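The paper's PSR algorithm itself is not reproduced here, but the general idea of scoring an indirect drug-disease link by combining evidence across multiple KG paths can be sketched with a noisy-OR over per-path probabilities. All numbers and entity names below are invented for illustration; this is not iKraph's actual method:

```python
# Each path drug -> intermediate -> disease carries a confidence score.
# Invented numbers for illustration; not iKraph's actual PSR algorithm.
path_probs = {
    ("drugX", "geneA", "diseaseY"): 0.30,
    ("drugX", "pathwayB", "diseaseY"): 0.20,
    ("drugX", "geneC", "diseaseY"): 0.10,
}

def noisy_or(probs):
    """P(at least one path reflects a true mechanism), assuming independence."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

score = noisy_or(path_probs.values())
print(round(score, 3))  # 0.496 -- multiple weak paths combine into stronger evidence
```

The appeal of this style of scoring is that each contributing path stays inspectable, which is what makes the resulting predictions interpretable rather than a black-box similarity score.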
Experience Google Cloud Next 25
Uncover data's hidden connections using graph analytics in BigQuery. This session shows how to use BigQuery's scalable infrastructure for graph analysis directly in your data warehouse. Identify patterns, connections, and influences for fraud detection, drug discovery, social network analysis, and recommendation engines. Join us to explore the latest innovations in graphs and see real-world examples. Transform your data into actionable insights with BigQuery's powerful graph capabilities.
·cloud.withgoogle.com·
Graph Data Modeling Without Graph Databases
Graph Data Modeling Without Graph Databases: PostgreSQL and Hybrid Approaches for Agentic Systems 🖇️
Organizations implementing AI systems today face a practical challenge: maintaining multiple specialized databases (vector stores, graph databases, relational systems) creates significant operational complexity, increases costs, and introduces synchronization headaches. Companies like Writer (an insight from a recent Waseem Alshikh interview with Harrison Chase) have tackled this problem by implementing graph-like structures directly within PostgreSQL, eliminating the need for separate graph databases while maintaining the necessary functionality. This approach dramatically simplifies infrastructure management, reduces the number of systems to monitor, and eliminates error-prone synchronization processes that can cost thousands of dollars in wasted resources. For enterprises focused on delivering business value rather than managing technical complexity, these PostgreSQL-based implementations offer a pragmatic path forward, though with important trade-offs when considering more sophisticated agentic systems.
Writer implemented a subject-predicate-object triple structure directly in PostgreSQL tables rather than using dedicated graph databases. This keeps the conceptual structure of triples that underpins knowledge graphs, implemented through a relational schema design, maintaining the semantic richness of knowledge graphs while leveraging PostgreSQL's maturity and scalability. Instead of relying on native graph traversals, Writer developed a fusion decoder that reconstructs graph-like relationships at query time. This component serves as the bridge between the storage layer (PostgreSQL with its triple-inspired structure) and the language model, enabling sophisticated information retrieval without requiring a dedicated graph database's traversal capabilities. The approach focuses on query translation and result combination rather than storage structure optimization.
Complementing the triple-based approach, PostgreSQL with extensions (pgvector and pgvectorscale) can function effectively as a vector database. This challenges the notion that specialized vector databases are necessary: treating embeddings as derived data leads to a more natural and maintainable architecture, reframing the database's role from storing independent vector embeddings to managing derived data that automatically synchronizes with its source.
But a critical distinction between retrieval systems and agentic systems needs to be made. While PostgreSQL-based approaches excel at knowledge retrieval tasks where the focus is on precision and relevance, agentic systems operate in dynamic environments where context evolves over time, previous actions influence future decisions, and contradictions need to be resolved. This distinction drives different architectural requirements and suggests potential complementary roles for different database approaches.
·linkedin.com·
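The triples-in-a-relational-table idea is easy to demo, here with sqlite3 standing in for PostgreSQL (schema and data invented for illustration; Writer's actual schema and fusion decoder are not public at this level of detail):

```python
import sqlite3

# Subject-predicate-object triples in a plain relational table,
# standing in for the PostgreSQL design described in the post.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
con.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("acme_corp", "acquired", "widget_inc"),
    ("widget_inc", "headquartered_in", "berlin"),
    ("acme_corp", "founded_in", "1999"),
])

# A two-hop "graph traversal" is just a self-join -- no graph database needed.
rows = con.execute("""
    SELECT a.s, a.o, b.o
    FROM triples a JOIN triples b ON a.o = b.s
    WHERE a.p = 'acquired' AND b.p = 'headquartered_in'
""").fetchall()
print(rows)  # [('acme_corp', 'widget_inc', 'berlin')]
```

Deeper traversals become longer self-joins (or recursive CTEs), which is exactly the trade-off the post describes: fine for bounded retrieval queries, heavier for the open-ended traversal patterns agentic systems tend to need.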
Is developing an ontology from an LLM really feasible?
It seems the answer to whether an LLM would be able to replace the whole text-to-ontology pipeline is a resounding 'no'. If you're one of those who think that should be (or even is?) a 'yes': why, and did you do the experiments that show it's as good as the alternatives (with the results available)? And I mean a proper ontology, not a knowledge graph with numerous duplications and contradictions and lacking constraints. For a few gentle considerations (and pointers to longer arguments), and a summary figure of the processes the LLM would supposedly be replacing, see https://lnkd.in/dG_Xsv_6
Maria Keet
·linkedin.com·