The Dataverse Project: 750K FAIR Datasets and a Living Knowledge Graph
"I'm Ukrainian and I'm wearing a suit, so no complaints about me from the Oval Office" - that's the start of my lecture about building Artificial Intelligence with Croissant ML in the Dataverse data platform, for the Bio x AI Hackathon kick-off event in Berlin. https://lnkd.in/ePYHCfJt * 750,000+ FAIR datasets across the world forcing the innovation of the whole data landscape. * A knowledge graph with 50M+ triples. * AI-ready metadata exports. * Qdrant as a vector storage, Google Meta Mistral AI as LLM model providers. * Adrian Gschwend Qlever as fastest triple store for Dataverse knowledge graphs Multilingual, machine-readable, queryable scientific data at scale. If you're interested, you can also apply for the 2-month #BioAgentHack online hackathon: • $125K+ prizes • Mentorship from Biotech and AI leaders • Build alongside top open-science researchers & devs More info: https://lnkd.in/eGhvaKdH
·linkedin.com·
Google Cloud & Neo4j: Teaming Up at the Intersection of Knowledge Graphs, Agents, MCP, and Natural Language Interfaces - Graph Database & Analytics
We’re thrilled to announce new Text2Cypher models and Google’s MCP Toolbox for Databases from the collaboration between Google Cloud and Neo4j.
·neo4j.com·
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
👉 Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: how you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving 1 in 5 more problems correctly just by adjusting how you present data.
👉 What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs. It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet X condition?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.
👉 Key Insights
1. Format matters more than assumed:
  - Structured JSON and edge lists performed best overall, but results varied by task.
  - For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat: Replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, proving models rely on context, not memorized knowledge.
3. Token efficiency:
  - Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
  - But concise isn't always better: structured formats improved accuracy for tasks requiring grouped data.
4. Models struggle with directionality: Counting outgoing edges (e.g., "Which countries does France border?") is easier than incoming ones ("Which countries border France?"), likely due to formatting biases.
👉 Practical Takeaways
- Optimize for your task: Use JSON for aggregation, edge lists for centrality.
- Test your model: The best format depends on the LLM; Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: Masking real names minimally impacts performance, useful for sensitive data.
The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.
Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
·linkedin.com·
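The format trade-off in the bookmark above is easy to see concretely. Below is a minimal, illustrative Python sketch (not taken from the KG-LLM-Bench paper) of two ways to textualize the same toy set of triples: a compact edge list and an entity-grouped JSON document. The triples, function names, and serialization details are assumptions for illustration; the paper's exact formats may differ.

```python
import json
from collections import defaultdict

# A toy knowledge graph as (subject, predicate, object) triples.
triples = [
    ("France", "borders", "Germany"),
    ("France", "borders", "Spain"),
    ("Germany", "capital", "Berlin"),
]

def to_edge_list(triples):
    # One "subject predicate object" line per triple: compact, few tokens.
    return "\n".join(f"{s} {p} {o}" for s, p, o in triples)

def to_grouped_json(triples):
    # Facts grouped by subject entity: more tokens, but aggregation-friendly
    # because everything known about an entity sits in one place.
    grouped = defaultdict(lambda: defaultdict(list))
    for s, p, o in triples:
        grouped[s][p].append(o)
    return json.dumps(grouped, indent=2)

# Swap one or the other into the LLM prompt context to compare behavior.
print(to_edge_list(triples))
print(to_grouped_json(triples))
```

The edge list is far shorter in tokens, while the grouped JSON makes per-entity aggregation questions easier to answer, mirroring the trade-off the post describes.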
A comprehensive large-scale biomedical knowledge graph for AI-powered data-driven biomedical research
🚀 Thrilled to share our latest work published in Nature Machine Intelligence!
📄 "A comprehensive large-scale biomedical knowledge graph for AI-powered data-driven biomedical research"
In this study, we constructed iKraph, one of the most comprehensive biomedical knowledge graphs to date, using a human-level information extraction pipeline that won both the LitCoin NLP Challenge and the BioCreative Challenge. iKraph integrates insights from over 34 million PubMed abstracts and 40 public databases, enabling unprecedented scale and precision in automated knowledge discovery (AKD).
💡 What sets our work apart? We developed a causal knowledge graph and a probabilistic semantic reasoning (PSR) algorithm to infer indirect entity relationships, such as drug-disease relationships. This time-aware framework allowed us to retrospectively and prospectively validate drug repurposing and drug target predictions, something rarely done in prior work.
✅ For COVID-19, we predicted hundreds of drug candidates in real time, one-third of which were later supported by clinical trials or publications.
✅ For cystic fibrosis, we demonstrated that our predictions were often validated up to a decade later, suggesting our method could significantly accelerate the drug discovery pipeline.
✅ Across diverse diseases and common drugs, we achieved benchmark-setting recall and positive predictive rates, pushing the boundaries of what's possible in drug repurposing.
We believe this study sets a new frontier in biomedical discovery and demonstrates the power of structured knowledge and interpretability in real-world applications.
📚 Read the full paper: https://lnkd.in/egYgbYT4
📌 Access the platform: https://lnkd.in/ecxwHBK7
📂 Access the data and code: https://lnkd.in/eBp2GEnH
LitCoin NLP Challenge: https://lnkd.in/e-cBc6eR
Kudos to our incredible team and collaborators who made this possible!
#DrugDiscovery #AI #KnowledgeGraph #Bioinformatics #MachineLearning #NatureMachineIntelligence #DrugRepurposing #LLM #BiomedicalAI #NLP #COVID19 #Insilicom #NIH #NCI #NSF #ARPA-H
·linkedin.com·
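To make the idea of inferring indirect relationships more tangible, here is a deliberately simplified Python sketch that composes direct drug→gene and gene→disease edges into indirect drug→disease scores. It is not the paper's PSR algorithm (which is probabilistic, causal, and time-aware over a far larger graph); the entities, edge confidences, and the noisy-OR aggregation below are illustrative assumptions only.

```python
from collections import defaultdict

# Toy direct edges with made-up confidence scores (not iKraph data).
drug_gene = {("DrugX", "GeneA"): 0.9, ("DrugX", "GeneB"): 0.6}
gene_disease = {("GeneA", "Disease1"): 0.8, ("GeneB", "Disease2"): 0.5}

def indirect_drug_disease(drug_gene, gene_disease):
    # Score each two-hop drug->gene->disease path multiplicatively,
    # then aggregate all paths for a (drug, disease) pair with a noisy-OR.
    path_scores = defaultdict(list)
    for (drug, gene), p1 in drug_gene.items():
        for (gene2, disease), p2 in gene_disease.items():
            if gene == gene2:
                path_scores[(drug, disease)].append(p1 * p2)
    scores = {}
    for pair, probs in path_scores.items():
        miss = 1.0
        for p in probs:
            miss *= 1.0 - p
        scores[pair] = 1.0 - miss
    return scores

print(indirect_drug_disease(drug_gene, gene_disease))
# e.g. {('DrugX', 'Disease1'): ~0.72, ('DrugX', 'Disease2'): ~0.3}
```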
Is developing an ontology from an LLM really feasible?
It seems the answer to whether an LLM would be able to replace the whole text-to-ontology pipeline is a resounding ‘no’. If you’re one of those who think that should be (or even is?) a ‘yes’: why, and did you do the experiments that show it’s as good as the alternatives (with the results available)? And I mean a proper ontology, not a knowledge graph with numerous duplications and contradictions and lacking constraints. For a few gentle considerations (and pointers to longer arguments) and a summary figure of the processes the LLM would supposedly be replacing, see https://lnkd.in/dG_Xsv_6
Maria Keet
·linkedin.com·
What are the Different Types of Graphs? The Most Common Misconceptions and Understanding Their Applications - Enterprise Knowledge
Learn about different types of graphs and their applications in data management and AI, as well as common misconceptions, in this article by Lulit Tesfaye.
·enterprise-knowledge.com·
Digital evolution: Novo Nordisk’s shift to ontology-based data management - Journal of Biomedical Semantics
The amount of biomedical data is growing, and managing it is increasingly challenging. While Findable, Accessible, Interoperable and Reusable (FAIR) data principles provide guidance, their adoption has proven difficult, especially in larger enterprises like pharmaceutical companies. In this manuscript, we describe how we leverage an Ontology-Based Data Management (OBDM) strategy for digital transformation in Novo Nordisk Research & Early Development. Here, we include both our technical blueprint and our approach for organizational change management. We further discuss how such an OBDM ecosystem plays a pivotal role in the organization’s digital aspirations for data federation and discovery fuelled by artificial intelligence. Our aim for this paper is to share the lessons learned in order to foster dialogue with parties navigating similar waters while collectively advancing the efforts in the fields of data management, semantics and data driven drug discovery.
·jbiomedsem.biomedcentral.com·
Knowledge graphs for LLM grounding and avoiding hallucination
This blog post is part of a series that dives into various aspects of SAP’s approach to Generative AI, and its technical underpinnings. In previous blog posts of this series, you learned about how to use large language models (LLMs) for developing AI applications in a trustworthy and reliable manner...
·community.sap.com·
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
🎉🎉🎉 "Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
Four years ago, we embarked on writing "Knowledge Graphs Applied" with a clear mission: to guide practitioners in implementing production-ready knowledge graph solutions. Drawing from our extensive field experience across multiple domains, we aimed to share battle-tested best practices that transcend basic use cases.
Like fine wine, ideas and concepts need time to mature. During these four years of careful development, we witnessed a seismic shift in the technological landscape. Large Language Models (LLMs) emerged not just as a buzzword, but as a transformative force that naturally converged with knowledge graphs. This synergy unlocked new possibilities, particularly in simplifying complex tasks like unstructured data ingestion and knowledge graph-based question answering.
We couldn't ignore this technological disruption. Instead, we embraced it, incorporating our hands-on experience in combining LLMs with graph technologies. The result is "Knowledge Graphs and LLMs in Action" – a thoroughly revised work with new chapters and an expanded scope. Yet our fundamental goal remains unchanged: to empower you to harness the full potential of knowledge graphs, now enhanced by their increasingly natural companion, LLMs.
This book represents the culmination of a journey that evolved alongside the technology itself. It delivers practical, production-focused guidance for the modern era, in which knowledge graphs and LLMs work in concert.
Now available in MEAP, with new LLM-focused chapters ready to be published.
#llms #knowledgegraph #graphdatascience
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
·linkedin.com·
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
The SECI model for knowledge creation, collection, and distribution within the organization
💫 An 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗼𝗻𝘁𝗼𝗹𝗼𝗴𝘆 is just a means, not an end.
👉 Transforming 𝘁𝗮𝗰𝗶𝘁 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 into 𝗲𝘅𝗽𝗹𝗶𝗰𝗶𝘁 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 through an enterprise ontology is a self-contained exercise if not framed within a broader process of knowledge creation, collection, and distribution within the organization.
👇 The 𝗦𝗘𝗖𝗜 𝗠𝗼𝗱𝗲𝗹 effectively describes the various steps of this process, going beyond mere collection and formalization. The SECI model outlines the following four phases, which must be executed iteratively and continuously to properly manage organizational knowledge:
1️⃣ 𝗦𝗼𝗰𝗶𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: In this phase, tacit knowledge is shared through direct interaction, observation, or experiences. It emphasizes the transfer of personal knowledge between individuals and fosters mutual understanding through collaboration (tacit ➡️ tacit).
2️⃣ 𝗘𝘅𝘁𝗲𝗿𝗻𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: In this phase, tacit knowledge is articulated into explicit forms, such as an enterprise ontology. It helps to codify and communicate personal knowledge that might otherwise remain unspoken or difficult to share (tacit ➡️ explicit).
3️⃣ 𝗖𝗼𝗺𝗯𝗶𝗻𝗮𝘁𝗶𝗼𝗻: In this phase, explicit knowledge is gathered from different sources, categorized, and synthesized to form new sets of knowledge. It involves the aggregation and reorganization of existing knowledge to create more structured and accessible forms (explicit ➡️ explicit).
4️⃣ 𝗜𝗻𝘁𝗲𝗿𝗻𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: In this phase, individuals internalize explicit knowledge, turning it back into tacit knowledge through practice, experience, and learning. It emphasizes the transformation of formalized knowledge into personal, actionable knowledge (explicit ➡️ tacit).
🎯 In a world where the only constant is change, it is no longer enough for an organization to know something; what matters most is how fast it learns by creating and redistributing new knowledge internally.
🧑‍🎓 To quote Nadella, organizations and the people within them should not be 𝘒𝘯𝘰𝘸-𝘐𝘵-𝘈𝘭𝘭𝘴 but rather 𝘓𝘦𝘢𝘳𝘯-𝘐𝘵-𝘈𝘭𝘭𝘴.
#TheDataJoy #KnowledgeMesh #KnowledgeManagement #Ontologies
·linkedin.com·
Multi-Layer Agentic Reasoning: Connecting Complex Data and Dynamic Insights in Graph-Based RAG Systems
🛜 At the most fundamental level, all approaches rely…
·linkedin.com·
Build your hybrid-Graph for RAG & GraphRAG applications using the power of NLP | LinkedIn
Build a graph for a RAG application for the price of a chocolate bar! What is GraphRAG? What does GraphRAG mean from your perspective? What if you could have a standard RAG and a GraphRAG as a combi-package, with just a query switch? The fact is, there is no concrete, universal…
·linkedin.com·