GraphNews

4357 bookmarks
Graph algebra
Graph algebra
The best talk at RSA Conference, in my mind, was "Graphs and Algebras of Defense" by John Lambert, Corporate Vice President and CISO, Microsoft. What differentiates industry visionaries from average practitioners is the ability to abstract theory from practice. John came up with an elegant abstraction of a graph "algebra" for cybersecurity defense that resonates with my PhD thesis on manifold learning and graph embedding. The way the algebra operates on cybersecurity graphs is inspiring. I hope more innovations can be sparked by such an elegant framework. Leo Meyerovich Alexander Morisse, PhD #GraphThePlanet
·linkedin.com·
Graph algebra
Aerospike Graph scales efficiently from 200GB to 20TB without performance degradation across multiple real-world identity graph workloads
Aerospike Graph scales efficiently from 200GB to 20TB without performance degradation across multiple real-world identity graph workloads
Discover how #Aerospike Graph overcomes #identityresolution limitations. Download our latest benchmark to:
💡 See how #AerospikeGraph scales efficiently from 200GB to 20TB without performance degradation across multiple real-world identity graph workloads
💡 Learn how to deploy high-performance identity graphs with fewer resources
💡 Use the results to plan your own scale-out graph infrastructure
Get the benchmark here: https://lnkd.in/gZimB6Sh #AdTech #MarTech Ishaan Biswas Lyndon Bauto Phil Allsopp Matt Bushell Jim Doty
How #AerospikeGraph scales efficiently from 200GB to 20TB without performance degradation across multiple real-world identity graph workloads
·linkedin.com·
Aerospike Graph scales efficiently from 200GB to 20TB without performance degradation across multiple real-world identity graph workloads
Ontologies, OWL, SHACL - a baseline | LinkedIn
Ontologies, OWL, SHACL - a baseline | LinkedIn
Ontologies, OWL, SHACL: I was going to react to a comment that came through my feed and turned it into this post as it resonated with questions I am often asked about the mentioned technologies, their uses, and their relations and, more broadly, it concerns a key architectural discussion that I've h
·linkedin.com·
Ontologies, OWL, SHACL - a baseline | LinkedIn
The Future Is Clear: Agents Will NEED Graph RAG
The Future Is Clear: Agents Will NEED Graph RAG
🤺 The Future Is Clear: Agents Will NEED Graph RAG
Why? It combines multi-hop reasoning, non-parameterized / learning-based retrieval, and topology-aware prompting.

🤺 What Is Graph-Enhanced Retrieval-Augmented Generation (RAG)?
✩ LLMs hallucinate. ✩ LLMs forget. ✩ LLMs struggle with complex reasoning.
Graphs connect facts. They organize knowledge into neat, structured webs. So when RAG retrieves from a graph, the LLM doesn't just guess — it reasons. It follows the map.

🤺 The 4-Step Workflow of Graph RAG
1️⃣ User Query: the user asks a question. ("Tell me how Einstein used Riemannian geometry.")
2️⃣ Retrieval Module: the system fetches the most structurally relevant knowledge from a graph. (Entities: Einstein, Grossmann, Riemannian Geometry.)
3️⃣ Prompting Module: retrieved knowledge is reshaped into a golden prompt — sometimes as structured triples, sometimes as smart text.
4️⃣ Output Response: the LLM generates a fact-rich, logically sound answer.

🤺 Step 1: Build Graph-Powered Databases
✩ Use existing knowledge graphs like Freebase or Wikidata — structured, reliable, but static.
✩ Or build new graphs from text (OpenIE, instruction-tuned LLMs) — dynamic, adaptable, messy but powerful.

🤺 Step 2: Retrieval and Prompting Algorithms
✩ Non-parameterized retrieval (deterministic, probabilistic, heuristic): think Dijkstra's algorithm, PageRank, 1-hop neighbors. Fast but rigid.
✩ Learning-based retrieval (GNNs, attention models): think graph convolution or graph attention. Smarter, deeper, but heavier.
✩ Prompting approaches: topology-aware (preserve graph structure for multi-hop reasoning) or text prompting (flatten into readable sentences — easier for vanilla LLMs).

🤺 Step 3: Graph-Structured Pipelines
✩ Sequential pipelines: straightforward query ➔ retrieve ➔ prompt ➔ answer.
✩ Loop pipelines: iterative refinement until the best evidence is found.
✩ Tree pipelines: parallel exploration ➔ multiple knowledge paths at once.

🤺 Step 4: Graph-Oriented Tasks
✩ Knowledge Graph QA (KGQA): answering deep, logical questions with graphs.
✩ Graph tasks: node classification, link prediction, graph summarization.
✩ Domain-specific applications: biomedicine, law, scientific discovery, finance.

Join my Hands-on AI Agent Training. Skip the fluff and build real AI agents — fast. What you get: ✅ Create smart agents + powerful RAG pipelines ✅ Master LangChain, CrewAI & Swarm — all in one training ✅ Projects with text, audio, video & tabular data. 460+ engineers already enrolled. Enroll now — 34% off, ends soon: https://lnkd.in/eGuWr4CH
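The four-step loop described in this post can be sketched end to end in plain Python. This is a toy illustration, not any particular framework: the mini knowledge graph and the helpers `k_hop_retrieve` and `to_prompt` are invented for the example, and the retrieval is the simple non-parameterized k-hop variant from Step 2.

```python
# Toy Graph RAG pipeline: retrieve multi-hop neighbors from a small
# knowledge graph, then flatten the triples into a prompt for an LLM.
# Graph content and helper names are illustrative only.

KG = {
    ("Einstein", "collaborated_with", "Grossmann"),
    ("Grossmann", "expert_in", "Riemannian Geometry"),
    ("Einstein", "developed", "General Relativity"),
    ("General Relativity", "uses", "Riemannian Geometry"),
}

def k_hop_retrieve(graph, seeds, k=2):
    """Non-parameterized retrieval: collect triples within k hops of the seeds."""
    frontier, kept = set(seeds), []
    for _ in range(k):
        nxt = set()
        for (s, p, o) in graph:
            if s in frontier or o in frontier:
                if (s, p, o) not in kept:
                    kept.append((s, p, o))
                nxt.update({s, o})
        frontier |= nxt
    return kept

def to_prompt(question, triples):
    """Text prompting: flatten retrieved triples into readable context."""
    facts = "\n".join(f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in triples)
    return f"Answer using these facts:\n{facts}\n\nQuestion: {question}"

triples = k_hop_retrieve(KG, {"Einstein"}, k=2)
prompt = to_prompt("How did Einstein use Riemannian geometry?", triples)
```

Learning-based retrieval would replace `k_hop_retrieve` with a GNN or attention scorer, and topology-aware prompting would keep the triples structured rather than flattening them into sentences.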
The Future Is Clear: Agents Will NEED Graph RAG
·linkedin.com·
The Future Is Clear: Agents Will NEED Graph RAG
SousLesensVocables is a set of tools developed to manage thesaurus and ontology resources through SKOS, OWL and RDF standards and graph visualisation approaches
SousLesensVocables is a set of tools developed to manage thesaurus and ontology resources through SKOS, OWL and RDF standards and graph visualisation approaches
SousLesensVocables is a set of tools developed to manage thesaurus and ontology resources through SKOS, OWL and RDF standards and graph visualisation approaches
·souslesens.github.io·
SousLesensVocables is a set of tools developed to manage thesaurus and ontology resources through SKOS, OWL and RDF standards and graph visualisation approaches
The new AI Risk “ontology”: A Map with No Rules
The new AI Risk “ontology”: A Map with No Rules
A Map with No Rules. The new AI Risk "ontology" (AIRO) maps regulatory concepts from the EU AI Act, ISO/IEC 23894, and ISO 31000. But without formal constraints or grounding in a top-level ontology, it reads more like a map with no rules. At first glance, AIRO seems well-structured. It defines entities like "AI Provider," "AI Subject," and "Capability," linking them to legal clauses and decision workflows. But it lacks the logical scaffolding that makes semantic models computable. There are no disjointness constraints, no domain or range restrictions, no axioms to enforce identity or prevent contradiction. For example, if "Provider" and "Subject" are just two nodes in a graph, the system has no way to infer that they must be distinct. There is nothing stopping an implementation from assigning both roles to the same agent. That is not an edge case. It is a missing foundation. This is where formal ontologies matter. Logic is not a luxury: it is what makes it possible to validate, reason, and automate oversight. Without constraints and grounding in a TLO, semantic structures become decorative. They document language, but not the conditions that govern responsible behavior. If we want regulation that adapts with AI instead of chasing it, we need more than a vocabulary. We need logic, constraints, and ontological structure. #AIRegulation #ResponsibleAI #SemanticGovernance #AIAudits #AIAct #Ontologies #LogicMatters
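The Provider/Subject example in the post is easy to make concrete. A minimal sketch in plain Python (the entity and class names are invented; a real system would declare `owl:disjointWith` and run a reasoner or SHACL engine rather than this toy checker): without the disjointness axiom, the contradictory role assignment passes silently.

```python
# Minimal illustration of why disjointness axioms matter.
# Triples and class names are invented for the example.

triples = [
    ("acme_corp", "rdf:type", "Provider"),
    ("acme_corp", "rdf:type", "Subject"),   # same agent in both roles
]

disjoint = [("Provider", "Subject")]        # the axiom the post says AIRO lacks

def violations(triples, disjoint_pairs):
    """Report entities typed as two classes declared disjoint."""
    types = {}
    for s, p, o in triples:
        if p == "rdf:type":
            types.setdefault(s, set()).add(o)
    return [
        (entity, a, b)
        for entity, ts in types.items()
        for a, b in disjoint_pairs
        if a in ts and b in ts
    ]

found = violations(triples, disjoint)   # the contradiction is detected
missed = violations(triples, [])        # with no axioms, nothing is flagged
```

With the axiom the checker flags `acme_corp`; without it, the same data validates, which is the "map with no rules" problem in miniature.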
A Map with No Rules: the new AI Risk "ontology"
·linkedin.com·
The new AI Risk “ontology”: A Map with No Rules
The "Ontology Gap" for property graphs
The "Ontology Gap" for property graphs
I was looking forward to speaking at next week's Knowledge Graph Conference, but I had a stroke in early March, so I've had to cut back my activity quite a lot. This short article talks about the overall problem/opportunity, which underpins the work in LDBC (Linked Data Benchmark Council), relating
·linkedin.com·
The "Ontology Gap" for property graphs
Graph Learning Will Lose Relevance Due To Poor Benchmarks
Graph Learning Will Lose Relevance Due To Poor Benchmarks
📣 Our spicy ICML 2025 position paper: "Graph Learning Will Lose Relevance Due To Poor Benchmarks". Graph learning is less trendy in the ML world than it was in 2020-2022. We believe the problem is poor benchmarks that hold the field back, and we suggest ways to fix it! We identified three problems:
#️⃣ P1: No transformative real-world applications. While LLMs and geometric generative models become more powerful and solve complex tasks every generation (from reasoning to protein folding), how transformative could a GNN on Cora or OGB be?
P1 remedies: the community is overlooking many significant and transformative applications, including chip design and broader ML for systems, combinatorial optimization, and relational data (as highlighted by RelBench). Each of them offers billions of dollars in potential outcomes.
#️⃣ P2: While everything can be modeled as a graph, often it should not be. In a simple experiment, we probed a vanilla DeepSet without edges and a GNN on Cayley graphs (fixed edges for a given number of nodes) on molecular datasets, and the performance is quite competitive.
#️⃣ P3: Bad benchmarking culture (this one hits hard). It's a mess :) Small datasets (don't use Cora and MUTAG in 2025), no standard splits, and in many cases recent models are clearly worse than GCN / SAGE from 2020. It gets worse when evaluating generative models.
P3 remedies: we need more holistic benchmarks that are harder to game and saturate. While this is a common problem across ML, standard graph learning benchmarks are egregiously old and rather irrelevant at the scale of problems doable in 2025.
💡 As a result, it's hard to build a true foundation model for graphs. Instead of training each model on each dataset, we suggest using GNNs / GTs as processors in the "encoder-processor-decoder" blueprint, training them at scale, and only tuning graph-specific encoders/decoders.
For example, we pre-trained several models on PCQM4M-v2, COCO-SP, and MalNet Tiny, and fine-tuned them on PascalVOC, Peptides-struct, and Stargazers, finding that graph transformers benefit from pre-training. The project started around NeurIPS 2024, when Christopher Morris gathered us to discuss the pain points of graph learning and how to continue doing impactful research in this area. I believe the outcome is promising, and we can re-imagine graph learning in 2025 and beyond! Massive work with 12 authors (everybody actually contributed): Maya Bechler-Speicher, Ben Finkelshtein, Fabrizio Frasca, Luis Müller, Jan Tönshoff, Antoine Siraudin, Viktor Zaverkin, Michael Bronstein, Mathias Niepert, Bryan Perozzi, and Christopher Morris (Chris, you should finally create a LinkedIn account ;)
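The P2 probe is worth seeing in miniature. Below is a hedged sketch of a DeepSet-style readout (pure Python, with fixed toy maps standing in for trained networks; the feature vectors are invented): because the model only sum-pools per-node encodings, two graphs with identical nodes but different edges are indistinguishable to it, which is exactly why its competitiveness on molecular benchmarks is a red flag for those benchmarks.

```python
# DeepSet-style readout: encode each node with phi, sum-pool, apply rho.
# Edges are never consulted, so edge structure cannot affect the output.

def phi(x):            # per-node encoder (a fixed toy map, not learned)
    return [x[0] + x[1], x[0] * x[1]]

def rho(z):            # readout on the pooled representation
    return z[0] - 0.5 * z[1]

def deepset(node_features, edges=None):
    pooled = [0.0, 0.0]
    for x in node_features:
        h = phi(x)
        pooled = [pooled[0] + h[0], pooled[1] + h[1]]
    return rho(pooled)

nodes = [(1.0, 2.0), (3.0, 1.0), (0.5, 0.5)]
ring  = [(0, 1), (1, 2), (2, 0)]
path  = [(0, 1), (1, 2)]

# Same nodes, different topology: identical predictions.
assert deepset(nodes, ring) == deepset(nodes, path)
```

If such an edge-blind model matches a GNN on a benchmark, the benchmark is not actually testing graph structure.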
Graph Learning Will Lose Relevance Due To Poor Benchmarks
·linkedin.com·
Graph Learning Will Lose Relevance Due To Poor Benchmarks
Government Funding Graph RAG
Government Funding Graph RAG
Graph visualisation for UK Research and Innovation (UKRI) funding, including NetworkX, PyVis and LlamaIndex graph retrieval-augmented generation (RAG)
·towardsdatascience.com·
Government Funding Graph RAG
Knowledge graphs to teach LLMs how to reason like doctors
Knowledge graphs to teach LLMs how to reason like doctors
Knowledge graphs to teach LLMs how to reason like doctors! Many medical LLMs can give you the right answer but not the right reasoning, which is a problem for clinical trust. MedReason is the first factually-guided dataset to teach LLMs clinical Chain-of-Thought (CoT) reasoning using medical knowledge graphs.
1. Created 32,682 clinically validated QA explanations by linking symptoms, findings, and diagnoses through PrimeKG.
2. Generated CoT reasoning paths using GPT-4o, but retained only those that produced correct answers during post-hoc verification.
3. Validated with physicians across 7 specialties, with expert preference for MedReason's reasoning in 80–100% of cases.
4. Enabled interpretable, step-by-step answers, like linking difficulty walking to medulloblastoma via ataxia, preserving clinical fidelity throughout.
A couple of thoughts:
• Introducing dynamic KG updates (e.g., weekly ingests of new clinical trial data) could keep reasoning current with evolving medical knowledge.
• Could integrating visual KGs derived from DICOM metadata also help coherent reasoning across text and imaging inputs? We don't use DICOM metadata enough, tbh.
• Testing with adversarial probing (like edge-case clinical scenarios) and continuous alignment checks against updated evidence-based guidelines might also benefit model performance.
Here's the awesome work: https://lnkd.in/g42-PKMG Congrats to Juncheng Wu, Wenlong Deng, Xiaoxiao Li, Yuyin Zhou and co! I post my takes on the latest developments in health AI – connect with me to stay updated! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
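The "difficulty walking to medulloblastoma via ataxia" example amounts to path-finding in a knowledge graph. A minimal sketch of how a grounded reasoning chain can be extracted with BFS (the toy graph below is invented for illustration, not PrimeKG data):

```python
from collections import deque

# Toy medical knowledge graph (invented edges, not PrimeKG):
# each hop in a retrieved path becomes one verifiable CoT step.
KG = {
    "difficulty walking": ["ataxia"],
    "ataxia": ["cerebellar lesion"],
    "cerebellar lesion": ["medulloblastoma"],
}

def reasoning_path(graph, start, goal):
    """Shortest grounded chain from a finding to a diagnosis (BFS)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = reasoning_path(KG, "difficulty walking", "medulloblastoma")
```

Grounding each step in an explicit graph path is what lets post-hoc verification reject chains that reach the right answer by the wrong route.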
Knowledge graphs to teach LLMs how to reason like doctors
·linkedin.com·
Knowledge graphs to teach LLMs how to reason like doctors
The DITA Graph RAG project automates building your content corpus's knowledge graph (KG) from structured documents
The DITA Graph RAG project automates building your content corpus's knowledge graph (KG) from structured documents
Ready for a "WOW!" moment? The DITA Graph RAG project automates building your content corpus's knowledge graph (KG) from structured documents. What few know are the (literally) hundreds of ways you can use a KG of your structured documents in addition to RAG retrieval for AI, so I've compiled a compendium of 150 DITA graph queries: what each does, the SPARQL queries themselves, and the business value of each. These 150 are only a sampling. Try doing THAT with the likes of Markdown, AsciiDoc, reST, and other presentation-oriented document formats! 90 packed pages! https://lnkd.in/eY2kHcBe
The DITA Graph RAG project automates building your content corpus's knowledge graph (KG) from structured documents
·linkedin.com·
The DITA Graph RAG project automates building your content corpus's knowledge graph (KG) from structured documents
Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
🔎 Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
I recently dug into the NodeRAG paper (https://lnkd.in/gwaJHP94) and it was eye-opening, not just for how it performed but for what it revealed about the evolution of RAG (retrieval-augmented generation) systems. Some key takeaways for me:
👉 NaiveRAG is stronger than you think. Brute-force retrieval using simple vector search sometimes beats graph-based methods, especially when graph structures are too coarse or noisy.
👉 GraphRAG was an important step, but not the final answer. While it introduced knowledge graphs and community-based retrieval, GraphRAG sometimes underperformed NaiveRAG because its communities could be too coarse, leading to irrelevant retrieval.
👉 LightRAG reduced token cost, but at the expense of accuracy. By retrieving just 1-hop neighbors instead of traversing globally, LightRAG made retrieval cheaper — but often missed important multi-hop reasoning paths, losing precision.
👉 NodeRAG shows what mature RAG looks like. NodeRAG redesigned the graph structure itself: instead of homogeneous graphs, it uses heterogeneous graphs with fine-grained semantic units, entities, relationships, and high-level summaries — all as nodes. It combines dual search (exact match + semantic search) and shallow Personalized PageRank to precisely retrieve the most relevant context. The result?
🚀 Highest accuracy across multi-hop and open-ended benchmarks
🚀 Lowest token retrieval (i.e., lower inference costs)
🚀 Faster indexing and querying
🧠 Key takeaway: in the RAG world, it's no longer about retrieving more — it's about retrieving better. Fine-grained, explainable, efficient retrieval will define the next generation of RAG systems. If you're working on RAG architectures, NodeRAG's design principles are well worth studying! Would love to hear how others are thinking about the future of RAG systems.
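The "shallow Personalized PageRank" step is the easiest part of NodeRAG to sketch. Below is a hedged toy version in plain Python (the graph and node names are invented; the real system runs this over a heterogeneous graph alongside dual search): a few power-iteration steps with teleportation to the query's entry nodes rank context nodes by structural relevance.

```python
# Shallow Personalized PageRank over a toy retrieval graph.
# Graph content and node names are illustrative only.

graph = {
    "query_entity": ["semantic_unit_1", "relationship_1"],
    "semantic_unit_1": ["entity_A", "summary_1"],
    "relationship_1": ["entity_A"],
    "entity_A": ["summary_1"],
    "summary_1": [],
}

def personalized_pagerank(graph, seeds, alpha=0.15, iters=5):
    """A few power-iteration steps ('shallow' PPR), teleporting to the seeds."""
    nodes = list(graph)
    rank = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iters):
        nxt = {n: (alpha / len(seeds) if n in seeds else 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if out:
                share = (1 - alpha) * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:  # dangling node: return its mass to the seeds
                for s in seeds:
                    nxt[s] += (1 - alpha) * rank[n] / len(seeds)
        rank = nxt
    return rank

scores = personalized_pagerank(graph, seeds=["query_entity"])
top = sorted(scores, key=scores.get, reverse=True)
```

Capping the iteration count keeps retrieval "shallow" and cheap while still biasing scores toward the query's neighborhood; the top-ranked nodes would then be assembled into the prompt context.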
·linkedin.com·
Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
visualize graphs inside Kùzu
visualize graphs inside Kùzu
📣 Byte #21: For those of you who want to visualize their graphs inside Jupyter notebooks - we have an exciting development! We recently released an integration with yWorks, who extended their yFiles Jupyter Graphs widget to support Kuzu databases!
✅ Once a Kuzu graph is created, we can instantiate the yFiles Jupyter KuzuGraphWidget and use the `show_cypher` method to display a subgraph using regular Cypher queries.
✅ There are numerous custom layouts in the yFiles widget (tree, hierarchical, orthogonal, etc.). Give them a try! Here's an example of the tree layout, which is great for visualizing data like this with rich tree structure. In this example, we can see the two-degree mentors of Christian Christiansen, a Nobel prize-winning laureate.
✅ You can customize the appearance of nodes in the widget through the `add_node_configuration` method. This way, you can display what you're looking for as you iterate on your graph-building process.
✅ The Kuzu-yFiles integration is open source, and you can begin using it right away for your own interactive visualizations. Give it a try and share it with fellow graph enthusiasts!
pip install yfiles-jupyter-graphs-for-kuzu
Docs page: https://lnkd.in/g97uSKRe
GitHub repo: https://lnkd.in/gjA6ZjiF
·linkedin.com·
visualize graphs inside Kùzu
RDF-specific functionality for VS Code
RDF-specific functionality for VS Code
A little peek into our development of RDF-specific functionality for VS Code:
1️⃣ Autocompletion and hover help for RDF vocabularies. Some are stored within the VS Code plugin; the rest are queried from LOV, giving IntelliSense for the most prominent ontologies.
2️⃣ We can use the ontology of the vocabularies to show when something is not typed correctly.
3️⃣ SHACL has a SHACL meta-model. As we built a SHACL engine into VS Code, we can use this meta-model to hint when something is not done correctly (e.g., a string where a datatype is expected).
We plan to release the plugin to the marketplace in due time (we are still building more functionality). To give credit where it is due: https://lnkd.in/eFB2wKdz delivers important features like most of the syntax highlighting and auto-import of prefixes.
RDF-specific functionality for VS Code
·linkedin.com·
RDF-specific functionality for VS Code
if you believe that LLMs need graphs to reason, you are right and now you have evidence: Claude answers questions by building and traversing a graph
if you believe that LLMs need graphs to reason, you are right and now you have evidence: Claude answers questions by building and traversing a graph
To all the knowledge graph enthusiasts who've felt for a while that "graphs are the way to go" when it comes to enabling "intelligence": it was interesting to read Anthropic's "Tracing the thoughts of a large language model". If you believe that LLMs need graphs to reason, you are right, and now you have evidence: Claude answers questions by building and traversing a graph (in latent space) before translating it back to language: https://lnkd.in/eWFWwfN4
if you believe that LLMs need graphs to reason, you are right and now you have evidence: Claude answers questions by building and traversing a graph
·linkedin.com·
if you believe that LLMs need graphs to reason, you are right and now you have evidence: Claude answers questions by building and traversing a graph
European Parliament Open Data Portal : a SHACL-powered knowledge graph - Sparna Blog
European Parliament Open Data Portal : a SHACL-powered knowledge graph - Sparna Blog
A second use case Thomas wrote for Veronika Heimsbakk's upcoming book, SHACL for the Practitioner, covers Sparna's work for the European Parliament. From validation of the data in the knowledge graph to further projects of data integration and dissemination, many different usages of SHACL specifications were explored, and more exploratory usages of SHACL are foreseen!
·blog.sparna.fr·
European Parliament Open Data Portal : a SHACL-powered knowledge graph - Sparna Blog
What if your LLM is… a graph?
What if your LLM is… a graph?
What if your LLM is… a graph? A few days ago, Petar Veličković from Google DeepMind gave one of the most interesting and thought-provoking talks I've seen in a while, "Large Language Models as Graph Neural Networks". Once you start seeing an LLM as a graph neural network, many structural oddities suddenly fall into place. For instance, OpenAI currently recommends putting the instructions at the top of a long prompt. Why? Because of the geometry of attention graphs, LLMs are counter-intuitively biased in favor of the first tokens: those tokens travel continuously through each generation step, are internally repeated a lot, and end up "over-squashing" the later ones. Models use a variety of internal mechanisms like softmax to moderate this bias and better weight the distribution, but this is a late patch that cannot fix long-standing attention deficiencies, even more so for long contexts. The most interesting aspect of the talk from an applied perspective: graph/geometric representations directly affect accuracy and robustness. As generated sequences grow and chain complex reasoning steps, you cannot build a solid expert system when the attention graph has single points of failure. Or at least not without surfacing this information in the first place and providing more detailed accuracy metrics. I believe LLM explainability research is largely underexploited right now, despite reportedly being a key component of LLM devops in the big labs. If anything, this is literal "prompt engineering": seeing models as nearly physical structures under stress and providing the right feedback loops to make them more reliable.
What if your LLM is… a graph?
·linkedin.com·
What if your LLM is… a graph?
Spanner Graph: Graph databases reimagined
Spanner Graph: Graph databases reimagined
In case you missed the Spanner Graph session at Google Cloud Next'25, the recording is now available:
• Introduction to the graph space at 00:00 (https://lnkd.in/gsBFuDbt)
• Spanner Graph overview at 07:24 (https://lnkd.in/ggxrzFrU)
• How Snapchat builds its Identity Graph at 20:32 (https://lnkd.in/gFauYj-9)
• Quick demo of a recommendation engine at 26:27 (https://lnkd.in/gvH4AbRF)
• Recent launches at 32:00 (https://lnkd.in/gyCPq97t)
• Vision: a unified Google Cloud graph solution with BigQuery Graph at 35:09 (https://lnkd.in/gRdbSMeu)
I hope you like it! You can get started with Spanner Graph today: https://lnkd.in/gkwbGFbS
Pratibha Suryadevara, Spoorthi Ravi, Sailesh Krishnamurthy, Andi Gutmans, Christopher Taylor, Girish Baliga, Tomas Talius, Candice Chen, Yun Zhang, Weidong Yang, Matthieu Besozzi, Giulia Rotondo, Leo Meyerovich, Thomas Cook, Arthur Bigeard #googlecloud #googlecloudnext25 #graphdatabases #spannergraph
Spanner Graph: Graph databases reimagined
·linkedin.com·
Spanner Graph: Graph databases reimagined
The Dataverse Project: 750K FAIR Datasets and a Living Knowledge Graph
The Dataverse Project: 750K FAIR Datasets and a Living Knowledge Graph
"I'm Ukrainian and I'm wearing a suit, so no complaints about me from the Oval Office" - that's the start of my lecture about building Artificial Intelligence with Croissant ML in the Dataverse data platform, for the Bio x AI Hackathon kick-off event in Berlin. https://lnkd.in/ePYHCfJt
* 750,000+ FAIR datasets across the world, forcing innovation across the whole data landscape.
* A knowledge graph with 50M+ triples.
* AI-ready metadata exports.
* Qdrant as vector storage; Google, Meta, Mistral AI as LLM providers.
* Adrian Gschwend, Qlever as the fastest triple store for Dataverse knowledge graphs.
Multilingual, machine-readable, queryable scientific data at scale. If you're interested, you can also apply for the 2-month #BioAgentHack online hackathon:
• $125K+ prizes
• Mentorship from biotech and AI leaders
• Build alongside top open-science researchers & devs
More info: https://lnkd.in/eGhvaKdH
·linkedin.com·
The Dataverse Project: 750K FAIR Datasets and a Living Knowledge Graph