GraphNews

4339 bookmarks
Custom sorting
LLMs and Neurosymbolic reasoning
When people discuss how LLMs "reason," you’ll often hear that they rely on transduction rather than abduction. It sounds technical, but the distinction matters - especially as we start wiring LLMs into systems that are supposed to think.

🔵 Transduction is case-to-case reasoning. It doesn’t build theories; it draws fuzzy connections based on resemblance. Think: “This metal conducts electricity, and that one looks similar - so maybe it does too.”

🔵 Abduction, by contrast, is about generating explanations. It’s what scientists (and detectives) do: “This metal is conducting - maybe it contains free electrons. That would explain it.”

The claim is that LLMs operate more like transducers - navigating high-dimensional spaces of statistical similarity rather than forming crisp generalisations. But this isn’t the whole picture. In practice, it seems to me that LLMs also perform a kind of induction - abstracting general patterns from oceans of text. They learn the shape of ideas and apply them in novel ways. That’s closer to “All metals of this type have conducted in the past, so this one probably will.”

Now add tools to the mix - code execution, web search, Elon Musk's tweet history 😉 - and LLMs start doing something even more interesting: program search and synthesis. It's messy, probabilistic, and far from principled or rigorous. But it’s inching toward a form of abductive reasoning.

Which brings us to a more principled approach for reasoning within an enterprise domain: the neuro-symbolic loop - a collaboration between large language models and knowledge graphs. The graph provides structure: formal semantics, ontologies, logic, and depth. The LLM brings intuition: flexible inference, linguistic creativity, and breadth. One grounds. The other leaps.

💡 The real breakthrough could come when the grounding isn’t just factual, but conceptual - when the ontology encodes clean, meaningful generalisations. That’s when the LLM’s leaps wouldn’t just reach further - they’d rise higher, landing on novel ideas that hold up under formal scrutiny.

💡 So where do metals fit into this framing?
🔵 Transduction: “This metal conducts. That one looks the same - it probably does too.”
🔵 Induction: “I’ve tested ten of these. All conducted. It’s probably a rule.”
🔵 Abduction: “This metal is conducting. It shares properties with the ‘conductive alloy’ class - especially composition and crystal structure. The best explanation is a sea of free electrons.”

LLMs, in isolation, are limited in their ability to perform structured abduction. But when embedded in a system that includes a formal ontology, logical reasoning, and external tools, they can begin to participate in richer forms of reasoning. These hybrid systems are still far from principled scientific reasoners - but they hint at a path forward: a more integrated and disciplined neuro-symbolic architecture that moves beyond mere pattern completion.
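The grounding step of the neuro-symbolic loop described above can be sketched in a few lines. This is a deliberately toy illustration, not any real system: the ontology, class names, and properties are all invented for the example. An "LLM" proposes candidate explanations; the ontology decides which candidate the observed metal actually satisfies.

```python
# Toy sketch of ontology-grounded abduction. All names here are
# illustrative placeholders, not drawn from a real ontology.

# Class name -> property values required for membership.
ONTOLOGY = {
    "conductive_alloy": {"bonding": "metallic", "free_electrons": True},
    "ionic_solid": {"bonding": "ionic", "free_electrons": False},
}

def best_explanation(observed, candidates):
    """Return the first candidate class whose definition the observation
    satisfies - the 'grounding' check applied to the LLM's leaps."""
    for cls in candidates:
        required = ONTOLOGY[cls]
        if all(observed.get(k) == v for k, v in required.items()):
            return cls
    return None  # no candidate survives formal scrutiny

# A conducting metal; candidates in the order an LLM might propose them.
sample = {"bonding": "metallic", "free_electrons": True, "conducts": True}
print(best_explanation(sample, ["ionic_solid", "conductive_alloy"]))
# prints conductive_alloy
```

A real system would replace the dictionary with an ontology plus a reasoner (e.g. OWL class expressions and a description-logic engine), but the division of labour is the same: the LLM generates hypotheses, the symbolic layer filters them.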
·linkedin.com·
S&P Global Unlocks the Future of AI-driven insights with AI-Ready Metadata on S&P Global Marketplace
🚀 When I shared our 2025 goals for the Enterprise Data Organization, one of the things I alluded to was machine-readable column-level metadata. Let’s unpack what that means - and why it matters.

🔍 What: For datasets we deliver via modern cloud distribution, we now provide human- and machine-readable metadata at the column level. Each column has an immutable URL (no auth, no CAPTCHA) that hosts name/value metadata - synonyms, units of measure, descriptions, and more - in multiple human languages. It’s semantic context that goes far beyond what a traditional data dictionary can convey. We can't embed it, so we link to it.

💡 Why: Metadata is foundational to agentic, precise consumption of structured data. Our customers are investing in semantic layers, data catalogs, and knowledge graphs - and they shouldn’t have to copy-paste from a PDF to get there. Use curl, Python, Bash - whatever works - to automate ingestion. (We support content negotiation and conditional GETs.)

🧠 Under the hood? It’s RDF. Love it or hate it, you don’t need to engage with the plumbing unless you want to.

✨ To our knowledge, this hasn’t been done before. This is our MVP. We’re putting it out there to learn what works - and what doesn’t. It’s vendor-neutral, web-based, and designed to scale across:
📊 Breadth of datasets across S&P
🧬 Depth of metadata
🔗 Choice of linking venue

🙏 It took a village to make this happen. I can’t name everyone without writing a book, but I want to thank our executive leadership for the trust and support to go build this. Let us know what you think!
🔗 https://lnkd.in/gbe3NApH
Martina Cheung, Saugata Saha, Swamy Kocherlakota, Dave Ernsberger, Mark Eramo, Frank Tarsillo, Warren Breakstone, Hamish B., Erica Robeen, Laura Miller, Justine S Iverson
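The content negotiation and conditional GETs mentioned in the post can be automated in a few lines of standard-library Python. This is a sketch only: the column URL, ETag value, and media type are placeholders, not real S&P Global endpoints.

```python
# Sketch: fetch column-level metadata with content negotiation and a
# conditional GET. URL and ETag are hypothetical examples.
import urllib.request

def metadata_request_headers(etag=None, accept="text/turtle"):
    """Headers for fetching column metadata: request an RDF serialization
    via content negotiation, and skip the download when our cached copy
    is still current (the server answers 304 Not Modified)."""
    headers = {"Accept": accept}
    if etag is not None:
        headers["If-None-Match"] = etag  # conditional GET
    return headers

# Hypothetical column URL; a real one would come from the dataset itself.
url = "https://example.com/metadata/columns/close_price"
req = urllib.request.Request(url, headers=metadata_request_headers(etag='"v1"'))
# urllib.request.urlopen(req) would perform the actual fetch (omitted here).
```

Swapping `accept` for another RDF media type (say, `application/ld+json`) is how a client would negotiate a different serialization, assuming the server offers one.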
·linkedin.com·
TigerGraph Accelerates Enterprise AI Infrastructure Innovation with Strategic Investment from Cuadrilla Capital - TigerGraph
TigerGraph secures a strategic investment from Cuadrilla Capital to fuel innovation in enterprise AI infrastructure and graph database technology, delivering advanced solutions for fraud detection, customer 360, supply chain optimization, and real-time data analytics.
·tigergraph.com·
Should ontologies be treated as organizational resources for semantic capabilities?
💡 Should ontologies be treated as organizational resources for semantic capabilities?

More and more organizations are investing in data platforms, modeling tools, and integration frameworks. But one key capability is often underused or misunderstood: ontologies as semantic infrastructure. While databases handle facts and BI platforms handle queries, ontologies structure meaning. They define what things are, not just what data says.

When treated as living organizational resources, ontologies can bring:
🔹 Shared understanding across silos
🔹 Reasoning and inference beyond data queries
🔹 Semantic integration of diverse systems
🔹 Clarity and coherence in enterprise models

But here’s the challenge - ontologies don’t operate in isolation. They must be positioned alongside:
🔸 Data-oriented technologies (RDF, RDF-star, quad stores) that track facts and provenance
🔸 Enterprise modeling tools (e.g., ArchiMate) that describe systems and views
🔸 Exploratory approaches (like semantic cartography) that support emergence over constraint

These layers each come with their own logic - epistemic vs. ontologic, structural vs. operational, contextual vs. formal.

✅ Building semantic capabilities requires aligning all these dimensions.
✅ It demands governance, tooling, and a culture of collaboration between ontologists, data managers, architects, and domain experts.
✅ And it opens the door to richer insight, smarter automation, and more agile knowledge flows.

🔍 With projects like ArchiCG (semantic interactive cartography), I aim to explore how we can visually navigate this landscape - not constrained by predefined viewpoints, but guided by logic, meaning, and emergent perspectives.

What do you think? Are ontologies ready to take their place as core infrastructure in your organization?
·linkedin.com·
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
Book promotion, because this one is worth it: Agentic AI at its best. This masterpiece was published by Salvatore Raieli and Gabriele Iuculano, it is available for orders from today, and it's already a Bestseller! While many resources focus on LLMs or basic agentic workflows, what makes this book stand out is its deep dive into grounding LLMs with real-world data and action through the powerful combination of Retrieval-Augmented Generation (RAG) and Knowledge Graphs. This isn't just about building agents; it's about building AI that reasons, retrieves accurate information, and acts autonomously by leveraging structured knowledge alongside advanced LLMs. The book offers a practical roadmap, packed with concrete Python examples and real-world case studies, guiding you from concept to deployment of intelligent, robust, and hallucination-minimized AI solutions, even orchestrating multi-agent systems. Order your copy here - https://packt.link/RpzGM #AI #LLMs #KnowledgeGraphs #AIAgents #RAG #GenerativeAI #MachineLearning
·linkedin.com·
Semantics in use (part 3): an interview with Saritha V.Kuriakose, VP Research Data Management at Novo Nordisk | LinkedIn
We continue our series of examples of the use of semantics and ontologies across organizations with an interview with Saritha V. Kuriakose from Novo Nordisk, talking about the pervasive and foundational use of ontologies in pharmaceutical R&D.
·linkedin.com·
how Knowledge Graphs could be used to provide context
📚 Definition number 0️⃣0️⃣0️⃣0️⃣0️⃣1️⃣0️⃣1️⃣

🌊 It is pretty easy to see that context has been making really big waves recently. Not long ago, there were announcements about the Model Context Protocol (MCP). There is even a saying that Prompt Engineers have changed their job titles to Context Engineers. 😅

🔔 In my recent posts about definitions, I tried to show how Knowledge Graphs can be used to provide context, as they are built with two types of real definitions expressed in a formalised language. Next, I explained how objective and linguistic nominal definitions in Natural Language can be linked to models of external things encoded in a formal way, to increase human-machine semantic interoperability.

🔄 Quick recap: in KGs, objective definitions define objects external to the language, and linguistic definitions relate words to other expressions of that language. This holds regardless of the nature of the language under consideration - formalised or natural. Objective definitions are real definitions when they uniquely specify certain objects via their characteristics - again, regardless of the nature of the language. Not all objective definitions are real definitions, and no linguistic definitions are real definitions.

💡 Classical objective definitions are an example of clear definitions. Another type of real definition that can be encountered in either a formalised or a Natural Language is the contextual definition. An example of such a definition is: ‘The logarithm of a number A with base B is the number C such that B to the power of C is equal to A.’ This familiar mathematical definition can, of course, be expressed in a formalised language as well. This makes Knowledge Graphs capable of providing context via contextual definitions, in addition to the other types of definitions covered so far.

🤷🏼‍♂️ At the same time, another question appears: how is it possible to keep track of all these different types of definitions, and always know which one is which for a given modelled object? In my previous posts, I showed how definitions can be linked via ‘rdfs:comment’ and ‘skos:definition’. However, that is still pretty generic. It is possible to extend the base vocabulary provided by SKOS and add custom properties for this purpose. Quick reminder: a property in a KG corresponds to a relation between two other objects. Properties that allow adding multiple types of definitions in Natural Language can be created as instances of owl:AnnotationProperty, as follows:

namespace:contextualDefinition a owl:AnnotationProperty .

After that, this new annotation property can be used in the same way as the more generic properties for linking definitions to objects in KGs. 🤓

🏄‍♂️ The above shows that getting context right can be a tricky endeavour indeed. In my next posts, I will try to describe some other types of definitions, so they can also be added to KGs. If you'd like to level up your KG in this way, please stay tuned. 🎸😎🤙🏻 #ai #knowledgegraphs #definitions
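The annotation-property pattern from the post can be fleshed out in Turtle. This is a minimal sketch under stated assumptions: the `ex:` namespace and the subproperty link to `skos:definition` are illustrative choices, not prescribed by the post.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/vocab#> .

# Custom annotation property for contextual definitions, declared as a
# specialisation of skos:definition so generic SKOS tooling still sees it.
ex:contextualDefinition a owl:AnnotationProperty ;
    rdfs:subPropertyOf skos:definition ;
    rdfs:label "contextual definition"@en .

# Using it on a modelled object:
ex:Logarithm ex:contextualDefinition
    "The logarithm of a number A with base B is the number C such that B to the power of C is equal to A."@en .
```

Declaring the custom property as an `rdfs:subPropertyOf skos:definition` is one way to keep track of which definition is which: a query for `skos:definition` (with subproperty reasoning) still finds it, while a query for `ex:contextualDefinition` selects only the contextual ones.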
·linkedin.com·
Confession: until last week, I thought graphs were new
Confession: until last week, I thought graphs were new. I shared what I thought was a fresh idea: that enterprise structured data should be modeled as a graph to make it digestible for today’s AI, with its short context windows and text-based architecture.

My post attracted graph leaders with roots in the Semantic Web. I learned that ontology was the big idea when the Semantic Web launched in 2001, and fell out of fashion by 2008. Then Google brought it back in 2012 - rebranded as the “knowledge graph” - and graphs became a mainstay in SEO. We’re living through the third wave of graphs, now driven by the need to feed data to AI agents.

Graphs are indeed not new. But there’s no way I - or most enterprise data leaders of my generation - would have known that. I started my data career in 2013, at peak love for data lakes and disregard for schemas. I hadn't met a single ontologist until three months ago (hi Madonnalisa C.!). And I deal with tables in the enterprise domain, not documents in the public domain. These are two different worlds. Or are they?..

This 1999 quote from Tim Berners-Lee, the father of the Semantic Web, hit me: “I have a dream for the Web [in which computers] become capable of analyzing all the data... When it [emerges], the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines... The ‘intelligent agents’... will finally materialize.”

We don't talk about this enough - but we are all one:
➡️ Semantic Web folks
➡️ Enterprise data teams
➡️ SEO and content teams
➡️ data providers like Scale AI and Surge AI

In the grand scheme of things, we are all just feeding data into computers, hoping to realize Tim’s dream. That’s when my initial shame turned into wonder. What if we all reimagined our jobs by learning from each other? What if enterprise data teams:
▶️ Prioritized algorithmic discoverability of their data assets, like SEOs do?
▶️ Pursued missing data that improves AI outcomes, like Scale AI does?
▶️ Took ownership of all data - not just the tables?

Would we be the generation that finally realizes the dream? What a time to be alive.
·linkedin.com·