Introducing NLWeb: Bringing conversational interfaces directly to the web
Personal Knowledge Domain
Thought for the Day: What if we could encapsulate everything a person knows—their entire bubble of knowledge, what I’d call a Personal Knowledge Domain or, better, our Semantic Self—and represent it in an RDF graph? From that foundation, we could create Personal Agents that act on our behalf. Each of us would own our agent, with the ability to share or lease it for collaboration with other agents.
If we could make these agents secure, continuously updatable, and interoperable, what kind of power might we unlock for the human race?
Is this idea so far-fetched? It has solid grounding in knowledge representation, identity theory, and agent-based systems. It fits right in with current trends: AI assistants, the semantic web, Web3 identity, and digital twins. Yes, the technical and ethical hurdles are significant, but this could become the backbone of a future architecture for personalized AI and cooperative knowledge ecosystems.
Pieces of the puzzle already exist: Tim Berners-Lee’s Solid Project, digital twins for individuals, Personal AI platforms like personal.ai, Retrieval-Augmented Language Model agents (ReALM), Web3 identity efforts such as SpruceID, architectures such as MCP, and inter-agent protocols such as A2A. We see movement in human-centric knowledge graphs like FOAF and SIOC, learning analytics, personal learning environments, and LLM-graph hybrids.
What we still need is a unified architecture that:
* Employs RDF or similar for semantic richness
* Ensures user ownership and true portability
* Enables secure agent-to-agent collaboration
* Supports continuous updates and trust mechanisms
* Integrates with LLMs for natural, contextual reasoning
These are certainly not novel notions; for example:
* MyPDDL (My Personal Digital Life) and the PDS (Personal Data Store) concept from MIT and the EU’s DECODE project.
* The Human-Centric AI Group at Stanford and the Augmented Social Cognition group at PARC have also published research around lifelong personal agents and social memory systems.
However, one wonders if anyone is working on combining all of the ingredients into a fully baked cake, after which we can enjoy dessert while our personal agents do our bidding.
The new AI-powered Analytics stack is here, says Gartner’s Afraz Jaffri! A key element of that stack is an ontology-powered Semantic Layer
The new AI-powered Analytics stack is here, says Gartner’s Afraz Jaffri! A key element of that stack is an ontology-powered Semantic Layer that serves as the brain for AI agents to act on knowledge of your internal data and deliver timely, accurate, and hallucination-free insights!
#semanticlayer #knowledgegraphs #genai #decisionintelligence
Trends from KGC 2025
Last week I was fortunate to attend the Knowledge Graph Conference in NYC!
Here are a few trends that span multiple presentations and conversations.
- AI and LLM Integration: A major focus [again this year] was how LLMs can be used to enrich knowledge graphs and how knowledge graphs, in turn, can improve LLM outputs. This included using LLMs for entity extraction, verification, inference, and query generation. Many presentations demonstrated how grounding LLMs in knowledge graphs leads to more accurate, contextual, and explainable AI responses.
- Semantic Layers and Enterprise Knowledge: There was a strong emphasis on building semantic layers that act as gateways to structured, connected enterprise data. These layers facilitate data integration, governance, and more intelligent AI agents. Decentralized semantic data products (DPROD) were discussed as a framework for internal enterprise data ecosystems.
- From Data to Knowledge: Many speakers highlighted that AI is just the “tip of the iceberg” and the true power lies in the data beneath. Converting raw data into structured, connected knowledge was seen as crucial. The hidden costs of ignoring semantics were also discussed, emphasizing the need for consistent data preparation, cleansing, and governance.
- Ontology Management and Change: Managing changes and governance in ontologies was a recurring theme. Strategies such as modularization, version control, and semantic testing were recommended. The concept of “SemOps” (Semantic Operations) was discussed, paralleling DevOps for software development.
- Practical Tools and Demos: The conference included numerous demos of tools and platforms for building, querying, and visualizing knowledge graphs. These ranged from embedded databases like KuzuDB and RDFox to conversational AI interfaces for KGs, such as those from Metaphacts and Stardog.
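To make the first trend concrete, here is a toy sketch of what “grounding” an LLM in a knowledge graph can mean in practice: retrieve facts about the entities in a question and constrain the prompt to them. The tiny triple store, entity list, and prompt format are all invented for illustration; a real pipeline would use a KG query language and an entity-linking step.

```python
# Minimal sketch of KG-grounded prompting: the LLM is only allowed to
# answer from facts retrieved out of the graph, which is what makes the
# responses more accurate and explainable.

TRIPLES = [
    ("KuzuDB", "isA", "embedded graph database"),
    ("RDFox", "isA", "in-memory RDF reasoner"),
    ("RDFox", "supports", "datalog rules"),
]

def facts_about(entity):
    """Return all triples whose subject matches the entity."""
    return [t for t in TRIPLES if t[0] == entity]

def grounded_prompt(question, entities):
    """Build an LLM prompt whose context is restricted to KG facts."""
    context = "\n".join(f"{s} {p} {o}." for e in entities for s, p, o in facts_about(e))
    return f"Answer using only these facts:\n{context}\n\nQ: {question}"

print(grounded_prompt("What is RDFox?", ["RDFox"]))
```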
I especially enjoyed catching up with the Semantic Arts team (Mark Wallace, Dave McComb and Steve Case), talking Gist Ontology and SemOps. I also appreciated the detailed Neptune Q&A I had with Brian O'Keefe, the vision of Ora Lassila, and a chance first meeting with Adrian Gschwend, where we connected on LinkML and Elmo as a means to help with bidirectional dataflows. I was so excited by these conversations that I planned to have two team members join me in June at the Data Centric Architecture Workshop Forum: https://www.dcaforum.com/
On the different roles of ontologies (& machine learning)
In a previous post I touched on how ontologies are foundational to many data activities, yet remain "obscure". As a consequence, the different roles of ontologies are not always known among the people who use them, as they may focus only on the aspects relevant to specific use cases.
Is developing an ontology from an LLM really feasible?
It seems the answer to whether an LLM would be able to replace the whole text-to-ontology pipeline is a resounding ‘no’. If you’re one of those who think that should be (or even is?) a ‘yes’: why, and did you do the experiments that show it’s as good as the alternatives (with the results available)? And I mean a proper ontology, not a knowledge graph with numerous duplications and contradictions and lacking constraints.
For a few gentle considerations (and pointers to longer arguments) and a summary figure of processes the LLM supposedly would be replacing: see https://lnkd.in/dG_Xsv_6
Maria Keet
Agentic AI and Knowledge Graph definitions
In the last few weeks, I’ve been diving into the world of #AgenticAI, and I found quite a mess with definitions, which creates a lot of misunderstanding…
coming around to the idea of ontologies
I'm coming around to the idea of ontologies. My experience with entity extraction with LLMs has been inconsistent at best. Even running the same request with…
Can Ontologies be seen as General Ledger for AI?
Can Ontologies be seen as General Ledger for AI? Could that be a good way to audit AI systems delivering critical business outcomes? In my quest to develop a…
Steps to generate text-to-SQL through an ontology instead of an LLM
I want to share the actual steps we’re using to generate text-to-SQL through an ontology instead of an LLM [explained with a library analogy]: 1…
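Since the post’s actual steps are truncated above, here is only a hypothetical sketch of the general idea: an explicit ontology-to-schema mapping lets SQL be assembled deterministically from resolved business terms, rather than free-generated by an LLM. The ontology, table names, and columns are all invented (keeping the library analogy).

```python
# Hypothetical sketch of ontology-driven text-to-SQL. Business concepts
# and attributes are resolved through a hand-maintained mapping to the
# physical schema, so the SQL is assembled by template, not hallucinated.

ONTOLOGY = {
    # concept -> (table, {attribute -> column})
    "book": ("books", {"title": "title", "author": "author_name"}),
    "member": ("members", {"name": "full_name"}),
}

def to_sql(concept, attribute, value):
    """Deterministically build a lookup query from resolved ontology terms."""
    table, columns = ONTOLOGY[concept]
    column = columns[attribute]
    return f"SELECT * FROM {table} WHERE {column} = '{value}'"

print(to_sql("book", "author", "Ursula K. Le Guin"))
# -> SELECT * FROM books WHERE author_name = 'Ursula K. Le Guin'
```

The parsing step (mapping a natural-language question to `concept`/`attribute`/`value`) is where an LLM or a controlled grammar would sit; the point is that the SQL itself never leaves the ontology’s rails.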
Taxonomies, Ontologies, and Semantics in Tech Comm’s World of AI
Technical communicators: understand these AI skills to form your new portfolio: terminology management, taxonomy, ontology, semantic layer, knowledge graph and knowledge management in general
Move over, deep learning: Symbolica’s structured approach could transform AI
Symbolica's groundbreaking AI approach using advanced math promises human-like reasoning, unparalleled transparency, and efficiency in a fraction of the data.
Croissant (JSON-LD) data format
Have you tried Croissant? If not, you are missing out. Using LLMs to generate knowledge graphs is an exciting area of exploration. My colleague Jesús Barrasa…
The Era of Semantic Decoding
Recent work demonstrated great promise in the idea of orchestrating collaborations between LLMs, human input, and various tools to address the inherent limitations of LLMs. We propose a novel perspective called semantic decoding, which frames these collaborative processes as optimization procedures in semantic space. Specifically, we conceptualize LLMs as semantic processors that manipulate meaningful pieces of information that we call semantic tokens (known thoughts). LLMs are among a large pool of other semantic processors, including humans and tools, such as search engines or code executors. Collectively, semantic processors engage in dynamic exchanges of semantic tokens to progressively construct high-utility outputs.
We refer to these orchestrated interactions among semantic processors, optimizing and searching in semantic space, as semantic decoding algorithms. This concept draws a direct parallel to the well-studied problem of syntactic decoding, which involves crafting algorithms to best exploit auto-regressive language models for extracting high-utility sequences of syntactic tokens. By focusing on the semantic level and disregarding syntactic details, we gain a fresh perspective on the engineering of AI systems, enabling us to imagine systems with much greater complexity and capabilities.
In this position paper, we formalize the transition from syntactic to semantic tokens as well as the analogy between syntactic and semantic decoding. Subsequently, we explore the possibilities of optimizing within the space of semantic tokens via semantic decoding algorithms. We conclude with a list of research opportunities and questions arising from this fresh perspective. The semantic decoding perspective offers a powerful abstraction for search and optimization directly in the space of meaningful concepts, with semantic tokens as the fundamental units of a new type of computation.
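The orchestration the abstract describes can be caricatured in a few lines: treat each semantic processor as a function over semantic tokens and search greedily for high-utility outputs. Everything below (the processors, the utility function, the greedy loop) is an invented toy to convey the framing, not the paper’s formalism.

```python
# Toy "semantic decoding" loop: several semantic processors (plain
# functions standing in for an LLM drafter and a verification tool)
# transform a semantic token, and an orchestrator greedily keeps the
# highest-utility candidate at each step.

def draft(token):   # stand-in for an LLM proposing a refinement
    return token + " refined"

def verify(token):  # stand-in for a tool, e.g. a code executor
    return token + " verified"

PROCESSORS = [draft, verify]

def utility(token):  # toy utility: longer, verified tokens score higher
    return len(token) + (10 if "verified" in token else 0)

def semantic_decode(token, steps=3):
    """Greedy search in 'semantic space': apply whichever processor most improves utility."""
    for _ in range(steps):
        token = max((p(token) for p in PROCESSORS), key=utility)
    return token

print(semantic_decode("initial idea"))
```

The analogy to syntactic decoding is direct: beam search over syntactic tokens becomes search over these larger, meaningful units.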
A word of caution from Netflix against blindly using cosine similarity as a measure of semantic similarity
A word of caution from Netflix against blindly using cosine similarity as a measure of semantic similarity: https://lnkd.in/gX3tR4YK They study linear matrix…
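The core observation (as I understand the paper) can be reproduced in a few lines of NumPy: in a linear factorization, rescaling the latent dimensions leaves the model’s predictions untouched while changing cosine similarities between embeddings arbitrarily, so those similarities are not uniquely determined by the fitted model.

```python
import numpy as np

# In a linear factorization model, the predictions U @ V.T are invariant
# under rescaling U -> U @ D, V -> V @ inv(D) for any invertible diagonal
# D, yet cosine similarities between rows of U change. Hence cosine
# similarity on the learned embeddings can be essentially arbitrary.

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 3))   # e.g. user embeddings
V = rng.normal(size=(5, 3))   # e.g. item embeddings

D = np.diag([10.0, 1.0, 0.1])          # an arbitrary diagonal rescaling
U2, V2 = U @ D, V @ np.linalg.inv(D)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

assert np.allclose(U @ V.T, U2 @ V2.T)    # predictions are identical...
print(cos(U[0], U[1]), cos(U2[0], U2[1]))  # ...but cosine similarities differ
```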