GenAI

352 bookmarks
MongoDB
Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.
·learn.mongodb.com·
Create a Neo4j GraphRAG Workflow Using LangChain and LangGraph
Create a Neo4j GraphRAG workflow using LangChain and LangGraph, combining graph queries, vector search, and dynamic prompting for advanced RAG.
In this situation, we need to parse the question into the required number of subqueries, each performing a necessary retrieval task.
·neo4j.com·
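The excerpt above describes splitting a question into subqueries before retrieval. A minimal sketch of that decomposition step, with the chat model stubbed out so it runs without credentials (the `fake_llm` function and the prompt wording are illustrative assumptions, not the article's code):

```python
# Sketch of the "parse the question into subqueries" step of a
# GraphRAG workflow. The model call is stubbed so the example runs
# standalone; in practice this would invoke a chat model.

def fake_llm(prompt: str) -> str:
    # Stand-in for a chat model: returns one subquery per line.
    return ("Which movies did Tom Hanks act in?\n"
            "Which of those movies won an award?")

def decompose_question(question: str, llm=fake_llm) -> list[str]:
    """Ask the model to split a question into the subqueries it needs."""
    prompt = ("Break the following question into the minimal set of "
              "subqueries, one per line:\n" + question)
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]

subqueries = decompose_question(
    "Which award-winning movies did Tom Hanks act in?")
# Each subquery can then be routed to a graph query or a vector search.
```

In the full workflow, each subquery would be dispatched to the retrieval node best suited to it (Cypher query vs. vector similarity), and the answers merged by a final prompt.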
Low-Hanging Fruit for RAG Search - jxnl.co
Explore low-hanging fruit strategies to enhance your RAG search systems and improve user experience with practical techniques.
·jxnl.co·
Security planning for LLM-based applications
This article discusses security planning for the sample Retail-mart application, showing its architecture and data flow diagram.
·learn.microsoft.com·
What is Databricks Feature Serving? - Azure Databricks
Feature Serving provides structured data for RAG applications and makes data in the Databricks platform available to applications deployed outside of Databricks.
With Databricks Feature Serving, you can serve structured data for retrieval-augmented generation (RAG) applications, as well as features required by other applications, such as models served outside of Databricks or any application that needs features based on data in Unity Catalog.
·learn.microsoft.com·
Langfuse
Open source LLM engineering platform - LLM observability, metrics, evaluations, prompt management.
·langfuse.com·
The RAG Playbook - jxnl.co
Discover a systematic approach to enhance Retrieval-Augmented Generation (RAG) systems for improved performance and user satisfaction.
·jxnl.co·
Lamini - Enterprise LLM Platform
Lamini is the enterprise LLM platform for existing software teams to quickly develop and control their own LLMs. Lamini has built-in best practices for specializing LLMs on billions of proprietary documents to improve performance, reduce hallucinations, offer citations, and ensure safety. Lamini can be installed on-premise or on clouds securely. Thanks to the partnership with AMD, Lamini is the only platform for running LLMs on AMD GPUs and scaling to thousands with confidence. Lamini is now used by Fortune 500 enterprises and top AI startups.
·lamini.ai·
Hierarchical Indices: Enhancing RAG Systems
Hello, AI and data professionals! Today, we’re exploring hierarchical indices, a method that significantly improves information retrieval in…
·medium.com·
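The entry above concerns hierarchical indices for RAG. A toy sketch of the common two-level scheme, search document summaries first, then chunks only within the top-scoring documents, using word overlap instead of embeddings so the example is self-contained (the corpus and scoring are illustrative assumptions):

```python
# Two-level hierarchical retrieval: rank summaries, then rank chunks
# inside the selected documents. Word overlap stands in for embedding
# similarity to keep the sketch dependency-free.

def overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

docs = {
    "doc1": {"summary": "guide to graph databases and cypher queries",
             "chunks": ["cypher matches graph patterns",
                        "nodes and relationships form a graph"]},
    "doc2": {"summary": "cooking pasta at home",
             "chunks": ["boil water with salt", "drain the pasta"]},
}

def hierarchical_search(query: str, top_docs: int = 1, top_chunks: int = 1):
    # Level 1: rank documents by summary relevance.
    ranked = sorted(docs, key=lambda d: overlap(query, docs[d]["summary"]),
                    reverse=True)[:top_docs]
    # Level 2: rank chunks only within the selected documents.
    chunks = [c for d in ranked for c in docs[d]["chunks"]]
    return sorted(chunks, key=lambda c: overlap(query, c),
                  reverse=True)[:top_chunks]

print(hierarchical_search("how do cypher graph queries work"))
```

Because the second pass only scores chunks from documents whose summaries matched, irrelevant chunks that happen to share surface terms with the query never enter the candidate set.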
GraphRAG Analysis, Part 2: Graph Creation and Retrieval vs Vector Database Retrieval - Blog | MLOps Community
GraphRAG (by way of Neo4j in this case) improves faithfulness (the RAGAS metric most similar to precision) compared to vector-based RAG, but does not significantly lift the other retrieval-related RAGAS metrics, and may not offer enough ROI to justify the hype around its accuracy benefits given the performance overhead.
·home.mlops.community·
UX for Agents, Part 2: Ambient
This is our second post focused on UX for agents. We discuss ambient background agents, which can handle multiple tasks at the same time, and how they can be used in your workflow.
·blog.langchain.dev·
UX for Agents, Part 1: Chat
At Sequoia’s AI Ascent conference in March, I talked about three limitations for agents: planning, UX, and memory. Check out that talk here. In this post I will dive deeper into UX for agents. Thanks to Nuno Campos, LangChain founding engineer, for many of the original thoughts and analogies.
·blog.langchain.dev·
Memory for agents
At Sequoia’s AI Ascent conference in March, I talked about three limitations for agents: planning, UX, and memory. Check out that talk here. In this post I will dive more into memory. See the previous post on planning here, and the previous posts on UX here, here, and here.
·blog.langchain.dev·