GenAI

416 bookmarks
RAG Developer Attention! 🔔 Docling is a new library that efficiently parses PDF, DOCX, and PPTX and exports them to Markdown and JSON. It supports advanced PDF understanding and seamless integration.
TL;DR: 🗂️ Parses numerous… — Philipp Schmid (@_philschmid)
·x.com·
MongoDB
Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.
·learn.mongodb.com·
Create a Neo4j GraphRAG Workflow Using LangChain and LangGraph
Create a Neo4j GraphRAG workflow using LangChain and LangGraph, combining graph queries, vector search, and dynamic prompting for advanced RAG.
In this situation, we need to parse the question into the desired number of subqueries, each of which performs a necessary task.
·neo4j.com·
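The subquery idea from the Neo4j article can be sketched in plain Python. This is an illustrative stand-in, not the article's code: in the actual workflow an LLM (via a LangChain structured-output chain) performs the decomposition, and the `SubQuery` shape and the split-on-"and" heuristic here are assumptions made purely for demonstration.

```python
from dataclasses import dataclass

@dataclass
class SubQuery:
    """One self-contained retrieval task derived from the user question."""
    text: str
    task: str  # e.g. "vector_search" or "graph_query"

def decompose_question(question: str) -> list[SubQuery]:
    """Naive placeholder decomposition: split a compound question on ' and '.

    A real GraphRAG workflow would ask an LLM to produce these subqueries
    and to route each one to a graph query or a vector search.
    """
    parts = [p.strip(" ?") for p in question.split(" and ") if p.strip(" ?")]
    return [SubQuery(text=p, task="vector_search") for p in parts]

subqueries = decompose_question(
    "Who founded Neo4j and which products use LangGraph?"
)
for sq in subqueries:
    print(sq.text)
```

Each resulting `SubQuery` can then be dispatched independently and the partial answers merged, which is the pattern LangGraph's stateful graph is used for in the article.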
Low-Hanging Fruit for RAG Search - jxnl.co
Explore low-hanging fruit strategies to enhance your RAG search systems and improve user experience with practical techniques.
·jxnl.co·
Security planning for LLM-based applications
This article discusses security planning for the sample Retail-mart application, showing its architecture and data-flow diagram.
·learn.microsoft.com·
What is Databricks Feature Serving? - Azure Databricks
Feature Serving provides structured data for RAG applications and makes data in the Databricks platform available to applications deployed outside of Databricks.
With Databricks Feature Serving, you can serve structured data for retrieval-augmented generation (RAG) applications, as well as features required by other applications, such as models served outside of Databricks or any application that needs features based on data in Unity Catalog.
·learn.microsoft.com·
Langfuse
Open source LLM engineering platform - LLM observability, metrics, evaluations, prompt management.
·langfuse.com·
The RAG Playbook - jxnl.co
Discover a systematic approach to enhance Retrieval-Augmented Generation (RAG) systems for improved performance and user satisfaction.
·jxnl.co·
A new wave of AI apps with agent-native UX is emerging, from Replit Agent to v0. Using LangGraph + CopilotKit's new CoAgents extension, developers can build agent-native React applications.
In CopilotKit's blog, see how to use: • Real-time state sharing to match user… — LangChain (@LangChainAI)
·x.com·
Lamini - Enterprise LLM Platform
Lamini is the enterprise LLM platform for existing software teams to quickly develop and control their own LLMs. Lamini has built-in best practices for specializing LLMs on billions of proprietary documents to improve performance, reduce hallucinations, offer citations, and ensure safety. Lamini can be installed on-premise or on clouds securely. Thanks to the partnership with AMD, Lamini is the only platform for running LLMs on AMD GPUs and scaling to thousands with confidence. Lamini is now used by Fortune 500 enterprises and top AI startups.
·lamini.ai·
Hierarchical Indices: Enhancing RAG Systems
Hello, AI and data professionals! Today, we’re exploring hierarchical indices — a method significantly improving information retrieval in…
·medium.com·
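The hierarchical-index technique the Medium post describes — retrieve at the document-summary level first, then search chunks only within the matched document — can be sketched as follows. This is a minimal toy: the word-overlap scoring function and the `corpus` layout are assumptions standing in for the embedding similarity and vector store a real system would use.

```python
# Hierarchical retrieval sketch: route a query through document-level
# summaries first, then search only the chunks of the best-matching document.
def overlap_score(query: str, text: str) -> int:
    """Toy relevance score: number of shared lowercase words.
    A real system would use embedding similarity instead."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def hierarchical_search(query: str, corpus: dict) -> str:
    # Level 1: pick the document whose summary best matches the query.
    best_doc = max(corpus, key=lambda d: overlap_score(query, corpus[d]["summary"]))
    # Level 2: search chunks only within that document.
    return max(corpus[best_doc]["chunks"], key=lambda c: overlap_score(query, c))

corpus = {
    "doc_a": {
        "summary": "billing invoices refunds payments",
        "chunks": ["refunds are issued within 5 days", "invoices are emailed monthly"],
    },
    "doc_b": {
        "summary": "api authentication tokens keys",
        "chunks": ["tokens expire after 24 hours", "keys are rotated quarterly"],
    },
}

print(hierarchical_search("when do api tokens expire", corpus))  # tokens expire after 24 hours
```

The payoff is that chunk-level search runs over a small per-document subset instead of the whole corpus, which is where the retrieval-quality and latency gains the post claims come from.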
GraphRAG Analysis, Part 2: Graph Creation and Retrieval vs Vector Database Retrieval - Blog | MLOps Community
GraphRAG (via Neo4j in this case) improves faithfulness (the RAGAS metric closest to precision) compared with vector-based RAG, but does not significantly lift the other retrieval-related RAGAS metrics; given the performance overhead, the accuracy benefit may not offer enough ROI to justify the hype.
·home.mlops.community·
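For intuition about the faithfulness metric the MLOps Community post leans on: RAGAS faithfulness is the fraction of claims in a generated answer that are supported by the retrieved context. The sketch below mimics only that shape of the score — RAGAS itself uses an LLM judge to extract and verify claims, whereas the keyword-overlap check and 0.5 threshold here are purely illustrative assumptions.

```python
def toy_faithfulness(answer_claims: list[str], context: str, threshold: float = 0.5) -> float:
    """Simplified stand-in for RAGAS faithfulness: the fraction of answer
    claims whose words are mostly found in the retrieved context.
    Real RAGAS verifies each claim with an LLM, not keyword overlap."""
    context_words = set(context.lower().split())

    def supported(claim: str) -> bool:
        words = claim.lower().split()
        return sum(w in context_words for w in words) / len(words) >= threshold

    # supported claims / all claims
    return sum(supported(c) for c in answer_claims) / len(answer_claims)

context = "neo4j is a graph database written in java"
claims = ["neo4j is a graph database", "neo4j was released in 2010"]
print(toy_faithfulness(claims, context))  # 0.5 — one of two claims is grounded
```

A higher faithfulness score means fewer unsupported claims in the answer, which is exactly the dimension where the post found graph-based retrieval beating the vector baseline.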