Chatbots with RAG: LangChain Full Walkthrough
In this video, we work through building a chatbot with Retrieval Augmented Generation (RAG) from start to finish. We use OpenAI's gpt-3.5-turbo Large Language Model (LLM) as the "engine", implemented via LangChain's ChatOpenAI class, OpenAI's text-embedding-ada-002 model for embeddings, and the Pinecone vector database as our knowledge base.
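The sketch below shows roughly how these pieces fit together; it is not the video's exact code (the full notebook is linked below). The index name "rag-chatbot", the environment variables, and the system prompt are placeholders, and LangChain's imports may differ by version.

```python
# Minimal RAG sketch, assuming an existing Pinecone index named "rag-chatbot"
# (placeholder name) and API keys set in environment variables.
import os
import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.schema import HumanMessage, SystemMessage

# Chat "engine" and embedding model
chat = ChatOpenAI(model_name="gpt-3.5-turbo")
embed = OpenAIEmbeddings(model="text-embedding-ada-002")

# Connect to the Pinecone knowledge base
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENV"],  # placeholder environment variable
)
vectorstore = Pinecone.from_existing_index("rag-chatbot", embed)

def rag_query(question: str) -> str:
    # Retrieve the most relevant chunks and prepend them to the prompt
    docs = vectorstore.similarity_search(question, k=3)
    context = "\n\n".join(doc.page_content for doc in docs)
    messages = [
        SystemMessage(content="Answer using only the context provided."),
        HumanMessage(content=f"Context:\n{context}\n\nQuestion: {question}"),
    ]
    return chat(messages).content

print(rag_query("What is retrieval augmented generation?"))
```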
📌 Code:
https://github.com/pinecone-io/examples/blob/master/learn/generation/langchain/rag-chatbot.ipynb
🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-signup/
👋🏼 AI Consulting:
https://aurelio.ai
👾 Discord:
https://discord.gg/c5QtDB9RAP
Twitter: https://twitter.com/jamescalam
LinkedIn: https://www.linkedin.com/in/jamescalam/
00:00 Chatbots with RAG
00:59 RAG Pipeline
02:35 Hallucinations in LLMs
04:08 LangChain ChatOpenAI Chatbot
09:11 Reducing LLM Hallucinations
13:37 Adding Context to Prompts
17:47 Building the Vector Database
25:14 Adding RAG to Chatbot
28:52 Testing the RAG Chatbot
32:56 Important Notes When Using RAG
#artificialintelligence #nlp #ai #langchain #openai #vectordb