Found 23 bookmarks
AzureDataRetrievalAugmentedGenerationSamples/Python/CosmosDB-NoSQL_VectorSearch/CosmosDB-NoSQL-Vector_AzureOpenAI_Tutorial.ipynb at main · microsoft/AzureDataRetrievalAugmentedGenerationSamples
Samples to demonstrate pathways for Retrieval Augmented Generation (RAG) for Azure Data - microsoft/AzureDataRetrievalAugmentedGenerationSamples
·github.com·
Developing an LLM: Building, Training, Finetuning
REFERENCES:
1. Build an LLM from Scratch book: https://mng.bz/M96o
2. Build an LLM from Scratch repo: https://github.com/rasbt/LLMs-from-scratch
3. Slides: https://sebastianraschka.com/pdf/slides/2024-build-llms.pdf
4. LitGPT: https://github.com/Lightning-AI/litgpt
5. TinyLlama pretraining: https://lightning.ai/lightning-ai/studios/pretrain-llms-tinyllama-1-1b
DESCRIPTION: This video provides an overview of the three stages of developing an LLM: building, training, and finetuning. The focus is on explaining how LLMs work by describing how each step works.
OUTLINE:
00:00 – Using LLMs
02:50 – The stages of developing an LLM
05:26 – The dataset
10:15 – Generating multi-word outputs
12:30 – Tokenization
15:35 – Pretraining datasets
21:53 – LLM architecture
27:20 – Pretraining
35:21 – Classification finetuning
39:48 – Instruction finetuning
43:06 – Preference finetuning
46:04 – Evaluating LLMs
53:59 – Pretraining & finetuning rules of thumb
·youtube.com·
Building an AI Agent With Memory Using MongoDB, Fireworks AI, and LangChain | MongoDB
Create your own AI agent equipped with a sophisticated memory system. This guide provides a detailed walkthrough of leveraging the capabilities of Fireworks AI, MongoDB, and LangChain to construct an AI agent that not only responds intelligently but also remembers past interactions.
·mongodb.com·
How LLMs Work, Explained Without Math
I'm sure you agree that it has become impossible to ignore Generative AI (GenAI), as we are constantly bombarded with mainstream news about Large Language Models (LLMs). Very likely you have tried…
·blog.miguelgrinberg.com·
Prompt Engineering Roadmap - roadmap.sh
Step-by-step guide to learning prompt engineering. Resources and short descriptions are attached to the roadmap items, so everything you want to learn is in one place.
·roadmap.sh·
How we built Text-to-SQL at Pinterest
Adam Obeng | Data Scientist, Data Platform Science; J.C. Zhong | Tech Lead, Analytics Platform; Charlie Gu | Sr. Manager, Engineering
·medium.com·
Build an LLM RAG Chatbot With LangChain – Real Python
Large language models (LLMs) have taken the world by storm, demonstrating unprecedented capabilities in natural language tasks. In this step-by-step tutorial, you'll leverage LLMs to build your own retrieval-augmented generation (RAG) chatbot using synthetic data with LangChain and Neo4j.
·realpython.com·
The architecture of today's LLM applications
Here’s everything you need to know to build your first LLM app, plus problem spaces you can start exploring today.
·github.blog·