Tutorial

Developing an LLM: Building, Training, Finetuning
REFERENCES:
1. Build an LLM from Scratch (book): https://mng.bz/M96o
2. Build an LLM from Scratch (repo): https://github.com/rasbt/LLMs-from-scratch
3. Slides: https://sebastianraschka.com/pdf/slides/2024-build-llms.pdf
4. LitGPT: https://github.com/Lightning-AI/litgpt
5. TinyLlama pretraining: https://lightning.ai/lightning-ai/studios/pretrain-llms-tinyllama-1-1b

DESCRIPTION:
This video gives an overview of the three stages of developing an LLM: Building, Training, and Finetuning. The focus is on explaining how LLMs work by walking through each stage in turn (a minimal code sketch of the generation loop follows the outline below).

OUTLINE:
00:00 – Using LLMs
02:50 – The stages of developing an LLM
05:26 – The dataset
10:15 – Generating multi-word outputs
12:30 – Tokenization
15:35 – Pretraining datasets
21:53 – LLM architecture
27:20 – Pretraining
35:21 – Classification finetuning
39:48 – Instruction finetuning
43:06 – Preference finetuning
46:04 – Evaluating LLMs
53:59 – Pretraining & finetuning rules of thumb
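
The "Generating multi-word outputs" step in the outline comes down to a simple loop: the model scores every token in the vocabulary, the highest-scoring (or sampled) token is appended to the context, and the extended context is fed back in. The sketch below is a minimal, self-contained illustration of that loop, not code from the video or the book; the character-level encode/decode functions and toy_logits are hypothetical stand-ins for a real BPE tokenizer and a trained GPT-style model.

# Minimal sketch of the next-token generation loop (illustrative only).
import random

VOCAB = list("abcdefghijklmnopqrstuvwxyz ")   # toy character vocabulary
STOI = {ch: i for i, ch in enumerate(VOCAB)}  # string -> token id
ITOS = {i: ch for ch, i in STOI.items()}      # token id -> string

def encode(text: str) -> list[int]:
    # Tokenize: map each character to an integer id (stand-in for BPE).
    return [STOI[ch] for ch in text if ch in STOI]

def decode(ids: list[int]) -> str:
    # Detokenize: map token ids back to characters.
    return "".join(ITOS[i] for i in ids)

def toy_logits(context: list[int]) -> list[float]:
    # Hypothetical stand-in for a trained model's forward pass:
    # returns one unnormalized score per vocabulary entry.
    random.seed(sum(context))                 # deterministic per context
    return [random.gauss(0.0, 1.0) for _ in VOCAB]

def generate(prompt: str, max_new_tokens: int = 10) -> str:
    ids = encode(prompt)
    for _ in range(max_new_tokens):
        logits = toy_logits(ids)              # score every possible next token
        next_id = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
        ids.append(next_id)                   # append and feed the context back in
    return decode(ids)

print(generate("every effort moves"))

Running this only produces deterministic gibberish, since toy_logits is random; the point is the shape of the loop, which pretraining (27:20 in the outline) trains the real model to perform well.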
youtube.com