EASIEST Way to Fine-Tune a LLM and Use It With Ollama
Get started with 10Web and their AI Website Builder API: https://10web.io/website-builder-api/?utm_source=YouTube&utm_medium=Influencer&utm_campaign=TechWithTim

Today, you'll learn how to fine-tune LLMs in Python for use in Ollama. I'll walk you through it step by step, give you all the code, and show you how to test it out.

DevLaunch is my mentorship program where I personally help developers go beyond tutorials, build real-world projects, and actually land jobs. No fluff. Just real accountability, proven strategies, and hands-on guidance. Learn more here - https://training.devlaunch.us/tim

⏳ Timestamps ⏳
00:00 | What is Fine-Tuning?
02:25 | Gathering Data
05:52 | Google Colab Setup
09:17 | Fine-Tuning with Unsloth
16:58 | Model Setup in Ollama

🎞 Video Resources 🎞
Code in this video: https://drive.google.com/drive/folders/1p4ZilsJsdxB5lH6ZBMdIEJBt0WVUMsDq?usp=sharing
Google Colab notebook: https://colab.research.google.com/drive/1NsRGmHVupulRzsq9iUTx8V8WgTSpO_04?usp=sharing

Hashtags
#Python #Ollama #LLM
·youtube.com·
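The video's final chapter ("Model Setup in Ollama") covers loading the fine-tuned model into Ollama. A minimal sketch of that step, assuming the fine-tune was exported as a GGUF file; the filename, template, and parameter values below are illustrative, not taken from the video:

```
# Modelfile — registers the exported GGUF weights with Ollama
FROM ./finetuned-model.gguf

# The prompt template should match the chat format used during fine-tuning
TEMPLATE """{{ .System }}
User: {{ .Prompt }}
Assistant: """

PARAMETER temperature 0.7
PARAMETER stop "User:"
```

With a Modelfile like this in place, `ollama create my-finetune -f Modelfile` builds the local model and `ollama run my-finetune` starts an interactive session with it.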
QWEN-3: EASIEST WAY TO FINE-TUNE WITH REASONING 🙌
Learn how to fine-tune Qwen-3-14B on your own data, using LoRA adapters, Unsloth's 4-bit quantization, and just 12 GB of VRAM, while preserving its chain-of-thought reasoning. I'll walk you through dataset prep, the key hyperparameters that prevent catastrophic forgetting, and the exact Colab notebook to get you running in minutes. Build a lightweight, reasoning-ready Qwen-3 model tailored to your project today!

LINKS:
https://qwenlm.github.io/blog/qwen3/
https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs
https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini
https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune
https://huggingface.co/datasets/mlabonne/FineTome-100k
https://docs.unsloth.ai/get-started/fine-tuning-guide
https://arxiv.org/html/2308.08747v5
https://heidloff.net/article/efficient-fine-tuning-lora/

NOTEBOOK: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb

Fine-tuning playlist: https://www.youtube.com/playlist?list=PLVEEucA9MYhPjLFhcIoNxw8FkN28-ixAn

Website: https://engineerprompt.ai/
RAG Beyond Basics course: https://prompt-s-site.thinkific.com/courses/rag

Let's Connect:
🦾 Discord: https://discord.com/invite/t4eYQRUcXB
☕ Buy me a coffee: https://ko-fi.com/promptengineering
🔴 Patreon: https://www.patreon.com/PromptEngineering
💼 Consulting: https://calendly.com/engineerprompt/consulting-call
📧 Business contact: engineerprompt@gmail.com
Become a member: http://tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: https://bit.ly/localGPT (use code PromptEngineering for 50% off)
Sign up for the newsletter, localGPT: https://tally.so/r/3y9bb0

Fine-Tuning Qwen-3 Models: Step-by-Step Guide
00:00 Introduction to Fine-Tuning Qwen-3
01:24 Understanding Catastrophic Forgetting and LoRA Adapters
03:06 Installing and Using Unsloth for Fine-Tuning
04:19 Code Walkthrough: Preparing Your Dataset
07:14 Combining Reasoning and Non-Reasoning Datasets
09:48 Prompt Templates and Fine-Tuning
16:13 Inference and Hyperparameter Settings
18:11 Saving and Loading LoRA Adapters
·youtube.com·
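The chapter "Combining Reasoning and Non-Reasoning Datasets" covers merging chain-of-thought and plain conversational examples into one training set under a shared chat template. A minimal sketch of that formatting step, assuming Qwen's ChatML-style template with `<think>` tags for reasoning; in a real run you would use the tokenizer's `apply_chat_template`, and the helper and examples below are illustrative only:

```python
# Render conversations into a ChatML-style template so reasoning and
# non-reasoning examples share a single "text" schema for training.

def to_chatml(messages):
    """Render a list of {role, content} dicts as ChatML-style text."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    return "".join(parts)

# A reasoning example: the assistant turn keeps its chain of thought
# inside <think> tags before the final answer.
reasoning_example = [
    {"role": "user", "content": "What is 12 * 13?"},
    {"role": "assistant",
     "content": "<think>12 * 13 = 12 * 10 + 12 * 3 = 156</think>\n156"},
]

# A non-reasoning (plain conversational) example.
chat_example = [
    {"role": "user", "content": "Say hello."},
    {"role": "assistant", "content": "Hello!"},
]

# Mix both kinds into one dataset with a uniform "text" field.
dataset = [{"text": to_chatml(conv)}
           for conv in (reasoning_example, chat_example)]
print(dataset[1]["text"])
```

Keeping a share of plain conversational data alongside the reasoning data is one of the mitigations the video discusses for catastrophic forgetting during fine-tuning.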