Zero to Hero LLMs with M3 Max BEAST
M3 Max is a Machine Learning BEAST. So I took it for a spin with some LLMs running locally.
Temperature/fan on your Mac: https://www.tunabellysoftware.com/tgpro/index.php?fpr=alex (affiliate link)
Run Windows on a Mac: https://prf.hn/click/camref:1100libNI (affiliate)
Use COUPON: ZISKIND20
🛒 Gear Links 🛒
* 🍏💥 New MacBook Air M1 Deal: https://amzn.to/3S59ID8
* 💻🔄 Renewed MacBook Air M1 Deal: https://amzn.to/45K1Gmk
* 🎧⚡ Great 40Gbps TB4 enclosure: https://amzn.to/3JNwBGW
* 🛠️🚀 My NVMe SSD: https://amzn.to/3YLEySo
* 📦🎮 My gear: https://www.amazon.com/shop/alexziskind
🎥 Related Videos 🎥
* 🌗 RAM torture test on Mac - https://youtu.be/l3zIwPgan7M
* 🛠️ Set up Conda on Mac - https://youtu.be/2Acht_5_HTo
* 👨‍💻 15" MacBook Air | developer's dream - https://youtu.be/A1IOZUCTOkM
* 🤖 INSANE Machine Learning on Neural Engine - https://youtu.be/Y2FOUg_jo7k
* 💻 M2 MacBook Air and temps - https://youtu.be/R7F-TxEukdY
* 💰 This is what spending more on a MacBook Pro gets you - https://youtu.be/iLHrYuQjKPU
* 🛠️ Developer productivity Playlist - https://www.youtube.com/playlist?list=PLPwbI_iIX3aQCRdFGM7j4TY_7STfv2aXX
Timestamps
00:00 Intro
00:40 Build from scratch - manual
09:44 Bonus script - automated
11:21 LM Studio - one handed
Repo
https://github.com/ggerganov/llama.cpp/
Commands
# assuming you already have a conda environment set up and dev tools installed (see videos above for instructions)
*Part 1 - manual*
brew install git-lfs
git lfs install
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt
make
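On Apple Silicon, make should build with Metal GPU support enabled by default in this era of llama.cpp; if inference looks CPU-bound, the Makefile flag below (present at the time) forces it:
# rebuild with Metal offload explicitly on (a no-op if it was already enabled)
make clean
LLAMA_METAL=1 make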
git clone https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B openhermes-7b-v2.5
mv openhermes-7b-v2.5 models/
python3 convert.py ./models/openhermes-7b-v2.5 --outfile ./models/openhermes-7b-v2.5/ggml-model-f16.gguf --outtype f16
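Optional smoke test of the converted f16 model before quantizing, using llama.cpp's main example binary (the prompt here is just an illustration; -ngl 99 offloads all layers to the GPU):
./main -m ./models/openhermes-7b-v2.5/ggml-model-f16.gguf -p "The M3 Max is" -n 32 -ngl 99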
./quantize ./models/openhermes-7b-v2.5/ggml-model-f16.gguf ./models/openhermes-7b-v2.5/ggml-model-q8_0.gguf q8_0
./quantize ./models/openhermes-7b-v2.5/ggml-model-f16.gguf ./models/openhermes-7b-v2.5/ggml-model-q4_k.gguf q4_k
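Rough size math, assuming ~7.2B parameters: f16 stores 2 bytes per weight (~14 GB), q8_0 roughly 8.5 bits per weight (~7.7 GB), and q4_k roughly 4.5-5 bits (~4.4 GB). A quick check:
# compare the three GGUF files against those ballpark sizes
ls -lh ./models/openhermes-7b-v2.5/*.gguf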
./batched-bench ./models/openhermes-7b-v2.5/ggml-model-f16.gguf 4096 0 99 0 2048 128,512 1,2,3,4
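For reference, those positional arguments map as follows in the batched-bench example at this llama.cpp revision (the argument order has shifted between versions, so re-check the usage string it prints):
# 4096 = max KV cache size    0 = prompt not shared across sequences
# 99 = layers offloaded to GPU    0 = mul_mat_q kernels off
# 2048 = prompt tokens (PP)    128,512 = tokens generated (TG)    1,2,3,4 = parallel sequences (PL)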
./server -m models/openhermes-7b-v2.5/ggml-model-q4_k.gguf --port 8888 --host 0.0.0.0 --ctx-size 10240 --parallel 4 -ngl 99 -n 512
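Once the server is running, it serves a small built-in web UI at http://localhost:8888 plus an HTTP API. A minimal request against the server's /completion endpoint (the prompt text is just an illustration):
curl http://localhost:8888/completion -H "Content-Type: application/json" \
  -d '{"prompt": "Explain GGUF quantization in one sentence:", "n_predict": 128}'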
*Part 2 - auto*
bash -c "$(curl -s https://ggml.ai/server-llm.sh)"
💻 MacBooks in this video
M3 Max (16-core CPU/40-core GPU) 16" MacBook Pro, 64GB RAM / 2TB SSD
#m3max #macbook #macbookpro
— — — — — — — — —
📱LET'S CONNECT ON SOCIAL MEDIA
ALEX ON TWITTER: https://twitter.com/digitalix