GPU Cloud, Clusters, Servers, Workstations | Lambda

AI/ML
Building My Own Deep Learning Rig · Den Delimarsky
Build A Capable Machine For LLM and AI | by Andrew Zhu | CodeX | Medium
MyScale | Run Vector Search with SQL
Teach your LLM to always answer with facts not fiction | MyScale | Blog
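MyScale exposes vector search through plain SQL, so a retrieval step can go through its ClickHouse-compatible client; in this sketch the host, credentials, table layout, and embedding dimension are all hypothetical:

```python
# Hypothetical MyScale vector search via the ClickHouse-compatible
# clickhouse-connect client (pip install clickhouse-connect).
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="your-cluster.myscale.com",  # hypothetical cluster host
    port=8443,
    username="default",
    password="...",
)

query_vector = [0.1] * 384  # embedding of the user's question (dims illustrative)

# MyScale surfaces similarity search as a distance() function in SQL.
result = client.query(
    "SELECT id, text, distance(embedding, %(v)s) AS score "
    "FROM docs ORDER BY score ASC LIMIT 5",
    parameters={"v": query_vector},
)
for row in result.result_rows:
    print(row)
```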
GitHub - karpathy/llama2.c: Inference Llama 2 in one file of pure C
OpenAI’s Karpathy Creates Baby Llama Instead of GPT-5
The person who could easily build GPT-5 over a weekend is, surprisingly, spending time testing the capabilities of the open-source Llama 2
SDXL - Stable Diffusion XL - NightCafe Creator
facebookresearch/fastText: Library for fast text representation and classification.
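fastText ships Python bindings, so supervised text classification is only a few lines; the file names below are illustrative, and the __label__ prefix is the library's documented label convention:

```python
# Minimal supervised classification with the fastText Python bindings
# (pip install fasttext). train.txt holds one example per line, e.g.
#   __label__spam Buy cheap watches now
import fasttext

model = fasttext.train_supervised(input="train.txt", epoch=10, lr=0.5)

labels, probs = model.predict("is this message spam?")
print(labels, probs)

model.save_model("classifier.bin")  # reload later with fasttext.load_model
```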
examples/learn/generation/llm-field-guide/llama-2-70b-chat-agent.ipynb at master · pinecone-io/examples
LLAMA 2: an incredible open-source LLM - by Nathan Lambert
Llama 2: Open Foundation and Fine-Tuned Chat Models | Meta AI Research
OpenAI's head of trust and safety steps down | Reuters
AI and Microdirectives - Schneier on Security
Running Llama 2 on CPU Inference Locally for Document Q&A | by Kennet…
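The CPU-only inference step behind that article can be sketched with llama-cpp-python; the quantized GGML model path and prompt are hypothetical, and the article itself may use a different binding:

```python
# CPU-only Llama 2 inference sketch with llama-cpp-python
# (pip install llama-cpp-python). Model path is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.ggmlv3.q4_0.bin",
    n_ctx=2048,   # context window for stuffing retrieved document chunks
    n_threads=8,  # CPU threads
)

out = llm(
    "Use the context to answer.\nContext: ...\nQuestion: What is the refund policy?\nAnswer:",
    max_tokens=128,
    stop=["Question:"],
)
print(out["choices"][0]["text"])
```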
Microsoft’s New AI Method to Predict How Molecules Move and Function …
ClipDrop - Stable Doodle
Transform your doodles into real images in seconds
Petals - Getting started with LLaMA (GPU Colab) - Colaboratory
Petals – Decentralized platform for running large language models
Petals: decentralized inference and finetuning of large language models
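The Petals Python API follows the familiar transformers generate() pattern, swarm aside; this sketch mirrors the project's README, assuming the petals package is installed and you have access to the gated Llama 2 weights:

```python
# Decentralized Llama 2 inference over the public Petals swarm
# (pip install petals). Requires access to the gated Llama 2 repo.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "meta-llama/Llama-2-70b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("What is decentralized inference?", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```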
LocalAI :: LocalAI documentation
Build a Q&A Bot over private data with OpenAI and LangChain - Part 1
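The pattern in that series is load, split, embed, retrieve, answer; a hedged sketch with the 2023-era LangChain API follows (the file path and question are hypothetical, and OPENAI_API_KEY must be set in the environment):

```python
# Q&A over private data: 2023-era LangChain sketch
# (pip install langchain openai chromadb).
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

docs = TextLoader("private_notes.txt").load()  # hypothetical file
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100).split_documents(docs)

store = Chroma.from_documents(chunks, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(),
                                 retriever=store.as_retriever())

print(qa.run("What do the notes say about Q3 revenue?"))
```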
Is the New M2Pro Mac Mini a Deep Learning Workstation? | pytorch-M1Pro – Weights & Biases
gpt - What sort of computer would be necessary to run queries on a LLM? - Artificial Intelligence Stack Exchange
Troyanovsky/Local-LLM-comparison: Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.
benchmarks? · Issue #34 · ggerganov/llama.cpp
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM - llama2-mac-gpu.sh
jmorganca/ollama: Run, customize, and share self-contained & portable large language models
Ollama
This tool for running LLMs directly on your own laptop includes an installer for macOS (Apple Silicon) and provides a terminal chat interface for interacting with models. They already have …
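Beyond the terminal chat, Ollama also serves a local HTTP API; a sketch of calling it from Python, assuming the app is running and a model has already been pulled (the /api/generate endpoint streams one JSON object per line):

```python
# Stream a completion from a local Ollama server (assumes `ollama pull llama2`
# has been run and the app is listening on its default port).
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?"},
    stream=True,
)
for line in resp.iter_lines():
    if line:
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
```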
Accessing Llama 2 from the command-line with the llm-replicate plugin
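The post drives this from the terminal, but the llm package also has a Python API; a sketch, assuming the llm-replicate plugin is installed and a chat model has been registered under an alias (the alias here is hypothetical):

```python
# Prompt a Replicate-hosted Llama 2 via the llm Python API, assuming:
#   pip install llm llm-replicate
#   llm replicate add a16z-infra/llama13b-v2-chat --chat --alias llama2
import llm

model = llm.get_model("llama2")  # alias chosen at registration time
response = model.prompt("Ten fun names for a pet pelican")
print(response.text())
```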