OpenAI's head of trust and safety steps down | Reuters
AI/ML
AI and Microdirectives - Schneier on Security
Running Llama 2 on CPU Inference Locally for Document Q&A | by Kennet…
Microsoft’s New AI Method to Predict How Molecules Move and Function …
ClipDrop - Stable Doodle
Transform your doodles into real images in seconds
Petals - Getting started with LLaMA (GPU Colab) - Colaboratory
Petals – Decentralized platform for running large language models
Petals: decentralized inference and finetuning of large language models
LocalAI :: LocalAI documentation
Build a Q&A Bot over private data with OpenAI and LangChain - Part 1
Is the New M2Pro Mac Mini a Deep Learning Workstation? | pytorch-M1Pro – Weights & Biases
gpt - What sort of computer would be necessary to run queries on a LLM? - Artificial Intelligence Stack Exchange
Troyanovsky/Local-LLM-comparison: Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.
benchmarks? · Issue #34 · ggerganov/llama.cpp
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM - llama2-mac-gpu.sh
jmorganca/ollama: Run, customize, and share self-contained & portable large language models
Ollama
This tool for running LLMs on your own laptop includes an installer for macOS (Apple Silicon) and provides a terminal chat interface for interacting with models. They already have …
Accessing Llama 2 from the command-line with the llm-replicate plugin
OpenAI commits $5M to local news partnership with the American Journalism Project | VentureBeat
Getting Started with Weaviate: A Beginner’s Guide to Search with Vector Databases | by Leonie Monigatti | Jul, 2023 | Towards Data Science
Introducing CM3leon, a more efficient, state-of-the-art generative model for text and images
Today, we’re showcasing CM3leon (pronounced like “chameleon”), a single foundation model that does both text-to-image and image-to-text generation.
Even the scientists who build AI can’t tell you how it works
"We built it, we trained it, but we don’t know what it’s doing."
Why AI detectors think the US Constitution was written by AI
Can AI writing detectors be trusted? We dig into the theory behind them.
The Problem With LangChain | Max Woolf's Blog
LangChain is complicated, so it must be better. Right?
AI May Be More Prone to Errors in Image-Based Diagnoses Than Clinicians
Setup - LLM
My LLM CLI tool now supports self-hosted language models via plugins
Weeknotes: Self-hosted language models with LLM plugins, a new Datasette tutorial, a dozen package releases, a dozen TILs
A lot of stuff to cover from the past two and a half weeks. LLM and self-hosted language model plugins: my biggest project was the new version of my LLM …
The shady world of Brave selling copyrighted data for AI training
I'm fairly certain that I was not the only person in the world who thought to himself, "Did they just yoink the entire Internet and bundle it together
What happens when AI reads a book 🤖📖 - by Ethan Mollick