How Anthropic teams use Claude Code

AI/ML
AI Coding Agents
codename goose
Your open source AI agent, automating engineering tasks seamlessly.
dagger/container-use: Development environments for coding agents.
Enable multiple agents to work safely and independently with your preferred stack.
Peekaboo MCP – lightning-fast macOS screenshots for AI agents | Peter Steinberger
Turn your blind AI into a visual debugger with instant screenshot capture and analysis
Qwen3 Embedding
New family of embedding models from Qwen, in three sizes: 0.6B, 4B, 8B - and two categories: Text Embedding and Text Reranking. The full collection can be browsed on Hugging Face.
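For orientation, here is a minimal sketch of querying one of these models through sentence-transformers, following the pattern on the Hugging Face model card; the model ID and the prompt_name="query" convention come from the 0.6B card, so verify them for the size you pick.

```python
# Sketch: Qwen3 embeddings via sentence-transformers, per the HF model card.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

queries = ["What is the capital of China?"]
documents = ["The capital of China is Beijing.", "Gravity bends spacetime."]

# Qwen3 embedding models apply an instruction prefix to queries only.
query_emb = model.encode(queries, prompt_name="query")
doc_emb = model.encode(documents)

# Cosine-similarity matrix: one row per query, one column per document.
print(model.similarity(query_emb, doc_emb))
```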
cognitivecomputations/dolphin-mcp
punkpeye/awesome-mcp-servers: A collection of MCP servers.
An Intro to RAG with sqlite-vec & llamafile!
A brief introduction to using llamafile (a single-file tool for working with large language models) and sqlite-vec (a SQLite extension for vector search) to build a Retrieval-Augmented Generation (RAG) application; a minimal sketch of the sqlite-vec side follows the links below.
This was a live online event hosted on Dec 17th, 2024 in the Mozilla AI Discord; join us for the next event at https://discord.gg/Ve7WeCJFXk
LINKS:
- Doc w/ links to all mentioned projects/blog posts: https://docs.google.com/document/d/17GYLzlGUyJF9EDeaa1P-dFFZnkwxATnBcg5KnNgpvPE/edit?usp=sharing
- Slides: https://docs.google.com/presentation/d/14Szda-VnZzepL-1U9Nb7sXQg_TTf56OQ-KtUIMQ5xug/edit?usp=sharing
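As a taste of what the talk builds, here is a minimal sketch of the sqlite-vec side of the pipeline. The toy 4-dimensional vectors stand in for real embeddings (the talk produces those with llamafile's OpenAI-compatible embeddings endpoint), the table name is made up, and the KNN query syntax follows the sqlite-vec README.

```python
# Sketch: vector storage and KNN retrieval with sqlite-vec (toy vectors).
import sqlite3
import sqlite_vec

db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)

# One row per text chunk; vec0 indexes the embedding for nearest-neighbor search.
db.execute("CREATE VIRTUAL TABLE chunks USING vec0(embedding float[4])")
docs = {1: [0.1, 0.1, 0.9, 0.0], 2: [0.8, 0.1, 0.0, 0.1]}
for rowid, vec in docs.items():
    db.execute(
        "INSERT INTO chunks(rowid, embedding) VALUES (?, ?)",
        [rowid, sqlite_vec.serialize_float32(vec)],
    )

# k-nearest-neighbor query, lowest distance first; the returned rowids
# identify the chunks whose text goes into the generation prompt.
query = sqlite_vec.serialize_float32([0.1, 0.2, 0.9, 0.0])
rows = db.execute(
    "SELECT rowid, distance FROM chunks WHERE embedding MATCH ? AND k = 2 "
    "ORDER BY distance",
    [query],
).fetchall()
print(rows)
```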
InectGit/pdf-to-markdown: a package and markitdown plugin that transforms PDF files into Markdown for LLM use.
ask-human mcp 🚀 - Mason Yarbrough
Olow304/memvid: Video-based AI memory library.
Store millions of text chunks in MP4 files with lightning-fast semantic search. No database needed.
Qwen 3 Embeddings & Rerankers
In this video I look at the new release from Qwen of their Embedding and Reranking models, which are state of the art and, most importantly, open-weights models.
How to build an AI-first organization | Ethan Mollick
Most companies are using AI to cut costs. Ethan Mollick argues that the biggest mistake companies make is thinking too small.
In the first episode of Strange Loop, Wharton professor and leading AI researcher Ethan Mollick joins Sana founder and CEO Joel Hellermark for a candid and wide-ranging conversation about the rapidly changing world of AI at work.
They explore how AI is not just an efficiency tool but a turning point—one that forces a choice between incremental optimization and transformational scale. The discussion covers the roots of machine intelligence, the relevance of AGI, and what it takes to build organizations designed from the ground up for an AI-native future.
What’s in this episode:
- Why most companies are underestimating what AI makes possible
- The tension between using AI for efficiency vs. scaling ambition
- How traditional org charts, built for a human-only workforce, are breaking
- The collapse of apprenticeship and its long-term implications
- How prompting is becoming a foundational business skill
- Why “cheating” with AI may be the new form of learning
- The risks of using AI to optimize the past instead of inventing the future
- What it means to build truly AI-native teams and organizations
Strange Loop is a podcast about how artificial intelligence is reshaping the systems we live and work in. Each episode features deep, unscripted conversations with thinkers and builders reimagining intelligence, leadership, and the architectures of progress. The goal is not just to follow AI’s trajectory, but to question the assumptions guiding it.
Subscribe for more conversations at the edge of AI and human knowledge.
--
00:20 - Origins: AI in the early days at MIT
01:53 - Defining and testing intelligence: Beyond the Turing test
06:35 - Redesigning organizations for the AI era
08:56 - Human augmentation or replacement
14:58 - Navigating AI's jagged frontier
17:18 - The 3 ingredients for successful AI adoption
23:31 - Roles to hire for an AI-first world
33:41 - Do orgs need a Chief AI officer?
39:45 - The interface for AI and human collaboration
43:50 - Rethinking the goals of enterprise AI
49:15 - The case for abundance
52:30 - Best and worst case scenarios
58:51 - Avoiding the trap of enterprise AI KPIs
Adventures in Symbolic Algebra with Model Context Protocol
Personal Blog
A practical guide to building agents
Interfacing MCP with Combinatorial, Convex, and SMT Solvers
Personal Blog
MCP Best Practices | Peter Steinberger
A comprehensive guide outlining best practices for building reliable, user-friendly Model Context Protocol (MCP) tools with proper configuration, testing, and release management.
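For context, this is roughly what a minimal MCP tool server looks like with the official Python SDK's FastMCP helper; the server and tool here are illustrative, not from the guide, but recommendations like descriptive tool docstrings map directly onto this structure.

```python
# Sketch: minimal MCP server using the official Python SDK (mcp package).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")  # illustrative server name

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers; the docstring becomes the tool description the model sees."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which MCP clients expect
```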
Claude Code is My Computer | Peter Steinberger
I run Claude Code with the --dangerously-skip-permissions flag, giving it full system access. Let me show you a new way of approaching computers.
Why I have slightly longer timelines than some of my guests
Continual learning is a huge bottleneck
The Prompt Engineering Playbook for Programmers
Turn AI coding assistants into more reliable development partners
What Actually Works: 12 Lessons from AI Pair Programming | Forge Code
Field-tested practices for productive AI-assisted development. Real lessons from 6 months of daily AI pair programming, including what works, what fails, and why most engineers are doing it wrong.
Michael Tsai - Blog - Model Context Protocol (MCP) Tools for Mac
Chatterbox-TTS Apple Silicon - a Hugging Face Space by Jimmi42
Upload a reference audio file and enter text to create audio in that voice. The app automatically chunks long text and uses Apple Silicon's GPU for faster processing.
resemble-ai/chatterbox: SoTA open-source TTS
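Basic usage, adapted from the repo's README; the audio_prompt_path and exaggeration parameters come from there, and the reference-audio path is a placeholder.

```python
# Sketch: Chatterbox TTS generation and zero-shot voice cloning, per the README.
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

# "cuda" on NVIDIA GPUs, "mps" on Apple Silicon, or "cpu".
model = ChatterboxTTS.from_pretrained(device="cuda")

text = "Open-source text-to-speech has come a long way."
wav = model.generate(text)
ta.save("output.wav", wav, model.sr)

# Clone a voice from a few seconds of reference audio, with emotion
# exaggeration turned up (placeholder path).
wav = model.generate(text, audio_prompt_path="reference.wav", exaggeration=0.7)
ta.save("cloned.wav", wav, model.sr)
```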
Building with Chatterbox TTS, Voice Cloning & Watermarking
In this video, I look at the new Chatterbox TTS from Resemble.AI and how it's improving open-source text-to-speech with its impressive voice cloning and emotion control capabilities. We explore its features, including zero-shot voice cloning that requires only a few seconds of audio, and its unique ability to adjust the emotional intensity of speech.
Colab: https://dripl.ink/Vxs8D
Blog: https://www.resemble.ai/chatterbox/
Hugging Face Spaces: https://huggingface.co/spaces/ResembleAI/Chatterbox
Hugging Face: https://huggingface.co/ResembleAI/chatterbox
GitHub: Chatterbox-TTS-Extended https://github.com/petermg/Chatterbox-TTS-Extended
For more tutorials on using LLMs and building agents, check out my Patreon
Patreon: https://www.patreon.com/SamWitteveen
Twitter: https://x.com/Sam_Witteveen
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: https://drp.li/dIMes
👨‍💻 GitHub:
https://github.com/samwit/llm-tutorials
⏱️Time Stamps:
00:00 Intro
00:24 Resemble.AI - Chatterbox
01:53 Samples
04:53 Hugging Face: Chatterbox
05:22 Demo
06:26 Adding Exaggeration
08:56 Voice Cloning
13:00 Chatterbox TTS Extended Github
14:07 Hugging Face: Chatterbox GGUF
asg017/sqlite-vec: A vector search SQLite extension that runs anywhere!
Wispr Flow | Effortless Voice Dictation
Flow makes writing quick and clear with seamless voice dictation. It is the fastest, smartest way to type with your voice.
PromptHub Blog: A Complete Guide to Meta Prompting
Check out our deep dive on the latest meta prompting methods, like DSPy and TEXTGRAD, and the best prompt generator tools out there.
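DSPy is the clearest example of the shift the guide describes: instead of hand-writing prompts, you declare a signature and let the framework build (and later optimize) the prompt. A minimal sketch; the model name is illustrative, it assumes an API key in the environment, and it omits the optimizer step that does the actual meta prompting.

```python
# Sketch: declarative prompting with DSPy; the framework compiles the prompt.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # illustrative model name

class Summarize(dspy.Signature):
    """Summarize the passage in one sentence."""
    passage: str = dspy.InputField()
    summary: str = dspy.OutputField()

summarize = dspy.ChainOfThought(Summarize)
result = summarize(passage="Meta prompting uses one LLM to write or refine prompts for another LLM.")
print(result.summary)
```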
THIS is why large language models can understand the world
5 years ago, nobody would have guessed that scaling up LLMs would be as successful as they are. That skepticism was partly because all known statistical learning theory predicted that massively oversized models should overfit, and hence perform worse than smaller models. Yet the undeniable fact is that modern LLMs do possess models of the world that allow them to generalize beyond their training data.
Why do larger models generalize better than smaller models? Why does training a model to predict internet text cause it to develop world models? Come deep dive into the inner workings of neural network training to understand why scaling LLMs works so damn well.
Want to see more videos like this in the future? Support me on Ko-fi https://ko-fi.com/algorithmicsimplicity
Papers referenced:
Double Descent: https://arxiv.org/abs/1812.11118
The Lottery Ticket Hypothesis: https://arxiv.org/abs/1803.03635
My previous videos on Autoregressive Transformers:
Auto-regression (and diffusion): https://youtu.be/zc5NTeJbk-k
Transformers: https://youtu.be/kWLed8o5M2Y