ask-human mcp 🚀 - Mason Yarbrough

AI/ML
Olow304/memvid: Video-based AI memory library. Store millions of text chunks in MP4 files with lightning-fast semantic search. No database needed.
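The idea is simple enough to sketch: encode text chunks into a video once, then query it like a vector store. A minimal sketch assuming the names from the project README (MemvidEncoder, MemvidRetriever, build_video, search); exact signatures may differ between versions.
```python
# Illustrative memvid flow -- API names taken from the project README,
# treat as a sketch rather than a definitive reference.
from memvid import MemvidEncoder, MemvidRetriever

# Encode text chunks into an MP4 plus a small index file.
encoder = MemvidEncoder()
encoder.add_chunks(["chunk one of your corpus", "chunk two of your corpus"])
encoder.build_video("memory.mp4", "memory_index.json")

# Later: semantic search directly against the video-backed store.
retriever = MemvidRetriever("memory.mp4", "memory_index.json")
for chunk in retriever.search("what does chunk one say?", top_k=3):
    print(chunk)
```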
Qwen 3 Embeddings & Rerankers
In this video I look at Qwen's new release of Embedding and Reranking models, which are state of the art and, most importantly, open-weights models.
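A minimal retrieval sketch, assuming the embedding checkpoint is published as Qwen/Qwen3-Embedding-0.6B and loads through sentence-transformers; a reranker model would add a second-stage scoring pass on top of this.
```python
# Embed documents and a query, then rank by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

# Model id is an assumption based on the release naming; check Hugging Face.
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

docs = [
    "Qwen3 ships open-weight embedding and reranking models.",
    "Bananas are a good source of potassium.",
]
query = "Which models did Qwen release with open weights?"

# L2-normalized vectors make the dot product a cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec

for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```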
How to build an AI-first organization | Ethan Mollick
Most companies are using AI to cut costs. Ethan Mollick argues that the biggest mistake companies make is thinking too small.
In the first episode of Strange Loop, Wharton professor and leading AI researcher Ethan Mollick joins Sana founder and CEO Joel Hellermark for a candid and wide-ranging conversation about the rapidly changing world of AI at work.
They explore how AI is not just an efficiency tool but a turning point—one that forces a choice between incremental optimization and transformational scale. The discussion covers the roots of machine intelligence, the relevance of AGI, and what it takes to build organizations designed from the ground up for an AI-native future.
What’s in this episode:
- Why most companies are underestimating what AI makes possible
- The tension between using AI for efficiency vs. scaling ambition
- How traditional org charts, built for a human-only workforce, are breaking
- The collapse of apprenticeship and its long-term implications
- How prompting is becoming a foundational business skill
- Why “cheating” with AI may be the new form of learning
- The risks of using AI to optimize the past instead of inventing the future
- What it means to build truly AI-native teams and organizations
Strange Loop is a podcast about how artificial intelligence is reshaping the systems we live and work in. Each episode features deep, unscripted conversations with thinkers and builders reimagining intelligence, leadership, and the architectures of progress. The goal is not just to follow AI’s trajectory, but to question the assumptions guiding it.
--
00:20 - Origins: AI in the early days at MIT
01:53 - Defining and testing intelligence: Beyond the Turing test
06:35 - Redesigning organizations for the AI era
08:56 - Human augmentation or replacement
14:58 - Navigating AI's jagged frontier
17:18 - The 3 ingredients for successful AI adoption
23:31 - Roles to hire for an AI-first world
33:41 - Do orgs need a Chief AI officer?
39:45 - The interface for AI and human collaboration
43:50 - Rethinking the goals of enterprise AI
49:15 - The case for abundance
52:30 - Best and worst case scenarios
58:51 - Avoiding the trap of enterprise AI KPIs
Adventures in Symbolic Algebra with Model Context Protocol
Personal Blog
A practical guide to building agents
Interfacing MCP with Combinatorial, Convex, and SMT Solvers
Personal Blog
MCP Best Practices | Peter Steinberger
A comprehensive guide outlining best practices for building reliable, user-friendly Model Context Protocol (MCP) tools with proper configuration, testing, and release management.
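For context, a bare-bones MCP tool of the kind those best practices target, using the FastMCP helper from the official Python SDK (assumes the `mcp` package is installed):
```python
# Minimal MCP server exposing one well-described, typed tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Runs over stdio so an MCP client (e.g. Claude Desktop) can launch it.
    mcp.run()
```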
Claude Code is My Computer | Peter Steinberger
I run Claude Code with --dangerously-skip-permissions flag, giving it full system access. Let me show you a new way of approaching computers.
Why I have slightly longer timelines than some of my guests
Continual learning is a huge bottleneck
The Prompt Engineering Playbook for Programmers
Turn AI coding assistants into more reliable development partners
What Actually Works: 12 Lessons from AI Pair Programming | Forge Code
Field-tested practices for productive AI-assisted development. Real lessons from 6 months of daily AI pair programming, including what works, what fails, and why most engineers are doing it wrong.
Michael Tsai - Blog - Model Context Protocol (MCP) Tools for Mac
Chatterbox-TTS Apple Silicon - a Hugging Face Space by Jimmi42
Upload a reference audio file and enter text to create audio in that voice. The app automatically chunks long text and uses Apple Silicon's GPU for faster processing.
resemble-ai/chatterbox: SoTA open-source TTS
Building with Chatterbox TTS, Voice Cloning & Watermarking
In this video, I look at the new Chatterbox TTS from Resemble.AI and how it's improving open-source text-to-speech with its impressive voice cloning and emotion control capabilities. We explore its features, including zero-shot voice cloning that requires only a few seconds of audio, and its unique ability to adjust the emotional intensity of speech.
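A short sketch of the zero-shot cloning flow shown in the video, following the example in the project README; the generate parameters may change between releases.
```python
# Plain synthesis, then voice cloning from a few seconds of reference audio.
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")  # or "mps" / "cpu"

text = "Open-source TTS with emotion control in just a few lines."

# Built-in voice.
wav = model.generate(text)
ta.save("output.wav", wav, model.sr)

# Clone a voice and dial up the emotional intensity.
wav = model.generate(
    text,
    audio_prompt_path="reference_voice.wav",  # assumed local reference clip
    exaggeration=0.7,  # the emotion-intensity knob discussed in the video
)
ta.save("cloned.wav", wav, model.sr)
```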
Colab: https://dripl.ink/Vxs8D
Blog: https://www.resemble.ai/chatterbox/
Hugging Face Spaces: https://huggingface.co/spaces/ResembleAI/Chatterbox
Hugging Face: https://huggingface.co/ResembleAI/chatterbox
GitHub: Chatterbox-TTS-Extended https://github.com/petermg/Chatterbox-TTS-Extended
GitHub: https://github.com/samwit/llm-tutorials
⏱️Time Stamps:
00:00 Intro
00:24 Resemble.AI - Chatterbox
01:53 Samples
04:53 Hugging Face: Chatterbox
05:22 Demo
06:26 Adding Exaggeration
08:56 Voice Cloning
13:00 Chatterbox TTS Extended Github
14:07 Hugging Face: Chatterbox GGUF
asg017/sqlite-vec: A vector search SQLite extension that runs anywhere!
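A quick sketch based on the project's Python examples: load the extension, create a vec0 virtual table, and run a nearest-neighbor query with MATCH.
```python
import sqlite3
import sqlite_vec
from sqlite_vec import serialize_float32

db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)

# A virtual table holding 4-dimensional float vectors.
db.execute("CREATE VIRTUAL TABLE vec_items USING vec0(embedding float[4])")
items = {1: [0.1, 0.1, 0.1, 0.1], 2: [0.9, 0.8, 0.9, 0.9]}
for rowid, vec in items.items():
    db.execute(
        "INSERT INTO vec_items(rowid, embedding) VALUES (?, ?)",
        (rowid, serialize_float32(vec)),
    )

# Nearest neighbors to a query vector, ordered by distance.
rows = db.execute(
    "SELECT rowid, distance FROM vec_items "
    "WHERE embedding MATCH ? ORDER BY distance LIMIT 2",
    (serialize_float32([0.2, 0.2, 0.2, 0.2]),),
).fetchall()
print(rows)
```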
Wispr Flow | Effortless Voice Dictation
Flow makes writing quick and clear with seamless voice dictation. It is the fastest, smartest way to type with your voice.
PromptHub Blog: A Complete Guide to Meta Prompting
Check out our deep dive on the latest meta prompting methods, like DSPy and TEXTGRAD, and the best prompt generator tools out there.
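To make the idea concrete, a bare-bones meta prompt of the kind the guide surveys: instead of writing the task prompt by hand, you ask the model to write it. Purely illustrative; plug the resulting string into any LLM API.
```python
# A meta prompt: the model's output is itself a prompt for the real task.
META_PROMPT = """You are an expert prompt engineer.
Write a prompt for an LLM that accomplishes the task below.
Include: a role, an explicit output format, 1-2 few-shot examples, and
instructions for handling edge cases.

Task: {task}
"""

task = "Extract company names and funding amounts from news articles as JSON."
print(META_PROMPT.format(task=task))
```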
THIS is why large language models can understand the world
Five years ago, nobody would have guessed that scaling up LLMs would be as successful as it has been. This belief stemmed partly from the fact that all known statistical learning theory predicted that massively oversized models should overfit, and hence perform worse than smaller models. Yet the undeniable fact is that modern LLMs do possess models of the world that allow them to generalize beyond their training data.
Why do larger models generalize better than smaller models? Why does training a model to predict internet text cause it to develop world models? Come deep dive into the inner workings of neural network training to understand why scaling LLMs works so damn well.
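The double-descent effect referenced below is easy to reproduce on toy data. A sketch in the spirit of Belkin et al.: the test error of min-norm least squares on random ReLU features typically falls, spikes near the interpolation threshold (width close to the number of training points), then falls again as the model grows.
```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 40, 500, 5

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
true_w = rng.normal(size=d)
y_train = X_train @ true_w + 0.3 * rng.normal(size=n_train)
y_test = X_test @ true_w

for width in [5, 10, 20, 40, 80, 160, 640]:
    W = rng.normal(size=(d, width)) / np.sqrt(d)  # fixed random features
    F_train = np.maximum(X_train @ W, 0.0)        # ReLU feature map
    F_test = np.maximum(X_test @ W, 0.0)
    beta = np.linalg.pinv(F_train) @ y_train      # min-norm least squares
    err = np.mean((F_test @ beta - y_test) ** 2)
    print(f"width={width:4d}  test MSE={err:8.3f}")
```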
Papers referenced:
Double Descent: https://arxiv.org/abs/1812.11118
The Lottery Ticket Hypothesis: https://arxiv.org/abs/1803.03635
My previous videos on Autoregressive Transformers:
Auto-regression (and diffusion): https://youtu.be/zc5NTeJbk-k
Transformers: https://youtu.be/kWLed8o5M2Y
Claude 4 and Anthropic's bet on code
Reasons to be optimistic and pessimistic on Anthropic's future.
anthropics/prompt-eng-interactive-tutorial: Anthropic's Interactive Prompt Engineering Tutorial
Exclusive: Anthropic hits $3 billion in annualized revenue on business demand for AI
Artificial intelligence developer Anthropic is making about $3 billion in annualized revenue, according to two sources familiar with the matter, in an early validation of generative AI use in the business world.
MythCloud ⟶ The Stories That Shape Us.
storytelling
LLMs + Pandas: How I Use Generative AI to Generate Pandas DataFrame Summaries | Towards Data Science
Local Large Language Models can convert massive DataFrames to presentable Markdown reports — here's how.
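The core trick is compact enough to sketch: render the frame as Markdown and hand it to a local model. This assumes a local Ollama install with the `ollama` Python client; `DataFrame.to_markdown` also needs the `tabulate` package.
```python
import pandas as pd
import ollama

df = pd.DataFrame({
    "region": ["NA", "EU", "APAC"],
    "revenue": [1.2, 0.8, 1.5],
    "yoy_growth": [0.05, -0.02, 0.12],
})

# Markdown keeps the table readable inside the prompt.
prompt = (
    "Summarize this table in three bullet points for an executive report. "
    "Call out the best- and worst-performing regions.\n\n"
    + df.to_markdown(index=False)
)

resp = ollama.chat(model="llama3.1", messages=[{"role": "user", "content": prompt}])
print(resp["message"]["content"])
```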
My AI Skeptic Friends Are All Nuts
Thomas Ptacek's frustrated tone throughout this piece perfectly captures how it feels sometimes to be an experienced programmer trying to argue that "LLMs are actually really useful" in many corners …
Agentic Document Extraction: 17x Faster, Smarter, with LLM-Ready Outputs
Agentic Document Extraction just got faster! We've improved the median document processing time from 135 seconds to 8 seconds!
Agentic Document Extraction sees documents visually and uses an iterative workflow to accurately extract text, figures, form fields, charts, and more to create an LLM-ready output.
You can use our SDK to parse complex documents and get the extracted content in Markdown and JSON. You can then feed the output to an LLM, RAG application, or other downstream apps.
You can also use our Playground to test out Agentic Document Extraction.
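A hedged sketch of that SDK flow; the import path, parse call, and result fields below are assumptions based on the description, so check the repo for the actual entry points.
```python
# Assumed entry points for the agentic-doc SDK -- verify against the repo.
from agentic_doc.parse import parse  # assumed import path

results = parse(["invoice.pdf"])  # parse one or more complex documents
doc = results[0]

print(doc.markdown)  # LLM-ready Markdown output (assumed field name)
# Feed the extracted Markdown/JSON into a RAG pipeline or downstream prompt.
```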
Try out Agentic Document Extraction:
- Playground: https://va.landing.ai/demo/doc-extraction
- Library: https://github.com/landing-ai/agentic-doc
Learn more: https://landing.ai/agentic-document-extraction
microsoft/aurora: Implementation of the Aurora model for Earth system forecasting
robertjakob/rigorous: A comprehensive suite of tools, built to liberate science by making the creation, evaluation, and dissemination of research more transparent, affordable, and efficient.
The ‘white-collar bloodbath’ is all part of the AI hype machine | CNN Business
If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality.