AI/ML

2199 bookmarks
Qwen 3 Embeddings & Rerankers
In this video I look at the new release from Qwen of their Embedding and Reranking models, which are state of the art and, most importantly, open-weights mo...
·youtube.com·
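Not from the video, but as a rough sketch of what using these models looks like: the snippet below embeds a query and a few documents with a Qwen 3 embedding model via sentence-transformers and ranks the documents by cosine similarity. The model ID, the "query" prompt name, and the note about the separate reranker are assumptions based on the Hugging Face model cards as I recall them, so check those for exact usage.

```python
# Hedged sketch (not from the video): rank documents against a query with a
# Qwen 3 embedding model. The model ID and the "query" prompt name are
# assumptions taken from the Hugging Face model card; verify there.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")  # assumed model ID

query = "How do I fine-tune an embedding model?"
documents = [
    "A guide to fine-tuning sentence embedding models with contrastive loss.",
    "Recipe: slow-cooked beef stew with root vegetables.",
    "Reranking retrieved passages with a cross-encoder improves precision.",
]

# The model card recommends an instruction-style prompt for queries;
# prompt_name is the sentence-transformers mechanism for applying it.
q_emb = model.encode([query], prompt_name="query", normalize_embeddings=True)
d_emb = model.encode(documents, normalize_embeddings=True)

# Cosine similarity on normalized vectors is just a dot product.
scores = (q_emb @ d_emb.T)[0]
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {documents[idx]}")

# The companion Qwen 3 reranker would then re-score the top hits pair by pair;
# see its model card for the required prompt format.
```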
How to build an AI-first organization | Ethan Mollick
Most companies are using AI to cut costs. Ethan Mollick argues that the biggest mistake companies make is thinking too small. In the first episode of Strange Loop, Wharton professor and leading AI researcher Ethan Mollick joins Sana founder and CEO Joel Hellermark for a candid and wide-ranging conversation about the rapidly changing world of AI at work. They explore how AI is not just an efficiency tool but a turning point, one that forces a choice between incremental optimization and transformational scale. The discussion covers the roots of machine intelligence, the relevance of AGI, and what it takes to build organizations designed from the ground up for an AI-native future.

What's in this episode:
- Why most companies are underestimating what AI makes possible
- The tension between using AI for efficiency vs. scaling ambition
- How traditional org charts, built for a human-only workforce, are breaking
- The collapse of apprenticeship and its long-term implications
- How prompting is becoming a foundational business skill
- Why "cheating" with AI may be the new form of learning
- The risks of using AI to optimize the past instead of inventing the future
- What it means to build truly AI-native teams and organizations

Strange Loop is a podcast about how artificial intelligence is reshaping the systems we live and work in. Each episode features deep, unscripted conversations with thinkers and builders reimagining intelligence, leadership, and the architectures of progress. The goal is not just to follow AI's trajectory, but to question the assumptions guiding it. Subscribe for more conversations at the edge of AI and human knowledge.

Timestamps:
00:20 - Origins: AI in the early days at MIT
01:53 - Defining and testing intelligence: Beyond the Turing test
06:35 - Redesigning organizations for the AI era
08:56 - Human augmentation or replacement
14:58 - Navigating AI's jagged frontier
17:18 - The 3 ingredients for successful AI adoption
23:31 - Roles to hire for an AI-first world
33:41 - Do orgs need a Chief AI officer?
39:45 - The interface for AI and human collaboration
43:50 - Rethinking the goals of enterprise AI
49:15 - The case for abundance
52:30 - Best and worst case scenarios
58:51 - Avoiding the trap of enterprise AI KPIs
·youtube.com·
MCP Best Practices | Peter Steinberger
A comprehensive guide outlining best practices for building reliable, user-friendly Model Context Protocol (MCP) tools with proper configuration, testing, and release management.
·steipete.me·
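The guide is about practices rather than code, but for context, here is a minimal sketch of the kind of MCP tool it discusses, written against the official Python SDK's FastMCP helper as I understand it; the server name and the tool itself are made up for illustration.

```python
# Minimal MCP server sketch (illustrative only, not from the article).
# Assumes the official Python SDK's FastMCP helper; the server name and the
# tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("word-counter")  # hypothetical server name


@mcp.tool()
def count_words(text: str) -> int:
    """Count whitespace-separated words in the given text."""
    return len(text.split())


if __name__ == "__main__":
    # Runs the server over stdio so an MCP client (e.g. a desktop assistant)
    # can launch it as a subprocess and call the tool.
    mcp.run()
```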
Claude Code is My Computer | Peter Steinberger
I run Claude Code with the --dangerously-skip-permissions flag, giving it full system access. Let me show you a new way of approaching computers.
·steipete.me·
Building with Chatterbox TTS, Voice Cloning & Watermarking
In this video, I look at the new Chatterbox TTS from Resemble.AI and how it's improving open-source text-to-speech with its impressive voice cloning and emotion control capabilities. We explore its features, including zero-shot voice cloning that requires only a few seconds of audio, and its unique ability to adjust the emotional intensity of speech.

Links:
- Colab: https://dripl.ink/Vxs8D
- Blog: https://www.resemble.ai/chatterbox/
- Hugging Face Spaces: https://huggingface.co/spaces/ResembleAI/Chatterbox
- Hugging Face: https://huggingface.co/ResembleAI/chatterbox
- GitHub (Chatterbox-TTS-Extended): https://github.com/petermg/Chatterbox-TTS-Extended

For more tutorials on using LLMs and building agents, check out my Patreon.
- Patreon: https://www.patreon.com/SamWitteveen
- Twitter: https://x.com/Sam_Witteveen

🕵️ Interested in building LLM Agents? Fill out the form below.
- Building LLM Agents Form: https://drp.li/dIMes
- 👨‍💻 GitHub: https://github.com/samwit/llm-tutorials

⏱️ Time Stamps:
00:00 Intro
00:24 Resemble.AI - Chatterbox
01:53 Samples
04:53 Hugging Face: Chatterbox
05:22 Demo
06:26 Adding Exaggeration
08:56 Voice Cloning
13:00 Chatterbox TTS Extended Github
14:07 Hugging Face: Chatterbox GGUF
·youtube.com·
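Roughly what the demo boils down to in code: the sketch below follows the usage pattern from the Chatterbox README as I remember it (ChatterboxTTS.from_pretrained, then generate with an optional reference clip for voice cloning and an exaggeration knob for emotion). The parameter names may differ between releases, so treat this as a sketch rather than the project's documented API.

```python
# Hedged sketch based on the Chatterbox README as I recall it; the parameter
# names (audio_prompt_path, exaggeration) may differ between releases.
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

text = "Open-source text-to-speech has come a long way."

# Default voice.
wav = model.generate(text)
ta.save("chatterbox_default.wav", wav, model.sr)

# Zero-shot voice cloning from a few seconds of reference audio, with the
# emotion "exaggeration" turned up a bit. reference.wav is a placeholder path.
wav = model.generate(text, audio_prompt_path="reference.wav", exaggeration=0.7)
ta.save("chatterbox_cloned.wav", wav, model.sr)
```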
THIS is why large language models can understand the world
Five years ago, nobody would have guessed that scaling up LLMs would be as successful as it has been. This belief, in part, was due to the fact that all known statistical learning theory predicted that massively oversized models should overfit, and hence perform worse than smaller models. Yet the undeniable fact is that modern LLMs do possess models of the world that allow them to generalize beyond their training data. Why do larger models generalize better than smaller models? Why does training a model to predict internet text cause it to develop world models? Come deep dive into the inner workings of neural network training to understand why scaling LLMs works so damn well.

Want to see more videos like this in the future? Support me on Ko-fi: https://ko-fi.com/algorithmicsimplicity

Papers referenced:
- Double Descent: https://arxiv.org/abs/1812.11118
- The Lottery Ticket Hypothesis: https://arxiv.org/abs/1803.03635

My previous videos on autoregressive Transformers:
- Auto-regression (and diffusion): https://youtu.be/zc5NTeJbk-k
- Transformers: https://youtu.be/kWLed8o5M2Y
·youtube.com·
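The double-descent paper linked above is the key reference here. As a rough illustration of the phenomenon (my own toy example, not from the video), the sketch below fits minimum-norm least-squares models on random Fourier features and prints test error as the feature count passes the number of training points: error typically spikes near the interpolation threshold and then falls again for heavily over-parameterized models. The exact numbers depend on the random seed and feature scale; the qualitative shape is the point.

```python
# Toy illustration of "double descent" (Belkin et al., arXiv:1812.11118):
# test error of a minimum-norm least-squares fit on random Fourier features
# typically rises as the feature count approaches the number of training
# points, then falls again as the model becomes heavily over-parameterized.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)

n_train, n_test, noise = 20, 200, 0.1
x_train = rng.uniform(-1, 1, n_train)
y_train = target(x_train) + noise * rng.standard_normal(n_train)
x_test = rng.uniform(-1, 1, n_test)
y_test = target(x_test)

def random_features(x, n_feat, seed=1):
    # Fixed random Fourier features: phi_j(x) = cos(w_j * x + b_j).
    # The same seed is reused so train and test share one feature map.
    r = np.random.default_rng(seed)
    w = r.normal(0, 5, n_feat)
    b = r.uniform(0, 2 * np.pi, n_feat)
    return np.cos(np.outer(x, w) + b)

for n_feat in [2, 5, 10, 15, 20, 25, 40, 100, 1000]:
    Phi_tr = random_features(x_train, n_feat)
    Phi_te = random_features(x_test, n_feat)
    # lstsq returns the minimum-norm solution when the system is
    # underdetermined, the implicit bias that lets very wide models
    # generalize here.
    coef, *_ = np.linalg.lstsq(Phi_tr, y_train, rcond=None)
    test_mse = np.mean((Phi_te @ coef - y_test) ** 2)
    print(f"{n_feat:5d} features  test MSE = {test_mse:.3f}")
```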
My AI Skeptic Friends Are All Nuts
Thomas Ptacek's frustrated tone throughout this piece perfectly captures how it feels sometimes to be an experienced programmer trying to argue that "LLMs are actually really useful" in many corners …
·simonwillison.net·
Agentic Document Extraction: 17x Faster, Smarter, with LLM-Ready Outputs
Agentic Document Extraction just got faster! We've improved the median document processing time from 135 seconds to 8 seconds. Agentic Document Extraction sees documents visually and uses an iterative workflow to accurately extract text, figures, form fields, charts, and more to create an LLM-ready output. You can use our SDK to parse complex documents and get the extracted content in Markdown and JSON. You can then feed the output to an LLM, RAG application, or other downstream apps. You can also use our Playground to test out Agentic Document Extraction.

Try out Agentic Document Extraction:
- Playground: https://va.landing.ai/demo/doc-extraction
- Library: https://github.com/landing-ai/agentic-doc

Learn more: https://landing.ai/agentic-document-extraction
·youtube.com·
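As a rough sketch of what using the SDK looks like, based on the agentic-doc README as I recall it: the import path, the parse() call, the result fields, and the API-key environment variable are all assumptions that may have changed, so check the repository before relying on them.

```python
# Hedged sketch of the agentic-doc SDK; the import path, parse() signature,
# result fields, and API-key environment variable are assumptions taken from
# the project README as I recall it and may differ in current releases.
import os

from agentic_doc.parse import parse  # assumed import path

# The SDK calls LandingAI's hosted service, so an API key is required.
os.environ.setdefault("VISION_AGENT_API_KEY", "<your-api-key>")

# "invoice.pdf" is a placeholder; the library accepts local paths or URLs.
results = parse("invoice.pdf")

doc = results[0]
print(doc.markdown)      # LLM-ready Markdown rendering of the document
print(len(doc.chunks))   # structured chunks (text, tables, figures, form fields, ...)
```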
robertjakob/rigorous: A comprehensive suite of tools, built to liberate science by making the creation, evaluation, and dissemination of research more transparent, affordable, and efficient.
A comprehensive suite of tools, built to liberate science by making the creation, evaluation, and dissemination of research more transparent, affordable, and efficient. - robertjakob/rigorous
·github.com·