How to Build an In-N-Out Agent with OpenAI Agents SDK
In this video, I take a deeper dive into the OpenAI Agents SDK and how it can be used to build a fast food agent. Colab: https://dripl.ink/MZw2R For more tutorials on using LLMs and building agents, check out my Patreon. Patreon: https://www.patreon.com/SamWitteveen Twitter: https://x.com/Sam_Witteveen 🕵️ Interested in building LLM Agents? Fill out the form below. Building LLM Agents Form: https://drp.li/dIMes 👨‍💻 GitHub: https://github.com/samwit/llm-tutorials ⏱️ Time Stamps: 00:00 Intro 00:11 Creating an In-N-Out Agent (Colab Demo) 00:40 In-N-Out Burger Agent 04:35 Streaming Runs 05:40 Adding Tools 08:20 Websearch Tool 09:45 Agents as Tools 12:21 Giving it a Chat Memory
·youtube.com·
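A minimal sketch of the pattern the video walks through, using the openai-agents Python package - the menu tool and its contents are made up for illustration, and the video's Colab layers streaming, web search, agents-as-tools, and chat memory on top of this:

```python
# pip install openai-agents
from agents import Agent, Runner, function_tool

@function_tool
def get_menu() -> str:
    """Return the menu items the agent is allowed to offer (hypothetical data)."""
    return "Hamburger, Cheeseburger, Double-Double, French Fries, Shakes"

in_n_out_agent = Agent(
    name="In-N-Out Agent",
    instructions="You take fast food orders. Only offer items returned by the get_menu tool.",
    tools=[get_menu],
)

result = Runner.run_sync(in_n_out_agent, "What can I order here?")
print(result.final_output)
```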
You HAVE to Try Agentic RAG with DeepSeek R1 (Insane Results)
DeepSeek R1 - the latest and greatest open-source reasoning LLM - has taken the world by storm, and a lot of content creators are doing a great job covering its implications and strengths/weaknesses. What I haven’t seen a lot of, though, is actually using R1 in agentic workflows to truly leverage its power. So that’s what I’m showing you in this video - we’ll be using the power of R1 to make a simple but super effective agentic RAG setup. We’ll be using Smolagents by Hugging Face to create our agent - it’s the simplest agent framework out there and many of you have been asking me to try it out. This agentic RAG setup centers around the idea that reasoning LLMs like R1 are extremely powerful but quite slow. Because of this, a lot of people are starting to experiment with combining the raw power of a model like R1 with a more lightweight and fast LLM to drive the primary conversation/agent flow. Think of it as giving the agent R1 as a tool to use when it needs more reasoning power, at the cost of slower responses (and higher costs). That’s what we’ll be doing here - creating an agent that has an R1-driven RAG tool to extract in-depth insights from a knowledgebase. The example in this video is meant to be an introduction to these kinds of reasoning agentic flows. That’s why I keep it simple with Smolagents and a local knowledgebase. But I’m planning on expanding this much further soon with a much more robust but still similar flow built with Pydantic AI and LangGraph! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Community Voting period of the oTTomator Hackathon is open! Head on over to the Live Agent Studio now and test out the submissions and vote for your favorite agents. There are so many incredible projects to try out! https://studio.ottomator.ai All the code covered in this video + instructions to run it can be found here: https://github.com/coleam00/ottomator-agents/tree/main/r1-distill-rag SmolAgents: https://huggingface.co/docs/smolagents/en/index R1 on Ollama: https://ollama.com/library/deepseek-r1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:00 - Why R1 for Agentic RAG? 01:56 - Overview of our Agent 03:33 - SmolAgents - Our Ticket to Fast Agents 06:07 - Building our Agentic RAG Agent with R1 14:17 - Creating our Local Knowledgebase w/ Chroma DB 15:45 - Getting our Local LLMs Set Up with Ollama 19:15 - R1 Agentic RAG Demo 21:42 - Outro ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Join me as I push the limits of what is possible with AI. I'll be uploading videos at least two times a week - Sundays and Wednesdays at 7:00 PM CDT!
·youtube.com·
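The core idea - a fast model drives the agent loop while R1 sits behind a tool - can be sketched roughly like this with Smolagents and Ollama. The model IDs are placeholders and the retrieval step is omitted, so treat this as an outline of the shape rather than the video's exact code:

```python
# pip install "smolagents[litellm]"
from smolagents import CodeAgent, LiteLLMModel, tool

driver = LiteLLMModel(model_id="ollama_chat/llama3.1:8b",
                      api_base="http://localhost:11434")       # fast model runs the agent loop
reasoner = LiteLLMModel(model_id="ollama_chat/deepseek-r1:7b",
                        api_base="http://localhost:11434")     # slow reasoning model behind a tool

@tool
def reason_over_docs(question: str, context: str) -> str:
    """Send retrieved chunks to the reasoning model for an in-depth answer.

    Args:
        question: The user's question.
        context: Text chunks pulled from the vector store (retrieval not shown here).
    """
    messages = [{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}]
    return reasoner(messages).content  # call the reasoning model directly inside the tool

agent = CodeAgent(tools=[reason_over_docs], model=driver)
print(agent.run("Summarize what the knowledgebase says about agent memory."))
```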
n8n + Crawl4AI - Scrape ANY Website in Minutes with NO Code
Last week I introduced you to Crawl4AI - an open-source, LLM-friendly web scraper that makes it super easy to crawl any website and format it for a RAG knowledgebase for your AI agent. I even created a full AI agent as a follow-up video that leverages this knowledgebase I created with Crawl4AI. A TON of you asked me to do the same thing in n8n, so here it is! In this video I show you exactly how to deploy Crawl4AI super easily with Docker and leverage it within your n8n workflows to crawl website pages in seconds. We even build a simple AI agent that uses this knowledgebase to become an expert at the documentation for Pydantic AI - my favorite AI agent framework right now! There are a lot of ways to crawl websites, but many of them are expensive, slow, and/or difficult to work with. Crawl4AI, on the other hand, is easy to use, fast, and completely free since it is open source. The only thing you have to pay for is the machine in the cloud to run your crawler, and that’s only if you aren’t just running it on your computer! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Check out TEN Agent now (completely open source!) and see how easy it is to get started building voice AI agents for free: GitHub repo: https://github.com/TEN-framework/TEN-Agent Playground: https://agent.theten.ai/ If you aren't aware, voice agents are one of the biggest needs businesses have right now, so if you're a developer looking to make money with AI, tools like TEN Agent are definitely worth learning and using! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Here is the n8n workflow I covered in this video! It’s in a folder along with all the other Crawl4AI stuff I’ve done on my channel recently with Python. https://github.com/coleam00/ottomator-agents/blob/main/crawl4AI-agent/n8n-version/Crawl4AI_Agent.json Register now for the oTTomator AI Agent Hackathon with a $6,000 prize pool! https://studio.ottomator.ai/hackathon/register Try the Pydantic AI expert out now on the Live Agent Studio! https://studio.ottomator.ai Crawl4AI: https://github.com/unclecode/crawl4ai ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:00 - Intro to Crawl4AI + n8n 01:45 - Showing off the n8n Workflow 02:31 - What We're Crawling (and Ethics) 04:36 - How to Deploy Crawl4AI for n8n 07:57 - Deploying Crawl4AI with Docker 13:06 - TEN Agent 15:27 - Building Crawl4AI into n8n 29:15 - n8n + Crawl4AI RAG Demo 32:43 - Outro ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Join me as I push the limits of what is possible with AI. I'll be uploading videos at least two times a week - Sundays and Wednesdays at 7:00 PM CDT!
·youtube.com·
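Under the hood, the n8n HTTP Request node is just calling the Crawl4AI crawler running in that Docker container; the equivalent call with the Python library (URL picked as an example) looks roughly like this:

```python
# pip install crawl4ai  (a one-time browser setup step is also needed; see the Crawl4AI README)
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        # Fetch a single page and get LLM-ready markdown back for the RAG knowledgebase
        result = await crawler.arun(url="https://ai.pydantic.dev/")
        print(result.markdown[:500])

asyncio.run(main())
```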
The Future of RAG is Agentic - Learn this Strategy NOW
Buckle up - HUGE amount of value in this video for building RAG AI Agents that actually work. Honestly I could have made this video into an entire course, but I wanted to give it away to you for free. :) RAG is the most common approach for providing external knowledge to an LLM. The problem is, once you have your own curated data in a vector database as a knowledgebase for your LLM, oftentimes these RAG setups can be very underwhelming. The wrong text is returned from the search, the LLM ignores the context provided, etc. The logic of RAG makes sense in your head, but it just doesn’t work in practice. And you certainly aren’t alone! That’s why there is a TON of research in the industry into how to essentially just do RAG better. There are a lot of strategies out there, but out of all the ones I’ve researched and tried myself, agentic RAG is the most obvious, works the best, and is what I’m going to introduce you to and show you exactly how to implement in this video. In the last video on my channel, I showed you how to use Crawl4AI, an open-source, LLM-friendly web crawler, to scrape entire websites for RAG SUPER fast. We used the entire documentation for my favorite agent framework, Pydantic AI, as an example. Now we’re taking this MUCH further by: 1. Putting all the documentation in a database for RAG 2. Creating an agentic RAG agent that uses this knowledgebase with Pydantic AI 3. Building a frontend to chat with our agent using Streamlit I’ll explain exactly what agentic RAG is and what makes it so powerful, and the AI agent we build in the video will be the perfect example! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Try GPUStack for free - it's open source and you can find their GitHub repo here: https://github.com/gpustack/gpustack I don't have the pleasure of being sponsored by open source projects often, so this was a treat! It's the best GPU cluster manager for LLM inference that I have seen and a very honest recommendation! Here is their main site as well: https://gpustack.ai/ Key features of GPUStack: 1. Heterogeneous GPU cluster management across Linux, macOS, and Windows, with NVIDIA and Apple Silicon support (AMD coming soon!) 2. Distributed inference with smart scheduling: GPUStack can distribute a big model across multiple heterogeneous workers, automatically deciding whether distributed inference is required and configuring it for you. 3. Rich model type support: GPUStack supports LLM, VLM, Image Generation, Embedding, Rerank, and TTS & STT models. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Previous video with Crawl4AI: https://youtu.be/JWfNLF_g_V0 All code for this Agentic RAG Agent can be found here: https://github.com/coleam00/ottomator-agents/tree/main/crawl4AI-agent Try this agent yourself right now on the Live Agent Studio (called the "Pydantic AI Expert")! https://studio.ottomator.ai Diagram to follow along with the knowledgebase creation flow: https://claude.site/artifacts/f4dca1c3-f137-4b82-9254-dfa01ca43802 Weaviate Article on Agentic RAG: https://weaviate.io/blog/what-is-agentic-rag ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:00 - Agentic RAG - the Holy Grail of RAG 02:18 - What is Agentic RAG? 06:22 - Breaking our Agent Down Step by Step 08:33 - Try this Agent Now for Free 09:00 - Code Overview 09:58 - Crawl4AI Review 10:52 - Creating Our Knowledgebase for Supabase 21:38 - GPUStack 23:33 - Supabase Setup 26:08 - Getting Crawl4AI Data into Supabase 28:09 - Basic RAG AI Agent with Pydantic AI 33:44 - Testing our Basic RAG Agent 36:33 - Agentic RAG Implementation 40:40 - Demo of Our Agentic RAG Agent 41:37 - Streamlit UI 44:53 - Outro ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Join me as I push the limits of what is possible with AI. I'll be uploading videos at least two times a week - Sundays and Wednesdays at 7:00 PM CDT! Sundays and Wednesdays are for everything AI, focusing on providing insane and practical educational value. I will also post sometimes on Fridays at 7:00 PM CDT - specifically for platform showcases - sometimes sponsored, always creative in approach!
·youtube.com·
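The "agentic" part boils down to giving the model retrieval as a tool it decides when (and how often) to call, rather than always stuffing one search result into the prompt. A rough Pydantic AI sketch of that shape - the embedding helper and the match_documents Supabase function are assumptions standing in for the repo's actual setup:

```python
# pip install pydantic-ai supabase
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    supabase: object   # Supabase client (setup not shown)
    embedder: object   # embedding client (setup not shown)

rag_agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    system_prompt="Answer questions about the Pydantic AI docs. "
                  "Call retrieve_docs whenever you need source material.",
)

@rag_agent.tool
async def retrieve_docs(ctx: RunContext[Deps], query: str) -> str:
    """Embed the query and return the most relevant documentation chunks."""
    embedding = ctx.deps.embedder.embed(query)          # hypothetical embedding helper
    rows = ctx.deps.supabase.rpc(                       # assumes a match_documents SQL function
        "match_documents", {"query_embedding": embedding, "match_count": 5}
    ).execute()
    return "\n\n---\n\n".join(row["content"] for row in rows.data)
```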
Pydantic AI + DeepSeek V3 - The BEST AI Agent Combo
I get asked a lot what my process looks like for building AI agents, so I recently kicked off a mini-series showing my entire process! In this series, we’ll build an AI agent that can consume entire GitHub repositories so you can ask it questions about all the code in the repo. In this video (the 3rd one in the series), I show you how to take an AI agent prototype built with n8n and turn it into a full custom-coded agent EASILY with Pydantic AI. We’ll also use DeepSeek V3 for the LLM so it’s super powerful and still dirt cheap! Keep in mind that the n8n prototype is optional - this can very much be a standalone Pydantic AI guide. The best LLM or agent framework could change in a month. I keep this guide high-level (while still covering technical details) so there is a lot to get out of this even if you aren't using Pydantic AI or DeepSeek V3 for your LLM. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Register now for the oTTomator AI Agent Hackathon with a $6,000 prize pool! https://studio.ottomator.ai/hackathon/register Try the n8n version of this GitHub agent now on the Live Agent Studio (Pydantic version coming soon): https://studio.ottomator.ai All code for this Pydantic GitHub agent can be found here: https://github.com/coleam00/ottomator-agents/tree/main/pydantic-github-agent And the n8n version of this agent: https://github.com/coleam00/ottomator-agents/tree/main/n8n-github-assistant Pydantic AI documentation: https://ai.pydantic.dev/ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:00 - Intro 02:04 - Where We are in the AI Agent Roadmap 04:11 - n8n Prototype - Our Blueprint 06:46 - Live Agent Studio GitHub Agent 07:24 - Pydantic AI's Beautiful Docs 10:28 - Agent Code Overview 11:23 - Building our Pydantic AI Agent 21:23 - Building the Agent Chat Tooling 25:17 - Testing our Agent 28:23 - Outro ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Join me as I push the limits of what is possible with AI. I'll be uploading videos at least two times a week - Sundays and Wednesdays at 7:00 PM CDT! Sundays and Wednesdays are for everything AI, focusing on providing insane and practical educational value. I will also post sometimes on Fridays at 7:00 PM CDT - specifically for platform showcases - sometimes sponsored, always creative in approach!
·youtube.com·
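Since DeepSeek exposes an OpenAI-compatible API, pointing a Pydantic AI agent at DeepSeek V3 is mostly a model-configuration change. The exact wiring differs between Pydantic AI releases, so this is a sketch rather than the repo's code:

```python
# pip install pydantic-ai
import os
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

# DeepSeek's chat endpoint speaks the OpenAI protocol, so the OpenAI model class is reused.
model = OpenAIModel(
    "deepseek-chat",  # DeepSeek V3
    provider=OpenAIProvider(
        base_url="https://api.deepseek.com",
        api_key=os.environ["DEEPSEEK_API_KEY"],
    ),
)

github_agent = Agent(model, system_prompt="You answer questions about a GitHub repository.")
result = github_agent.run_sync("What does this repo's README say it does?")
print(result.output)  # .data in older Pydantic AI releases
```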
Brandon-c-tech/RAG-logger: RAG Logger is an open-source logging tool designed specifically for Retrieval-Augmented Generation (RAG) applications. It serves as a lightweight, open-source alternative to LangSmith, focusing on RAG-specific logging needs.
RAG Logger is an open-source logging tool designed specifically for Retrieval-Augmented Generation (RAG) applications. It serves as a lightweight, open-source alternative to LangSmith, focusing on RAG-specific logging needs.
·github.com·
ChatGPT for teams
Model Context Protocol (MCP) is an open-source standard released by Anthropic in November 2024 that enables AI models to interact with external data sources through a unified interface.
·glama.ai·
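For context, the official MCP Python SDK makes exposing a data source to an MCP-capable client only a few lines; the note store below is a toy example, not anything from glama.ai:

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")

@mcp.tool()
def search_notes(query: str) -> str:
    """Search a toy in-memory note store and return matching lines."""
    notes = [
        "MCP was announced by Anthropic in November 2024.",
        "Servers expose tools, resources, and prompts over a unified interface.",
    ]
    hits = [n for n in notes if query.lower() in n.lower()]
    return "\n".join(hits) or "No matches."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so any MCP client can attach
```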
open-interpreter
This "natural language interface for computers" open source ChatGPT Code Interpreter alternative has been around for a while, but today I finally got around to trying it out. Here's how …
·simonwillison.net·
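The Python API mirrors the CLI; a minimal session looks roughly like this (attribute names follow recent open-interpreter releases, and the model string is just an example):

```python
# pip install open-interpreter
from interpreter import interpreter

interpreter.llm.model = "gpt-4o-mini"  # any LiteLLM-supported model string
interpreter.auto_run = False           # keep the confirmation prompt before any code executes

# Ask for a task; the interpreter writes and runs shell/Python code locally to do it.
interpreter.chat("How many Python files are in the current directory?")
```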
Painless Data Extraction and Web Automation
Start building AI agents using natural language queries for precise web and app automation. Scrape web data with ease, without worrying about the complexities of the modern web.
·agentql.com·
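The gist is that you describe the elements or data you want in a structured natural-language query instead of CSS/XPath selectors. Roughly, with the Python SDK and Playwright - the method names and query syntax here follow AgentQL's docs as I recall them, so treat them as assumptions (an AgentQL API key also needs to be configured):

```python
# pip install agentql playwright
import agentql
from playwright.sync_api import sync_playwright

QUERY = """
{
    products[] {
        name
        price
    }
}
"""

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = agentql.wrap(browser.new_page())   # wrap a Playwright page with AgentQL
    page.goto("https://example.com/shop")     # placeholder URL
    data = page.query_data(QUERY)             # returns the matched data as a dict
    print(data["products"])
    browser.close()
```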
ChainForge
I'm still on the hunt for good options for running evaluations against prompts. ChainForge offers an interesting approach, calling itself "an open-source visual programming environment for prompt engineering". The interface …
·simonwillison.net·
Docling
MIT licensed document extraction Python library from the Deep Search team at IBM, who released [Docling v2](https://ds4sd.github.io/docling/v2/#changes-in-docling-v2) on October 16th. Here's the [Docling Technical Report](https://arxiv.org/abs/2408.09869) paper from August, which provides …
·simonwillison.net·
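Roughly following the Docling v2 quickstart, converting a document to Markdown is a few lines; the arXiv PDF below (the Docling technical report itself) is just an example input:

```python
# pip install docling
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
# Accepts local paths or URLs; layout, tables, and reading order are handled by the library.
result = converter.convert("https://arxiv.org/pdf/2408.09869")
print(result.document.export_to_markdown()[:1000])
```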
nilsherzig/LLocalSearch: LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.
LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.
·github.com·
Add WebPilot to your GPTs
AI-powered search: access any online information and generate very long content.
·webpilot.ai·