Found 56 bookmarks
FUSE is All You Need - Giving agents access to anything via filesystems
Giving agents access to a sandboxed environment with a shell and a filesystem has been the latest hype when it comes to agentic harnesses. Recent examples of this include:
- Turso’s AgentFS
- Anthropic’s Agent SDK, which brings Claude Code’s harness to non-coding domains
- Vercel rebuilding their text-to-SQL agent on top of a sandbox
- Anthropic’s Agent Skills for filesystem-based progressive disclosure

The argument for why this is good goes something like this: the big labs are doing heavy RL for coding tasks in these kinds of environments, so aligning more closely with such a harness brings free gains from the coding domain to other problem spaces.
·jakobemmerling.de·
ChrisWiles/claude-code-showcase: Comprehensive Claude Code project configuration example with hooks, skills, agents, commands, and GitHub Actions workflows
Comprehensive Claude Code project configuration example with hooks, skills, agents, commands, and GitHub Actions workflows - ChrisWiles/claude-code-showcase
·github.com·
The current state of gpt-5
The GPT-5 launch was, uh, rough. A lot went wrong here, and I want to talk about what really happened...Thank you Kilo Code for sponsoring! Check them out at:...
·youtube.com·
Much Ado About Vibe Coding
Lauren Goode convinced her editors at Wired to let her spend a couple of days at a tech company called Notion learning how to vibe-code (i.e. AI-assisted computer programming): Why Did a $10 Billion Startup Let Me Vibe-Code for Them — and Why Did I Love I
·kottke.org·
Our principles on AI
We want to make Piccalilli’s position on LLMs and generative AI absolutely clear. It’s important for you, the reader, to understand where we stand.
·piccalil.li·
AGENTS.md
AGENTS.md is a simple, open format for guiding coding agents. Think of it as a README for agents.
·agents.md·
6 Weeks of Claude Code
It is wild to think that it has been only a handful of weeks. Claude Code has considerably changed my relationship to writing and maintaining code at scale. I still write code at the same level of quality, but I feel like I have a new freedom of expression which is hard to fully articulate. Claude Code has decoupled me from writing every line of code. I still consider myself fully responsible for everything I ship to Puzzmo, but the ability to instantly create a whole scene instead of going line by line, word by word is incredibly powerful.
·blog.puzzmo.com·
Bay.Area.AI: DSPy: Prompt Optimization for LM Programs, Michael Ryan
ai.bythebay.io, Nov 2025, Oakland, full-stack AI conference.

DSPy: Prompt Optimization for LM Programs — Michael Ryan, Stanford.

It has never been easier to build amazing LLM-powered applications. Unfortunately, engineering reliable and trustworthy LLMs remains challenging. Instead, practitioners should build LM Programs comprised of several composable calls to LLMs which can be rigorously tested, audited, and optimized like other software systems. In this talk I will introduce the idea of LM Programs in DSPy: the library for Programming — not Prompting — LMs. I will demonstrate how the LM Program abstraction allows the creation of automatic optimizers for LM Programs which can optimize both the prompts and weights in an LM Program. I will conclude with an introduction to MIPROv2: our latest and highest-performing prompt optimization algorithm for LM Programs.

Michael Ryan is a master's student at Stanford University working on optimization for Language Model Programs in DSPy and on personalizing language models. His work has been recognized with a Best Social Impact award at ACL 2024 and an honorable mention for outstanding paper at ACL 2023. Michael co-led the creation of the MIPRO and MIPROv2 optimizers, DSPy’s most performant optimizers for Language Model Programs. His prior work has showcased unintended cultural and global biases expressed in popular LLMs. He is currently a research intern at Snowflake.
·youtube.com·