MCP is all you need — Samuel Colvin, Pydantic
Everyone is talking about agents, and right after that, they’re talking about agent-to-agent communications. Not surprisingly, various nascent, competing protocols are popping up to handle it.
But maybe all we need is MCP — the OG of GenAI communication protocols (it's from way back in 2024!).
Last year, Jason Liu gave the second-most-watched AIE talk — “Pydantic is all you need”.
This year, I (the creator of Pydantic) am continuing the tradition by arguing that MCP might be all we need for agent-to-agent communications.
What I’ll cover:
- Misusing Common Patterns: MCP was designed for desktop/IDE applications like Claude Code and Cursor. How can we adapt MCP for autonomous agents?
- Many Common Problems: MCP is great, but what can go wrong? How can you work around it? Can the protocol be extended to solve these issues?
- Monitoring Complex Phenomena: How does observability work (and not work) with MCP?
- Multiple Competing Protocols: A quick run-through of other agent communication protocols like A2A and AGNTCY, and probably a few more by June 😴
- Massive Crustaceans Party: What might success look like if everything goes to plan?
---related links---
https://x.com/samuel_colvin
https://www.linkedin.com/in/samuel-colvin/
https://github.com/samuelcolvin
https://pydantic.dev/
Timestamps
00:00:00 - Introduction: Speaker Samuel Colvin introduces himself as the creator of Pydantic.
00:00:42 - Pydantic Ecosystem: Introduction to Pydantic the company, the Pydantic AI agent framework, and the Logfire observability platform.
00:01:18 - Talk Thesis: Explaining the title "MCP is all you need" and the main argument that MCP simplifies agent communication.
00:02:05 - MCP's Focus: Clarifying that the talk focuses on MCP for autonomous agents and custom code, not its original desktop automation use case.
00:02:48 - Tool Calling Primitive: Highlighting that "tool calling" is the most relevant MCP primitive for this context.
00:03:10 - MCP vs. OpenAPI: Listing the advantages MCP has over a simple OpenAPI specification for tool calls.
00:03:21 - Feature 1: Dynamic Tools: Tools can appear and disappear based on server state.
00:03:26 - Feature 2: Streaming Logs: The ability to return log data to the user while a tool is still executing.
00:03:33 - Feature 3: Sampling: A mechanism for a tool (server) to request an LLM call back through the agent (client).
00:04:01 - MCP Architecture Diagram: Visualizing the basic agent-to-tool communication flow.
00:04:43 - Complex Architecture: Discussing scenarios where tools are themselves agents that need LLM access.
00:05:24 - Explaining Sampling: Detailing how sampling solves the problem of every agent needing its own LLM by allowing tools to "piggyback" on the client's LLM access.
00:06:42 - Pydantic AI's Role in Sampling: How the Pydantic AI library supports sampling on both the client and server side.
00:07:10 - Demo Start: Beginning the demonstration of a research agent that uses an MCP tool to query BigQuery.
00:08:23 - Code Walkthrough: Validation: Showing how Pydantic is used for output validation and automatic retries (ModelRetry).
00:09:00 - Code Walkthrough: Context Logging: Demonstrating the use of mcp_context.log to send progress updates back to the client.
00:10:51 - MCP Server Setup: Showing the code for setting up an MCP server using FastMCP (a similar setup is sketched after the timestamps).
00:11:54 - Design Pattern: Inference Inside the Tool: Explaining the benefit of having the tool perform its own LLM inference to reduce the context burden on the main agent.
00:12:27 - Main Application Code: Reviewing the client-side code that defines the agent and registers the MCP server's tools (a similar client sketch follows the timestamps).
00:13:16 - Observability with Logfire: Switching to the Logfire UI to trace the execution of the agent's query.
00:14:09 - Observing Sampling in Action: Pointing out the specific span in the trace that shows the tool making an LLM call back through the client via sampling.
00:14:48 - Inspecting the SQL Query: Showing how the observability tool can be used to see the exact SQL query that was generated by the internal agent.
00:15:15 - Conclusion: Final summary of the talk's points.
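---code sketches (illustrative, not from the talk)---
A minimal, hypothetical sketch of an MCP server in the spirit of the demo, written against the official `mcp` Python SDK's FastMCP API: a single tool that streams log messages back to the client while it is still executing and uses sampling to piggyback on the client's LLM, keeping the inference inside the tool. The server and tool names are placeholders, not the talk's actual code.

```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("bigquery-research")  # illustrative server name


@mcp.tool()
async def answer_question(question: str, ctx: Context) -> str:
    """Answer a natural-language question by generating and running SQL."""
    # Streaming logs: the client sees these while the tool is still running.
    await ctx.info(f"generating SQL for: {question!r}")

    # Sampling: ask the *client's* LLM to write the SQL, so this server
    # never needs model credentials of its own.
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(
                    type="text",
                    text=f"Write a BigQuery SQL query that answers: {question}",
                ),
            )
        ],
        max_tokens=1024,
    )
    sql = result.content.text if result.content.type == "text" else ""

    await ctx.info(f"running query: {sql}")
    # ... run the query against BigQuery and summarise the rows here,
    # returning only the summary so the calling agent's context stays small ...
    return f"(summary of results for) {sql}"


if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```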
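And a hypothetical client-side sketch with Pydantic AI: an agent that launches the server above over stdio, exposes its tools, and validates structured output, raising ModelRetry when a semantic check fails so the model tries again. The keyword names used here (toolsets, output_type, output_validator) follow recent Pydantic AI releases and may differ in older versions; the model string, schema, and file name are placeholders.

```python
import asyncio

from pydantic import BaseModel
from pydantic_ai import Agent, ModelRetry
from pydantic_ai.mcp import MCPServerStdio

# Launch the MCP server sketched above as a subprocess over stdio.
server = MCPServerStdio(command="python", args=["mcp_server.py"])


class Research(BaseModel):
    summary: str
    row_count: int


agent = Agent(
    "openai:gpt-4o",      # any model string Pydantic AI supports
    toolsets=[server],    # the server's MCP tools become agent tools
    output_type=Research,
)


@agent.output_validator
def check_research(output: Research) -> Research:
    # Pydantic has already enforced the types; add a semantic check and
    # raise ModelRetry to send the error back to the model for another try.
    if output.row_count < 0:
        raise ModelRetry("row_count must be non-negative; re-check the query")
    return output


async def main() -> None:
    async with agent:  # opens (and later closes) the MCP server connection
        result = await agent.run("How many users signed up last month?")
        print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
```

As the talk notes, Pydantic AI supports sampling on both sides, so the server's create_message call can be routed back through this client's model; the exact wiring depends on the library version.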