Won't You Be My Neighbor? Part 2: The Multi-Protocol Agent - Automate Your Network
Or: How I Taught an AI Agent to Speak Every Language in the Network. Building on [Part 1: Teaching an AI to Speak OSPF](https://www.automateyournetwork.ca/uncategorized/i-taught-an-ai-agent-to-speak-ospf-its-now-my-routers-neighbour/). Remember when we asked: "What if networks didn't need to be configured? What if they could just… talk?" We proved that with …
I Taught an AI Agent to Speak OSPF: It's Now My Router's Neighbour - Automate Your Network
Won't You Be My Neighbour? The Network as a Conversation, Not a Configuration. For decades, we've treated networks as things to be **configured**. We push commands, pull outputs, parse CLI text, and hope our automation scripts survive the next OS upgrade. **What if we've been thinking about this wrong?** What if networks aren't meant …
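(For a concrete flavour of what "speaking OSPF" involves on the wire, here is a minimal sketch, not from the article itself, that uses Scapy's OSPF contrib module to send a single Hello to the AllSPFRouters multicast address. The interface, router ID, area, and mask below are placeholder assumptions.)

```python
# Minimal sketch: emit one OSPF Hello so a neighbouring router can see us.
# Assumptions (not from the article): interface eth0, router ID 10.0.0.2,
# area 0.0.0.0, default 10s/40s timers. Requires root privileges to send.
from scapy.all import IP, send
from scapy.contrib.ospf import OSPF_Hdr, OSPF_Hello

hello = (
    IP(dst="224.0.0.5", ttl=1, proto=89)        # AllSPFRouters multicast, OSPF protocol 89
    / OSPF_Hdr(src="10.0.0.2", area="0.0.0.0")  # router ID and area in the OSPF header
    / OSPF_Hello(
        mask="255.255.255.0",   # subnet mask of the attached network
        hellointerval=10,
        deadinterval=40,
    )
)

send(hello, iface="eth0")
```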
Zo is a personal cloud server with AI built in. Do research, generate images, analyze data, vibe-code sites, and text or email your AI; connect your Gmail, Drive, Calendar, and more; create scheduled AI automations; host sites, APIs, or self-hosted services (like Plex or n8n) with built-in web hosting; take integrated payments; and much more. With Zo, you don't need NotebookLM, Claude, ChatGPT, Replit, Notion AI, Zapier, or Manus. Do more with AI – all in one place.
Thanks to the Raspberry Pi, we have easy access to extremely inexpensive machines running Linux that have all kinds of GPIO as well as various networking protocols. And as the platform has improved…
Building an internal agent: Code-driven vs LLM-driven workflows
When I started this project, I knew deep in my heart that we could get an LLM plus tool usage to solve arbitrarily complex workflows.

I still believe this is possible, but I'm no longer convinced it is actually a good solution. Some problems are just vastly simpler, cheaper, and faster to solve with software.

This post talks about our approach to supporting both code-driven and LLM-driven workflows, and why we decided it was necessary.

Most of the extensions to our internal agent have been the direct result of running into a problem that I couldn't elegantly solve within our current framework. Evals, compaction, and large-file handling all fit into that category.

Subagents, which allow an agent to initiate other agents, are in a different category: I've frequently thought that we needed subagents, and then always found an alternative that felt more natural.

Eventually, I decided to implement them anyway, because it seemed like an interesting problem to reason through. Eventually I would need them… right?

(Aside: I did, indeed, eventually use subagents to support code-driven workflows invoking LLMs.)
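The distinction the post draws, plain code for deterministic steps and the LLM only where judgment is needed, is easy to sketch. The snippet below is an illustration under my own assumptions (the `run_llm_step` helper, model name, and ticket example are hypothetical), not the author's framework:

```python
# Illustrative only: a code-driven workflow with a single LLM-driven step.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_llm_step(instructions: str, payload: str) -> str:
    """The 'subagent' boundary: hand one well-scoped question to the model."""
    response = client.chat.completions.create(
        model="gpt-4.1",  # placeholder model name
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": payload},
        ],
    )
    return response.choices[0].message.content


def triage_ticket(raw_ticket: str) -> dict:
    """Code-driven workflow: deterministic steps stay in plain software."""
    ticket = json.loads(raw_ticket)        # parsing: plain code
    if ticket.get("priority") == "low":    # routing: plain code
        return {"queue": "backlog", "summary": ticket.get("title", "")}

    # Only the genuinely fuzzy step goes to the LLM.
    summary = run_llm_step(
        "Summarise this support ticket in one sentence for an on-call engineer.",
        json.dumps(ticket),
    )
    return {"queue": "on-call", "summary": summary}


if __name__ == "__main__":
    print(triage_ticket('{"priority": "high", "title": "DB latency spike"}'))
```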
How to Build a GitHub Code-Analyser Agent for Developer Productivity
Understanding large GitHub repositories can be time-consuming. Code-Analyser tackles this problem by using an agentic approach to parse and analyze codebases.
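As a rough sketch of the first step such an agent might take, the snippet below pulls a repository's file tree from the GitHub REST API and turns it into an analysis prompt; the repository, branch, and prompt wording are placeholders rather than details from the article.

```python
# Sketch: list a repo's tree via the GitHub REST API and build an analysis prompt.
# owner/repo/branch are placeholder assumptions; unauthenticated calls are rate-limited.
import requests

OWNER, REPO, BRANCH = "octocat", "Hello-World", "master"

tree = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/git/trees/{BRANCH}",
    params={"recursive": "1"},
    timeout=30,
).json()

# Keep only file entries ("blob") so the prompt stays small.
paths = [item["path"] for item in tree.get("tree", []) if item["type"] == "blob"]

prompt = (
    "You are a code analyser. Given this file listing, describe the project's "
    "structure and suggest where a new contributor should start:\n"
    + "\n".join(paths[:200])
)
print(prompt)
```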
Run Your Own Phone To Bring The Dreamcast Back Online
Playing a video game online is almost second nature now. So much so that almost all multiplayer video games have ditched their split-screen multiplayer modes because they assume you’d rather …
Building an internal agent: Context window compaction
Although my model of choice for most internal workflows remains GPT-4.1 for its predictable speed and high adherence to instructions, even its 1,047,576-token context window can run out of space.

When you run out of space in the context window, your agent either needs to give up, or it needs to compact that large context window into a smaller one. Here are our notes on implementing compaction.
This is part of the Building an internal agent series.
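A bare-bones version of that compaction step might look like the sketch below; the token budget, the number of recent messages kept verbatim, the model name, and the summarisation prompt are all assumptions for illustration, not the post's implementation.

```python
# Sketch of context-window compaction: when the transcript gets too big,
# replace the oldest messages with a single model-written summary.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")

MAX_TOKENS = 200_000   # compact well before the real context limit (assumed budget)
KEEP_RECENT = 10       # always keep the newest messages verbatim (assumed)


def count_tokens(messages: list[dict]) -> int:
    """Approximate transcript size by encoding each message body."""
    return sum(len(enc.encode(m["content"])) for m in messages)


def compact(messages: list[dict]) -> list[dict]:
    """Return a transcript that fits the budget, summarising older turns."""
    if count_tokens(messages) <= MAX_TOKENS or len(messages) <= KEEP_RECENT:
        return messages

    old, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    summary = client.chat.completions.create(
        model="gpt-4.1",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Summarise this conversation so an agent can continue it. "
                           "Preserve decisions, open tasks, and file paths.",
            },
            {"role": "user", "content": "\n".join(m["content"] for m in old)},
        ],
    ).choices[0].message.content

    return [{"role": "system", "content": f"Summary of earlier conversation:\n{summary}"}] + recent
```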
Goodbye Plugins: MCP Is Becoming the Universal Interface for AI
The era of fragmented plugins is ending. Model Context Protocol is changing the game, offering cross-model compatibility, richer context and lower overhead.
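To make that concrete, a minimal MCP server using the official Python SDK's FastMCP helper fits in a dozen lines; the server name and tool below are placeholder examples.

```python
# Minimal MCP server sketch using the official Python SDK (pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so any MCP-capable client can connect
```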
The biggest gap between AI agents and human intelligence is the ability to learn. There are various emerging approaches to support continual learning for AI ...