Where to go from here
This workshop covered building a coding agent with tool calls, a skills system, and two framework implementations. Several important topics were intentionally left out.
Evaluation
We did not measure how well the agent performs. A real coding agent needs evaluations: a set of tasks, a way to run the agent on them automatically, and metrics that tell you whether a change to the prompt or tools made things better or worse. Without evaluation, every prompt tweak is a guess.
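A minimal harness is just a task list paired with checks, a loop that runs the agent on each task, and a pass rate at the end. The sketch below assumes a run_agent(prompt) function like the one built in the workshop; the tasks and checks are illustrative placeholders (a real eval would execute the generated code against tests rather than string-match the output).

```python
# Minimal evaluation harness sketch. Assumes run_agent(prompt) -> str is
# the agent entry point from the workshop; tasks and checks here are
# naive placeholders for illustration.

tasks = [
    {
        "prompt": "Write a function fizzbuzz(n) that returns a list of strings.",
        "check": lambda output: "fizzbuzz" in output,
    },
    {
        "prompt": "Fix the off-by-one bug in range_sum.",
        "check": lambda output: "range_sum" in output,
    },
]

def evaluate(run_agent, tasks):
    results = []
    for task in tasks:
        output = run_agent(task["prompt"])      # run the agent on one task
        results.append(task["check"](output))   # score it with the task's check
    passed = sum(results)
    print(f"passed {passed}/{len(results)} ({passed / len(results):.0%})")
    return passed / len(results)
```

With a harness like this, a prompt or tool change becomes a before/after comparison of pass rates instead of a guess.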
Multi-agent patterns
The agent in this workshop is a single loop with one set of tools. More complex setups use multiple agents that specialize and collaborate: one agent plans, another writes code, a third reviews it. That pattern adds power but also adds coordination complexity.
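At its simplest, each "agent" is the same model behind a different system prompt, with an orchestrator passing outputs between them. This is a minimal sketch, not a framework recipe; the llm helper and the pipeline shape are assumptions, and a real setup would loop the review back to the coder until it passes.

```python
# Planner/coder/reviewer pipeline sketch. Each "agent" is just a different
# system prompt over the same model; the sequential hand-off is the
# simplest possible coordination scheme.

from openai import OpenAI

client = OpenAI()

def llm(system: str, user: str) -> str:
    # One chat-completion call; the model name is an assumption.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return response.choices[0].message.content

def solve(task: str) -> str:
    plan = llm("You are a planner. Break the task into steps.", task)
    code = llm("You are a coder. Implement the plan.", f"Task: {task}\nPlan: {plan}")
    review = llm("You are a reviewer. Point out bugs and risks.", code)
    # A real setup would iterate here until the reviewer approves.
    return llm("You are a coder. Revise the code per the review.",
               f"Code: {code}\nReview: {review}")
```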
Deployment
We ran the agent in a notebook. Deploying it as a production service is a separate problem: you need an API, authentication, rate limiting, and persistent history. See the end-to-end agent deployment workshop for one approach.
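For a sense of the shape, here is one possible wrapper using FastAPI (one option among many). The auth check and history handling are stubs, and run_agent stands in for the agent loop from the workshop; this is a sketch of the surface area, not a production service.

```python
# Minimal API wrapper sketch (FastAPI is an assumption, not the workshop's
# choice). Auth, rate limiting, and history are stubbed with comments.

from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

from agent import run_agent  # hypothetical module: the loop from the workshop

app = FastAPI()

class ChatRequest(BaseModel):
    session_id: str
    message: str

@app.post("/chat")
def chat(req: ChatRequest, authorization: str = Header(default="")):
    if not authorization.startswith("Bearer "):   # stand-in for real auth
        raise HTTPException(status_code=401)
    # A real service would load/store history keyed by req.session_id and
    # enforce per-key rate limits before calling the model.
    reply = run_agent(req.message)
    return {"reply": reply}
```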
MCP integration
The Model Context Protocol (MCP) is a standard for exposing tools to agents. The skills system we built is a simple version of this idea. MCP provides a more general protocol with discovery, schema negotiation, and tool composition. Integrating MCP would let the agent use tools from any MCP-compatible server.
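As a rough sketch of what that integration looks like from the client side, the official Python SDK (pip install mcp) exposes a session that handles discovery and tool calls. The server command and tool name below are hypothetical, and the SDK's API may have changed, so treat this as a shape, not a reference.

```python
# Sketch of listing and calling tools on an MCP server over stdio, based
# on the official Python SDK. my_server.py and "read_file" are
# hypothetical; substitute any MCP-compatible server and tool.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["my_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()              # handshake and capability exchange
            tools = await session.list_tools()      # discovery: what can this server do?
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("read_file", {"path": "README.md"})
            print(result)

asyncio.run(main())
```

Each discovered tool could then be registered with the agent the same way the workshop's skills were, making the agent's toolset a function of whichever servers it connects to.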
Other frameworks
We used ToyAIKit and PydanticAI. The same patterns apply to other agent frameworks: LangGraph, CrewAI, AutoGen, and others. The important part is understanding the tool-call loop; every framework is packaging around that core idea.
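Stripped of packaging, the loop fits in a few lines. This sketch assumes the OpenAI chat-completions API and two structures the workshop built in its own way: a TOOLS dict mapping tool names to Python functions, and TOOL_SCHEMAS holding their JSON-schema descriptions.

```python
# The core tool-call loop, framework-free. TOOLS maps tool names to Python
# functions; TOOL_SCHEMAS is the matching list of JSON-schema definitions.

import json
from openai import OpenAI

client = OpenAI()

def agent_loop(messages, TOOLS, TOOL_SCHEMAS, model="gpt-4o-mini"):
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=TOOL_SCHEMAS
        )
        message = response.choices[0].message
        messages.append(message)
        if not message.tool_calls:          # no tool requested: final answer
            return message.content
        for call in message.tool_calls:     # execute each requested tool
            args = json.loads(call.function.arguments)
            result = TOOLS[call.function.name](**args)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": str(result),
            })
```

Once this loop is familiar, evaluating a new framework mostly means asking what it adds around it: state management, retries, streaming, observability.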