
Building a Coding Agent: Python/Django Edition

August 14, 2025, 02:00 Europe/Berlin


We build a small project bootstrapper for Django: a coding agent that takes a plain-language app request, copies a working Django template, reads and writes files through tools, and iterates until the generated app runs. The first implementation uses the OpenAI Responses API through ToyAIKit, then we try the same idea with OpenAI Agents SDK, PydanticAI, Anthropic, and Z.AI.

Links

The main resources:

The app you will build

The coding agent is a notebook-based chat interface backed by an LLM and a small set of filesystem tools. You give it a short request, such as "build a to-do list app," and the agent edits a copied Django template, leaving you with a project you can run.

```mermaid
flowchart LR
    USER["You<br/>short app request"]
    CHAT["Jupyter chat UI<br/>ToyAIKit"]
    RUNNER["Agent runner<br/>Responses API or framework"]
    TOOLS["AgentTools<br/>read, write, tree, grep, bash"]
    DJANGO["Copied Django template<br/>project folder"]
    LLM["LLM provider<br/>OpenAI, Anthropic, Z.AI"]
    USER -->|type request| CHAT
    CHAT --> RUNNER
    RUNNER -->|tool calls| TOOLS
    TOOLS -->|modify files| DJANGO
    RUNNER -->|messages and tools| LLM
    DJANGO -->|make run| USER
```
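At the center of this flow, the runner receives function-call items from the model and routes each one to a matching method on the tools object. A minimal sketch of that dispatch step, assuming hypothetical names (`dispatch_tool_call`, `DemoTools`) rather than ToyAIKit's actual API:

```python
import json


def dispatch_tool_call(tools, name, arguments_json):
    """Look up a tool method by name and call it with the model's JSON arguments."""
    method = getattr(tools, name)
    kwargs = json.loads(arguments_json or "{}")
    result = method(**kwargs)
    # Tool outputs are sent back to the model as strings
    return str(result)


class DemoTools:
    # Stand-in for the workshop's AgentTools; real tools would touch the filesystem
    def read_file(self, path):
        return f"contents of {path}"


# The model asks for read_file with JSON-encoded arguments; we execute it locally
output = dispatch_tool_call(DemoTools(), "read_file", '{"path": "manage.py"}')
```

The real loop repeats this for every tool call in a response, appends the outputs to the conversation, and calls the model again until it stops requesting tools.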

Two screenshots show what the finished workshop output looks like. The first one shows the notebook chat after the agent plans and starts calling file tools:

Coding agent chat

The second one shows one of the generated Django todo apps:

Generated todo app

Walkthrough

Follow the numbered files in order. Each file is self-contained enough to read on its own, but the steps build on each other.

  1. Overview and setup - what we are building, prerequisites, OpenAI key setup, Codespaces notes, and package installation.
  2. Part 1: OpenAI function calling recap - a quick OpenAI Responses API recap with a joke function, tool schema, and the model choosing a tool call.
  3. Part 2: ToyAIKit runner - use ToyAIKit to run the same tool-calling example through a notebook chat interface.
  4. Part 3: Django template - use a working Django template as the starting point, with both clone-and-run and build-from-scratch paths.
  5. Part 4: File tools - create the file tools the coding agent can call: read, write, file tree, bash command, and search.
  6. Part 5: Developer prompt - write the developer prompt that tells the agent what project it is editing and how to behave.
  7. Part 6: Run the coding agent - run the first coding agent with ToyAIKit, look at the generated todo app, and iterate when a generated feature does not work.
  8. Part 7: OpenAI Agents SDK - switch the runner to OpenAI Agents SDK, wrap the same tools with function_tool, and run the Django agent again.
  9. Part 8: PydanticAI, Z.AI, and next steps - try PydanticAI, Anthropic Claude, Z.AI through chat completions, and the multi-agent direction from the extra notebook.
  10. Q&A - side discussions: Jupyter, templates, open-source models, MCP, production agents, API keys, streaming, timeouts, and notebooks.
  11. Appendix - file inventory for the workshop code and the Django template.

Result

The simplest version is intentionally small. It runs in Jupyter, uses local filesystem tools, and edits one copied Django project folder. That is enough to understand how larger coding agents work under the hood: prepare a template, expose the right tools, give the model precise instructions, and iterate on the generated code.