Q&A: side discussions

Side discussions from the workshop that are useful alongside the main walkthrough.

Jupyter from VS Code

Q: How do you launch Jupyter Notebook from Visual Studio Code?

Open the integrated terminal and run:

jupyter notebook

If you use Codespaces or a remote VS Code session, VS Code should detect the port and forward it. If it does not, open the Ports panel and forward the Jupyter port manually.

Jupyter as the workshop environment

Q: Why use Jupyter for this?

Jupyter gives a fast interactive loop. You can run a cell, look at the agent output, change one function or prompt, and run again. That is useful when building an agent because prompt and tool design usually require several iterations.

Prerequisites

Q: What are the prerequisites for understanding this workshop?

Basic Python is enough to follow along. The workshop uses Django, Jupyter, OpenAI tool calling, and ToyAIKit, but the code is copy-pasteable and the lower-level tool loop is hidden behind the runner.

Open-source models

Q: Did you experiment with open-source LLMs?

Not directly in the main walkthrough. Some of the same ideas work with providers that serve open models. Groq is one option for open-source model hosting, but those providers often use the chat completions API rather than the newer Responses API. That is why the Z.AI example switches to OpenAIChatCompletionsRunner.
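The request-body shape those providers expect can be sketched as follows. This is a minimal illustration of a chat-completions-style payload with one tool, not code from the workshop; the model name and the read_file tool are hypothetical placeholders.

```python
# Sketch of a chat-completions-style request body, as used by providers
# that serve open models through the older chat completions API.
# The model name and tool definition below are illustrative.

def build_chat_request(user_message, model="llama-3.3-70b-versatile"):
    """Assemble a minimal chat completions payload with one tool."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding agent."},
            {"role": "user", "content": user_message},
        ],
        # Tools are declared as JSON-schema function specs.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "read_file",
                    "description": "Read a file from the project.",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            }
        ],
    }

request_body = build_chat_request("Add a /todos page")
```

The Responses API nests tools and messages differently, which is why a separate runner is needed for chat-completions-only providers.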

Template project

Q: Why does the agent need a template?

The template gives the model a working project instead of an empty folder. That improves the chance of runnable output because dependencies, settings, URL structure, base templates, and conventions already exist. The coding agent only has to modify a few files rather than invent the entire Django project from scratch.

Production-ready agents

Q: Where can I learn about production-ready agents?

The related AI Bootcamp course covers production-ready agents in more depth. This workshop is intentionally small. Production concerns like testing, monitoring, guardrails, safety, and deployment are larger topics than this single coding-agent walkthrough.

Guardrails and data masking

Q: How do you secure agents from unintended access or prompt injection?

This workshop does not implement those controls. The short answer is guardrails: validate what reaches the model and restrict what each tool is allowed to do. Data masking is a related topic, but this session stays focused on the coding-agent mechanics.

Codex

Q: Did you use Codex to create the file tools?

No. The first version of the file tools was written with ChatGPT and then edited with Cursor. The workshop did not use Codex.

Tools and MCP

Q: What is the difference between tools used in this agent and tools used in MCP?

The tools in this workshop are local Python functions in the same process as the notebook runner. MCP is a protocol for connecting an agent to tools served elsewhere. A useful comparison: MCP is like microservices for function calling, where the agent talks to a separate tool service through a defined protocol.

Printing from tools

Q: Can you print from a function tool?

Yes, a tool can print. For an agent UI, returning structured output is usually more useful than printing, because the runner can display the returned value or pass it back to the model predictably.
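As an illustration of returning structured data instead of printing, here is a hypothetical command-running tool sketched for this purpose (not a tool from the workshop):

```python
# Returning a dict instead of printing: the runner can render the result
# for the user and also hand it back to the model as the tool output.
import subprocess

def run_command(command: str) -> dict:
    """Run a shell command and return its result as structured data."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return {
        "returncode": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }
```

A print inside the tool would only appear in the notebook; the returned dict is what the model actually sees.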

API keys

Q: Where do you put the Claude API key?

Use the same pattern as the OpenAI key: set the environment variable expected by the Anthropic client or by the framework you are using. In Codespaces, store it as a Codespaces secret. Locally, export it before starting Jupyter.
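The pattern can be sketched in a few lines. ANTHROPIC_API_KEY is the variable the Anthropic Python client reads by default; the helper function here is just an illustration.

```python
# Read an API key from the environment, failing loudly if it is missing.
import os

def get_api_key(name: str = "ANTHROPIC_API_KEY") -> str:
    """Fetch an API key from the environment."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"Set {name} before starting Jupyter.")
    return key
```

Locally you would `export ANTHROPIC_API_KEY=...` before launching Jupyter; in Codespaces the secret is injected into the environment automatically.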

Running the server from the agent

Q: Can the agent run make run and check the app without opening the browser?

Technically yes, but the simple implementation here blocks on runserver. When Jupyter starts a long-running server process through the tool call, it waits for the process to finish and the notebook hangs. A stronger version would start the server in the background, watch logs, probe it with a browser or HTTP client, and stop the process afterward.

Developer prompt across models

Q: Does the developer prompt stay the same for other models?

For the workshop examples, yes. We use the same prompt across OpenAI, Anthropic, and Z.AI variants. In real projects you may adjust the wording for a specific model if it repeatedly ignores or misreads part of the instruction.

Streaming

Q: Can you make the agent stream?

Yes. The notebook does not implement streaming for every runner because non-streaming is simpler for teaching. For a real chat application you should stream progress, tokens, and tool calls so the user can see what is happening.
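The shape of a streaming loop, independent of provider, can be sketched as follows; `deltas` stands in for whatever chunk iterator a streaming API yields.

```python
# Consume streamed text chunks as they arrive and show each one
# immediately, instead of waiting for the full response.

def stream_to_console(deltas, write=print):
    """Accumulate streamed text chunks while displaying each one."""
    parts = []
    for delta in deltas:
        if delta:                 # providers may send empty chunks
            write(delta)          # show progress immediately
            parts.append(delta)
    return "".join(parts)         # the full message, for the conversation history
```

Tool calls stream the same way: the runner would surface each tool invocation as it happens rather than after the turn completes.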

Z.AI timeout

Q: What do you do when a provider times out?

If the issue is on the provider side, switch to another provider or retry later. A production system should have fallback models. If a request times out after partially editing files, run the agent again and let it read the current state.
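The retry-then-fallback idea can be sketched as a small wrapper; the primary and fallback callables stand in for real provider clients.

```python
# Try the primary provider a few times; on repeated failure, fall back.
import time

def with_fallback(primary, fallback, attempts=3, delay=1.0):
    """Call primary up to `attempts` times; if all fail, call fallback."""
    last_error = None
    for _ in range(attempts):
        try:
            return primary()
        except Exception as err:  # a real system would catch timeout errors specifically
            last_error = err
            time.sleep(delay)
    try:
        return fallback()
    except Exception:
        raise last_error
```

Because the file tools read the project from disk on every call, re-running the agent after a partial edit simply continues from the current state.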

Notebook access

Q: Can we get the Jupyter notebook?

The public workshop repo includes the notebooks:

  • agent.ipynb
  • maven-agents-workshop.ipynb
  • agent-sdk-runner.ipynb
  • pydantic-ai-runner.ipynb
  • agent-chat-completions-runner.ipynb
  • multiple-agents.ipynb

You can also follow the numbered pages without the notebooks because the commands, prompts, and code fragments are included here.

Continue with Appendix: file inventory.
