Q&A
These questions came up during the workshop; the answers are collected here for reference.
Recording
The workshop recording is linked from Building AI Agents with MCP, PydanticAI and OpenAI.
AI assistance in notebooks
For AI help while working in notebooks, use VS Code or Cursor notebook support, or use Google Colab with Gemini support. The examples use Jupyter directly, but the same notebook can be opened from an IDE.
minsearch and RAG
The workshop uses text search in minsearch, not vector search. That
still counts as RAG because we retrieve data from a knowledge base and
use it to augment the answer. The retrieval backend can be text search,
vector search, a SQL query, or any other function.
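The retrieval step can be sketched as a plain function. Here a toy term-overlap matcher stands in for minsearch's text search; the function name and scoring are illustrative, not minsearch's actual API.

```python
# Toy retrieval backend: any function mapping a query to relevant
# documents can serve as the "R" in RAG. Scoring is a simple count
# of query terms appearing in the document text.
def search(query, docs):
    terms = set(query.lower().split())
    scored = []
    for doc in docs:
        score = sum(term in doc["text"].lower() for term in terms)
        if score:
            scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored]
```

The results are then pasted into the prompt to augment the answer; swapping in vector search or a SQL query only changes this function's body, not the overall RAG flow.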
Responses API message format
With the Responses API, the model output objects can be appended back to
the input list directly. With the older Chat Completions API, the message
format is different: history is a list of role-based message dicts, and
assistant tool calls and tool results each need their own message shape.
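A minimal sketch of the append-back pattern, assuming the Responses API shape where response.output is a list of items and function calls have type "function_call" (the helper name is ours):

```python
# Append each Responses API output item back onto the input list and
# collect any function calls so their results can be added next.
def append_output(input_list, response):
    tool_calls = []
    for item in response.output:
        input_list.append(item)  # output items go back in as-is
        if getattr(item, "type", None) == "function_call":
            tool_calls.append(item)
    return tool_calls
```

With Chat Completions this one-liner append would not work, which is why the workshop code sticks to the Responses API.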
OpenAI API key
Set OPENAI_API_KEY as an environment variable before running the
notebook. If you are not sure how to do that on your operating system,
ask an assistant for OS-specific shell instructions. You can also pass
api_key directly to the client while testing, but do not commit it.
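A small fail-fast check can catch a missing key before the first API call; the helper name here is our own, not part of the OpenAI SDK:

```python
import os

def require_openai_key(env=os.environ):
    """Return OPENAI_API_KEY or fail with a clear message."""
    key = env.get("OPENAI_API_KEY")
    if key is None:
        raise RuntimeError("Set OPENAI_API_KEY before running the notebook")
    return key
```

While testing you can instead pass api_key=... to the client constructor, but keep that out of version control.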
ToyAIKit and production
toyaikit is for teaching, workshops, and quick experiments. For
production-grade agent work, use a tested framework such as OpenAI Agents
SDK, PydanticAI, LangChain, or your own well-tested loop.
Notebook chat input
The interactive notebook chat uses Python's input(). The runner reads a
question, stops when the input is the word stop, and otherwise passes the
text to the same agent loop used by runner.loop(...).
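The runner can be sketched like this; the function names are assumptions, not toyaikit's actual internals:

```python
# Read questions with input(), stop on the word "stop", otherwise hand
# the text to the same agent loop the runner uses everywhere else.
def chat(agent_step):
    while True:
        question = input("You: ")
        if question.strip().lower() == "stop":
            break
        agent_step(question)
```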
uv
uv creates an isolated environment and pins dependencies in uv.lock.
That makes the workshop easier to reproduce and avoids packages from
other projects interfering with the notebook.
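The typical commands look like this (the package names are illustrative):

```shell
uv init                   # create pyproject.toml for the project
uv add jupyter openai     # add dependencies; versions are pinned in uv.lock
uv run jupyter notebook   # run inside the project's isolated environment
```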
Prompt logic and function logic
Put behavioral instructions in the prompt. Put actions in functions. Searching a database, sending an email, appending a row, and calling an API belong in tools. The docstring is part of the tool description the model reads, so use it to explain when and how the tool should be used.
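A sketch of how a toolkit might turn a function into a tool description; the spec shape is simplified (real toolkits also derive a parameter schema from the signature):

```python
import inspect

def add_entry(question: str, answer: str) -> None:
    """Add a question-answer pair to the FAQ database.

    Use this only when the user explicitly asks to save an answer,
    not for regular searches.
    """
    ...  # the action itself lives here, not in the prompt

def to_tool_spec(func):
    # The docstring becomes the description the model reads when
    # deciding whether and how to call the tool.
    return {
        "type": "function",
        "name": func.__name__,
        "description": inspect.getdoc(func),
    }
```

This is why a vague docstring leads to a vague tool: whatever you write there is exactly what the model sees.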
PydanticAI and multiple providers
PydanticAI can use different model providers through the model string.
The example starts with OpenAI, then switches to an Anthropic model with
model="anthropic:claude-3-7-sonnet-latest". That requires an Anthropic
key in the environment.
Anthropic key
Anthropic API access is paid, similar to OpenAI API access. The workshop does not walk through the full signup flow. The Anthropic example assumes a configured environment variable.
MCP and databases
Postgres is a good example of why MCP exists. If many agents need database access, one team can expose database tools through an MCP server and agent teams can call that server instead of each writing their own database integration.
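A sketch of the tool side, with sqlite3 standing in for Postgres and tool names that are our own; with FastMCP, each function would be registered on the server (via its tool decorator) so agent teams call it over MCP instead of importing database code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # sqlite3 stands in for Postgres here
conn.execute("CREATE TABLE faq (question TEXT, answer TEXT)")

def add_faq(question: str, answer: str) -> None:
    """Insert a question-answer pair into the FAQ table."""
    conn.execute("INSERT INTO faq VALUES (?, ?)", (question, answer))
    conn.commit()

def search_faq(term: str) -> list:
    """Return (question, answer) rows whose question contains the term."""
    cur = conn.execute(
        "SELECT question, answer FROM faq WHERE question LIKE ?",
        (f"%{term}%",),
    )
    return cur.fetchall()
```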
Hallucination mitigation
To reduce hallucinations, start with evaluations and monitoring. The workshop does not implement that layer directly. It builds the tool foundation that later evaluation work can test.
MCP registries
Cursor has a directory of MCP servers that can be added to projects. Docker is an example of a tool you might add through that kind of registry.
FastMCP name
The name is likely inspired by FastAPI. For the workshop, the important part is that FastMCP gives us a small server framework for exposing tools.
Context7 and DeepWiki
Context7 is a ready-made MCP server for framework documentation. DeepWiki is a related idea for GitHub repositories. Both fit the same pattern: index a source of information and let an agent consult it through a tool.