
Building AI Agents with MCP, PydanticAI and OpenAI

Alexey Grigorev · September 1, 2025

ai-agents llm-engineering agent-systems tooling-architecture mcp rag

We build a course FAQ assistant from the bottom up. First we expose a plain Python search(query) function to the OpenAI Responses API. Then we turn the same idea into a reusable agent loop, compare toyaikit, the OpenAI Agents SDK, and PydanticAI, and finally move the FAQ tools behind an MCP server that can be used from a notebook, from PydanticAI, and from Cursor and VS Code.
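A minimal sketch of that first step, assuming the Responses API's flattened function-tool format (the search body is stubbed here; a real run would pass search_tool to client.responses.create along with the conversation):

```python
def search(query):
    """Search the FAQ index; stubbed here with a hardcoded result."""
    return [{"question": "How do I install Kafka?",
             "text": "Use pip install kafka-python."}]

# Tool definition in the Responses API format: the function's name,
# description, and parameters sit at the top level of the tool object
# (unlike the nested Chat Completions format).
search_tool = {
    "type": "function",
    "name": "search",
    "description": "Search the course FAQ for entries relevant to the query.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Search terms from the student's question.",
            }
        },
        "required": ["query"],
        "additionalProperties": False,
    },
}

# With a real client this would look roughly like:
# response = client.responses.create(model="gpt-4o-mini",
#                                    input=messages, tools=[search_tool])
```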

Links

The main resources:

The system you will build

The final setup looks like this:

```mermaid
flowchart LR
    NOTEBOOK["Jupyter notebook"]
    OPENAI["OpenAI Responses API"]
    FRAMEWORKS["Agents SDK<br/>PydanticAI"]
    MCPCLIENT["MCP clients<br/>toyaikit, PydanticAI, Cursor"]
    MCPSERVER["FastMCP server<br/>SSE or stdio"]
    TOOLS["FAQ tools<br/>search, add_entry"]
    INDEX["minsearch index<br/>FAQ JSON"]
    NOTEBOOK -->|function calling| OPENAI
    NOTEBOOK --> FRAMEWORKS
    FRAMEWORKS -->|tool calls| TOOLS
    NOTEBOOK -->|MCP client| MCPCLIENT
    MCPCLIENT -->|MCP protocol| MCPSERVER
    MCPSERVER --> TOOLS
    TOOLS --> INDEX
```

The FAQ data comes from the Data Engineering Zoomcamp FAQ. The first half of the workshop keeps the tools inside the notebook so you can see the agent loop directly. The second half moves the same tools into mcp_faq/, which makes them reusable by any MCP client.
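The agent loop you see in the first half can be sketched as follows. This is a minimal sketch with the model stubbed out by fake_model; a real run would call the Responses API in its place, and the message shapes here are simplified illustrations, not the API's wire format:

```python
import json

def search(query):
    """Stubbed FAQ search tool."""
    return [{"question": "Course start?", "text": "The course starts in January."}]

TOOLS = {"search": search}

def fake_model(messages):
    """Stand-in for the LLM: first asks for a tool call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"name": "search",
                                "arguments": {"query": "course start"}}]}
    return {"content": "The course starts in January."}

def agent_loop(question, model=fake_model, max_rounds=10):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_rounds):
        reply = model(messages)
        calls = reply.get("tool_calls")
        if not calls:
            # No more tool calls: the model produced its final answer.
            return reply["content"]
        for call in calls:
            # Execute each requested tool and feed the result back.
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": json.dumps(result)})
    raise RuntimeError("agent did not finish within max_rounds")
```

The loop is the whole trick: call the model, run whatever tools it asks for, append the results, and repeat until it answers in plain text.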

Walkthrough

Follow the numbered files in order. Each file is self-contained enough to read on its own.

  1. Overview and setup - prerequisites, environment setup, and the shape of the project.
  2. Part 1: Load the FAQ documents - load the parsed FAQ, build a minsearch index, and create the first search tool.
  3. Part 2: Start with a plain model call - describe search to the OpenAI Responses API and run one manual tool-call round.
  4. Part 3: Make the instructions stronger - add a developer prompt, support multiple tool calls, and turn the manual round into a loop.
  5. Part 4: Replace the loop with ToyAIKit - use toyaikit to run the same loop with a notebook chat interface.
  6. Part 5: Infer tool schemas from Python functions - infer schemas from docstrings, add add_entry, and move tool state into a SearchTools class.
  7. Part 6: Install the framework - recreate the FAQ assistant with the OpenAI Agents SDK.
  8. Part 7: Create the same agent with PydanticAI - recreate the assistant with PydanticAI and switch from OpenAI to Anthropic by changing the model string.
  9. Part 8: Move the FAQ tools into a project - create the mcp_faq project and expose the FAQ tools with FastMCP.
  10. Part 9: Connect from Python over stdio - connect to the MCP server from Python, convert MCP tools for OpenAI, and use PydanticAI over SSE.
  11. Part 10: Use the FAQ server from Cursor - connect the same MCP server to Cursor and VS Code.
  12. Q&A - Q&A from the live workshop.
  13. Appendix: file list - file list and final command recap.
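The schema inference from Part 5 can be approximated with the standard library alone. This is a hedged sketch of the idea, not toyaikit's actual implementation: read each parameter's type annotation and the docstring, and emit a Responses-API-style tool definition (add_entry here is a placeholder):

```python
import inspect

# Illustrative mapping from Python annotations to JSON Schema types.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_tool(fn):
    """Build a tool definition from a function's signature and docstring."""
    sig = inspect.signature(fn)
    properties = {}
    required = []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value: parameter is required
    return {
        "type": "function",
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties,
                       "required": required},
    }

def add_entry(question: str, answer: str):
    """Add a new entry to the FAQ index."""
    ...

tool = function_to_tool(add_entry)
```

Keeping tool state in a class (the SearchTools step) then just means running this over the class's public methods instead of loose functions.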