
Part 7: Create the same agent with PydanticAI

PydanticAI is another framework for building agents. We use it in this workshop because it works with multiple LLM providers, including OpenAI and Anthropic.

Install it if it is not already in your environment:

uv add pydantic-ai

Import the agent class:

from pydantic_ai import Agent

PydanticAI can use the Python methods directly as tools; there is no function_tool wrapper in this basic example.

tools = [
    search_tools.search,
    search_tools.add_entry
]
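The notebook assumes a `search_tools` object defined in an earlier part of the workshop. As a reminder of its shape, here is a minimal stand-in with the same two method names (illustrative only; the workshop's real version searches an actual FAQ index):

```python
class SearchTools:
    """Illustrative stand-in for the workshop's FAQ tool object."""

    def __init__(self):
        self.entries = []  # the real version would wrap a search index

    def search(self, query: str) -> list:
        """Search the FAQ database for entries matching the query."""
        return [e for e in self.entries if query.lower() in e.lower()]

    def add_entry(self, question: str, answer: str) -> None:
        """Add a new question-answer entry to the FAQ database."""
        self.entries.append(f"{question} {answer}")
```

The docstrings matter: PydanticAI reads them to describe each tool to the model.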

Create the agent:

agent = Agent(
    name="faq_agent",
    instructions=developer_prompt,
    tools=tools,
    model="gpt-4o-mini"
)

This repetition is useful: the agent idea stays the same, while each framework chooses a different interface.

Run the PydanticAI agent

The notebook again uses toyaikit only to keep the chat interface consistent:

from toyaikit.chat.runners import PydanticAIRunner

runner = PydanticAIRunner(
    chat_interface=chat_interface,
    agent=agent
)

Run it with await:

await runner.run()

To learn more, look at the PydanticAIRunner implementation. It shows the thin layer between the notebook UI and the framework.
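To get a feel for what that thin layer involves, here is a hypothetical sketch of a runner loop (not toyaikit's actual code). It assumes, as PydanticAI's API does, that the agent has an async `run` method whose result exposes `output` and `all_messages()`:

```python
import asyncio


class MinimalRunner:
    """Hypothetical sketch of a chat loop around a PydanticAI-style agent."""

    def __init__(self, agent):
        self.agent = agent
        self.history = []  # message history carried across turns

    async def run_turn(self, user_input: str) -> str:
        # Pass the accumulated history so the model sees earlier turns
        result = await self.agent.run(user_input, message_history=self.history)
        self.history = result.all_messages()
        return result.output
```

A real runner would wrap `run_turn` in an input loop and render the output in the chat interface; the framework does the rest.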

Generate the tool list from the object

The earlier helper can also collect methods without wrapping them:

from toyaikit.tools import get_instance_methods

tools = get_instance_methods(search_tools)
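If you are curious how such a helper could work, a rough equivalent can be built with the standard library's `inspect` module (this is a sketch, not toyaikit's actual implementation):

```python
import inspect


def get_public_methods(obj):
    """Collect an object's public bound methods, e.g. to pass as agent tools."""
    return [
        method
        for name, method in inspect.getmembers(obj, predicate=inspect.ismethod)
        if not name.startswith("_")  # skip __init__ and other private methods
    ]
```

Because the methods stay bound to the object, each tool call still has access to the object's state, such as the underlying FAQ index.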

Pass that list to PydanticAI:

agent = Agent(
    name="faq_agent",
    instructions=developer_prompt,
    tools=tools,
    model="gpt-4o-mini"
)

This is useful when your tool object has more methods and you do not want to maintain a separate list by hand.

Switch providers

Now change the model string to use Anthropic through PydanticAI:

agent = Agent(
    name="faq_agent",
    instructions=developer_prompt,
    tools=tools,
    model="anthropic:claude-3-7-sonnet-latest"
)

This requires an Anthropic API key in your environment. The rest of the tool code stays the same.
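If the key is missing, the failure surfaces only at request time. A small pre-flight check (a hypothetical helper, not part of the workshop code) fails earlier with a clearer message:

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} in your environment before running the agent.")
    return value


# For the Anthropic model above:
# require_env("ANTHROPIC_API_KEY")
```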

We use PydanticAI here because it gives you a common agent interface while still letting you switch providers. You can also combine agents that use different providers. This workshop keeps the example to one FAQ assistant so the provider swap stays easy to see.

Prompt logic and function logic

A common question is where to put logic: inside a prompt or inside a function. Use the prompt for instructions about behavior, like when to search or how to answer. Use a function for actions the model cannot do by text alone, like reading a database, sending an email, or appending a record.

The docstring sits between those worlds. It is part of the tool description the model reads, so it should tell the model what the tool does and how to call it.
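For example, a tool docstring might look like this (illustrative wording, not the workshop's exact tool):

```python
def add_entry(question: str, answer: str) -> None:
    """Add a new question-answer pair to the FAQ database.

    Use this only when the user explicitly asks to save an answer.

    Args:
        question: the user's question, stated in their own words.
        answer: the answer to store alongside the question.
    """
    ...
```

The first line tells the model what the tool does, the second tells it when to call the tool, and the argument descriptions tell it how.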
