Part 9: Connect from Python over stdio
The previous step runs a FastMCP server over standard input and output.
Now connect to it from the notebook with the toyaikit MCP client.
from toyaikit.mcp import MCPClient, SubprocessMCPTransport
command = "uv run python main.py".split()
workdir = "mcp_faq"
client = MCPClient(
    transport=SubprocessMCPTransport(
        server_command=command,
        workdir=workdir
    )
)
The client needs the same handshake we tested by hand:
client.start_server()
client.initialize()
client.initialized()
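Under the hood, these calls exchange JSON-RPC 2.0 messages with the server over the subprocess's stdin and stdout. A sketch of what the handshake looks like on the wire (the id values and clientInfo fields here are illustrative, not what toyaikit actually sends):

```python
import json

# initialize: a request (has an "id"), so the server sends a response back.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "notebook-client", "version": "0.1"},
    },
}

# initialized: a notification (no "id"), so no response is expected.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}

# tools/list: asks the server to describe its tools.
tools_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Each message goes to the server as one line of JSON on stdin.
for message in (initialize_request, initialized_notification, tools_request):
    print(json.dumps(message))
```

This mirrors the manual handshake from the previous step; the client methods simply build and send these messages for you.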
client.get_tools()
The full_initialize() convenience method runs the same sequence in one call:
client.full_initialize()
Call the tool through MCP:
result = client.call_tool("search", {"query": "how do I run docker?"})
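On the wire, that call_tool invocation becomes a tools/call request. A sketch of the corresponding JSON-RPC message (the id value is illustrative):

```python
import json

# The JSON-RPC request that call_tool("search", {...}) roughly
# corresponds to; the server runs the tool and returns its result
# in the matching response.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {"query": "how do I run docker?"},
    },
}
print(json.dumps(request))
```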
Now Python can talk to the server. The next layer makes those remote MCP tools look like OpenAI tools.
Use MCP tools with the OpenAI runner
MCPTools wraps an MCP client and converts MCP tool descriptions into
the schema expected by the OpenAI Responses API.
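The conversion is mostly a field mapping: MCP describes a tool with a name, a description, and a JSON Schema under inputSchema, while the Responses API expects a function tool with the same schema under parameters. A minimal sketch of that mapping (illustrative, not the actual MCPTools code):

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Map an MCP tool description to an OpenAI Responses function tool.

    Illustrative sketch only -- not the actual toyaikit implementation.
    """
    return {
        "type": "function",
        "name": tool["name"],
        "description": tool.get("description", ""),
        "parameters": tool.get(
            "inputSchema", {"type": "object", "properties": {}}
        ),
    }

# Example: a search tool as an MCP server might describe it.
mcp_search = {
    "name": "search",
    "description": "Search the FAQ database",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
print(mcp_tool_to_openai(mcp_search))
```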
from toyaikit.mcp import MCPTools
mcp_tools = MCPTools(client)
mcp_tools.get_tools()
Use mcp_tools in the same OpenAIResponsesRunner:
runner = OpenAIResponsesRunner(
    tools=mcp_tools,
    developer_prompt=developer_prompt,
    chat_interface=chat_interface,
    llm_client=OpenAIClient(model="gpt-4o-mini")
)
Run the chat:
runner.run()
Try "how do I install Kafka?". The model still performs function calling,
but the tool execution now goes through the MCP server instead of calling
the notebook function directly.
To learn more, look at the toyaikit MCP client implementation. It is close to the manual JSON-RPC flow from the previous step.
PydanticAI with stdio
PydanticAI has MCP support, so you usually do not need to write your own
client for production code. Here is a small terminal script that uses
MCPServerStdio.
Install the MCP extra if needed:
uv add "pydantic-ai[mcp]" openai toyaikit
Create test.py outside mcp_faq/:
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from toyaikit.chat.interface import StdOutputInterface
from toyaikit.chat.runners import PydanticAIRunner
mcp_client = MCPServerStdio(
    command="uv",
    args=["run", "python", "main.py"],
    cwd="mcp_faq"
)
Create the agent with the MCP client as a toolset:
agent = Agent(
    name="faq_agent",
    instructions=developer_prompt,
    toolsets=[mcp_client],
    model="gpt-4o-mini"
)
chat_interface = StdOutputInterface()
runner = PydanticAIRunner(
    chat_interface=chat_interface,
    agent=agent
)
Run the async runner from the script:
if __name__ == "__main__":
    import asyncio
    asyncio.run(runner.run())
Then start it:
uv run python test.py
Stdio can be inconvenient from Jupyter and on some Windows setups. SSE is the next option.
Run FastMCP over SSE
SSE is an HTTP transport using Server-Sent Events. It starts a web service that MCP clients can connect to over a URL.
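The framing itself is simple: an SSE response is a long-lived HTTP stream where the server writes event: and data: lines, with a blank line terminating each event. A minimal parser for that framing (a sketch of the wire format, not what any MCP client uses internally):

```python
def parse_sse(stream: str) -> list[dict]:
    """Parse Server-Sent Events framing: events are separated by blank
    lines; each event has an optional 'event:' line and one or more
    'data:' lines."""
    events = []
    for block in stream.split("\n\n"):
        event = {"event": "message", "data": []}
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"].append(line[len("data:"):].strip())
        if event["data"]:
            event["data"] = "\n".join(event["data"])
            events.append(event)
    return events

# Example stream; the event name and payload here are made up.
sample = "event: endpoint\ndata: /messages/?session_id=abc123\n\n"
print(parse_sse(sample))
# → [{'event': 'endpoint', 'data': '/messages/?session_id=abc123'}]
```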
Change the server entry point in mcp_faq/main.py:
if __name__ == "__main__":
    mcp = init_mcp()
    mcp.run(transport="sse")
Start the server:
cd mcp_faq
uv run python main.py
FastMCP serves the MCP endpoint at:
http://localhost:8000/sse
For the rest of the walkthrough, mcp_faq/main.py uses this SSE
transport.
Connect PydanticAI over SSE
In the notebook, create an SSE MCP client:
from pydantic_ai.mcp import MCPServerSSE
faq_mcp_client = MCPServerSSE(
    url="http://localhost:8000/sse"
)
Pass it as a PydanticAI toolset:
agent = Agent(
    name="faq_agent",
    instructions=developer_prompt,
    model="anthropic:claude-3-7-sonnet-latest",
    toolsets=[faq_mcp_client]
)
Run the same notebook runner:
runner = PydanticAIRunner(
    chat_interface=chat_interface,
    agent=agent
)
await runner.run()
Try "how do I install Kafka for Python?". PydanticAI connects to the MCP
server, reads the available tools, calls search, and uses the result in
the answer.