Part 4: Replace the loop with ToyAIKit

The handwritten loop is useful because you can see every moving part, but rewriting it for every notebook becomes repetitive. toyaikit wraps the same loop so the notebook can focus on tools, prompts, and behavior.

Install it if your setup does not already include it:

uv add toyaikit

Import the small set of classes the notebook uses:

from toyaikit.llm import OpenAIClient
from toyaikit.tools import Tools
from toyaikit.chat import IPythonChatInterface
from toyaikit.chat.runners import OpenAIResponsesRunner, DisplayingRunnerCallback

OpenAIResponsesRunner contains the same request-response loop as the handwritten version. It sends messages to the Responses API, executes function calls, adds tool outputs back to the conversation, and repeats until the model returns an answer.
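Conceptually, the runner repeats the same three steps as the handwritten loop from the previous part. A minimal sketch of that control flow (this is an illustration, not ToyAIKit's actual implementation; FakeLLM and the plain-dict tool registry are stand-ins for the real client and Tools object):

```python
import json

def agent_loop(llm, tools, messages):
    """Repeat: send the history, execute any requested tools,
    append their outputs, until the model answers without a tool call."""
    while True:
        response = llm.send(messages)          # one API round trip
        messages.extend(response)              # keep the full history
        tool_calls = [m for m in response if m["type"] == "function_call"]
        if not tool_calls:                     # plain answer -> done
            return messages
        for call in tool_calls:                # run each requested tool
            result = tools[call["name"]](**json.loads(call["arguments"]))
            messages.append({
                "type": "function_call_output",
                "call_id": call["call_id"],
                "output": json.dumps(result),
            })

class FakeLLM:
    """Stand-in model: requests a search first, then answers."""
    def __init__(self):
        self.turn = 0

    def send(self, messages):
        self.turn += 1
        if self.turn == 1:
            return [{"type": "function_call", "call_id": "c1",
                     "name": "search",
                     "arguments": json.dumps({"query": "kafka"})}]
        return [{"type": "message", "content": "Install Kafka with Docker."}]

tools = {"search": lambda query: [{"text": f"results for {query}"}]}
history = agent_loop(
    FakeLLM(), tools,
    [{"type": "message", "content": "how do I install kafka"}],
)
```

The runner adds display callbacks and prompt handling on top, but the core is this loop.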

Register the search tool

The first version still uses the manual tool schema from the previous step: same search function, same search_tool schema, just less loop code.

agent_tools = Tools()
agent_tools.add_tool(search, search_tool)
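If you are starting this notebook fresh, the pair from the previous step looks roughly like this. Both definitions below are hypothetical stand-ins: the real search queries the FAQ index built earlier in the workshop, and the schema follows the Responses API function-tool format.

```python
# Hypothetical stand-in: the real search() queries the FAQ index
# built earlier in the workshop.
def search(query):
    return [{"section": "faq", "text": f"placeholder results for {query!r}"}]

# Function-tool schema in the OpenAI Responses API format.
search_tool = {
    "type": "function",
    "name": "search",
    "description": "Search the course FAQ for entries relevant to the question.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query text"},
        },
        "required": ["query"],
        "additionalProperties": False,
    },
}
```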

Create the notebook chat interface and the runner:

chat_interface = IPythonChatInterface()

runner = OpenAIResponsesRunner(
    tools=agent_tools,
    developer_prompt=developer_prompt,
    chat_interface=chat_interface,
    llm_client=OpenAIClient()
)

The chat_interface is only for display. The runner runs the agent loop.

Run one prompt with visible tool calls

Use Kafka installation as the first concrete prompt:

callback = DisplayingRunnerCallback(chat_interface)
messages = runner.loop(prompt="how do I install kafka", callback=callback)

The callback renders model messages, function calls, arguments, and tool outputs in the notebook. During development, this shows whether the model searched for the right thing.

For this prompt, the model may ask a clarifying question about whether the student wants Windows, macOS, or Docker instructions. Continue the same conversation by passing the previous messages:

new_messages = runner.loop(
    prompt="I want to use docker",
    previous_messages=messages,
    callback=callback,
)

This is the same statelessness rule from the raw API section. Follow-up questions work because we resend the previous messages.
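The provider keeps no conversation state between requests; each call must carry the whole history. A toy illustration of the rule with plain lists and no API calls (the `ask` helper is hypothetical):

```python
# Each "request" is the full history plus the new user turn;
# the server remembers nothing between calls.
def ask(history, user_text, reply_text):
    request = history + [{"role": "user", "content": user_text}]
    # ...the provider sees `request` and nothing else...
    return request + [{"role": "assistant", "content": reply_text}]

history = []
history = ask(history, "how do I install kafka", "Windows, macOS, or Docker?")
history = ask(history, "I want to use docker", "Run the Kafka Docker image.")
```

Dropping `previous_messages` would be like resetting `history` to an empty list: the model would have no idea what "it" refers to in the follow-up.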

Use the interactive notebook chat

For a more chat-like workflow, run the built-in input loop:

messages = runner.run()

Try these prompts:

  • I just discovered the course. Can I still join?
  • How do I install Kafka?
  • I want to use Docker
  • stop

When you type stop, the runner exits and returns the accumulated messages.
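Under the hood, this kind of run() method is a thin input loop around the same agent loop. A sketch of the idea (not ToyAIKit's actual code; `get_input` and `answer` are injected here so the loop stays testable without a real model or stdin):

```python
def chat_loop(get_input, answer):
    """Keep asking for prompts until the user types 'stop';
    return the accumulated message history."""
    messages = []
    while True:
        prompt = get_input()
        if prompt.strip().lower() == "stop":
            return messages
        messages.append({"role": "user", "content": prompt})
        messages.append({"role": "assistant", "content": answer(prompt)})

# Simulate a short session with canned inputs.
inputs = iter(["How do I install Kafka?", "stop"])
history = chat_loop(lambda: next(inputs), lambda p: f"echo: {p}")
```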

Note: toyaikit is a teaching and experimentation library. It is useful for notebooks and workshops because it keeps display code out of the way. For production systems, use a framework like OpenAI Agents SDK, PydanticAI, LangChain, or your own tested loop.

Optional Groq notebook

The workshop code also includes notebook-groq.ipynb. It shows the same idea with a chat-completions-compatible provider.

import os
from openai import OpenAI

from toyaikit.llm import OpenAIChatCompletionsClient

groq_client = OpenAI(
    api_key=os.getenv("GROQ_API_KEY"),
    base_url="https://api.groq.com/openai/v1"
)

groq_llm_client = OpenAIChatCompletionsClient(
    model="openai/gpt-oss-20b",
    client=groq_client
)

Use the chat completions runner instead of the Responses runner:

from toyaikit.chat.runners import OpenAIChatCompletionsRunner

groq_runner = OpenAIChatCompletionsRunner(
    tools=agent_tools,
    developer_prompt=developer_prompt,
    llm_client=groq_llm_client
)

Run it with the same display callback:

messages = groq_runner.loop(
    prompt="how do I install kafka",
    callback=callback,
)

Use this optional notebook if you want to compare provider APIs. The workshop continues with OpenAI Responses API for the main path.
