Part 8: Move the FAQ tools into a project
The notebook version works, but the tools are locked inside the notebook process. MCP (the Model Context Protocol) gives us a way to expose the same tools to different agent clients.
Create a separate project for the MCP server:
mkdir mcp_faq
cd mcp_faq
uv init
uv add fastmcp minsearch requests toyaikit
The project should end up with this layout:
mcp_faq/
|-- main.py
|-- pyproject.toml
`-- search_tools.py
Keeping this as a separate project makes the boundary clear. The server owns the FAQ index and exposes tools. The notebook, PydanticAI, Cursor, and VS Code become clients.
Create search_tools.py
Copy the tool class from the notebook into search_tools.py. The class
still owns the same two operations: search and add_entry.
from typing import Any, Dict, List


class SearchTools:
    def __init__(self, index):
        self.index = index

    def search(self, query: str) -> List[Dict[str, Any]]:
        """
        Search the FAQ database for entries matching the given query.

        Args:
            query (str): Search query text to look up in the course FAQ.

        Returns:
            List[Dict[str, Any]]: A list of search result entries, each containing relevant metadata.
        """
        boost = {"question": 3.0, "section": 0.5}
        results = self.index.search(
            query=query,
            filter_dict={"course": "data-engineering-zoomcamp"},
            boost_dict=boost,
            num_results=5,
        )
        return results
Add the write tool below it:
    def add_entry(self, question: str, answer: str) -> None:
        """
        Add a new entry to the FAQ database.

        Args:
            question (str): The question to be added to the FAQ database.
            answer (str): The corresponding answer to the question.
        """
        doc = {
            "question": question,
            "text": answer,
            "section": "user added",
            "course": "data-engineering-zoomcamp",
        }
        self.index.append(doc)
The code is the same tool logic as before. Only the location changes: it now belongs to a server process rather than a notebook cell.
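If you want to sanity-check the class without downloading the FAQ data, any object with `search` and `append` methods will do. The stub index below is illustrative only, not part of minsearch, and the `SearchTools` copy is trimmed of docstrings:

```python
from typing import Any, Dict, List


class StubIndex:
    """Stand-in for minsearch.AppendableIndex (illustrative only)."""

    def __init__(self):
        self.docs: List[Dict[str, Any]] = []

    def search(self, query, filter_dict=None, boost_dict=None, num_results=5):
        # Naive substring match instead of real relevance scoring.
        hits = [d for d in self.docs if query.lower() in d["question"].lower()]
        return hits[:num_results]

    def append(self, doc):
        self.docs.append(doc)


class SearchTools:
    # Trimmed copy of the class above, without docstrings.
    def __init__(self, index):
        self.index = index

    def search(self, query: str) -> List[Dict[str, Any]]:
        boost = {"question": 3.0, "section": 0.5}
        return self.index.search(
            query=query,
            filter_dict={"course": "data-engineering-zoomcamp"},
            boost_dict=boost,
            num_results=5,
        )

    def add_entry(self, question: str, answer: str) -> None:
        self.index.append({
            "question": question,
            "text": answer,
            "section": "user added",
            "course": "data-engineering-zoomcamp",
        })


tools = SearchTools(StubIndex())
tools.add_entry("How do I run Kafka?", "Use docker compose.")
print(tools.search("kafka"))
```

In the real project the index is minsearch's AppendableIndex; the stub only mirrors the two methods the tools actually call.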
Build the index in main.py
main.py downloads the FAQ JSON, builds the index, and creates the tool
object:
import requests
from minsearch import AppendableIndex

from search_tools import SearchTools


def init_index():
    docs_url = "https://github.com/alexeygrigorev/llm-rag-workshop/raw/main/notebooks/documents.json"
    docs_response = requests.get(docs_url)
    documents_raw = docs_response.json()

    documents = []
    for course in documents_raw:
        course_name = course["course"]
        for doc in course["documents"]:
            doc["course"] = course_name
            documents.append(doc)
Fit the in-memory index and return it:
    index = AppendableIndex(
        text_fields=["question", "text", "section"],
        keyword_fields=["course"]
    )
    index.fit(documents)
    return index


def init_tools():
    index = init_index()
    return SearchTools(index)
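The flattening loop in init_index reshapes the nested course → documents structure into a flat list where each record carries its course name. On a small made-up sample, the transform looks like this:

```python
# Sample data in the same shape as documents.json (values invented here).
documents_raw = [
    {"course": "data-engineering-zoomcamp",
     "documents": [{"question": "How do I run Kafka?", "text": "Use docker."}]},
    {"course": "ml-zoomcamp",
     "documents": [{"question": "What is a model?", "text": "A function."}]},
]

documents = []
for course in documents_raw:
    course_name = course["course"]
    for doc in course["documents"]:
        doc["course"] = course_name  # tag each record with its course
        documents.append(doc)

print(documents)
```

Every document in the flat list now carries a "course" keyword field, which is what the search filter_dict matches against.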
Before adding MCP, you can test the tool object directly:
if __name__ == "__main__":
    tools = init_tools()
    print(tools.search("How do I install Kafka?"))
Run it from mcp_faq/:
uv run python main.py
If the command prints FAQ results, the server project has the same tool behavior as the notebook.
Expose the tools with FastMCP
FastMCP turns Python functions into MCP tools. The simplest version from the docs looks like this:
from fastmcp import FastMCP

mcp = FastMCP("Demo")


@mcp.tool
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b
For our class, use wrap_instance_methods to register all public methods
on SearchTools:
from fastmcp import FastMCP
from toyaikit.tools import wrap_instance_methods


def init_mcp():
    mcp = FastMCP("FAQ MCP")
    agent_tools = init_tools()
    wrap_instance_methods(mcp.tool, agent_tools)
    return mcp
Run the server:
if __name__ == "__main__":
    mcp = init_mcp()
    mcp.run()
With no transport argument, FastMCP uses standard input and output. That means the MCP client talks to the process by writing JSON-RPC messages to stdin and reading JSON-RPC replies from stdout.
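Over stdio, each JSON-RPC message travels as one line of JSON. A small framing helper (hypothetical names, not part of fastmcp) shows what a client does under the hood; here it writes to an in-memory buffer instead of a live server process:

```python
import json
from io import StringIO


def send_message(writer, method, params=None, msg_id=None):
    """Write one JSON-RPC message as a single newline-terminated line."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    if msg_id is not None:
        msg["id"] = msg_id
    writer.write(json.dumps(msg) + "\n")


def read_message(reader):
    """Read one line and parse it as a JSON-RPC message."""
    return json.loads(reader.readline())


# Demonstrate the framing against an in-memory buffer.
buf = StringIO()
send_message(buf, "tools/list", msg_id=2)
buf.seek(0)
print(read_message(buf))
```

A real client would pass the server subprocess's stdin and stdout instead of StringIO; the message format is the same.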
Test the MCP protocol by hand
Start the server:
uv run python main.py
The first message initializes the MCP connection:
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {"roots": {"listChanged": true}, "sampling": {}}, "clientInfo": {"name": "test-client", "version": "1.0.0"}}}
The server replies with its protocol version, capabilities, and server info. Then confirm initialization:
{"jsonrpc": "2.0", "method": "notifications/initialized"}
Now ask for the tool list:
{"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
The response should include two tools: search and add_entry. The MCP tool schema resembles the OpenAI function-calling schema, but it is not identical, so a client that wants to use MCP tools with OpenAI needs to convert the schema.
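A sketch of that conversion, assuming the MCP tool entry exposes the name, description, and inputSchema fields that tools/list returns (the sample entry below is made up):

```python
def mcp_tool_to_openai(tool):
    """Wrap one MCP tools/list entry in OpenAI's function-calling envelope."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # Both schemas use JSON Schema for parameters, so inputSchema
            # can be passed through largely unchanged.
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }


# A made-up MCP entry in the shape tools/list returns for search.
mcp_tool = {
    "name": "search",
    "description": "Search the FAQ database.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

print(mcp_tool_to_openai(mcp_tool))
```

Real MCP entries may carry extra fields (annotations, output schemas) that OpenAI's format has no slot for; a fuller converter would decide what to drop.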
Call the search tool:
{"jsonrpc": "2.0", "id": 3, "method": "tools/call", "params": {"name": "search", "arguments": {"query": "how do I run kafka?"}}}
The server replies with a JSON-RPC result containing the tool output. In real use, we do not paste JSON by hand. The next step uses clients that do this handshake and tool-call flow for us.