Where to go from here

You now have a working agentic search system: an LLM that searches, reads snippets, opens documents, and synthesizes answers. Here are the natural next steps, all of which are covered in the full AI Engineering Buildcamp: From RAG to Agents course.

Adding more tools

The two-tool pattern is a starting point. Useful additions:

  • A re-ranker that scores and sorts the search results before the agent sees them
  • A "list files" tool so the agent can browse the corpus structure
  • A "fetch GitHub issue" tool so the agent can pull in community Q&A

Each new tool follows the same pattern: write a plain function with type hints and a docstring, then register it with the framework.
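
For example, a "list files" tool is just another annotated function. Here is a minimal sketch; the names (list_files, DOCS_DIR) and the registration step are illustrative, since the exact call depends on the framework you used in the workshop:

```python
# Hypothetical sketch of a "list files" tool. The names list_files and
# DOCS_DIR are illustrative, not part of the workshop code.
from pathlib import Path

DOCS_DIR = Path("docs")  # assumed location of the document corpus

def list_files(subdir: str = "") -> list[str]:
    """List the markdown files in the corpus so the agent can browse its structure.

    Args:
        subdir: optional subdirectory to narrow the listing.

    Returns:
        Relative paths of the markdown files found.
    """
    root = DOCS_DIR / subdir
    return sorted(str(p.relative_to(DOCS_DIR)) for p in root.rglob("*.md"))

# Registration is framework-specific; with many frameworks it amounts to
# passing the function alongside the existing tools, e.g.
# tools=[search, get_document, list_files]
```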

Trying different search backends

minsearch is fine for learning, but real deployments use dedicated search infrastructure; Elasticsearch and Qdrant are common choices. The search function's signature stays the same, so the agent does not change at all: you swap the backend and it keeps working.
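
One way to make the swap painless is to keep the agent-facing function fixed and change only its body. The sketch below assumes a fitted minsearch index named `index`; the Elasticsearch variant in the comment is illustrative and depends on your index name and mapping:

```python
# Minimal sketch: the agent-facing tool keeps the same signature while the
# backend changes underneath. `index` is assumed to be a fitted minsearch
# index from the workshop; the commented Elasticsearch variant is
# illustrative, not a drop-in implementation.

def search(query: str) -> list[dict]:
    """Search the corpus and return the top matching records."""
    return index.search(query, num_results=5)  # minsearch backend

# Swapping in Elasticsearch later changes only the body, not the signature:
#
# def search(query: str) -> list[dict]:
#     """Search the corpus and return the top matching records."""
#     resp = es.search(index="documents", query={"match": {"text": query}}, size=5)
#     return [hit["_source"] for hit in resp["hits"]["hits"]]
```

Because the agent only ever sees the signature and the shape of the returned records, the backend can change freely underneath it.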

Evaluating the agent

Before shipping, you need to know whether the agent actually answers correctly. Common metrics:

  • Hit Rate - does the right document appear in the search results?
  • MRR (Mean Reciprocal Rank) - how high does the right document rank?
  • LLM-as-judge - use another LLM to score the quality of the answer

Evaluation is where you close the loop: measure, change the prompt or the tools, measure again.
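
Hit Rate and MRR take only a few lines once you have a ground-truth set of question and expected-document pairs. The sketch below assumes each search result carries an `id` field and each ground-truth record has `question` and `doc_id` keys; adjust the field names to your data:

```python
# Sketch of Hit Rate and MRR over a ground-truth set. The field names
# ("question", "doc_id", "id") are assumptions; adapt them to your data.

def evaluate_retrieval(ground_truth: list[dict], search_fn, k: int = 5) -> dict:
    hits = 0
    reciprocal_ranks = []
    for item in ground_truth:
        results = search_fn(item["question"])[:k]
        # 1-based rank of the expected document, if it was returned at all
        ranks = [i for i, doc in enumerate(results, start=1)
                 if doc.get("id") == item["doc_id"]]
        if ranks:
            hits += 1
            reciprocal_ranks.append(1 / ranks[0])
        else:
            reciprocal_ranks.append(0.0)
    n = len(ground_truth)
    return {"hit_rate": hits / n, "mrr": sum(reciprocal_ranks) / n}
```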

Adding guardrails

An agent that can call tools in a loop can also get stuck in a loop, return sensitive information, or spend more tokens than you budgeted. Guardrails to consider:

  • Input/output checks on every tool call
  • A maximum number of tool-call iterations (sketched after this list)
  • Content filtering on the final answer
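
The iteration cap is the easiest guardrail to add if you wrote the agent loop yourself; most frameworks expose an equivalent setting. The sketch below uses placeholder helpers (call_llm, execute_tool, tool_result_message, tools) to show where the cap sits:

```python
# Sketch of a tool-call iteration cap in a hand-rolled agent loop. call_llm,
# execute_tool, tool_result_message, and tools are placeholders; if you use a
# framework, look for its equivalent max-turns / max-iterations setting.

MAX_ITERATIONS = 10  # guardrail: hard cap on tool-call rounds

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(MAX_ITERATIONS):
        response = call_llm(messages, tools=tools)  # placeholder LLM call
        messages.append(response)                   # keep the assistant turn in the history
        if not response.tool_calls:                 # no tool requested -> final answer
            return response.content
        for tool_call in response.tool_calls:
            result = execute_tool(tool_call)        # run the requested tool
            messages.append(tool_result_message(tool_call, result))
    return "Stopped: reached the tool-call iteration limit without a final answer."
```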

The full course

The AI Engineering Buildcamp: From RAG to Agents course covers structured exercises, evaluation, guardrails, and deployment. The workshop you just did is a compressed version of two modules from that course.
