Workshop Resource

Building Safe AI Agents with Guardrails

Build safe AI agents with input and output guardrails. Learn how to prevent inappropriate responses, enforce policies, and maintain academic integrity.

January 6, 2026 | Intermediate to Advanced
Tags: ai-agents, llm-engineering, agent-safety, tooling-architecture, async-control


Core Tools

OpenAI API, OpenAI Agents SDK (guardrails, Runner), Pydantic (structured outputs), MinSearch (FAQ index), Jupyter Notebook, uv, GitHub, Python asyncio

What You'll Learn

  • Defining guardrails as LLM-based safety checks
  • Implementing input guardrails to block irrelevant or harmful queries
  • Implementing output guardrails to validate responses
  • Preventing inappropriate promises like deadline extensions or legal advice
  • Enforcing academic integrity by blocking homework-writing
  • Chaining multiple guardrails with early stop behavior
  • Running guardrails safely with streaming responses
  • Implementing tool-based guardrails for frameworks without native support
  • Using asyncio to run guardrails concurrently
  • Cancelling the agent early when a guardrail trips
  • Building a framework-agnostic DIY guardrail runner
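The last three bullets can be combined into one pattern: run the guardrails concurrently with the agent and cancel the agent as soon as any guardrail trips. A minimal, framework-agnostic sketch with plain asyncio (the agent and guardrail functions below are hypothetical stand-ins, not the workshop's code):

```python
import asyncio


class GuardrailTripped(Exception):
    """Raised when a guardrail flags the query."""


async def run_with_guardrails(agent, guardrails, query):
    # Start the agent and every guardrail concurrently.
    agent_task = asyncio.create_task(agent(query))
    guard_tasks = [asyncio.create_task(g(query)) for g in guardrails]
    try:
        # Check verdicts as they complete; stop early on the first trip.
        for finished in asyncio.as_completed(guard_tasks):
            verdict = await finished
            if verdict["tripped"]:
                raise GuardrailTripped(verdict["reason"])
        return await agent_task
    except GuardrailTripped:
        agent_task.cancel()  # cancel the agent early, don't waste the call
        raise
    finally:
        for t in guard_tasks:
            t.cancel()


# Hypothetical stand-ins simulating LLM calls:
async def faq_agent(query):
    await asyncio.sleep(0.05)
    return f"Answer to: {query}"


async def homework_guardrail(query):
    await asyncio.sleep(0.01)
    tripped = "homework" in query.lower()
    return {"tripped": tripped, "reason": "homework request" if tripped else ""}


async def main():
    print(await run_with_guardrails(faq_agent, [homework_guardrail],
                                    "How do I join the course?"))
    try:
        await run_with_guardrails(faq_agent, [homework_guardrail],
                                  "Write my homework for me")
    except GuardrailTripped as exc:
        print("Blocked:", exc)


asyncio.run(main())
```

Because nothing here depends on a specific SDK, the same runner can wrap an agent from any framework that exposes an async entry point.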

Expected Outcome

A DataTalks.Club FAQ assistant protected by input and output guardrails. The assistant blocks off-topic questions, unsafe or policy-violating responses, and academic dishonesty; supports multiple guardrails with clear failure handling; works with streaming; and includes a reusable async pattern for adding guardrails to any agent framework.
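An LLM-based guardrail typically asks the model for a structured verdict and then applies a deterministic policy to it. A minimal sketch with Pydantic, assuming Pydantic v2 (the field names and schema here are illustrative, not the workshop's exact models):

```python
from pydantic import BaseModel, Field


class GuardrailVerdict(BaseModel):
    """Structured output the guardrail LLM is asked to produce."""
    is_relevant: bool = Field(description="Is the query about the course or FAQ?")
    is_homework_request: bool = Field(description="Is the user asking us to do their homework?")
    reasoning: str


def should_block(verdict: GuardrailVerdict) -> bool:
    # Deterministic policy over the LLM's structured verdict.
    return (not verdict.is_relevant) or verdict.is_homework_request


# Validate a raw JSON verdict as it would come back from the model:
raw = '{"is_relevant": true, "is_homework_request": false, "reasoning": "Course logistics question."}'
verdict = GuardrailVerdict.model_validate_json(raw)
print(should_block(verdict))  # → False
```

Keeping the blocking decision in plain Python, rather than trusting the model to say "block", makes the policy auditable and easy to test.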