
AgentGuard

Cost and safety guardrails for AI agents. Drop-in Python SDK.

20k downloads · 2 stars · MIT · v1.2.7
Start 14-day trial

What it does

Four runtime guards. Wrap them around your agent. That's the whole API.

Budget guard

Stop an agent before it burns through $X.

from agentguard47 import Guard
with Guard(budget_usd=5.00):
    agent.run(task)

Loop guard

Kill runaway agents before they tool-call 10,000 times.

with Guard(max_tool_calls=50):
    agent.run(task)

Timeout guard

Hard time ceiling on agent execution.

with Guard(timeout_s=300):
    agent.run(task)

Rate guard

Cap tool calls per minute.

with Guard(max_tool_calls_per_min=10):
    agent.run(task)

Open source, Pro, Team

The open source SDK is always free. Pro and Team add the hosted dashboard, longer event history, and email alerts.

                   Open source               Pro           Team
Price              Free                      $39/mo        $79/mo
Runtime guards     yes                       yes           yes
Local telemetry    yes                       yes           yes
MIT license        yes                       yes           yes
Hosted dashboard   —                         yes           yes
Event history      —                         500K          5M
Users              1                         1             10
Email alerts       —                         yes           yes
Team visibility    —                         —             yes

pip install agentguard47 · Start trial (Pro) · Start trial (Team)


Why this exists

I wrote AgentGuard because the existing options all point at a different problem. Lakera is about prompt injection. Guardrails AI is about output validation. Platform-native guardrails ship with a single vendor and lock you in. None of them stop an agent from burning $200 of OpenAI credit in a runaway loop at 2 AM.

AgentGuard is runtime only. It sits between your code and the model, counts cost and tool calls and wall-clock time, and raises before you get the surprise bill. It works the same whether you're calling GPT-5, Claude, a local llama.cpp server, or something I haven't heard of yet.
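The pattern is simple enough to sketch in a few lines. This is a pure-Python illustration of a runtime guard, not AgentGuard's actual implementation; the `MiniGuard` and `GuardViolation` names are made up for this sketch:

```python
import time

class GuardViolation(RuntimeError):
    """Raised when a runtime limit is exceeded (illustrative only)."""

class MiniGuard:
    """Context manager that tracks cost, tool calls, and wall-clock time."""

    def __init__(self, budget_usd=None, max_tool_calls=None, timeout_s=None):
        self.budget_usd = budget_usd
        self.max_tool_calls = max_tool_calls
        self.timeout_s = timeout_s
        self.spent = 0.0
        self.calls = 0

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, *exc):
        return False  # never swallow the violation

    def record(self, cost_usd=0.0):
        """Call after each model/tool invocation; raises if a limit trips."""
        self.spent += cost_usd
        self.calls += 1
        if self.budget_usd is not None and self.spent > self.budget_usd:
            raise GuardViolation(f"budget exceeded: ${self.spent:.2f}")
        if self.max_tool_calls is not None and self.calls > self.max_tool_calls:
            raise GuardViolation(f"tool-call limit exceeded: {self.calls}")
        if self.timeout_s is not None and time.monotonic() - self.start > self.timeout_s:
            raise GuardViolation(f"ran longer than {self.timeout_s}s")
```

The key design point is that the guard raises instead of logging: a runaway loop gets an exception on its next call, not a line in a report you read the morning after.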

I built it for my own agents. I run it on my autotrader and on the agents that post this blog. If it works for me at 2 AM, it should work for you.

FAQ

What is AgentGuard?
A small Python SDK you wrap around any agent run. It enforces budget, tool-call, timeout, and rate limits at runtime. If the agent tries to exceed one, AgentGuard stops the run and tells you why.
Why not use OpenAI Agents SDK guardrails?
Because AgentGuard works with every framework, not just OpenAI's. I use it with LangGraph, Claude agents, LlamaIndex, my own scrappy loops, and the Agents SDK. One guardrail layer for every agent you ship.
Can I self-host the dashboard?
The dashboard is Pro-only and hosted by me. If self-hosting is a hard requirement, open an issue on GitHub and we'll talk. The SDK itself is MIT and runs anywhere.
Do you store my agent inputs or outputs?
No. Pro sends aggregate runtime metrics (cost, tool calls, duration) to the hosted dashboard, not prompts or responses. Local telemetry is opt-in and stays on your machine.
Who's behind this?
One developer in Tennessee. Me. Read the about page if you want the long version.

Start the 14-day trial.

No credit card until day 14.

Start trial