
OpenAI's guardrails don't control costs. Here's the gap.

OpenAI shipped guardrails in the Agents SDK last month. They validate behavior. They do not enforce spend. Here is the gap and how to close it.


OpenAI shipped guardrails in the Agents SDK last month.

Input guardrails. Output guardrails. Tool call guardrails. The API is clean. The docs are good. A lot of builders are excited.

I want to be clear: these are real. They solve real problems.

They just don't solve the one that costs you money.


What OpenAI's guardrails actually do

OpenAI's guardrails are validators. They inspect what goes into and out of your agents at runtime.

Input guardrail: run logic before the agent processes a message. Block it, redirect it, log it.

Output guardrail: run logic after the agent produces a response. Flag it, filter it, hold it.

Tool call guardrail: intercept a tool invocation before it fires. Approve or reject based on your rules.
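The pattern behind all three checks can be sketched in plain Python. This is illustrative only; every name below is a stand-in, not the Agents SDK's actual API.

```python
# Minimal sketch of the validator pattern behind input/output guardrails.
# All names here are illustrative, not the Agents SDK's real interface.

def input_guardrail(message: str) -> bool:
    """Runs before the agent sees the message. True = allow."""
    banned = ("DROP TABLE", "ignore previous instructions")
    return not any(b.lower() in message.lower() for b in banned)

def output_guardrail(response: str) -> bool:
    """Runs after the agent responds. True = safe to return."""
    return "ssn:" not in response.lower()

def handle(message: str, agent) -> str:
    """Wrap an agent callable with both checks."""
    if not input_guardrail(message):
        return "[blocked by input guardrail]"
    response = agent(message)
    if not output_guardrail(response):
        return "[held by output guardrail]"
    return response
```

The key property: every check inspects content. Nothing in this shape ever sees a dollar amount.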

These are behavior controls. They answer the question "did my agent do the right thing?"

That question matters. But it is not the question that generates a $47,000 AWS invoice.


The gap

OpenAI's guardrails have no concept of spend.

There is no budget_usd parameter. No on_exceed hook. No token accumulation across a task. No cost ceiling per agent function.

That is not an oversight. It is out of scope. OpenAI is building a framework for agent orchestration and quality control. Budget enforcement is a different layer.

The gap looks like this:

Your pipeline passes every guardrail check. The output is clean. The tool calls are approved. And your agent has now made 400 API calls because a retry loop hit an edge case at 2 AM and nobody was watching.

Guardrails passed. Budget destroyed.


What the cost enforcement layer looks like

I built agentguard47 to sit below the framework layer. One decorator per agent function:

from agentguard47 import guard

@guard(budget_usd=2.00, on_exceed="raise")
def run_analyzer(task):
    result = client.responses.create(...)
    return result

When the agent hits $2.00 in accumulated spend, it raises. You catch it. You decide what to do next.

No silent loops. No surprises at billing time. Each agent function has its own ceiling.
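The mechanics are roughly this. A sketch only, not agentguard47's actual implementation; BudgetExceeded and cost_of are stand-in names:

```python
import functools

class BudgetExceeded(RuntimeError):
    """Hypothetical exception; agentguard47's real name may differ."""

def guard(budget_usd, cost_of):
    """Per-function spend ceiling. cost_of(result) prices one call in USD.
    A sketch of the mechanics, not agentguard47's implementation."""
    def decorator(fn):
        spent = 0.0
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            nonlocal spent
            result = fn(*args, **kwargs)
            spent += cost_of(result)          # accumulate across calls
            if spent > budget_usd:
                raise BudgetExceeded(
                    f"${spent:.2f} spent, ceiling ${budget_usd:.2f}"
                )
            return result
        return wrapper
    return decorator
```

The accumulator lives in the decorator's closure, so each decorated function carries its own running total and its own ceiling.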

This works with OpenAI's Agents SDK. It works with LangChain. It works with a raw openai client call. The decorator does not care what is inside the function.


The stack you actually want

Use OpenAI's guardrails for what they do well: behavior validation, content filtering, tool approval logic.

Add agentguard47 for what they do not cover: spend enforcement per agent, hard stop on budget breach, cost accumulation tracking.

These are not competing tools. They are different layers. One asks "did the agent behave correctly?" The other asks "did the agent stay within budget?"

You need both questions answered.
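Stacked, the two layers look like this. Every name here is illustrative, neither the SDK's nor agentguard47's real API:

```python
# Illustrative stacking of the two layers; all names are stand-ins.

def passes_guardrails(text):
    """Behavior layer: content check on the agent's output."""
    return "password" not in text.lower()

class OverBudget(RuntimeError):
    pass

def with_budget(budget_usd, cost_per_call):
    """Cost layer: hard spend ceiling per decorated function."""
    def decorator(fn):
        spent = 0.0
        def wrapper(*args, **kwargs):
            nonlocal spent
            spent += cost_per_call
            if spent > budget_usd:
                raise OverBudget(f"spent ${spent:.2f}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@with_budget(budget_usd=1.00, cost_per_call=0.40)
def run_agent(prompt):
    reply = f"echo: {prompt}"          # stand-in for a model call
    if not passes_guardrails(reply):
        return "[held]"
    return reply
```

The behavior check runs inside every call; the budget check wraps around all of them. A run can pass one and still trip the other, which is exactly why both layers exist.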


Install

pip install agentguard47

Docs and examples: https://bmdpat.com/tools/agentguard


Patrick Hughes

Building BMD HODL — a one-person AI-operated holding company. Nashville, Tennessee. Twenty-two agents.
