PostHog Rebuilt Their AI Architecture Twice. Here Are the 5 Rules They Learned.
PostHog ships analytics to thousands of daily agent users. They rebuilt their AI architecture twice before landing on something that worked. That is expensive learning. Most teams cannot afford two rewrites.
They distilled the pain into five rules. I am going to reframe each one as a diagnostic question. If you cannot answer these about your product, you have work to do.
Rule 1: Treat agents like users
The question: Do you build empathy for your agent users the same way you build empathy for human users?
Most teams treat agents as an afterthought. They bolt an API onto a product built for humans and call it "agent support." That is like building a mobile app by shrinking your desktop site to fit a phone screen. Technically works. Actually terrible.
PostHog's insight: you need to talk to agents, watch them work, and develop intuition for what they want. The same product instinct you build for human users applies to agent users.
How to tell you are doing it wrong: You have never watched an AI agent use your product end to end. You do not know where it gets stuck, what confuses it, or what it skips.
The cheap fix: Run Claude Code or Cursor against your product for 30 minutes. Watch what happens. Write down every point of friction.
Rule 2: Give agents the same capabilities as users
The question: Can an agent do everything a human user can do in your product?
The value of agents is reducing the time, attention, and expertise needed to complete a task. If your product does not give agents the same capabilities as users, you are always bottlenecked by a human in the loop.
This sounds obvious. It is not. Most products have features that only work through a UI (drag-and-drop, visual configuration, modal dialogs). Those are invisible to agents.
How to tell you are doing it wrong: There are tasks in your product that require clicking through a UI to complete. No API. No CLI. No programmatic alternative.
The cheap fix: List every user action in your product. Star the ones that have no API equivalent. Those are your agent capability gaps. Fix the highest-traffic ones first.
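The audit can be as simple as a spreadsheet, but here is a minimal Python sketch of the same idea. The action names and traffic numbers are hypothetical placeholders, not real PostHog data; swap in your own inventory.

```python
# Hypothetical inventory of user actions. "has_api" marks whether a
# programmatic equivalent exists; "weekly_uses" is made-up traffic data.
actions = [
    {"name": "create_dashboard",      "has_api": True,  "weekly_uses": 1200},
    {"name": "drag_reorder_widgets",  "has_api": False, "weekly_uses": 900},
    {"name": "export_csv",            "has_api": True,  "weekly_uses": 400},
    {"name": "visual_funnel_builder", "has_api": False, "weekly_uses": 300},
]

# Agent capability gaps: UI-only actions, highest traffic first.
gaps = sorted(
    (a for a in actions if not a["has_api"]),
    key=lambda a: a["weekly_uses"],
    reverse=True,
)

for a in gaps:
    print(f"* {a['name']} ({a['weekly_uses']} uses/week) has no API equivalent")
```

Running this against a real action inventory gives you a prioritized backlog: the top of the list is where agents hit walls most often.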
Rule 3: Meet agents at their semantic layer
The question: Are you giving agents a high-level API or meeting them where they already reason?
PostHog found that agents reason best in SQL. Not in proprietary query languages. Not in custom DSLs. SQL is the semantic layer where LLMs already have strong intuition.
So they built their agent experience around SQL. Not because SQL is the best query language. Because it is the one agents already know.
How to tell you are doing it wrong: Your agent integration requires the agent to learn your custom API schema from scratch. It reads 50 pages of docs before it can do anything useful.
The cheap fix: Find the universal language closest to your domain. For data products, that is probably SQL. For infrastructure, that is probably CLI commands. For content, that is probably markdown. Build your agent interface on that layer.
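For a data product, "build on the SQL layer" can mean exposing one agent-facing tool that accepts plain SQL instead of a custom query DSL. A minimal sketch, using SQLite as a stand-in for your warehouse; the `run_sql` tool, the naive read-only guard, and the schema are all illustrative assumptions, not PostHog's actual interface:

```python
import sqlite3

def run_sql(conn, query):
    """Agent-facing tool: accept plain SQL rather than a custom DSL.
    The startswith check is a naive read-only guard for the sketch;
    use real privilege separation (a read-only role) in production."""
    if not query.lstrip().lower().startswith("select"):
        raise ValueError("read-only tool: SELECT queries only")
    cur = conn.execute(query)
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

# Demo on an in-memory database standing in for your product's data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event TEXT, user_id INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("signup", 1), ("signup", 2), ("pageview", 1)],
)

rows = run_sql(
    conn,
    "SELECT event, COUNT(*) AS n FROM events GROUP BY event ORDER BY n DESC",
)
print(rows)
```

The payoff is that the agent needs no custom schema tutorial for the query language itself; it only needs your table definitions.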
Rule 4: Front-load context
The question: Are you loading domain context at session start or forcing the agent to rediscover it every time?
PostHog loads their taxonomy, SQL syntax, and critical querying rules at the start of every MCP session. The agent does not waste tokens figuring out what a "person" is in PostHog's data model. It already knows.
This is the difference between a new hire who gets a 30-minute onboarding doc and one who gets dropped into the codebase cold.
How to tell you are doing it wrong: Every agent session starts with the agent asking "what tables exist?" or "what is the schema?" It spends 40% of its token budget just figuring out where it is.
The cheap fix: Create a system prompt or context file that loads at session start. Include: data model, naming conventions, common queries, and known gotchas. Measure the token cost of context loading vs. the token savings from fewer exploratory queries.
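A context file is just a string you prepend to every session. The sketch below shows the shape; the data model, conventions, and gotchas are invented for illustration (this is not PostHog's actual taxonomy), and the token estimate uses the rough 4-characters-per-token heuristic, not a real tokenizer:

```python
# Illustrative session-start context block. Replace every section with
# your product's real data model, conventions, and gotchas.
CONTEXT = """\
## Data model
- events(event, person_id, timestamp, properties)
- persons(person_id, created_at, properties)

## Naming conventions
- Internal events are prefixed with $ (e.g. $pageview).

## Common queries
- Daily actives: SELECT count(DISTINCT person_id) FROM events WHERE ...

## Known gotchas
- Timestamps are UTC; cast to the user's timezone before grouping by day.
"""

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Use your model's real tokenizer for accurate measurement.
    return len(text) // 4

def build_system_prompt(task: str) -> str:
    return f"{CONTEXT}\n## Task\n{task}"

prompt = build_system_prompt("Build a weekly retention query.")
print(f"context overhead: ~{estimate_tokens(CONTEXT)} tokens")
```

Compare that fixed overhead against the tokens an uncontexted agent burns on "what tables exist?" round trips; in most setups the front-loaded version wins quickly.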
Rule 5: Skills, not scripts
The question: Are your agent skills domain knowledge or micromanagement scripts?
There is a difference between telling an agent "click this button, then type this, then click submit" and telling it "good retention analysis starts with a cohort definition based on a meaningful activation event."
The first is a script. It breaks every time the UI changes. The second is knowledge. It works regardless of the interface.
PostHog's skills embed opinions about what good metrics and analysis look like. They do not tell the agent which buttons to press. They tell it what good output looks like.
How to tell you are doing it wrong: Your agent instructions read like a QA test script. Step 1, step 2, step 3. The agent fails when any step changes.
The cheap fix: Rewrite your agent instructions as outcomes, not procedures. "Create a retention chart grouped by signup week" not "click New Insight, select Retention, set Group By to Week, set Event to $signup."
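The contrast is easy to see side by side. A hypothetical sketch (neither structure reflects PostHog's actual skill format): the script is a brittle list of UI steps, while the skill declares the outcome plus a quality bar and lets the agent choose the steps.

```python
# A script: breaks the moment any UI label or layout changes.
script = [
    "click New Insight",
    "select Retention",
    "set Group By to Week",
]

# A skill: outcome plus embedded opinions about what "good" looks like.
skill = {
    "outcome": "retention chart grouped by signup week",
    "quality_bar": [
        "cohort defined by a meaningful activation event",
        "group by calendar week, not a rolling 7-day window",
    ],
}

def render_instruction(skill: dict) -> str:
    """Turn an outcome-style skill into a single agent instruction."""
    bar = "; ".join(skill["quality_bar"])
    return f"Produce: {skill['outcome']}. Quality bar: {bar}."

print(render_instruction(skill))
```

The rendered instruction survives UI redesigns, API changes, and even a switch to a different product, because it encodes knowledge rather than procedure.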
The meta-lesson
PostHog rebuilt twice because they started by bolting agent support onto a human-first product. The five rules are really one rule: agents are a different user with different needs, and they deserve the same product thinking you give human users.
If your product supports agents, ask yourself these five questions. If you cannot answer them confidently, you know where to start.
Building agent features? AgentGuard adds runtime safety (budget limits, loop detection, kill switches) to any AI agent in three lines of Python.
Patrick Hughes
Building BMD HODL — a one-person AI-operated holding company. Tennessee garage. Twelve agents.