
7% of vibe-coded apps ship with wide-open databases

A 1,764-app audit found 7% had open Supabase databases and 15% of Bolt apps had hardcoded secrets. The fix takes ten minutes.


A team audited 1,764 apps built with AI coding tools like Lovable and Bolt. The numbers are bad.

  • 7% had publicly accessible Supabase databases. Anyone with the URL could read the data.
  • 15% of Bolt-generated apps shipped with hardcoded API keys in source.

Source: r/netsec post summarizing the audit.

If you've shipped a vibe-coded side project in the last six months, there is a real chance you are one of those apps.

Why this happens

AI coding tools optimize for the demo. Make it work. Make it look good. Get the user to "wow" in under five minutes.

Security is friction. So it gets skipped.

The two failure modes are predictable:

  1. Hardcoded secrets. The model writes const SUPABASE_KEY = "eyJhbGciOi..." because that gets the demo working in a single file. The user copies the code, pushes it to GitHub or a public Vercel deploy, and the key is now public.
  2. Open Supabase RLS. A Supabase table without Row Level Security enabled can be read by any client holding the anon key. You have to explicitly turn on RLS and write policies per table. Most vibe-coded apps never do.
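The RLS fix for the second failure mode can be sketched as SQL you paste into the Supabase SQL editor. The table name (public.users) and the id column matching auth.uid() are hypothetical examples, not from the audit; each table needs its own policy.

```shell
# Print the SQL for one table. "public.users" with an "id" column that
# matches auth.uid() is a hypothetical example -- adapt per table.
RLS_SQL=$(cat <<'SQL'
-- Default-deny: with RLS on and no policies, the anon key reads nothing.
alter table public.users enable row level security;

-- Then open exactly what you need: each user reads only their own row.
create policy "users read own row"
  on public.users for select
  using (auth.uid() = id);
SQL
)
echo "$RLS_SQL"
```

Enabling RLS with zero policies is the safe intermediate state: reads return nothing instead of everything.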

Neither is a bug in the AI tool. They are defaults that match how humans build demos. The problem is humans ship demos to production.

What an attacker does

Finding these is trivial. GitHub code search, Bolt project listings, and Lovable public deploys are all crawlable. Tools exist that scan for the patterns automatically. The 7% and 15% numbers came from a single researcher running a script.

Once an attacker has your Supabase URL and anon key with RLS off, they have your users table, your messages table, your billing table. Whatever you stored.
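Concretely, the read is one unauthenticated HTTP request. The project URL and anon key below are placeholders; substitute your own values to test your own project.

```shell
# Placeholders -- substitute your real project ref and anon key.
PROJECT_URL="https://your-project.supabase.co"
ANON_KEY="eyJ...your-anon-key"

# With RLS off, this request needs no user login, only the public anon
# key, and returns every row of the users table as JSON:
REQUEST="curl -s $PROJECT_URL/rest/v1/users?select=* -H 'apikey: $ANON_KEY' -H 'Authorization: Bearer $ANON_KEY'"
echo "$REQUEST"
```

The same request against a table with RLS enabled and no select policy returns an empty array, which is the state you want strangers to see.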

If you stored Stripe customer IDs and emails together, that's a leak. If you stored partial credit card data because you "weren't sure how Stripe worked yet," that's worse.

The fix is boring

You don't need a security team. You need a checklist that runs every time you push.

  1. No keys in source. Every secret lives in .env.local (gitignored) and gets injected at deploy time. If your app builds and runs without the env vars, you have a hardcoded key somewhere.
  2. RLS on every Supabase table. Default-deny. Then write a policy per table. If you can hit https://your-project.supabase.co/rest/v1/users from your browser without auth and get rows back, you are in the 7%.
  3. Scan before you ship. A pre-deploy script that greps for known key patterns (sk_live_, eyJ, AKIA, etc.) and fails the build if it finds them.
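Step 3 can be sketched as a small shell function for CI. The function name and the pattern list are mine, not from the audit, and the patterns are a starting point rather than a complete set.

```shell
# Pre-deploy secret scan: returns nonzero (fail the build) if any
# key-shaped string appears under the given directory.
scan_secrets() {
  # Patterns are illustrative: Stripe live keys, JWT-shaped strings
  # (Supabase keys are JWTs starting with eyJ), AWS access key IDs.
  if grep -rnE 'sk_live_|pk_live_|eyJhbGciOi|AKIA' \
      --exclude-dir=node_modules --exclude-dir=.git "$1"; then
    echo "Possible hardcoded secret found -- failing build." >&2
    return 1
  fi
  return 0
}

# In CI: scan_secrets . || exit 1
```

Run it before the build step so a leaked key never reaches a deploy, and extend the pattern list for whatever providers you actually use.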

These are 30-minute fixes. Every one of them.

Where AgentGuard fits

AgentGuard47 was built for AI agent guardrails (budget, rate limiting, token caps), but the audit pattern works the same way: static guards on output before it ships.

If you are building with AI coding tools and want a concrete checklist plus a running guard for the agent layer, AgentGuard47 is the smallest version of that idea. pip install agentguard47, wrap your agent calls, and you get budget caps and rate limits that fail closed.

It does not catch hardcoded Supabase keys. That is a different scanner. But the principle is the same: don't trust the model's output, gate it.

What to do today

If you shipped anything with Lovable, Bolt, or a single Cursor session in the last quarter, do this in the next hour:

  1. Open the deployed site. View source. Search for eyJ, sk_, pk_live_. If you find any, rotate the key now.
  2. Hit your Supabase REST endpoint without auth from a browser. If rows come back, turn on RLS.
  3. Run git log -p | grep -E "eyJ|sk_live|pk_live" on your repo. If anything shows up, rotate.

Three checks. Ten minutes. Worth it.

The 7% and 15% numbers are not going down on their own. AI tools are getting better at writing code. They are not getting better at remembering to lock the front door.



Patrick Hughes

Building BMD HODL, a one-person AI-operated holding company. Nashville, Tennessee. Twenty-two agents.

