
Ghostwritten by GPT 5.4 · Fact-checked & edited by Claude Opus 4.6 · Curated by Tom Hundley
A lot of vibe coders think secret leaks are a "big company" problem. They're not. GitGuardian's 2026 State of Secrets Sprawl report says 28.65 million new hardcoded secrets appeared on GitHub in 2025, and AI-assisted coding made leaks more likely, with Claude Code-associated workflows showing a 3.2% leak rate versus a 1.5% baseline. If you're pasting prompts into Cursor, Replit, Bolt, v0, or Lovable and letting the tool wire things up, you are in the blast zone.
Here's the plain-English version: a secret is a password for your app. If you leave that password inside your code and publish it, strangers can walk right into your database, email system, payment tools, or user accounts. The Moltbook breach is the nightmare version of this: reports tied the incident to a leaked Supabase key in client-side code, and the fallout reportedly included 1.5 million compromised accounts, 30,000 exposed email addresses, and thousands of private messages.
This article covers the gap other posts missed: not just why leaks happen, but exactly how vibe coders accidentally create them during normal AI-assisted building, how to check whether you already did, and what to fix today before someone else finds it first.
TL;DR: Vibe coders leak secrets because AI tools optimize for "make it work now," while secrets require "set it up safely first."
The core problem with AI coding tool security isn't that the tools are malicious. It's that they're obedient. If you ask for "make Supabase auth work" or "connect Stripe fast," the model will often generate a shortcut that puts the key directly into a file, because that produces a visible result immediately.
If you're not a developer, this looks completely normal. You see a file open in your editor, the app starts working, and you think, great, done. What you don't see is that the tool may have just written the digital equivalent of your house key onto the front door.
Here are the most common ways this happens:

- The tool hardcodes a live key directly into app code, sometimes code that ships to the browser
- The tool creates a `.env` file, but then also commits that file to GitHub
- Secrets end up in MCP or tool-connector configuration files that get published along with the project

GitGuardian's 2026 report also found 24,000 exposed secrets in MCP configuration files. That matters because vibe coders increasingly use AI agents and tool connectors without realizing those config files can be just as sensitive as source code.
For the broader picture, read GitHub Secrets Leaked: Why AI Tools Make It Worse. But the key lesson is simpler: the more you rely on AI to scaffold integrations, the more often you need to stop and ask, "Did this tool just put a password somewhere public?"
A secret is anything that lets software act on your behalf:

- API keys for services like Supabase, Stripe, or OpenAI
- Database passwords and connection strings
- Access tokens, such as GitHub `ghp_` tokens
- Email, payment, and messaging credentials
If it would be dangerous to post on social media, it should not live in your code.
TL;DR: One exposed key can turn a small mistake into a mass privacy incident.
The Moltbook breach matters because it shows how ugly a "small" leak can get. By public reporting, a leaked Supabase API key in client-side code opened the door to large-scale account compromise. This is exactly the kind of mistake vibe coders make: AI tools often blur the line between "stuff for the browser" and "stuff for the server."
Here's the everyday analogy: imagine your apartment building has a front desk key, a master maintenance key, and your own room key. Some keys are fine to carry around; some should never leave the office. In app building, some public identifiers are okay to expose, but privileged keys are not. AI tools don't always explain the difference.
A Supabase API key leak is especially dangerous when the wrong key gets shipped into frontend code. Frontend code means code the user's browser downloads. If the browser gets it, the public gets it. Attackers don't need to hack you: they just open developer tools and look.
The definitive rule: If a key is delivered to the browser, assume strangers can read it. That's not paranoia; that's how the web works.
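To make that rule concrete, here's a minimal sketch of the split between configuration you may ship to the browser and secrets that must stay on the server. This is not Supabase's real API; `serviceKey`, `anonKey`, and `publicClientConfig` are illustrative names only.

```javascript
// Hypothetical example: what the browser receives vs. what the server keeps.
// A privileged "service" key never belongs in anything sent to the client.
const serverOnly = {
  serviceKey: "example-service-role-key", // in real code: process.env.SUPABASE_SERVICE_KEY
};

// The ONLY configuration your frontend should ever be given.
function publicClientConfig() {
  return {
    url: "https://your-project.supabase.co", // project identifiers are generally fine
    anonKey: "example-public-anon-key",      // designed to be exposed to browsers
    // serviceKey is deliberately absent; if it appears here, it is leaked.
  };
}

const shippedToBrowser = JSON.stringify(publicClientConfig());
console.log(shippedToBrowser.includes(serverOnly.serviceKey)); // prints: false
```

The design point: the privileged key lives in one place the browser can never reach, and the object handed to the frontend is built so the key cannot end up in it by accident.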
This is also why the phrase "GitHub secrets leaked" is too soft. A leaked secret isn't just "data exposure"; it's often active access. As covered in 64% of Leaked Secrets Still Work Years Later — And Yours Might Be One of Them, leaked credentials often stay usable far longer than people expect.
TL;DR: You can do a useful first-pass secret audit without knowing git, terminals, or security tooling.
If you built something with AI help, do this right now. Open your project folder in the editor you normally use and search for these terms one by one:
- `key`
- `secret`
- `token`
- `password`
- `sk-`
- `ghp_`
- `supabase`
- `postgres://`
- `apikey`
- `.env`

You're looking for long, random-looking strings, especially if they sit directly inside files that contain your app code.
| Situation | Risk Level | What It Means | What to Do |
|---|---|---|---|
| Key inside a frontend file | Very high | Anyone using your app may be able to read it | Rotate key immediately and move it server-side |
| Key inside `.env` but project is public | High | The file may have been committed or uploaded | Check GitHub history, rotate if exposed |
| Placeholder like `YOUR_API_KEY` | Low | Usually safe example text | Verify it's not a real live key |
| Public app URL or project ID | Usually low | Identifiers aren't always secrets | Confirm with service docs before assuming safe |
| Secret inside MCP config | High | Tool connectors may expose powerful access | Remove, rotate, and store safely |
GitHub also offers secret scanning on supported plans. Turn it on. Even if you don't understand every option, enable secret scanning and push protection where available. If GitHub says, "This looks like a secret," don't click past it just because the app is almost done.
For more on the AI-specific side, see Your AI Coding Assistant Is Leaking Your API Keys.
TL;DR: An .env file is a private note to your app, not a file you upload, screenshot, or share.
People hear "use .env files" and think that's the whole fix. It isn't. An .env file is a plain text file on your computer that holds secrets outside your visible code. That's better than hardcoding, but only if the .env file stays private.
Here's the everyday analogy: putting your spare key in a drawer is safer than taping it to your front door. But if you post a photo of the drawer contents online, you still lose.
For .env file security, follow these rules:
- Keep real secrets in `.env`, not in app code
- Never paste `.env` contents into chat, screenshots, or bug reports
- Never commit `.env` to a public repo
- Share a `.env.example` with fake placeholder values only
- If `.env` is ever exposed, treat it as compromised and rotate everything inside it

Bad:

```javascript
const supabaseKey = "YOUR_API_KEY";
```

Better:

```javascript
const supabaseKey = process.env.SUPABASE_KEY;
```

If that code looks unfamiliar, don't worry about the syntax. The point is simple: the file contains a label, not the real secret.
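One small habit that makes the `process.env` pattern safer: check for missing environment variables when the app starts, so a forgotten `.env` entry fails loudly instead of quietly becoming `undefined`. A sketch, with hypothetical variable names:

```javascript
// Fail fast at startup if a required secret is missing from the environment.
// Variable names are examples; list whatever your own services need.
const REQUIRED = ["SUPABASE_URL", "SUPABASE_KEY"];

function missingEnvVars(env = process.env) {
  return REQUIRED.filter((name) => !env[name] || env[name].trim() === "");
}

// Simulated environment with one value forgotten:
const missing = missingEnvVars({ SUPABASE_URL: "https://x.supabase.co" });
if (missing.length > 0) {
  console.log("Missing required env vars: " + missing.join(", ")); // prints: Missing required env vars: SUPABASE_KEY
}
```

Run a check like this before the app does anything else; a clear startup error is far easier to debug than a half-working app.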
Paste this into your AI coding tool:
```
Audit this project for unsafe secret handling.
I am a non-developer, so explain everything in plain English.
Do not put any real keys in code.
Move all secrets to a local .env file.
Create a .env.example file with fake placeholder values only.
Tell me exactly which files should never be uploaded publicly.
If any secret may already be exposed, list the services I need to rotate immediately.
Show me the final file changes step by step.
```

This is one of the few times you should be bossy with the model. Spell it out.
TL;DR: Secret safety isn't advanced security work; it's basic house-locking for modern app builders.
Here is your practical cleanup list.
If you use AI to build apps, you're now doing light security work whether you wanted that job or not. The tooling has lowered the cost of building, but it has also lowered the cost of making a dangerous mistake very quickly.
The phrase "Claude Code vulnerabilities" gets thrown around a lot, but the bigger issue is workflow vulnerability. The model is fast. You are trusting. The app is live. That's the danger.
- Move every real secret into a local `.env` file
- Create a `.env.example` with fake values only
- Never expose your `.git` folder or raw repository contents publicly; if you haven't checked this, read Git Folder Exposure: Fix This Before You Deploy

Before tomorrow, pick one project you built with AI and do a 15-minute secret audit. Not all projects. One. Finish it.
**How do I check whether I've already leaked a secret?**

Search your project for words like key, secret, token, password, and service names such as Supabase or Stripe. If you see a long, real-looking credential inside a code or config file, assume it's exposed and rotate it. You can also enable GitHub's built-in secret scanning, which automatically flags known credential patterns.
**Are .env files enough to keep my secrets safe?**

Yes, but only if you keep them private. An .env file is safer than hardcoding secrets into code, but if you upload that file publicly, paste its contents into a chat, or share it in screenshots, the protection is gone. Always add .env to your .gitignore file.
**Why do AI coding tools hardcode keys in the first place?**

Because they optimize for getting a working result quickly. If your prompt is vague, the tool may choose the fastest path, which often means pasting a live key directly into code or config. Being explicit in your prompts about secret handling significantly reduces this risk.
**How do I rotate a leaked Supabase key?**

Go to your Supabase dashboard, create a new key, disable the old one, and then update your app to use the new key in a private .env file or secure server setting. If the old key was in browser code, treat it as fully public and check your database for unauthorized access.
**If I delete a leaked key from GitHub, am I safe?**

Not necessarily. Someone may have copied it before you deleted it, and Git history preserves old versions of files. Automated scrapers continuously monitor GitHub for new commits containing secrets. If a real secret was ever public, even briefly, the safe move is to rotate it, not just delete the line.
- `.env` files help only when they remain private and unshared

You don't need a computer science degree to do this right. You just need to treat secrets like real keys to real doors, because that's what they are. The builders who stay safe aren't the ones who know the most jargon. They're the ones who slow down for five minutes before they publish. You've got this. See you tomorrow.