
🤖 Ghostwritten by GPT 5.4 · Fact-checked & edited by Claude Opus 4.6 · Curated by Tom Hundley
A lot of vibe coders think secret leaks happen to "real engineers" with giant systems. Not true. GitGuardian's 2025 State of Secrets Sprawl report found 23.77 million new secrets exposed on public GitHub repositories, a 25% year-over-year increase. The same report found that commits from AI coding tools (Copilot-authored code specifically) showed a roughly doubled secret-leak rate compared to human-only commits. If you use Claude, Cursor, Bolt, Replit, v0, or Lovable to build apps fast, you are in the blast zone.
Here's the plain-English version: AI tools help you move faster, but they also make it easier to paste keys, tokens, passwords, and config files into the wrong place. They autocomplete dangerous patterns. They generate setup files you do not fully understand. And when something "just works," people commit it without checking what got saved.
That is the gap most articles miss. The problem is not only hardcoded keys. It is the full chain: AI suggests the code, the config, the helper file, the plugin connection, and the commit. One bad suggestion can leak your house keys, your office keys, and the map to the house in one shot. I'm telling you this because I don't want you to learn it the hard way.
TL;DR: AI coding tools are not just writing code; they are creating and moving secrets through files, tools, and connections faster than most vibe coders can inspect.
The headline number matters: GitGuardian's 2025 report documents nearly 24 million secrets exposed on public GitHub. That alone should get your attention. But the more important detail for non-developers is the pattern behind it.
When you ask an AI assistant to "hook up Stripe," "connect Supabase," or "add OpenAI," it often gives you a ready-to-paste setup. Sometimes that includes a key directly inside a file. Sometimes it tells you to put the key into a config file that later gets uploaded. Sometimes it creates example files and forgets to tell you which ones are safe to publish.
That is why AI coding tool credential exposure is rising. The machine is excellent at getting you unstuck, but it does not feel fear. It does not grasp that one pasted key can lead to surprise bills, stolen data, or an attacker using your account as their playground.
GitGuardian also reported that misconfigured MCP (Model Context Protocol) servers exposed thousands of secrets. If you have no idea what that means, think of MCP as a tool bridge. It lets your AI assistant connect to outside tools and data sources. Useful? Very. Dangerous when set up loosely? Absolutely. Bad MCP configuration security is like leaving a side door open because you were focused on decorating the front room.
And this is not only about source code. GitGuardian found millions of web servers exposing .git directories, with a large number containing credentials. If you have not read Git Folder Exposure: Fix This Before You Deploy, read it next. Exposed history files can reveal old secrets even after you "deleted" them.
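If you have already deployed a site, you can run a quick self-check for this. Below is a minimal sketch using only the Python standard library; the URL is a placeholder, and you should only probe sites you own. An exposed `.git/HEAD` file normally starts with a branch reference, which is what the check looks for.

```python
# Minimal sketch of a self-check for an exposed .git folder.
# "https://your-site.example" is a placeholder -- only point this at a site you own.
from urllib.request import urlopen
from urllib.error import URLError

def looks_like_git_head(body: str) -> bool:
    """An exposed .git/HEAD file normally starts with a branch reference."""
    return body.strip().startswith("ref: refs/heads/")

def git_folder_exposed(base_url: str) -> bool:
    """Fetch /.git/HEAD from your own site; a readable ref means trouble."""
    try:
        with urlopen(base_url.rstrip("/") + "/.git/HEAD", timeout=5) as resp:
            return looks_like_git_head(resp.read().decode("utf-8", "replace"))
    except (URLError, ValueError):
        return False  # unreachable or blocked: not trivially exposed

# Usage: git_folder_exposed("https://your-site.example")
```

If this returns True for your site, fix the server configuration before doing anything else, and assume every secret in the repository's history is burned.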
TL;DR: Most leaks happen through generated code, generated config, and generated tool connections, not just obvious passwords pasted into app files.
Here is the part I want you to remember: secret leaks are usually boring, accidental, and completely preventable.
The hardcoded key is the classic one. You ask for a working demo, and your AI helper puts the secret right in the file so the app runs immediately. It feels helpful. It is also how people end up searching for "github secrets leaked" after the damage is done.
A lot of vibe coders do not know which files are private and which are safe to publish. AI tools often create project files, deployment files, and settings files. Some belong on your private machine only. Some belong in your repo. Some should contain placeholders, not real credentials.
Connected tools are where Claude, Cursor, and Bolt security gets messy. Modern AI coding tools can connect to outside services, databases, automation tools, and hosted platforms. That convenience creates more places where secrets can sit, sync, or leak. If a connector is misconfigured, your assistant may have more access than it needs.
| Leak path | What it looks like | Why vibe coders miss it | What to do |
|---|---|---|---|
| Hardcoded secret in code | Key pasted directly into app file | App works instantly, so it feels correct | Replace with a private setting and rotate the leaked key |
| Secret in config file | Real credentials saved in setup file | File names look harmless | Review every generated config before publishing |
| MCP or tool connection leak | AI tool connected too broadly to outside systems | Permissions are invisible or confusing | Audit every tool connection and remove what you do not need |
| Exposed .git folder | Old versions of files still reachable on server | Deleting the file later feels like enough | Block public access and redeploy safely |
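The first fix in the table, "replace with a private setting," looks like this in practice. This is a minimal sketch; the variable name `STRIPE_API_KEY` is just an example, so use whatever name your provider's docs suggest.

```python
import os

def load_stripe_key() -> str:
    """Read the key from a private environment setting, never from the code."""
    key = os.environ.get("STRIPE_API_KEY")  # variable name is an example
    if key is None:
        raise RuntimeError(
            "Set STRIPE_API_KEY in your environment, or in a .env file "
            "that is listed in .gitignore."
        )
    return key
```

The point of the explicit error is that a missing key fails loudly on your machine instead of tempting you to paste the real value back into the file.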
If you want the broader pattern, see Your AI Coding Assistant Is Leaking Your API Keys. This article is narrower on purpose: I want you to see how the whole chain breaks, not just one file.
TL;DR: If your AI tool can read, write, connect, and commit, then you need to treat it like a very fast intern who must be supervised.
This is the practical difference between old-school mistakes and today's AI-assisted mess. Before, you usually had to manually paste a secret into the wrong place. Now your assistant may suggest it, insert it, save it, and help publish it.
That matters because speed changes behavior. People inspect less when momentum is high. They trust generated files they did not write. They assume a working demo is a safe demo. It usually is not.
GitHub has responded by expanding its secret scanning program, including new detectors for additional provider keys. If you have not seen GitHub Secret Scanning Now Detects Vercel and Supabase Keys, the big idea is simple: GitHub secret scanning is improving, but it only helps if you turn the protections on and pay attention to alerts.
There is another ugly angle. Attackers know AI tools consume surrounding context. That means issue threads, notes, pasted logs, helper files, and generated instructions can all influence output. Our article on Hidden Prompts in GitHub Issues Explained covers how attackers can hide instructions where AI tools might read them. If your assistant is connected to more than it needs, a small trick can become a big mess.
The most dangerous secret leak is not the one you paste on purpose. It is the one your tool moves automatically while you are feeling productive.
TL;DR: Turn on scanning, check your AI tool connections, stop publishing generated config blindly, and add a secret check before every commit.
You do not need a computer science degree for this. Do these five things today.
Open your repository on GitHub. Click Settings. Look for Security or Code security and analysis. Turn on secret scanning and any push protection options GitHub offers for your plan.
If you see warnings later, do not ignore them. A warning means "someone may have left the key in the door."
Open Cursor, Claude, Bolt, Replit, or whatever you use. Look for integrations, connectors, tools, plugins, or MCP servers. Make a list.
Then ask about each one: Do I actually use this? Does it need all the access it has? If the answer is no, disconnect it.
Use your editor's search. Look for terms like:

- `key`
- `secret`
- `token`
- `password`
- `sk-`
- `ghp_`

You are not trying to be perfect. You are trying to catch the dumb stuff before the internet does.
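That manual search can also be sketched as a short script. This is a minimal version using only the standard library; the patterns are rough and illustrative, not a complete secret detector.

```python
import re
from pathlib import Path

# Rough patterns for common key shapes -- not exhaustive, just the dumb stuff.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
    re.compile(r"(?i)(secret|token|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan_text(text: str) -> list[str]:
    """Return every suspicious match found in one file's contents."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def scan_project(root: str) -> dict[str, list[str]]:
    """Walk a project folder and report files containing suspicious strings."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and ".git" not in path.parts:
            try:
                hits = scan_text(path.read_text(errors="ignore"))
            except OSError:
                continue
            if hits:
                findings[str(path)] = hits
    return findings
```

Run `scan_project(".")` from your project folder and review anything it flags before you publish.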
Yes, that phrase sounds technical. In plain English, a pre-commit check is a gate that looks through your files before they are saved into project history. Ask your AI tool to set this up for you.
Paste this:
"Set up a pre-commit secrets scanner for this project. I am a non-developer, so explain each step in plain English. Before making changes, first scan the repo for hardcoded API keys, tokens, passwords, and private config files. Then create a safety check that blocks future commits containing secrets. Use placeholders like YOUR_API_KEY instead of real values, and show me which files should never be published."
If your AI gives you steps you do not understand, tell it: "Explain it like I've never done this before." Keep going until it does.
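If you want to see roughly what your AI tool should produce, here is a minimal sketch of such a gate, assuming git is installed. Treat it as a reference point for judging the AI's answer, not a finished product; the patterns are illustrative.

```python
#!/usr/bin/env python3
# Sketch of a pre-commit gate. Save it as .git/hooks/pre-commit, make it
# executable (chmod +x), and end the file with: import sys; sys.exit(main())
import re
import subprocess

BLOCKLIST = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
]

def staged_files() -> list[str]:
    """Ask git which files are about to be committed."""
    try:
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "-z"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return []  # not inside a git repository: nothing to scan
    return [name for name in out.split("\0") if name]

def find_secrets(text: str) -> list[str]:
    """Return any blocklisted strings found in one file's contents."""
    return [m.group(0) for p in BLOCKLIST for m in p.finditer(text)]

def main() -> int:
    """Scan every staged file; return 1 (block the commit) on any hit."""
    blocked = 0
    for name in staged_files():
        try:
            with open(name, errors="ignore") as f:
                hits = find_secrets(f.read())
        except OSError:
            continue
        if hits:
            print(f"BLOCKED: {name} appears to contain a secret: {hits}")
            blocked = 1
    return blocked
```

A nonzero exit code is what tells git to abort the commit, which is the whole point of the gate.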
If a real key touched a public repo, assume it is burned. Delete it at the provider, create a new one, and update your app with the replacement. Deleting the file is not enough.
If you want another angle on prevention, read AI Coding Tools Can Double Your Secret Leak Rate: Here's How to Fix It. That piece focuses more on prevention patterns; this one is about understanding why the leak chain is bigger than most people think.
A secret is anything that unlocks a service, account, database, automation, or paid tool. Think API keys, tokens, passwords, private URLs, and certain config values. If someone else could use it to access your stuff or spend your money, treat it like a house key.
Deleting the file is not enough. Git keeps history, and other people or bots may have already copied the key. Rotate it with the service provider, then clean up the repository. Tools like git filter-repo can help scrub history, but rotation is the non-negotiable first step.
MCP configuration security means making sure your AI tool's external connections, made via the Model Context Protocol, are set up safely. If MCP is the bridge, security means only opening the bridge you actually need, for the minimum traffic, for the shortest time.
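Here is what "only opening the bridge you need" can look like in a config file. This is a hedged sketch in the JSON format used by MCP clients such as Claude Desktop; the `@modelcontextprotocol/server-filesystem` package is real, but the folder path is a placeholder you must replace. The key detail is that the filesystem server is scoped to one project folder, not your whole home directory.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/my-app"
      ]
    }
  }
}
```

If that last argument were `/Users/you` instead, the assistant could read every file you own, including whatever `.env` files are sitting in other projects.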
AI coding tools are not dangerous by themselves. The risk comes from how much access you give them, what files they can read, what they generate, and how quickly you publish without checking. The tool is not evil; blind trust is the problem.
If you are publishing code, you need a last-second check before every commit. Think of it like the beep when you leave your car headlights on. It will not make you perfect, but it can save you from a dumb, expensive mistake.
The scary part of the millions-of-secrets-on-GitHub story is not just the number. It is how normal the mistakes are. A helpful AI suggestion, a rushed setup, one click too many, and suddenly your private keys are sitting in public.
So slow down at the exact moments when the tool makes you feel fastest. Check generated files. Review tool connections. Turn on GitHub secret scanning. Add a secret check before commits. Share this with someone who needs it, and come back tomorrow for the next lesson.
You've got this. See you tomorrow.