
🤖 Ghostwritten by Claude Opus 4.6 · Curated by Tom Hundley
Someone recently lost over $82,000 because their AI coding assistant put a Gemini API key directly into their source code. Not hidden, not encrypted: just sitting there in plain text, like writing your bank PIN on a sticky note and taping it to your laptop screen. Bots found it, racked up massive charges on the stolen key, and the bill landed on the developer's desk.
This isn't a freak accident. Cursor, Copilot, and Windsurf all routinely generate code with hardcoded API keys, secrets stuffed into comments, and debug logging that prints your credentials to anyone watching. These tools learned from millions of lines of public code, and a shocking amount of that code had terrible security habits baked in.
If you're building with AI tools and you don't have a computer science background, this is the single most expensive mistake you can make without realizing it. I'm going to show you exactly why it happens, how to spot it, and what to do right now so you don't become the next horror story.
TL;DR: A developer's AI-generated code exposed a Gemini API key in their source files, and automated bots stole it within hours, running up over $82,000 in charges.
Here's what happened: a developer was building a project using an AI coding assistant. The AI generated code that included their Google Gemini API key right in the source file, something like:

```python
api_key = "AIzaSyD...your-actual-key-here"
```

That code got pushed to a public repository on GitHub. Within hours, automated bots (programs that constantly scan GitHub for exactly this kind of mistake) found the key and started using it. They ran thousands of API calls, burning through the developer's billing quota. The final damage: over $82,000.
This isn't rare. According to GitGuardian's 2024 State of Secrets Sprawl report, over 12.8 million new hardcoded secrets were detected in public GitHub commits in a single year, a 28% increase from the prior year. Bots scan every new public commit in real time. The window between "I accidentally pushed a key" and "someone stole it" is often measured in minutes, not days.
TL;DR: AI coding assistants learned from billions of lines of public code, and much of that code contained terrible security practices, so the AI reproduces those same dangerous patterns.
Think of it this way: if you learned to cook by watching a thousand YouTube videos, and 400 of those videos showed people leaving the stove on when they left the kitchen, you'd probably pick up that habit too. You wouldn't think of it as dangerous; it's just what you saw everyone else doing.
That's exactly what happened with AI coding tools. They were trained on massive amounts of public source code, and developers have been hardcoding secrets into source files for decades. The AI doesn't understand that a line of code is a security risk. It just knows that pattern shows up constantly in its training data.
Here's what AI coding assistants generate that will get you in trouble:
| Dangerous Pattern | What It Looks Like | Why It's Bad |
|---|---|---|
| Hardcoded keys in source files | `api_key = "sk-abc123..."` | Anyone who sees your code sees your key |
| Secrets in code comments | `# Use key: sk-abc123 for production` | Comments get pushed to repositories too |
| Debug logging with credentials | `print(f"Calling API with key: {api_key}")` | Your key shows up in logs that others can read |
Every one of these patterns is something AI tools generate regularly. Not occasionally. Regularly. If you've been vibe coding for more than a week, there's a good chance one of these is already in your project.
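To make the table concrete, here's a minimal Python sketch showing all three dangerous patterns next to the safe alternative. The key value is fake, and the `load_api_key` helper is my own illustration, not a standard library function:

```python
import os

# DANGEROUS: all three patterns from the table above (the key below is fake)
api_key = "sk-abc123-fake-key"                    # hardcoded key in source
# Use key: sk-abc123-fake-key for production      # secret in a code comment
debug_line = f"Calling API with key: {api_key}"   # credentials in debug output

# SAFE: the code only ever names the variable, never the value.
# The real key lives in your .env file as OPENAI_API_KEY=...
def load_api_key(name: str = "OPENAI_API_KEY") -> str:
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"{name} is not set -- add it to your .env file")
    return key
```

Notice that the safe version can be pushed to a public repository as-is: there's nothing in it worth stealing.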
TL;DR: If you've ever pasted an API key into a prompt or let your AI tool write code that connects to any service, you need to check your project files right now.
Let me be blunt: if your API key is in your code and that code is anywhere public (GitHub, Replit, a shared Bolt project), someone can steal it. And "public" might include places you don't expect. Replit projects default to public. GitHub repositories default to public unless you specifically choose private.
What happens when your key gets stolen depends on the service:
According to IBM's 2024 Cost of a Data Breach Report, the average cost of a data breach involving stolen credentials reached $4.81 million globally. You're not IBM, but even at a tiny scale, an exposed key can ruin your month or your bank account.
TL;DR: Move every API key into a .env file, add .env to your .gitignore, and never paste a real key into an AI prompt.
Open your project in whatever tool you're using (Cursor, Replit, etc.) and search for these terms across all your files:
- `api_key`
- `secret`
- `password`
- `token`
- `sk-` (OpenAI keys start with this)
- `AIzaSy` (Google keys start with this)

If you find any of these with actual values next to them (not `process.env.` or `os.environ`), you have an exposed secret.
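If you'd rather not eyeball every file by hand, here's a small Python sketch that automates the search. It's my own helper (not a standard tool), and the regex only covers the patterns listed above, so treat a clean result as encouraging, not conclusive:

```python
import re
from pathlib import Path

# Patterns from the search list above; sk- and AIzaSy are real key prefixes
SECRET_PATTERNS = re.compile(
    r'(sk-[A-Za-z0-9]{8,}'
    r'|AIzaSy[A-Za-z0-9_\-]{10,}'
    r'|(api_key|secret|password|token)\s*=\s*["\'][^"\']+["\'])'
)

def find_possible_secrets(folder: str) -> list[tuple[str, int, str]]:
    """Return (filename, line number, matched text) for every suspicious line."""
    hits = []
    for path in Path(folder).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".js", ".ts", ".json", ".env"}:
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                m = SECRET_PATTERNS.search(line)
                # Lines that read from the environment are the safe pattern -- skip them
                if m and "os.environ" not in line and "process.env" not in line:
                    hits.append((str(path), lineno, m.group(0)))
    return hits
```

Run it against your project folder and investigate every hit; a hit on your `.env` file itself is expected (that's where secrets belong), but a hit on a source file is a problem.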
I wrote a full guide on this: .env Files Explained: Protect Your API Keys Today. The short version:

1. Create a file named `.env` in your project's main folder
2. Put your keys inside it, one per line: `OPENAI_API_KEY=sk-abc123yourkey`
3. Add `.env` to your `.gitignore` file so it never gets uploaded anywhere

If a key was ever in your source code, even briefly, treat it as compromised. Go to the service's dashboard (OpenAI, Google, Stripe, whatever) and generate a new key. Delete the old one. This takes two minutes and could save you thousands.
Paste this into Cursor, Copilot, or whatever AI coding tool you're using before you start building anything that connects to an external service:
```
IMPORTANT SECURITY RULES FOR THIS PROJECT:
1. NEVER hardcode API keys, secrets, tokens, or passwords in any source file
2. ALWAYS use environment variables loaded from a .env file
3. NEVER log, print, or console.log any API key or secret, even for debugging
4. NEVER put secrets in code comments
5. If I paste an API key in this chat, remind me to use an environment variable instead
For this project, assume all secrets are stored in a .env file and accessed via environment variables.
```

This won't make your AI tool perfect, but it dramatically reduces the chance it'll generate dangerous patterns. Think of it like telling a new employee the house rules on their first day.
Before tomorrow's lesson, do one thing: search your current project for hardcoded keys using the terms from Step 1 above. If you find any, move them to a .env file tonight. Not tomorrow, not this weekend: tonight. The bots don't take days off.
No. As of 2025, AI coding assistants like Cursor, Copilot, and Windsurf do not automatically detect or prevent hardcoded secrets in the code they generate. They reproduce patterns from their training data, which frequently included insecure practices. You need to explicitly instruct them to use environment variables, and you should always manually verify that no keys ended up in your source files.
Immediately rotate the key: go to the service provider's dashboard, generate a new key, and delete the old one. Removing the key from your code and pushing a new commit does NOT fix the problem, because the old key is still visible in your git history. GitHub also has a feature called "secret scanning" that may alert you, but don't rely on it. Assume the key was stolen the moment it was pushed.
It's risky. Anything you paste into an AI coding assistant may be logged, stored, or used for model training depending on the tool's privacy policy. Cursor and Copilot have different data retention policies. The safest practice is to never type or paste a real API key into any AI chat. Instead, use a placeholder like YOUR_API_KEY_HERE and store the real key in your .env file.
A hardcoded key is your actual secret written directly in your code, like taping your house key to your front door. An environment variable is a value stored separately from your code (usually in a .env file) that your application reads at runtime. The key difference: your code can be shared, published, or viewed by others without ever revealing the secret, because the secret lives outside the code.
Yes, but they require some setup. Gitleaks is a free tool that scans your code and git history for accidentally committed secrets and catches most common leak patterns. If you're comfortable following a tutorial, it's worth installing. If the command-line setup feels intimidating, start with the basics first: use .env files, add .env to .gitignore, and search your code manually for key patterns. That alone will prevent most disasters.
- `.env` files: never let a secret live in your source code

This stuff isn't complicated, but it is urgent. The gap between "I didn't know" and "I owe $82,000" is a single file with a key in it. Now you know. Go check your project.
You've got this. See you tomorrow.
– Tom