
🤖 Ghostwritten by Claude Opus 4.6 · Fact-checked & edited by GPT 5.4 · Curated by Tom Hundley
AI coding assistants can increase the odds that secrets end up in source code if you use them carelessly. GitGuardian's 2025 State of Secrets Sprawl report found a higher secret incidence rate in AI-assisted commits than in other commits: 3.2% versus 1.5%. The same report said 23.8 million new hardcoded secrets were detected on public GitHub in 2024, and credentials tied to AI and MLOps tooling grew 38% year over year.
If you're building with Cursor, Bolt, Replit, Lovable, or v0, the risk is straightforward: these tools are optimized to get working code on the screen fast. Unless you tell them otherwise, they may suggest patterns that place API keys, tokens, or database credentials directly in files that later get committed.
The fix is also straightforward. Use environment variables, enable GitHub secret scanning and push protection where available, and rotate any credential that has already been committed. This article shows why leaks happen, how to check your repositories, and what to change today.
TL;DR: AI tools optimize for working code, not secure defaults, so they often suggest hardcoded credentials unless you explicitly require safer patterns.
AI coding assistants are built to complete tasks quickly. If you ask one to connect your app to Supabase, Resend, Stripe, or OpenAI, the shortest path to a working demo may be to place a credential directly in code. That's convenient in the moment and risky the second the file is committed.
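The difference between the risky pattern and the safe one is often a single line. Here's a minimal sketch in Python, assuming an OpenAI-style key; the `require_env` helper and variable names are illustrative, not from any particular library:

```python
import os

# Pattern an assistant often emits when asked for "working code":
# client = OpenAI(api_key="sk-proj-abc123")   # hardcoded -- leaks the moment it's committed

def require_env(name: str) -> str:
    """Read a credential from the environment and fail loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; define it in .env (local) or your platform's secret store"
        )
    return value

# Safer pattern -- the real key never appears in source:
# client = OpenAI(api_key=require_env("OPENAI_API_KEY"))
```

Failing loudly when the variable is unset matters: a silent fallback to an empty string turns a missing secret into a confusing runtime error much later.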
Three reasons this happens more often with AI-generated code:
They optimize for immediate success. Without explicit instructions, many assistants favor the fastest implementation over the safest one.
They generate a lot of code quickly. More generated code means more opportunities for a secret to appear in a config file, test fixture, example snippet, or deployment script.
They reproduce common patterns from public code. Public repositories include both good and bad examples. Models can echo insecure patterns unless the prompt, tooling, or review process corrects them.
I covered the practical side of this in Your AI Coding Assistant Is Leaking Your API Keys. The short version: if you don't set guardrails, the assistant may treat a live credential like any other string literal.
TL;DR: A secret is any credential that grants access to a service, and a leak can lead to fraud, data exposure, account takeover, or surprise cloud bills.
A secret is any value that authenticates your app or your team to a service. Common examples include:
| Secret Type | What It Looks Like | What Happens If Leaked |
|---|---|---|
| API key | `sk-...` or `key_live_...` | Someone can use your paid service or impersonate your app |
| Database connection string | `postgresql://user:password@host/db` | Someone may read, alter, or delete data |
| OAuth client secret | Provider-specific token | Someone may abuse your app integration |
| Webhook signing secret | `whsec_...` | Someone may forge events to your application |
| Hardcoded env value in source | `RESEND_API_KEY="..."` | The credential becomes visible to anyone with repo access |
GitGuardian's 2025 report found especially strong growth in leaked credentials tied to AI and MLOps ecosystems. That tracks with how people build now: more prototypes, more integrations, and more generated code touching third-party APIs.
The other reason this matters is speed. Public repositories are routinely monitored by automated systems looking for exposed credentials. The exact time to exploitation varies by provider and secret type, but the operational rule is simple: once a real key is committed publicly, treat it as compromised.
TL;DR: Start with GitHub secret scanning, then manually review your codebase for hardcoded credentials and rotate anything that has already been committed.
GitHub secret scanning is the fastest first check for most teams. GitHub has continued expanding partner patterns and generic detection over time, including support for many widely used cloud and developer platforms. If you want more background, see GitHub Secret Scanning Now Detects Vercel and Supabase Keys.
A note on availability: GitHub provides secret scanning for all public repositories. Availability for private repositories depends on your GitHub plan and feature set, which GitHub has changed over time, so verify the current options in your account.
If you find an exposed secret, do not stop at deleting it from the latest version of the file. Revoke or rotate the credential in the provider dashboard first, because Git history may still contain the old value.
Search for common indicators such as:
- `sk-`
- `key_live`
- `whsec_`
- `token`
- `password`
- `secret`
- `DATABASE_URL`
- `SUPABASE_SERVICE_ROLE_KEY`

Be careful with provider-specific prefixes. They change over time, and some services use multiple formats. The goal is not perfect pattern matching; it's finding anything that looks like a live credential rather than a placeholder.
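A manual search can be sketched as a small scanner. This is a starting point, not a complete detector; the regex patterns below are illustrative and providers rotate their prefixes, so extend the list for the services you actually use:

```python
import re
from pathlib import Path

# Illustrative patterns only -- extend rather than trusting this list to be complete.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{16,}"),               # OpenAI-style API keys
    re.compile(r"key_live_[A-Za-z0-9]{8,}"),            # live-mode key prefixes
    re.compile(r"whsec_[A-Za-z0-9]{8,}"),               # webhook signing secrets
    re.compile(r"postgres(?:ql)?://[^:\s]+:[^@\s]+@"),  # credentialed connection strings
]

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like live credentials."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

def scan_repo(root: str) -> dict[str, list[tuple[int, str]]]:
    """Scan files under root for suspicious lines, skipping the .git directory."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and ".git" not in path.parts:
            hits = scan_text(path.read_text(errors="ignore"))
            if hits:
                findings[str(path)] = hits
    return findings
```

Expect false positives (example keys in docs) and false negatives (custom formats); the point is to surface candidates for human review.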
TL;DR: Tell the assistant to use environment variables, keep secret files out of version control, and add automated checks before code reaches GitHub.
Here's the practical playbook:
1. Tell your AI assistant to use environment variables. Add this to prompts that involve third-party services:
"Use environment variables for all credentials. Do not hardcode real keys, tokens, passwords, or connection strings. Show placeholder names and explain where to set them."
2. Keep secret files out of Git. Use a .env file for local development, and make sure .gitignore excludes files such as .env, .env.local, and other local secret variants your framework uses.
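A minimal `.gitignore` fragment for this step might look like the following; adjust the file names to whatever your framework actually generates:

```gitignore
# Local secrets -- never commit these
.env
.env.local
.env.*.local

# Keep a committed placeholder that documents required variable names
!.env.example
```

The `.env.example` pattern is a common convention: it ships placeholder names so teammates know what to set, without shipping values.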
3. Use platform secret stores in production. For deployed apps, prefer your hosting platform's secret manager or environment variable settings over shipping credentials in code or config files.
4. Turn on push protection. If GitHub detects a supported secret pattern during push, it can block the push before the credential lands in the remote repository.
5. Scan before deploy. Add a pre-commit hook, CI secret scan, or both. GitHub's controls help, but defense in depth is better.
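One lightweight way to implement step 5 locally is a hook script. Here's a sketch assuming a Python pre-commit hook saved as `.git/hooks/pre-commit`; the patterns are illustrative and should be extended for your providers:

```python
#!/usr/bin/env python3
# Sketch of a pre-commit secret check; save as .git/hooks/pre-commit and mark executable.
import re
import subprocess
import sys

# Illustrative patterns; add the prefixes your providers actually use.
SECRET_RE = re.compile(
    r"sk-[A-Za-z0-9_-]{16,}|whsec_[A-Za-z0-9]{8,}|key_live_[A-Za-z0-9]{8,}"
)

def added_lines(diff: str) -> list[str]:
    """Extract lines added by a staged diff, skipping the '+++' file headers."""
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]

def check_staged() -> int:
    """Return 1 (block the commit) if any staged addition looks like a secret."""
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=False
    ).stdout
    leaks = [line for line in added_lines(diff) if SECRET_RE.search(line)]
    for line in leaks:
        print(f"possible secret in staged change: {line.strip()}", file=sys.stderr)
    return 1 if leaks else 0

# In the real hook file, end with: sys.exit(check_staged())
```

A hook like this only inspects lines being added in the current commit, so it's fast, but it's a complement to CI scanning, not a replacement.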
6. Check for other exposure paths. Even if your source files are clean, deployment mistakes can still leak sensitive data. For example, make sure your .git folder isn't publicly exposed.
TL;DR: A good audit prompt won't replace real scanning, but it can help your assistant find obvious mistakes before you commit.
Paste this into Cursor, Bolt, Replit, or your preferred tool:
"Audit this project for hardcoded secrets and insecure credential handling. Check source files, config files, examples, tests, and deployment scripts for API keys, tokens, passwords, private URLs, or connection strings. Verify that
.gitignoreexcludes local secret files. Return a numbered list of findings with file names, line numbers, and a safer alternative for each issue."
Use the output as a review aid, not a guarantee. AI can miss secrets, especially custom formats or values split across files.
**Why do AI coding assistants leak secrets more often than human-written code?**

They tend to optimize for speed and successful execution. If your prompt says "make this work," the assistant may choose the shortest path, which can include hardcoding a credential. They also generate more code per session than most humans write manually, which increases the number of places a secret can appear.
**How do I enable secret scanning and push protection on GitHub?**

Open the repository settings, then look for the security or code security section. If your repository and plan support it, enable secret scanning and push protection there. GitHub's UI labels change occasionally, so the exact menu name may differ slightly.
**What should I do if I already committed a real key?**

Rotate or revoke the key in the provider dashboard immediately. Then remove the exposed value from code, update your app to use the replacement credential through environment variables or a secret manager, and review commit history, forks, logs, and deployment settings to make sure the old value is no longer in use.
**Is GitHub secret scanning enough on its own?**

No. It is strong for supported partner patterns and many common formats, but it will not catch every custom token, internal credential, or obfuscated value. Use it as one control in a broader process that includes code review, CI scanning, and safer secret handling.
**Are secrets in private repositories safe?**

Safer than public repositories, yes; safe enough, no. Anyone with repository access can still see the values, and secrets often spread through forks, logs, screenshots, CI output, and accidental publication. The right default is to keep real secrets out of source control entirely.
AI coding tools are useful, but they don't naturally distinguish between "working" and "safe." That's your job. If you give the model better instructions and back those instructions with GitHub protections, .gitignore, environment variables, and secret rotation, you can keep the speed without accepting the leak risk.
If your team is building quickly with AI and wants a security review before bad habits become production incidents, Elegant Software Solutions can help you audit workflows, harden defaults, and put practical guardrails in place.
Share this with someone building with AI tools. Odds are they need the reminder.