
Here's the uncomfortable truth: your developers are already using AI coding tools.
According to Stack Overflow's 2024 survey, 76% of developers are using or planning to use AI tools in their development process. GitHub reports Copilot suggestions are accepted in over 30% of cases where they appear.
This is happening whether you've sanctioned it or not.
The question isn't whether to allow AI-assisted development. The question is whether your developers know how to use it without introducing security vulnerabilities, accumulating technical debt, or building on hallucinated dependencies.
Most don't.
When developers adopt AI tools without training, three categories of cost emerge—none of which show up in your sprint velocity metrics.
AI coding assistants are trained on public code. Public code includes vulnerable code.
A 2023 Stanford study found that developers using AI assistants wrote less secure code than those coding manually, while simultaneously expressing higher confidence in their code's security.
Read that again: less secure, more confident.
The specific vulnerabilities AI tends to introduce:
SQL injection and command injection. AI models often generate string interpolation where parameterized queries should be used, and developers without security training accept these suggestions (see the sketch after this list).
Hardcoded secrets. AI will happily generate example code with API keys, database credentials, and tokens. Untrained developers copy-paste without recognizing the risk.
Outdated security patterns. AI models are trained on historical code. Security best practices from 2021 may be vulnerabilities in 2025.
Dependency confusion. AI sometimes suggests packages that don't exist—or worse, suggests the name of a legitimate package but with the wrong import pattern, potentially pulling malicious typosquatted packages. A registry check, sketched at the end of this section, catches at least the hallucinated ones.
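Here's a minimal Python sketch of the first two patterns: the kind of code an assistant will happily produce, next to what a trained reviewer would insist on. The sqlite3 usage and the table, column, and key names are illustrative, not from any particular tool's output.

```python
import os
import sqlite3

# Vulnerable: a hardcoded secret, straight out of an AI-generated example.
API_KEY = "sk-live-placeholder"

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: string interpolation builds the SQL, so input like
    # ' OR '1'='1 changes the structure of the query itself.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterized query. The driver handles escaping, so user
    # input can never alter the statement.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

# Fixed: the secret comes from the environment, not the source tree.
API_KEY_FROM_ENV = os.environ.get("API_KEY", "")
```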
An untrained developer doesn't know to look for these. They see working code and ship it.
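For hallucinated dependencies specifically, one cheap guard is confirming a suggested package actually exists before installing it. A minimal sketch against PyPI's public JSON API; note this catches nonexistent packages, but not typosquats, which are real packages with lookalike names and still need human scrutiny.

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    # PyPI returns 200 for a real package and 404 for a hallucinated one.
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Worth running on any import the assistant invented out of thin air:
# exists_on_pypi("requests")              -> True
# exists_on_pypi("reqests-auth-toolkit")  -> presumably False
```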
AI optimizes for "code that works," not "code that fits your architecture."
Every accepted suggestion that doesn't match your patterns is technical debt. Small decisions compound:
Inconsistent error handling. One AI-generated function uses try/catch, another returns error codes, a third throws custom exceptions. None match your existing patterns (see the sketch after this list).
Dependency sprawl. AI suggests a new library for something your codebase already handles with an existing dependency. Now you have two ways to do the same thing.
Architectural drift. AI doesn't know your team decided to use Repository pattern for data access. It generates inline database calls because that's what worked in its training data.
Documentation mismatch. AI-generated code often comes with AI-generated comments. Those comments describe what the AI intended, not necessarily what the code does.
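A contrived Python sketch of what that drift looks like once it lands in a single module; the function names and conventions are illustrative:

```python
import json

# Three functions accepted from AI suggestions into the same module, each
# with a different error-handling convention. Each one "works" in isolation;
# together they make the module's failure behavior unpredictable.

def load_config(path: str) -> dict:
    # Style 1: swallow the error and silently return a default.
    try:
        with open(path) as f:
            return json.load(f)
    except OSError:
        return {}

def parse_port(value: str) -> int:
    # Style 2: C-style sentinel value; every caller must remember to check for -1.
    return int(value) if value.isdigit() else -1

class ConfigError(Exception):
    pass

def validate(config: dict) -> None:
    # Style 3: a custom exception no other module in the codebase uses.
    if "host" not in config:
        raise ConfigError("missing host")
```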
Trained developers catch these mismatches in review. Untrained developers accept suggestions that look reasonable in isolation but don't fit the system.
The debt is invisible until you try to refactor, onboard new team members, or debug production issues.
This one is counterintuitive: untrained developers using AI often appear more productive while delivering less value.
The velocity metrics look great. The reality is worse.
A 2025 Fastly survey of nearly 800 developers found that 95% spend extra time fixing AI-generated code, and nearly one in three say they edit AI output often enough to offset most of the time savings.
Senior developers navigate this effectively because they recognize problems quickly. Junior developers without guidance trust AI output longer, investing hours before realizing they're on the wrong path.
The productivity gains are real—but only with training. Without it, you get the appearance of productivity with hidden costs that surface later.
Perhaps the most dangerous scenario: developers using AI tools you don't know about.
"Shadow AI" mirrors the Shadow IT problem from a decade ago. Developers find tools that help them work faster, use them regardless of policy, and create security and compliance exposure the organization can't see.
The signs of Shadow AI overlap heavily with the signs of untrained usage described in the next section.
The solution isn't surveillance or prohibition. Developers will route around blocks if the tools genuinely help them.
The solution is sanctioned access with proper training. Make the right path the easy path.
How do you know if your developers are using AI tools without proper training?
In code review: new dependencies nobody chose deliberately, and error handling that matches none of your existing patterns.
In conversation: developers who can't explain code they shipped last week.
In outcomes: velocity metrics that climb while debugging and rework time climb with them.
None of these are moral failures. They're skill gaps that training addresses.
Some engineering leaders take the position that developers should learn AI tools on their own, just as they learned version control or IDEs.
This misses a crucial difference: AI tools are adversarial in a way other tools aren't.
Git doesn't try to convince you it's right when it's wrong. Your IDE doesn't generate plausible-looking code that introduces vulnerabilities. Traditional tools are predictable—they do what you ask, and errors are obvious.
AI tools are confident even when wrong. They generate output that looks correct, passes surface-level review, and only fails in production or under specific conditions. Developing the judgment to evaluate AI output is a skill that requires guided practice.
"Figure it out" leads to developers learning through production incidents. That's expensive education.
Let's do the math.
Cost of training:
$5,000 for a team of up to 30 developers = ~$167/developer
Cost of one security vulnerability:
Average cost of a data breach: $4.45M (IBM Cost of a Data Breach Report, 2023)
Even a minor vulnerability requiring emergency patching costs $10K-50K in developer time, plus incident response
Cost of technical debt:
McKinsey research finds CIOs estimate technical debt amounts to 20-40% of their technology estate's value. Quick implementations that skip proper patterns create ongoing maintenance burden.
At $150/hour fully loaded developer cost, a year of untrained AI usage easily creates $50K+ in hidden debt.
Cost of the productivity illusion:
A developer spending 2 hours/day debugging AI code they don't understand, instead of 30 minutes with proper training, loses 1.5 hours/day. Even at a conservative $75/hour, half the fully loaded rate:
1.5 hours/day × 250 working days × $75/hour = $28,125/year per developer
Training pays for itself in weeks, not months.
Effective AI coding training isn't "how to use ChatGPT." It addresses the specific risks outlined above:
Security awareness: What AI-generated vulnerabilities look like, how to scan for them, when to be especially suspicious
Architectural judgment: How to evaluate whether a suggestion fits your patterns, when to reject "working" code that doesn't fit
Verification workflows: Structured approaches to testing AI suggestions before accepting them (one concrete shape is sketched after this list)
Failure recognition: How to identify when AI is confidently wrong, when context is corrupted, when to start fresh
Team integration: How code review changes with AI-generated code, how to document AI usage, how to share effective patterns
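To make "verification workflow" concrete, here's one minimal shape it can take, sketched in Python with pytest-style tests: the developer writes the tests from the requirements before reading the suggestion, then accepts the AI's code only once it passes. `slugify` is a hypothetical function under review.

```python
def slugify(title: str) -> str:
    # The AI-suggested implementation being evaluated: pasted in, not trusted.
    return "-".join(title.lower().split())

def test_slugify_known_cases():
    # Expectations derived from the requirements, not from the AI's output.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces  ") == "extra-spaces"

def test_slugify_edge_cases():
    # Empty input is a case the assistant's own examples rarely cover.
    assert slugify("") == ""
```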
This isn't a YouTube tutorial. It's developing judgment that takes years to build naturally—or hours to transfer from practitioners who've already made the mistakes.
If you're an engineering leader reading this, here's the honest assessment:
Your developers are already using AI tools. Assuming otherwise is wishful thinking.
Prohibition doesn't work. Developers will find ways around blocks if the productivity gains are real.
Informal learning is dangerous. The skills required to use AI tools safely aren't intuitive.
Training is the only scalable solution. Sanctioned access plus structured training turns risk into advantage.
The organizations that thrive with AI-assisted development are those that invest in proper adoption—not those that move fastest or those that resist longest.
If you're seeing signs of untrained AI usage, or if you're planning to roll out AI tools and want to do it right, we should talk.
Our Dev Team Training is specifically designed to address the security, architecture, and productivity risks outlined in this article.
$5,000 for up to 30 developers. 4 hours. Immediate behavior change.
This article is part of our Engineering Leader series, helping CTOs and VPs make informed decisions about AI adoption in their organizations.