Part 4 of 8
🤖 Ghostwritten by Claude Opus 4.5 · Edited by GPT-5.2 Codex · Curated by Tom Hundley
This is Part 4 of the Professional's Guide to Vibe Coding series. Start with Part 1 if you haven't already.
After you've spent enough hours vibe coding with AI tools, something clicks.
You start predicting failures before they happen. You develop a sense for when the AI is about to go off the rails. You learn which prompts will work and which will waste your time.
This isn't magic—it's pattern recognition developed through exposure. The same way a senior developer "smells" a race condition or a security hole, you learn to smell an impending AI failure.
This article is my attempt to articulate those patterns. The goal isn't to give you rules to memorize, but to accelerate the intuition-building process.
Here's the most important pattern I've learned:
AI is often at its most confident when it's wrong.
When AI hedges—"I'm not sure, but perhaps..." or "You might want to verify this..."—it's often correct. The uncertainty triggers caution, and the output tends to be conservative and accurate.
When AI asserts confidently—"Here's exactly how this works..." or "The correct approach is..."—that's when you should be most vigilant. Confidence without hedging often signals the AI is generating from patterns rather than knowledge.
Practical application: Trust hedged answers more than confident ones. When the AI sounds absolutely certain about something you can't verify, that's a red flag.
Through experience, I've identified patterns that reliably trigger hallucinations:
AI training data has cutoffs. Anything released after that cutoff—or anything too niche to appear frequently in training—is likely to be fabricated.
Signs you're in danger:
Mitigation: For obscure libraries, provide the actual documentation as context. Don't expect AI to know it.
When two systems need to work together, AI often confabulates the connection details.
High-risk scenarios:
Mitigation: Always verify integration details against official documentation. Treat any AI-generated configuration as a starting point, not a solution.
AI mixes up platform-specific details constantly:
Mitigation: Specify your platform explicitly in prompts. Verify commands before running them in production environments.
AI tools have context window limits. When those limits are approached, behavior degrades in predictable ways:
I've found that issues typically begin when the context window is around 50% full. This isn't a hard boundary, but it's a useful heuristic.
Practical application: For complex tasks, start fresh sessions more often than you think necessary. It's better to re-establish context than to fight degraded performance.
When you ask AI to perform more than 10 sequential steps without checkpoints, failure becomes likely.
Why it happens: Each step builds on assumptions from previous steps. Errors compound. By step 15, the AI is working from a context that has drifted significantly from reality.
Fix: Break long tasks into phases. Verify each phase before continuing.
AI struggles to maintain consistency across multiple files simultaneously.
Why it happens: It can hold the content of several files in context, but it loses track of dependencies and relationships between them.
Fix: Refactor one file at a time. Explicitly re-verify cross-file dependencies after each change.
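Type checking is the cheapest re-verification step here. Below is a minimal, hypothetical sketch of the failure mode in TypeScript (the file and function names are invented for illustration): the AI changes a helper's signature in one file but never revisits a call site in another.

```typescript
// lib/price.ts: the file the AI just refactored. It added a required
// `currency` parameter while editing this file in isolation.
export function formatPrice(amount: number, currency: string): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(amount);
}
```

```typescript
// components/cart.ts: an untouched call site that still assumes the
// old one-argument signature.
import { formatPrice } from "../lib/price";

export function cartTotalLabel(total: number): string {
  return formatPrice(total); // error TS2554: Expected 2 arguments, but got 1.
}
```

Running `npx tsc --noEmit` after each single-file change turns this kind of cross-file drift into a compile error instead of a runtime surprise.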
When AI can't solve a problem, it often tries the same approach repeatedly with minor variations. Three failed attempts at the same strategy rarely lead to success on attempt four.
Signs you're in the loop:
Fix: When you see this pattern, stop. Either provide fundamentally different context or solve the problem manually. Iteration without new information rarely helps.
Some domains are poorly represented in training data:
Fix: For these domains, you need to provide the knowledge. Feed AI the actual specifications, don't expect it to know them.
Knowing when to abandon a conversation and start fresh is a key skill:
If the same issue persists after 3 attempts with different prompts, something is fundamentally wrong with the context. Starting fresh is usually faster than continuing.
Fresh prompt (same session): Useful when you need to try a different approach but the context is still healthy.
Fresh session: Necessary when the context itself has become counterproductive—too much error correction, conflicting requirements, or general confusion.
I was vibe coding a Dynamics 365 integration and hit a wall. The AI kept generating code that looked right but failed in ways that weren't obvious.
After two hours of iteration, I finally asked: "What version of the Dynamics 365 SDK are you assuming?"
It was using patterns from a version released three years ago.
Lesson: When nothing works, check version assumptions. AI training has cutoffs, and APIs change.
AI generated a Next.js dynamic route that followed all the right patterns—but for Next.js 14, not Next.js 15.
The code compiled. The routes registered. But the parameters weren't being passed correctly due to a breaking change in how dynamic routes work.
Lesson: Framework version changes are a common source of "looks right, works wrong" bugs. Always verify against current docs.
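For reference, the headline dynamic-route change in Next.js 15 is that the `params` prop became a Promise that must be awaited. Assuming that's the kind of break in play, the two patterns look deceptively similar:

```tsx
// app/posts/[slug]/page.tsx

// Next.js 14 pattern: params arrives as a plain object.
// export default function Page({ params }: { params: { slug: string } }) {
//   return <h1>{params.slug}</h1>;
// }

// Next.js 15 pattern: params is a Promise and the component awaits it.
export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { slug } = await params;
  return <h1>{slug}</h1>;
}
```

Next.js 15 temporarily allows synchronous access with a warning, so the old pattern can even appear to work at first—which is exactly what makes this class of bug so quiet.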
For one month, note every AI failure.
Patterns will emerge. Your patterns may differ from mine based on your domain and tools.
I use these categories:
Categorizing helps you recognize patterns faster.
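A journal doesn't need tooling, but if you want it greppable, even a tiny typed record helps. Here's a sketch with placeholder fields and category names (the taxonomy below is illustrative, not mine or anyone's canonical list; yours should come from your own failures):

```typescript
// failure-journal.ts: a minimal schema for logging AI failures.
// The category names are placeholders drawn from the patterns in
// this article; replace them with whatever emerges from your log.
type FailureCategory =
  | "stale-version"     // trained on an older API or SDK
  | "fabricated-api"    // invented functions, flags, or config keys
  | "context-drift"     // long session, degraded coherence
  | "integration-guess" // confabulated glue between two systems
  | "other";

interface FailureEntry {
  date: string;         // ISO date, e.g. "2025-06-01"
  tool: string;         // which AI tool or model
  task: string;         // what you asked for
  symptom: string;      // how the failure showed up
  category: FailureCategory;
  fix: string;          // what actually resolved it
}

const journal: FailureEntry[] = [
  {
    date: "2025-06-01",
    tool: "example-model",
    task: "Generate a Next.js 15 dynamic route",
    symptom: "Compiled fine; route params undefined at runtime",
    category: "stale-version",
    fix: "Pasted the current routing docs into context",
  },
];
```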
Over time, you'll develop a personal list of tasks where AI consistently fails for you. Don't keep fighting those battles—route around them.
My current list includes:
Your list will be different.
AI failure isn't random. With experience, you can predict it.
This intuition takes time to develop, but the process can be accelerated by paying attention to patterns rather than just pushing through.
Next in the series: Junior Developer Survival Guide: Learning While Vibe Coding
Ready to level up your team's AI development practices?
Elegant Software Solutions offers hands-on training that takes you from AI-curious to AI-proficient—with the professional discipline that production systems require.