
🤖 Ghostwritten by Claude Opus 4.5 · Curated by Tom Hundley
You're reading this article on a blog that was built primarily by AI.
Not by a team of developers with AI assistance. Not by a junior developer using GitHub Copilot for autocomplete. The website you're looking at—the Next.js 15 frontend, the Supabase backend, the semantic search that finds articles using AI embeddings, the admin dashboard, the API architecture—was built through conversations with Claude Code.
I'm not a developer. I'm a business leader who wanted to understand what AI-assisted development actually looks like in practice. So I built something real. This is what I learned.
Earlier this year, I faced a problem familiar to many business leaders: I wanted a content platform for Elegant Software Solutions, but I didn't want to use a generic template, and I wasn't ready to commit significant resources to custom development until I understood exactly what we needed.
I'd been hearing about AI-assisted development tools. The claims sounded impressive but abstract. Rather than evaluate from the sidelines, I decided to try building something myself.
The goal was modest at first: a simple blog. That blog has since grown into a full-featured content platform with over 100 articles, AI-powered semantic search, topic taxonomies, series support, and an admin system. The codebase runs to thousands of lines. It handles real traffic and serves real business purposes.
Almost all of it was created through natural language conversations with Claude Code.
For business leaders unfamiliar with the tool, Claude Code is a terminal-based development assistant created by Anthropic. It runs in your command line—the text-based interface developers use to interact with their systems—and connects to your project files directly.
But that description undersells what makes it different. Claude Code doesn't just generate code snippets that you copy and paste. It understands your entire codebase. It can read your files, create new ones, edit existing code, run tests, make commits to version control, and push changes to GitHub. It maintains context across conversations when configured properly—through CLAUDE.md files that describe your project or by using the --continue flag to resume previous sessions.
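To make the CLAUDE.md idea concrete: it is a plain markdown file at the project root that Claude Code reads at the start of a session. The structure is entirely up to you; the headings and details below are an illustrative sketch, not a required format and not this blog's actual file:

```markdown
# Project: Company Blog

## Stack
- Next.js 15 (App Router), TypeScript
- Supabase (Postgres + Auth)

## Conventions
- API routes live under app/api/
- Database access goes through lib/supabase/
- Run `npm test` before committing
```

Because the file is read automatically each session, conventions recorded here persist across conversations without being restated.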
More recently, Anthropic added a VS Code extension and web interface, but the terminal-based workflow remains central to how I use it.
The key insight is that Claude Code functions more like a capable junior developer than a sophisticated autocomplete tool. You describe what you want in plain English, and it figures out how to implement it. When something goes wrong, you describe the problem, and it proposes solutions. When you don't know how to approach a feature, you can discuss the architecture before writing any code.
There's a term making the rounds in the development community that captures this new way of working: vibe coding.
The term was coined by Andrej Karpathy, co-founder of OpenAI, in early 2025. He described it as "a new kind of coding where you fully give in to the vibes, embrace exponentials, and forget that the code even exists."
Collins Dictionary named it their Word of the Year for 2025. Y Combinator reported that 25% of startups in their Winter 2025 batch had codebases that were 95% AI-generated. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have said that roughly a quarter to a third of new code at their companies is now AI-generated.
For business leaders, the relevant translation is this: the barrier between "having an idea" and "having working software" has collapsed.
Traditional software development requires translating business requirements into technical specifications, then translating those specifications into code. Each translation step loses fidelity and takes time. Vibe coding compresses this: you describe what you want, and the AI handles the translation to code directly.
This doesn't mean anyone can build anything without understanding what they're doing. It means that someone who understands the problem domain—the business logic, the user needs, the desired outcomes—can now participate more directly in building solutions. The bottleneck has shifted from "can we code this?" to "do we know what we actually want?"
Let me be concrete about what a typical session looks like.
I start by opening my terminal and navigating to the project directory. I launch Claude Code and, because it remembers context from previous sessions, it already knows the project structure—where the API routes live, how the database is organized, what design patterns we've established.
A recent example: I wanted to add support for multi-part article series, where several related posts would be grouped together with "Part 1 of 5" navigation. Here's roughly how that conversation went.
"I want to add series support to the blog. Articles should be able to belong to a series with a name, and we need to track what part number each article is and how many total parts exist."
Claude Code responded by asking clarifying questions. Should all series fields be required, or should individual articles be able to exist outside any series? How should we handle consistency—what if someone marks an article as "part 3 of 5" but we only have two articles in the series? What should the URL structure look like for series pages?
After we discussed the design, Claude Code created the database migration, updated the API endpoints, modified the admin interface, and created new public-facing components for series navigation. It ran the tests, identified one that needed updating, fixed it, and committed the changes.
The entire feature—database schema changes, API modifications, frontend components, and tests—was implemented in a single afternoon. I wrote zero code directly. But I made every decision about how the feature should work.
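To make the shape of that feature concrete, here is a sketch of the kind of data model and consistency check the conversation settled on. The names (`SeriesInfo`, `partNumber`, `totalParts`) are illustrative assumptions, not the platform's actual code:

```typescript
// Illustrative data model for series support (names are hypothetical).
interface SeriesInfo {
  seriesName: string;  // e.g. "Scaling Postgres"
  partNumber: number;  // 1-based position within the series
  totalParts: number;  // declared length of the series
}

interface Article {
  slug: string;
  title: string;
  series?: SeriesInfo; // optional: articles can exist outside any series
}

// One consistency rule from the design discussion: an article's declared
// part number must be a positive integer that fits within the declared total.
function validateSeries(info: SeriesInfo): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(info.partNumber) || info.partNumber < 1) {
    errors.push("partNumber must be a positive integer");
  }
  if (!Number.isInteger(info.totalParts) || info.totalParts < 1) {
    errors.push("totalParts must be a positive integer");
  }
  if (info.partNumber > info.totalParts) {
    errors.push(`partNumber ${info.partNumber} exceeds totalParts ${info.totalParts}`);
  }
  return errors;
}
```

Making the `series` field optional answers the first clarifying question (articles can exist outside any series), while the validation function addresses the "part 3 of 5 with only two articles" consistency question at the data level.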
Several aspects of this approach exceeded my expectations.
Rapid iteration on ideas. When you can describe a feature and see it working within an hour, you test ideas you'd never bother to spec out formally. Some of those ideas turn out to be valuable. The semantic search feature—which uses AI embeddings to find articles based on meaning rather than just keywords—started as a "what if we tried this?" conversation.
Learning through building. I understand Next.js, Supabase, and modern web architecture far better now than I did before. Not because I studied documentation, but because I built a real application and Claude Code explained concepts as we encountered them. Every decision became a learning opportunity.
Capability expansion. Features I would have assumed required significant development investment—AI-powered search, role-based access control, comprehensive API architecture—became achievable. Not because they're simple, but because the AI handled the implementation complexity while I focused on requirements.
Documentation as a byproduct. The codebase includes a comprehensive CLAUDE.md file that documents architecture decisions, patterns, and conventions. This emerged naturally from working with Claude Code—it needs this context to work effectively, and once created, it serves as excellent documentation for humans too.
Equally important: the limits I encountered.
Architecture decisions. Claude Code can implement any architecture, but it can't tell you which architecture is right for your needs. When we designed the semantic search system, I had to decide whether to use simple keyword matching, external search services, or AI embeddings. Claude Code helped me understand the tradeoffs, but the decision required business judgment about our priorities.
Quality assessment. The AI doesn't know when something is "good enough." It will make whatever changes you describe, whether those changes improve the product or degrade it. Knowing when to stop iterating, when a feature is complete, when simplicity beats cleverness—these remain human responsibilities.
Brand voice and content. Claude Code can write the articles themselves (this one included), but maintaining consistent voice, ensuring accuracy, and deciding what topics matter to our audience requires editorial judgment. The AI is a tool for execution, not strategy.
Security and edge cases. Vibe coding has documented risks. According to Veracode's 2025 GenAI Code Security report, nearly half of all AI-generated code contains security flaws, despite appearing production-ready. I treated Claude Code's output with the same scrutiny I'd apply to code from a junior developer: trust but verify.
When things went wrong. Complex multi-file changes sometimes introduced subtle bugs. The AI's changes looked correct—the code ran, the tests passed—but edge cases broke. I learned to test thoroughly and maintain skepticism, especially for changes that touched many files at once.
If you're a business leader thinking about AI-assisted development, here are the strategic implications I've drawn from this experience.
The developer role is evolving, not disappearing. Vibe coding doesn't eliminate the need for development expertise. It changes what that expertise looks like. The most valuable skills become: understanding what to build, recognizing when generated code is correct, designing systems that the AI can work with effectively, and knowing when to trust automation versus when to intervene manually.
Junior and senior roles will shift. Much of what junior developers traditionally did—implementing well-specified features, writing boilerplate code, making routine changes—the AI handles capably. Senior work—system design, debugging complex issues, mentoring, quality oversight—becomes more valuable, not less.
Speed of iteration increases dramatically. Projects that once required weeks of development time can potentially be prototyped in days. This changes how you think about experimentation. Building something to validate an idea becomes cheap enough to do routinely.
The bottleneck moves to decision-making. When implementation is fast, the limiting factor becomes knowing what to implement. Clear requirements, good product judgment, and rapid feedback loops matter more than ever. Organizations that are bottlenecked on "we can't build that fast enough" may soon be bottlenecked on "we're not sure what we should build."
Developer productivity is genuinely increasing. According to Stack Overflow's 2025 Developer Survey, 65% of developers now use AI coding tools at least weekly. The tools are no longer experimental—they're becoming standard. Teams that resist adoption may find themselves at a competitive disadvantage for attracting talent and shipping products.
I won't invent statistics about time savings or cost reductions—the project specifics vary too much for general claims to be meaningful.
What I can say honestly: this blog exists because of Claude Code. Without AI-assisted development, I would not have built it. I would have used a WordPress template or waited until we had resources to hire developers. The option of building a custom platform with AI-powered search and comprehensive content management would simply not have been on the table.
The ROI calculation for AI-assisted development isn't "developer costs times hours saved." It's "what becomes possible that wasn't possible before?" If your organization has ideas sitting in backlogs because development resources are constrained, that's the real cost you're paying.
If you're evaluating AI-assisted development for your organization, here's my honest advice.
Start with a real project, not an evaluation. You won't understand these tools by reading about them or watching demos. Pick something you actually need built—modest in scope but genuinely useful—and try building it. The learning comes from encountering real problems.
Expect a learning curve. AI-assisted development requires new skills: effective prompting, result evaluation, architectural thinking. Your first project will be slower than traditional development as you learn the workflow. The speed benefits come with experience.
Supervision remains essential. These tools are not autonomous. Every output requires human review. Plan for thorough testing, code review, and quality oversight. The AI makes you faster at producing code; it doesn't eliminate the need to verify that code is correct.
Security requires attention. AI-generated code can contain vulnerabilities. Standard security practices—code scanning, dependency auditing, security review—remain important. Don't assume the AI produces secure code by default.
The human-AI collaboration matters. The best results come from treating the AI as a collaborator, not a replacement. Describe what you want, ask clarifying questions, explain your reasoning, push back when suggestions don't fit your needs. The quality of the output correlates with the quality of the conversation.
What I built is modest: a blog with some nice features. But the implications extend far beyond any individual project.
We're at an inflection point in how software gets created. The traditional model—business leaders describe requirements, developers translate requirements into code—is being compressed. The gap between "I want this" and "this exists" is shrinking.
This creates opportunities for organizations that can adapt. It also creates risks for those that can't: competitors will move faster, talent expectations will shift, and the cost of being unable to build will increase.
I don't think the path forward is turning every business leader into a vibe coder. That's not realistic or necessary. But understanding what AI-assisted development actually looks like—its capabilities, its limits, its implications—is becoming essential knowledge for anyone making decisions about technology investments.
This blog was built with AI. The next generation of business applications probably will be too.