Part 1 of 5
🤖 Ghostwritten by Claude · Curated by Tom Hundley
If you've been working with Claude or other AI models, you've probably hit the same frustration: your AI assistant is brilliant, but it can't access your database, can't read your files, and definitely can't interact with your internal tools. You end up copy-pasting data back and forth like it's 1995.
Anthropic's Model Context Protocol (MCP) was built to fix this. Released in late 2024, MCP is an open protocol that lets AI models like Claude connect to external data sources, tools, and APIs in a standardized way. Think of it as USB for AI—one protocol, infinite possibilities.
This is Part 1 of our five-part series on Understanding MCP. We'll cover what MCP is, why it exists, and why you should care. Parts 2 and 3 will dive into architecture and hands-on implementation.
The Model Context Protocol is an open-source protocol developed by Anthropic that standardizes how AI applications connect to external data sources and tools. It's a client-server architecture that uses JSON-RPC messaging over stdio (standard input/output) or HTTP transports.
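To make the JSON-RPC part concrete, here is a minimal sketch of a request/response pair. The `tools/list` method name comes from the MCP specification; the exact payload shapes shown are illustrative, not authoritative.

```python
import json

# A client request asking the server which tools it exposes.
# "tools/list" is a method defined by the MCP specification;
# the id is an arbitrary client-chosen correlation value.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server response (shape illustrative): a result keyed to the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{"name": "run_sql_query"}]},
}

# Messages travel over the transport as serialized JSON.
wire_request = json.dumps(request)
wire_response = json.dumps(response)

# The client matches responses to requests by id.
parsed = json.loads(wire_response)
assert parsed["id"] == request["id"]
print(parsed["result"]["tools"][0]["name"])
```

The important property is the correlation id: clients can have several requests in flight and pair each response with the request that triggered it.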
Here's the straightforward version: MCP creates a common language for AI models to talk to your databases, APIs, file systems, and business tools. Instead of building custom integrations for every AI application you create, you build one MCP server, and any MCP-compatible AI client can use it.
The protocol defines three core primitives:

- Resources: data sources the AI can read from
- Tools: functions the AI can execute
- Prompts: reusable, parameterized templates

We'll look at each of these in more detail below.
MCP is open source (MIT license), which means you can inspect the code, contribute to it, and build on it without worrying about vendor lock-in. The specification is public, and Anthropic has made it clear they want this to be a community-driven standard.
Before MCP, connecting AI models to external tools was a mess. Every AI application needed custom code to integrate with databases, APIs, and business systems. If you wanted Claude to query your PostgreSQL database, read from your Notion workspace, and update your CRM, you'd need to build three different integrations—and maintain them.
Worse, these integrations were brittle. APIs change, authentication breaks, and suddenly your AI assistant can't access the data it needs. Multiply this across every AI tool you're building, and you've got a maintenance nightmare.
The fundamental problem was lack of standardization. There was no common protocol for AI models to request data, execute functions, or maintain context across different tools. Every integration was a snowflake.
MCP solves this with a standard protocol. Build one MCP server for your PostgreSQL database, and any MCP-compatible AI client can use it. Build an MCP server for your internal API, and it works across all your AI applications. The integration effort becomes one-to-many instead of one-to-one.
This matters more as AI agents become more autonomous. An AI that can only answer questions is interesting. An AI that can query your analytics database, update your project management tool, and send notifications to your team is transformative. MCP makes the second scenario tractable.
MCP uses a client-server architecture. The AI application (like Claude Desktop) is the client, and your data sources or tools are exposed through servers.
When Claude needs to access external data, it sends a JSON-RPC request to an MCP server. The server processes the request, interacts with the underlying data source or API, and returns the result. Claude receives the data, incorporates it into its context, and uses it to generate a response.
The communication happens over two transports:
stdio (standard input/output): The MCP server runs as a local process, and the client communicates with it through standard streams. This is perfect for local tools, file systems, and development environments.
HTTP: The MCP server runs as an HTTP endpoint, and the client makes HTTP requests to it. This works for remote servers, cloud services, and enterprise deployments.
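Over the stdio transport, each JSON-RPC message is exchanged as a single line of JSON terminated by a newline (per my reading of the MCP spec; verify against the current version). A sketch of that framing, simulated in-process rather than against a real server:

```python
import io
import json

def frame(message: dict) -> str:
    # One JSON-RPC message per line: serialize and append a newline.
    return json.dumps(message) + "\n"

def read_messages(stream) -> list:
    # The receiving side splits on newlines and parses each line.
    return [json.loads(line) for line in stream if line.strip()]

# Simulate a client writing two messages to a server's stdin.
buffer = io.StringIO()
buffer.write(frame({"jsonrpc": "2.0", "id": 1, "method": "initialize"}))
buffer.write(frame({"jsonrpc": "2.0", "id": 2, "method": "tools/list"}))
buffer.seek(0)

messages = read_messages(buffer)
print([m["method"] for m in messages])
```

In a real deployment the client launches the server as a subprocess and the `StringIO` buffer above is the child process's stdin/stdout pair.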
The protocol is stateful—the server can maintain context across multiple requests. This means Claude can query your database, remember the results, and use that information in subsequent queries without you having to re-provide the data.
Here's a simplified flow:

1. Claude (the client) sends a JSON-RPC request to an MCP server.
2. The server processes the request and interacts with the underlying data source or API.
3. The server returns the result to the client.
4. Claude incorporates the data into its context and generates a response.
We'll dig into the protocol details, message formats, and architecture patterns in Part 2. For now, just know it's JSON-RPC over stdin/stdout or HTTP.
An MCP server is a program that exposes resources, tools, and prompts to AI clients. You can think of it as an adapter layer between your data sources and the AI model.
Example MCP servers you might build:

- A PostgreSQL server that exposes your analytics tables
- A thin wrapper around an internal API
- A file-system server for local documents
- A CRM connector that can read and update customer records
MCP servers are typically lightweight. Most of the logic lives in your existing systems—the MCP server is just a thin wrapper that translates between the AI's requests and your APIs.
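The "thin wrapper" idea can be sketched as a registry that maps tool names onto existing functions. This is a conceptual toy, not the official SDK (the real SDKs handle registration, schemas, and transport for you), and the tool name and business logic are hypothetical:

```python
import json

# Pretend this is your existing business logic.
def get_user_count() -> int:
    return 42  # would normally query your database

# The MCP server is a thin layer: a registry of tools plus a
# dispatcher that turns JSON-RPC tool calls into function calls.
TOOLS = {
    "get_user_count": get_user_count,
}

def handle_request(raw: str) -> str:
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(**req["params"].get("arguments", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = handle_request(json.dumps({
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "get_user_count"},
}))
print(reply)
```

Notice how little lives in the wrapper itself: parsing, dispatch, and serialization. Everything interesting stays in the functions you already have.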
An MCP client is an AI application that uses MCP to connect to servers. The most prominent example is Claude Desktop, which has native MCP support. But the protocol is open, so any AI application can implement it.
As a developer, you'll mostly be building servers. The client side is typically handled by the AI tool you're using.
Resources are data sources that the AI can read from. They're defined by a URI scheme (similar to how HTTP URLs work) and can represent anything from database tables to API endpoints to files.
A resource might be:
- `postgres://localhost/analytics/users` (a database table)
- `file:///home/user/documents/report.pdf` (a file)
- `api://internal/customer/12345` (an API endpoint)

Resources are read-only from the AI's perspective. The AI can request data from a resource, but it can't modify it directly. (That's what tools are for.)
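Because resources are plain URIs, a server can route them with standard URI parsing. A small sketch using the hypothetical `postgres://` URI from the list above:

```python
from urllib.parse import urlparse

uri = "postgres://localhost/analytics/users"
parsed = urlparse(uri)

# The scheme tells the server which backend to route to,
# and the path identifies the specific resource within it.
print(parsed.scheme)           # postgres
print(parsed.netloc)           # localhost
print(parsed.path.strip("/"))  # analytics/users
```

This is why the URI convention scales: one server can expose many resources under one scheme, and clients never need backend-specific addressing logic.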
Tools are functions that the AI can execute. Unlike resources, tools can have side effects—they can write data, trigger workflows, send messages, or make changes to your systems.
Example tools:
- `run_sql_query`: Execute a SQL query against your database
- `create_jira_ticket`: Create a new ticket in Jira
- `send_slack_message`: Post a message to a Slack channel
- `update_customer_record`: Modify a customer record in your CRM

Tools are where MCP gets powerful. The AI can reason about which tools to use, chain multiple tools together, and accomplish complex tasks autonomously.
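When a server advertises a tool, it includes a name, a description, and a JSON Schema describing its arguments. The field names below (`inputSchema` in particular) follow my reading of the MCP spec and should be checked against the current version; the tool itself is the hypothetical `run_sql_query` from the list above:

```python
# A tool declaration as a server might return it from tools/list.
tool = {
    "name": "run_sql_query",
    "description": "Execute a SQL query against the analytics database",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
        },
        "required": ["query"],
    },
}

def validate_call(tool: dict, arguments: dict) -> bool:
    # Minimal check: every required argument is present.
    # (A real server would validate against the full JSON Schema.)
    required = tool["inputSchema"].get("required", [])
    return all(key in arguments for key in required)

print(validate_call(tool, {"query": "SELECT count(*) FROM users"}))  # True
print(validate_call(tool, {}))                                       # False
```

The schema serves two audiences at once: the client validates calls before sending them, and the model reads it to decide how to invoke the tool.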
Prompts in MCP are reusable, parameterized templates that can be shared across applications. They're not strictly required, but they're useful for standardizing how your team interacts with specific MCP servers.
For example, you might create a prompt template called `analyze_sales_data` that includes the right context, formatting instructions, and tool calls to generate a sales report. Other developers can use that prompt without needing to know the details of your analytics setup.
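At its simplest, a parameterized prompt is a template with named slots. A sketch using the hypothetical `analyze_sales_data` example above (MCP prompts pair a template like this with a declared list of expected arguments):

```python
from string import Template

# A reusable, parameterized prompt template. The name, wording,
# and parameters are hypothetical examples.
PROMPTS = {
    "analyze_sales_data": Template(
        "Analyze sales for $region in $quarter. "
        "Summarize trends and flag anomalies in a short report."
    ),
}

rendered = PROMPTS["analyze_sales_data"].substitute(
    region="EMEA", quarter="Q3"
)
print(rendered)
```

The value is consistency: everyone on the team gets the same framing and instructions, with only the parameters varying per request.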
If you're building applications with Claude or other AI models, MCP is essential. It standardizes how you connect AI to external data and tools, which means less custom integration code and more time building actual features.
MCP is particularly valuable if you're building AI assistants that need access to internal data, agents that execute multi-step workflows across tools, or applications that combine multiple data sources.
For enterprises, MCP solves a governance problem. Instead of every team building their own AI integrations, you can create centralized MCP servers that expose approved data sources and tools. This gives you centralized access control, consistent auditing, and a single integration layer to maintain.
If you're building developer tools, APIs, or SaaS platforms, supporting MCP means your product can be easily integrated with AI applications. It's similar to building an API—you're making your service accessible to a new class of applications.
Early MCP adoption could be a competitive advantage. As AI agents become more prevalent, tools with native MCP support will be easier to integrate and more attractive to AI-forward teams.
If you're experimenting with AI agents, multi-step reasoning, or autonomous systems, MCP provides a clean abstraction for tool use. You can focus on the agent's reasoning and decision-making logic without getting bogged down in integration details.
The protocol is open source and well-documented, which makes it ideal for learning and experimentation.
MCP is still new, but the ecosystem is growing fast. Anthropic has released the core specification, reference implementations, and several official MCP servers for common use cases (PostgreSQL, file systems, Google Drive, etc.).
In this series, we'll take you from understanding to implementation: Part 2 breaks down MCP's architecture and message formats, and Part 3 walks through hands-on implementation of your own MCP servers.
If you want to start experimenting now, check out Anthropic's MCP documentation and the official GitHub repository. Claude Desktop has native MCP support, so you can start using MCP servers immediately.
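Claude Desktop discovers servers through a JSON config file (`claude_desktop_config.json` at the time of writing). A sketch of an entry that launches one of the official servers; treat the package name and connection string as assumptions to verify against the current docs:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/analytics"
      ]
    }
  }
}
```

Each entry names a server and tells the client how to launch it as a local stdio process; Claude Desktop starts it and begins exchanging JSON-RPC messages automatically.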
The Model Context Protocol is Anthropic's answer to a fundamental problem: AI models are powerful, but they're isolated. They can't access your data, can't use your tools, and can't interact with your systems without custom integration work.
MCP standardizes this. It's an open protocol that lets AI models connect to external resources and tools in a consistent, secure, and maintainable way. It's not magic—it's plumbing. But it's the kind of plumbing that makes transformative AI applications possible.
We're still early. The protocol is young, the ecosystem is nascent, and best practices are still emerging. But the potential is clear: a world where AI assistants can seamlessly work with your data, tools, and workflows without requiring custom integration for every use case.
That's worth paying attention to.
Ready to go deeper? Part 2 of this series breaks down MCP's architecture, message formats, and protocol mechanics. If you're planning to build MCP servers or integrate MCP into your applications, that's your next stop.
This article is a live example of the AI-enabled content workflow we build for clients.
| Stage | Who | What |
|---|---|---|
| Research | Claude Opus 4.5 | Analyzed current industry data, studies, and expert sources |
| Curation | Tom Hundley | Directed focus, validated relevance, ensured strategic alignment |
| Drafting | Claude Opus 4.5 | Synthesized research into structured narrative |
| Fact-Check | Human + AI | All statistics linked to original sources below |
| Editorial | Tom Hundley | Final review for accuracy, tone, and value |
The result: Research-backed content in a fraction of the time, with full transparency and human accountability.
We're an AI enablement company. It would be strange if we didn't use AI to create content. But more importantly, we believe the future of professional content isn't AI vs. Human—it's AI amplifying human expertise.
Every article we publish demonstrates the same workflow we help clients implement: AI handles the heavy lifting of research and drafting, humans provide direction, judgment, and accountability.
Want to build this capability for your team? Let's talk about AI enablement →