
🤖 Ghostwritten by GPT 5.4 · Fact-checked & edited by Claude Opus 4.6 · Curated by Tom Hundley
If your agents can install MCP servers, follow deeplinks, or trust unpinned SDK dependencies, you have a real exposure path today. The recent wave of roughly 30 Model Context Protocol vulnerabilities in about 60 days turned MCP security from a nice architecture conversation into a patch-right-now problem. Most teams treated MCP like a developer convenience layer when they should have treated it like remote code execution with a friendly face.
Two examples tell the story. CVE-2026-23744 exposed unauthenticated MCP server installs in MCPJam Inspector. CVE-2026-23523 showed how malicious deeplinks in Dive MCP Host could drive unsafe behavior. Add TypeScript SDK supply chain flaws to the mix, and the pattern is clear: MCP server security is not just about transport encryption or auth headers. It is about controlling what tools can be introduced, what capabilities they get, and what the client is allowed to do on behalf of the model.
A lot of existing guidance explains how to secure an MCP server in production. Useful, yes. But the current incident pattern is more specific: install paths, client behaviors, protocol trust boundaries, and policy controls are where teams are getting burned. This piece focuses on immediate fixes developers can ship today.
TL;DR: The most dangerous MCP security failures right now happen before a request reaches your business logic — server installation flows, deeplink handling, client trust, and dependency ingestion.
If you need a refresher on the protocol itself, start with What is MCP? The Model Context Protocol Explained. For everybody else, here is the blunt truth: MCP changes your attack surface because it standardizes how models discover and use tools. Standardization is great for interoperability. It is also great for attackers, because repeated patterns create repeated mistakes.
The recent spike matters because it shows weak points clustering. Around 30 CVEs hit MCP tooling in the first 60 days of 2026. That does not mean MCP is uniquely broken. It means the ecosystem is moving from prototype speed to attacker attention — where weak defaults get punished.
CVE-2026-23744 is the kind of bug that should make every developer uncomfortable. Unauthenticated server installs mean an attacker may not need to break your crypto or exploit a parsing bug. They just need to convince the client or operator to accept a server they should never have trusted. That is the software equivalent of plugging in a random USB device because the label looked helpful.
CVE-2026-23523 points at another ugly class of Model Context Protocol vulnerabilities: malicious deeplinks. If your MCP host accepts deeplinks or external activation paths without strict validation, you have built a side door around the normal trust review process. Attackers love side doors.
The deeper lesson: MCP clients are now part of your security perimeter. Not just the server. Not just the API behind the tool. The host app, installer workflow, deeplink parser, dependency resolver, and permission model all sit inside the blast radius.
Many teams have already read articles like Securing MCP Servers: Enterprise Implementation Patterns or MCP Security: Best Practices for Production Deployments. Those pieces usually focus on the server side: auth, authorization, transport security, audit logging, and tool permissions. All good.
What they often underemphasize is the pre-auth chain of custody:

- how a server gets installed or registered in the first place
- how deeplinks and other external activation paths can change trust state
- how much the client itself is trusted to act on behalf of the model
- how SDK and transitive dependencies enter the execution path

That is the gap the current CVE wave exposed.
TL;DR: CVE-2026-23744 and CVE-2026-23523 are different bugs, but they teach the same lesson — never let MCP tool introduction bypass explicit trust and capability checks.
If a tool such as MCPJam Inspector can install or register an MCP server without meaningful authentication and approval, the attacker path is straightforward:

1. Get a malicious server registered through the unauthenticated install flow.
2. Let the host surface its tools to the model as if they were trusted.
3. Wait for the model to invoke them with real data and real credentials.
4. Exfiltrate, escalate, or simply persist for later.

That is not a theoretical chain. It is how convenience features turn into persistence mechanisms.
What makes this class nasty is that the tool may not look malicious. A fake "read-only Git helper" can still exfiltrate a repo. A fake "ticket summarizer" can still vacuum up issue metadata and secrets. If you have read Stop Hardcoded API Keys in AI Code, you already know how often secrets are lying around waiting to be found.
Deeplinks are one of those features people ship because they feel ergonomic. Click a link, open the right host, launch the right task, save a few steps. Fine — until the deeplink carries an instruction that changes trust state, installs a server, broadens permissions, or auto-connects to something dangerous.
If your host treats deeplinks as data, validates them strictly, and requires re-approval for sensitive actions, you are in decent shape. If your host treats deeplinks as commands from a trusted universe, you are in trouble.
A secure host should assume deeplinks are hostile until proven otherwise:

- allow only known schemes and a short list of low-risk actions
- parse and validate every parameter before acting on it
- require explicit user re-confirmation for anything sensitive
- refuse to install servers, grant permissions, or auto-connect from a deeplink at all
- log every attempt, including the blocked ones
The TypeScript SDK issues matter because many teams use the SDK as if it were just plumbing. It is not. It is part of your execution path. If your MCP host or server depends on loosely pinned packages, transitive updates can change behavior under your feet.
JavaScript remains one of the largest package ecosystems on GitHub, which means the blast radius of npm supply chain problems is never small. The lesson for AI agent security is simple: the protocol may be standardized, but your package graph is still chaos unless you lock it down.
TL;DR: Treat every MCP integration as a privileged plugin system — explicit allowlists, least privilege, signed provenance, and a policy engine between the model and the tool.
MCP is not "just another API integration." It is a capability delivery mechanism for autonomous or semi-autonomous software. Your security design should look more like browser extension security or plugin sandboxing than classic REST integration.
| Area | Unsafe pattern | Safer pattern |
|---|---|---|
| Server installation | Auto-install from URLs or prompts | Approved registry plus admin review |
| Tool permissions | Broad default read/write | Capability-scoped per tool and per environment |
| Deeplink handling | Implicit trust of action links | Parse, validate, re-confirm, log |
| SDK dependencies | Caret ranges and transitive drift | Lockfiles, pinning, SBOM, review gates |
| Host-to-tool access | Direct unrestricted calls | Policy broker with deny-by-default rules |
| Secret access | Shared host env vars | Brokered short-lived credentials |
| Auditing | Best-effort logs | Immutable action logs with actor and tool context |
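The "safer pattern" column can be made concrete at the registry level. Here is a minimal sketch of a capability-scoped registry entry — the shape and field names (`capabilities`, `environments`, `checksum`) are illustrative, not part of the MCP spec:

```typescript
// Sketch of a registry entry that scopes a server before the host ever loads it.
// Field names are illustrative, not MCP spec.
type ApprovedServer = {
  id: string;
  url: string;
  checksum: string;                    // pinned artifact hash
  capabilities: string[];              // least-privilege grants, e.g. "repo.read"
  environments: ("dev" | "prod")[];    // where this server may run at all
};

const gitHelper: ApprovedServer = {
  id: "git-helper",
  url: "https://tools.example.internal/git-helper",
  checksum: "sha256-deadbeef",
  capabilities: ["repo.read"],         // read-only by default
  environments: ["dev"],
};

// Deny-by-default lookup: a capability exists only if explicitly listed
// for both the capability and the environment.
function hasCapability(server: ApprovedServer, cap: string, env: string): boolean {
  return (
    server.environments.includes(env as "dev" | "prod") &&
    server.capabilities.includes(cap)
  );
}
```

Anything not in the entry simply does not exist as far as the host is concerned, which is the point.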
The three pillars of production MCP security:

1. Control what tools can be introduced.
2. Control what capabilities they get.
3. Control what the client is allowed to do on behalf of the model.

If you miss the first pillar, the rest gets harder fast.
On March 12, 2026, SurePath AI released real-time MCP policy controls aimed at governing AI interactions, including read-only enforcement, blocking unauthorized tools, and catch-all actions. No vendor feature is magic, but this release matters because it addresses the exact control plane many teams are missing.
A policy layer is useful when it can do four things well:

1. Enforce read-only modes where writes are not needed.
2. Block unauthorized tools and servers outright.
3. Apply catch-all, deny-by-default rules to anything it does not recognize.
4. Fail closed when the policy engine itself errors or is unavailable.

That last point is not glamorous, but it is the difference between "we intended to lock it down" and "it actually fails closed."
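To make fail-closed behavior concrete, here is a minimal sketch of a deny-by-default policy check — the `Rule` shape and names are illustrative, not any vendor's API:

```typescript
// Minimal deny-by-default policy engine sketch. Rule shape is illustrative.
type Rule = { toolName: string; capability: string; allow: boolean };

class PolicyEngine {
  constructor(private rules: Rule[]) {}

  // Fail closed: anything not explicitly allowed is denied, and an
  // evaluation error counts as a denial, never an allow.
  isAllowed(toolName: string, capability: string): boolean {
    try {
      const rule = this.rules.find(
        (r) => r.toolName === toolName && r.capability === capability
      );
      return rule?.allow === true;
    } catch {
      return false; // any internal failure reads as "deny"
    }
  }
}

const engine = new PolicyEngine([
  { toolName: "git-helper", capability: "repo.read", allow: true },
]);
```

Note the asymmetry: allows require an exact matching rule, while denials require nothing at all. That asymmetry is what "fails closed" means in practice.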
If you are evaluating gateways and enforcement layers, Enterprise MCP Gateway Implementation Guide for 2026 is the more architectural companion to this article. The narrower point here is that MCP policy controls are no longer optional once your agent touches real systems.
OWASP has spent years hammering the same lesson across web and API security: insecure defaults and broken access control remain among the most common causes of serious exposure. The current MCP server security problem is that same old story wearing new clothes.
TL;DR: Disable auto-install paths, lock down deeplinks, pin dependencies, enforce per-tool policy, and make your MCP host fail closed.
If your MCP client or host supports auto-install or opportunistic registration, disable it unless you can guarantee authenticated provenance.
```typescript
// bad: dynamic registration from untrusted input
await host.registerServer({
  name: userInput.name,
  url: userInput.url
});

// better: registry-backed install with explicit approval
const approved = approvedRegistry.findById(request.serverId);
if (!approved) throw new Error("Server not approved");
if (!currentUser.isAdmin) throw new Error("Approval required");
await host.registerServer({
  name: approved.name,
  url: approved.url,
  checksum: approved.checksum,
  signature: approved.signature
});
```

Do not let a deeplink mutate trust state without explicit confirmation.
```typescript
const allowedSchemes = new Set(["mcpapp"]);
const allowedActions = new Set(["open", "preview"]);

function handleDeepLink(raw: string) {
  const url = new URL(raw);
  if (!allowedSchemes.has(url.protocol.replace(":", ""))) {
    throw new Error("Unsupported deeplink scheme");
  }
  const action = url.searchParams.get("action");
  if (!action || !allowedActions.has(action)) {
    throw new Error("Unsupported deeplink action");
  }
  // defense in depth: block sensitive actions even if the allowlist changes
  if (["install", "grant", "connect"].includes(action)) {
    throw new Error("Sensitive actions cannot be deeplinked");
  }
  return openPreview(url);
}
```

Do not let the model call tools directly if the host can mediate capabilities.
```typescript
type Capability = "repo.read" | "issues.read" | "issues.write";

async function invokeTool(toolName: string, capability: Capability, input: unknown) {
  if (!policyEngine.isAllowed({
    actor: "model",
    toolName,
    capability,
    env: process.env.NODE_ENV || "dev"
  })) {
    throw new Error(`Blocked by policy: ${toolName}:${capability}`);
  }
  return toolRuntime.invoke(toolName, input);
}
```

If you are using npm:
```json
{
  "dependencies": {
    "@modelcontextprotocol/sdk": "1.12.3"
  },
  "overrides": {
    "some-transitive-package": "2.4.1"
  }
}
```

Then add automated checks in CI:
```bash
npm ci
npm audit --production
npx @cyclonedx/cyclonedx-npm --output-file sbom.json
```

Also add dependency review gates in pull requests and signed build provenance where your platform supports it.
Many AI agent security failures are really secret handling failures. If the host process has broad environment access and tools inherit it, a compromised MCP server has already won. Use short-lived tokens and pass only what a tool needs. Nothing else.
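One way to enforce "only what a tool needs" is a small credential broker that mints short-lived, tool-scoped tokens instead of handing tools the host's environment. This is a sketch; `mintToken`, its stand-in token format, and the TTL policy are hypothetical, not a specific product's API — in production the mint step would call a real secrets manager or STS:

```typescript
// Sketch of a credential broker: tools never see long-lived secrets.
// Token format and TTL are illustrative stand-ins for a real STS call.
type ScopedToken = { value: string; scope: string; expiresAt: number };

function mintToken(scope: string, ttlMs = 5 * 60 * 1000): ScopedToken {
  return {
    value: `tok_${Math.random().toString(36).slice(2)}`, // stand-in secret
    scope,                                               // e.g. "issues.read"
    expiresAt: Date.now() + ttlMs,                       // short-lived by default
  };
}

// The runtime checks scope and expiry on every use, not just at issuance.
function isValid(token: ScopedToken, requiredScope: string): boolean {
  return token.scope === requiredScope && Date.now() < token.expiresAt;
}

// A tool asking for "issues.read" gets a token scoped to exactly that.
const token = mintToken("issues.read");
```

A compromised tool now holds a credential that expires in minutes and opens exactly one door, instead of the host's entire environment.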
If you need a reminder of how ugly secret exposure gets, the Moltbook Breach: 150K API Keys Leaked by Missing RLS article is a good parallel: one trust boundary mistake can cascade far beyond the original bug.
You want to know when the model tried to install an unapproved server, invoke a write-capable tool from a read-only session, or follow a blocked deeplink. Those denied events are your early warning system.
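Capturing those denied events can be as simple as a structured, append-only log. The event shape below is illustrative; in production these records should ship to immutable storage rather than sit in process memory:

```typescript
// Sketch: structured audit events that record denials as well as allows.
// Event shape is illustrative; ship to append-only storage in practice.
type AuditEvent = {
  ts: string;
  actor: "model" | "user";
  tool: string;
  action: string;
  decision: "allow" | "deny";
  reason?: string;
};

const auditLog: AuditEvent[] = [];

function record(event: AuditEvent): void {
  auditLog.push(Object.freeze(event)); // freeze to resist in-process tampering
}

// A blocked install attempt is exactly the signal you want to see early.
record({
  ts: new Date().toISOString(),
  actor: "model",
  tool: "installer",
  action: "registerServer",
  decision: "deny",
  reason: "server not in approved registry",
});

const denied = auditLog.filter((e) => e.decision === "deny");
```

Alert on spikes in `decision: "deny"` per actor and tool; a burst of denials is usually either a broken integration or someone probing your perimeter.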
TL;DR: Use your AI coding assistant to generate defensive wrappers, not raw MCP integrations.
Paste this into your coding tool:
I have an MCP host in TypeScript. Refactor my integration so that: (1) server registration only works from an approved registry, (2) all tool calls go through a deny-by-default policy engine, (3) deeplinks cannot trigger install, grant, or connect actions, (4) secrets are replaced with short-lived brokered credentials, and (5) structured audit logs capture both allowed and denied actions. Generate tests for malicious deeplinks, unauthorized installs, and privilege escalation attempts.
That prompt is useful because it asks for control points and tests, not just code generation. If your assistant only produces happy-path integration code, push it harder.
TL;DR: Before tomorrow, find the one place your MCP host can silently expand trust and remove that path.
Do this before you close your laptop:

1. Search your MCP host and client code for every entry point that can change trust state: install, register, connect, deeplink, and openExternal.
2. Confirm each of those paths requires authenticated provenance and explicit approval.
3. Kill any path that can expand trust silently.

If you do nothing else, do that.
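A quick way to start that audit is a recursive grep over those entry points — a sketch that assumes your host code lives under `./src`; adjust the path and file globs for your repo:

```shell
# Sketch: list every code path that can expand MCP trust.
# Assumes sources live under ./src — adjust for your repo layout.
grep -rnE 'install|register|connect|deeplink|openExternal' ./src \
  --include='*.ts' --include='*.js' \
  || echo "no matches found (or ./src does not exist)"
```

Every hit is a line where trust can change hands; each one should end at an approval check, not a silent default.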
Disable any unauthenticated or automatic MCP server installation path first. If a malicious server can get registered, the attacker may not need an exploit in your business logic at all. After that, lock down deeplinks and force all tool calls through a deny-by-default policy layer.
CVE-2026-23744 shows that the install and registration workflow is part of your attack surface, not just the running server. In practice, it means you need authenticated provenance, approved registries, and explicit review before a host trusts a new server. Treat server onboarding like installing a privileged plugin.
No. MCP policy controls are necessary but not sufficient. You still need dependency pinning, deeplink validation, secret isolation, audit logging, and least-privilege capabilities because some vulnerabilities land before a policy decision ever gets made.
A secure MCP host should treat deeplinks as hostile input. Allow only known schemes and low-risk actions, require explicit confirmation for anything sensitive, and completely block install or permission-grant flows from deeplinks. Every attempt should be logged.
Pin exact versions, review transitive dependencies, generate an SBOM, and gate updates through CI rather than floating on broad semver ranges. Also isolate tool credentials from the host runtime so a compromised dependency cannot immediately access everything. Think supply chain first, not last.
The real lesson from these 2026 MCP vulnerabilities is not that developers should avoid MCP. It is that we need to stop treating agent tooling like harmless glue code. It is a privilege boundary. It deserves the same suspicion you already bring to browser extensions, CI runners, and production credentials.
Lock down how servers are introduced. Lock down what tools can do. Log every denied move. And do not let deeplinks or auto-install paths make security decisions for you.
Come back tomorrow for the next lesson, and share this with someone who is wiring up MCP right now. You've got this. See you tomorrow.