
๐ค Ghostwritten by GPT 5.4 ยท Fact-checked & edited by Claude Opus 4.6 ยท Curated by Tom Hundley
ClawHavoc is the kind of mess that turns a fun weekend AI project into a security incident. The short version: attackers slipped 1,184 malicious skills into ClawHub, the package registry tied to OpenClaw, and Antiy CERT reported that 135,000 exposed instances were affected. If you are a vibe coder using AI agents, this matters because a "skill" is often just code you install because it sounds useful. If that code is poisoned, your app can start leaking data, running unwanted actions, or opening doors you did not know existed.
This is why supply chain attack stories hit so hard. You did not write the bad code. You just trusted a package, extension, or skill that looked normal. Antiy CERT also disclosed nine CVEs, three of which already had public exploits. That is not a theoretical risk. That is active danger.
Here is the plain-English takeaway: if your AI tool can add skills, plugins, packages, or extensions, treat those like strangers asking for keys to your house. Below I will show you what AI agent skills are, how this attack worked, why one in five malicious packages is catastrophic, and what to do right now.
TL;DR: ClawHavoc was a supply chain attack where malicious AI skills were mixed into a trusted package ecosystem, so routine installs could become compromises.
An AI agent is software that can do tasks for you: read files, send messages, search data, update records, or trigger other tools. A skill is an add-on that gives that agent a new ability. Think of the agent as a handyman and the skills as tools in the toolbox.
That sounds convenient โ until you realize a bad tool can be rigged.
In the ClawHavoc case, the registry itself became unsafe. According to Antiy CERT, 1,184 malicious skills were confirmed in ClawHub, with 135,000 exposed instances. Broader research on more than 30,000 AI agent extensions found that roughly 25% had vulnerabilities. When one in four or five packages in a registry is compromised, trust collapses.
Here is the important distinction: this was not just "someone forgot a password." This was an AI package security problem. The attack lived in the place users go to get new capabilities. That is why supply chain attacks are so nasty โ they hide upstream, before the code even reaches you.
If that sounds familiar, it should. We have seen similar patterns in package ecosystems, browser extensions, and automation pipelines. For another example of how third-party building blocks can betray you, read GitHub Actions Security After a Supply Chain Attack.
Normal app features usually stay inside your app. AI skills often touch everything:
That makes AI skills security more serious than people assume. A bad note-taking plugin is annoying. A bad AI skill with access to your workspace can be disastrous.
TL;DR: Attackers abused trust in the package source, so users installed dangerous skills thinking they were normal upgrades or useful add-ons.
A supply chain attack works like this: instead of robbing every house one by one, criminals poison the lock factory. Everyone who buys a new lock gets a hidden weakness.
That is the right mental model for ClawHavoc.
The attackers did not need to personally trick each user in a one-on-one scam. They put malicious skills where users already expected to find trusted ones. Some skills likely looked helpful, routine, or even boring. That is the trick. Malware rarely introduces itself as malware.
Once installed, a malicious AI skill can:
And here is the part vibe coders need to hear clearly: if your AI tool says "install this dependency" and you click yes without checking, you are outsourcing trust to a machine that does not suffer the consequences.
If a registry has a tiny number of bad packages, careful users might dodge them. But when roughly one in five packages is malicious, safe browsing behavior stops working. Discovery itself becomes dangerous. Search results become dangerous. Recommendations become dangerous.
| Situation | Risk Level | Why It Matters |
|---|---|---|
| A few isolated bad packages | High | You might avoid them with careful review |
| Widespread vulnerable packages | Very high | Even well-meaning installs can expose you |
| One in five packages malicious | Catastrophic | The registry can no longer be treated as broadly trustworthy |
That is why this incident deserves the label catastrophic. The ecosystem trust model breaks.
If you are specifically running OpenClaw, also read OpenClaw v2026.3.11 Security Fix Guide. That article focuses on fixes. This one is about understanding the danger pattern so you stop repeating it with the next tool.
TL;DR: If you build with AI tools, you are already making supply chain trust decisions โ even if you have never heard that term before.
You do not need to be a programmer to get burned by this. Vibe coders are often more exposed because modern tools make installation feel casual. Click a button, add a template, connect a tool, done. That smooth experience hides the fact that you may have just given a stranger access to your workshop.
Here are the biggest danger signs:
That is the most obvious clue. OpenClaw is one example, but the lesson is broader. If your AI builder adds capabilities from a shared marketplace or registry, you need AI package security habits.
If a skill can touch documents, customer lists, internal notes, or billing tools, the possible damage goes way up.
This is incredibly common. People add things during experimentation and forget them. Months later, those extras are still sitting there with full access.
This same "I didn't know that was exposed" problem shows up elsewhere. That is exactly why articles like Millions of Servers Still Expose .git Folders โ Here's How to Check Yours matter. Security failures often start with forgotten leftovers.
Every installed AI skill is pre-approved code execution.
Read that again. If you install it, you are trusting it to act inside your environment. Vibe coder AI safety is not about paranoia. It is about basic home-locking behavior.
TL;DR: Find out whether you use AI skills at all, list what is installed, remove what you do not recognize, and update anything tied to OpenClaw immediately.
You do not need fancy tools for a first pass. Just be methodical.
Open the tool where you built your app. Look for words like:
Click each settings area slowly. If you see installed add-ons, take screenshots. Make a list in a plain document.
Look in:
If you find OpenClaw, stop adding anything new until you confirm you are on a fixed version. The safest next read is OpenClaw v2026.3.11 Security Fix Guide.
For each skill or package, ask:
If you cannot answer those questions, remove it.
If your platform lets you choose permissions, pick the smallest access possible. A calendar helper should not also read all files. A writing tool should not be able to trigger payments.
That throwaway prototype from two months ago might still contain risky add-ons. Check your old projects too.
Paste this into your AI coding tool:
Review this project for third-party skills, plugins, packages, and agent tools. Make a plain-English inventory table with these columns: name, purpose, where it came from, what data it can access, what actions it can perform, whether it is still needed, and my recommendation to keep, restrict, update, or remove it. If you are unsure about any item, say "I am not sure" instead of guessing.
Before tomorrow, make a list of every add-on, skill, or extension in one active project and remove at least one you do not fully understand.
You do not need to become a security engineer overnight. You just need to stop treating package installs like free candy.
You've got this. See you tomorrow.
A supply chain attack means the bad stuff gets inserted earlier โ where you get your tools โ instead of attacking you directly. In ClawHavoc, malicious AI skills were mixed into a trusted package source, so normal users could install dangerous code by accident. The analogy: someone tampers with ingredients at the factory, not at your kitchen table.
A skill is an add-on that gives an AI agent a new ability, like reading files, sending messages, or connecting to another service. Think of it like installing a new attachment on a power tool: useful when safe, dangerous when untrusted. The key difference from a normal app plugin is that AI skills often get broad access to your data and actions.
Check your project settings, integrations page, marketplace, or installed extensions view. If your builder mentions skills, agents, plugins, packages, or tool use, you are in the zone where AI skills security matters โ even if you are not using OpenClaw specifically.
A blanket wipe is not always necessary, but unknown or unused skills should go immediately. Keep only the ones you can identify, explain, and limit to the smallest permissions possible. When in doubt, remove first and re-add later after verifying the source.
Add as few third-party skills as possible, one at a time, and document what each one can access before enabling it. Prefer skills from verified publishers with public source code. Convenience is not a security strategy.
The biggest lesson from ClawHavoc is brutally simple: the dangerous part of modern AI building is not always the model. Sometimes it is the add-on you installed without thinking. Supply chain attack stories keep repeating because convenience keeps beating caution.
If you remember one line from this article, make it this: every AI skill you install is a trust decision with consequences. Slow down, check what is installed, and clean house before your app does something you never intended.
Share this with someone who needs it, and come back tomorrow for the next lesson.
Discover more content: