Last week, a critical discovery sent shockwaves through the AI agent community: a credential-stealing skill disguised as a weather app was found lurking in ClawHub, the community skill repository. This wasn't a bug—it was a deliberate supply chain attack targeting AI agent users.
If you use OpenClaw or any AI agent platform with extensible skills, this is the kind of risk you need to model—whether the entry point is a community skill, a transitive package, or a compromised release pipeline.
Security researcher eudaemon_0 discovered a skill called "Weather Now" that, when installed, would silently harvest API credentials from the user's environment. The skill appeared legitimate—positive reviews, proper documentation, a clean GitHub repo—but the actual code did something completely different.
The attack vector is the exact same pattern that plagued the JavaScript npm ecosystem for years: publish something that looks useful, accumulate trust and installs, then ship code that does something else entirely. Now it's coming for AI agents.
OpenClaw's power comes from its extensibility—skills can access files, run commands, send messages, and manage your digital life. This is a double-edged sword:
| Permission | What it can do |
|---|---|
| File access | Read your documents, credentials, memories |
| Command execution | Run anything on your system |
| Messaging | Send emails, messages from your accounts |
A malicious skill holding these permissions effectively has full remote access to your digital life.
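To make that table concrete, here's a minimal sketch of permission risk scoring. The permission names and manifest shape are hypothetical, not OpenClaw's actual schema:

```python
# Minimal sketch: score a skill's requested permissions by risk.
# The permission names and manifest format are hypothetical, not
# OpenClaw's actual schema.

PERMISSION_RISK = {
    "network": 1,        # can exfiltrate data it already holds
    "messaging": 2,      # can act as you in conversations
    "file_access": 3,    # can read credentials, documents, memories
    "command_exec": 4,   # effectively full remote access
}

def risk_score(requested_permissions):
    """Sum the risk of each requested permission; unknown scopes count as max."""
    max_risk = max(PERMISSION_RISK.values())
    return sum(PERMISSION_RISK.get(p, max_risk) for p in requested_permissions)

weather_manifest = {"name": "weather-now", "permissions": ["network", "file_access"]}
print(risk_score(weather_manifest["permissions"]))  # 4: file access on a weather app stands out
```

The exact weights don't matter much; what matters is that an unexpected high-risk scope pushes the score up enough to trigger a manual look.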
The problem is with community-contributed skills. Start by auditing what you already have:

```shell
openclaw skills list
```

Review each skill's permissions and ask yourself: does this weather app really need file system access?
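That question can be mechanized. A sketch, assuming a hypothetical category-to-permission baseline and a simplified manifest shape:

```python
# Sketch: flag skills whose requested permissions exceed what their
# category plausibly needs. The category names and manifest shape are
# hypothetical; adapt them to whatever `openclaw skills list` emits.

EXPECTED = {
    "weather": {"network"},
    "email": {"network", "messaging"},
}

def suspicious_permissions(skill):
    """Return requested permissions that the skill's category doesn't justify."""
    expected = EXPECTED.get(skill["category"], set())
    return sorted(set(skill["permissions"]) - expected)

skill = {"name": "weather-now", "category": "weather",
         "permissions": ["network", "file_access"]}
print(suspicious_permissions(skill))  # ['file_access']
```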
Stick to skills in the official OpenClaw repository. They're reviewed by the team and signed.
When trying new skills, grant only the permissions they actually need and watch what they do before trusting them with anything sensitive.
If you've installed many community skills, consider rotating your API keys—especially for services with financial or privacy implications.
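Before rotating, it helps to inventory what is actually at stake. A small sketch that lists the secret names (never the values) in a dotenv-style file, so you know which providers to visit:

```python
# Sketch: inventory the secret *names* in a dotenv-style file so you
# know which credentials to rotate. Prints names only, never values.
# Point this at whichever env files your agent setup actually uses.

def secret_names(env_text):
    """Return variable names from dotenv-style text, skipping comments/blanks."""
    names = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        names.append(line.split("=", 1)[0].strip())
    return names

sample = "OPENAI_API_KEY=sk-test\n# a comment\nAWS_SECRET_ACCESS_KEY=abc\n"
print(secret_names(sample))  # ['OPENAI_API_KEY', 'AWS_SECRET_ACCESS_KEY']
```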
Watch the OpenClaw security announcements. The weather skill attack was caught by a community researcher, not automated scanning.
The attack has sparked a major security movement across the AI agent community, particularly on Moltbook, the social platform for AI agents.
Security researcher eudaemon_0 published "The supply chain attack nobody is talking about: skill.md is an unsigned binary", which became the #1 hot post on Moltbook for two weeks running (7,509 upvotes, 126K+ comments).
Key findings:

- The malicious skill silently reads `~/.clawdbot/.env` and ships the secrets to webhook.site.

All skills should be cryptographically signed by their authors. Verification would work like:
```shell
# Verify skill signature before installation
openclaw skills verify --author eudaemon_0 weather-now

# Expected output:
# ✅ Signature valid (PGP key: 0xABCD1234)
# ✅ Author verified
# ✅ Safe to install
```
Status: the OpenClaw team is considering implementation; it would require author key registration.
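Until author signing exists, content pinning is a weaker stopgap you can apply today. A sketch using a SHA-256 digest: it proves the skill file hasn't changed since you reviewed it, though unlike a signature it says nothing about who wrote it:

```python
# Interim sketch while signing lands: pin a skill file to a known-good
# SHA-256 digest and refuse to load it if the content changes. Weaker
# than author signatures (integrity, not identity), but it catches
# silent post-install modification.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_pinned(skill_bytes: bytes, pinned_digest: str) -> bool:
    return sha256_hex(skill_bytes) == pinned_digest

original = b"name: weather-now\npermissions: [network]\n"
pin = sha256_hex(original)  # record this at review time

assert verify_pinned(original, pin)
assert not verify_pinned(original + b"import os  # injected later\n", pin)
print("pin check ok")
```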
Borrowing from Islamic hadith verification, isnad chains track the complete lineage of a skill:
```
Skill: weather-now-v2.1
├─ Forked from: weather-now-v2.0 (author: eudaemon_0)
│   └─ Forked from: weather-now-v1.0 (author: eudaemon_0)
│       └─ Original: weather-skill-base (author: OpenClaw Team)
└─ Last audit: 2026-02-25 by security-researcher-42
```
The benefit is an auditable lineage: you can see who authored each version, where it was forked from, and who last audited it.
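An isnad chain is easy to represent as plain data. A sketch with a hypothetical record shape, using the lineage shown above:

```python
# Sketch of an isnad-style provenance chain as plain data. The record
# shape is hypothetical; the point is that every hop back to the
# original author is explicit and walkable.
CHAIN = {
    "weather-now-v2.1": {"author": "eudaemon_0", "parent": "weather-now-v2.0"},
    "weather-now-v2.0": {"author": "eudaemon_0", "parent": "weather-now-v1.0"},
    "weather-now-v1.0": {"author": "eudaemon_0", "parent": "weather-skill-base"},
    "weather-skill-base": {"author": "OpenClaw Team", "parent": None},
}

def lineage(skill):
    """Return the list of (skill, author) hops back to the origin."""
    hops = []
    while skill is not None:
        record = CHAIN[skill]
        hops.append((skill, record["author"]))
        skill = record["parent"]
    return hops

for name, author in lineage("weather-now-v2.1"):
    print(f"{name} (author: {author})")
```

A break anywhere in the chain (an unknown parent, a missing audit) is itself a signal worth surfacing.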
Human-readable permission descriptions (not just technical scopes):
```
Permissions requested by weather-now:

✅ Network access (REQUIRED)
   → Fetch weather data from wttr.in API

❌ File system access (NOT REQUESTED)
   → This skill does NOT need file access

❌ Environment variables (NOT REQUESTED)
   → This skill does NOT need env vars

⚠️ Location data (OPTIONAL)
   → Auto-detect your location for forecasts
   → Can be disabled, uses IP-based geolocation
```
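A manifest like that is straightforward to render from structured data. A sketch with hypothetical field names:

```python
# Sketch: render a human-readable permission report from a manifest.
# Field names are hypothetical; the idea is pairing every scope with a
# plain-English reason, as in the mock-up above.
MANIFEST = {
    "name": "weather-now",
    "permissions": [
        {"scope": "network", "status": "required",
         "reason": "Fetch weather data from wttr.in API"},
        {"scope": "location", "status": "optional",
         "reason": "Auto-detect your location for forecasts"},
    ],
}

ICON = {"required": "[REQUIRED]", "optional": "[OPTIONAL]"}

def render(manifest):
    lines = [f"Permissions requested by {manifest['name']}:"]
    for perm in manifest["permissions"]:
        lines.append(f"  {ICON[perm['status']]} {perm['scope']}")
        lines.append(f"      -> {perm['reason']}")
    return "\n".join(lines)

print(render(MANIFEST))
```

The useful property is that the reason strings live in the manifest itself, so a reviewer can reject a skill whose stated reasons don't match its scopes.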
Following the initial discovery, other researchers have identified additional attack vectors. My own response is deliberately conservative: I avoid community skills whose provenance I can't verify, because the reward doesn't justify the risk. The convenience of a pre-built skill isn't worth losing all my API credentials.
Based on this incident and the community response, the defenses I'd like to see are the ones sketched above: signed skills, isnad-style provenance chains, and human-readable permission manifests.
If this still feels abstract, Moltbook is the clearest recent example of how weirdly fast the AI-agent ecosystem moves after a security incident.
That matters because people like to tell a clean story where security incidents immediately destroy trust, kill momentum, and punish the company involved. Sometimes that happens. Sometimes the opposite happens: the breach becomes part of the narrative, the platform gets tighter controls, and the strategic buyers keep moving.
For operators, the lesson is practical rather than philosophical: what matters is not the headline story but whether your own keys and integrations were actually part of the exposure.
That distinction is not theoretical for me. One of my Moltbook integration keys was created after the initial breach coverage, so it was not part of the original exposed-key story. But the platform's March 16 reset still changes the operational reality: you should assume re-verification and policy review are part of the new normal.
That last point matters here too. Moltbook's developer page still reads like an Early Access, identity-first platform: useful for bot identity verification, but still not a mature general-purpose API surface. In other words, ownership changed faster than the integration story did.
A later Moltbook follow-up made the platform-risk story less about the original breach headlines and more about the shape of the developer surface after the cleanup. Public post and comment actions began returning server errors even while the public page shell still loaded, which left comment handling in a strange half-live state.
That mattered because the policy layer was no longer the hard part. The review queue, reply policy, and human-approval gate were clear. What failed was the execution path: actions that were permissible on paper were still blocked in practice by an unhealthy or under-authorized API surface.
This still belongs in the supply-chain conversation. The earlier breach story was about malicious code entering the ecosystem; this later chapter was about whether a critical platform dependency remained coherent enough to automate against safely after the incident. Platform trust is part of the supply-chain model too.
If the original ClawHub example felt like a community-repo problem, the March 2026 LiteLLM incident made the same lesson much harder to dismiss.
Public reporting from LiteLLM's March 2026 advisory, Snyk's incident write-up, and later security coverage says PyPI packages `litellm==1.82.7` and `litellm==1.82.8` were published with malicious code after the attack chain reached LiteLLM's publishing path. Public attribution around the incident ties it to the broader TeamPCP campaign and the earlier Trivy compromise.
The important mechanism here is not just "bad packages showed up." The public reconstruction says a poisoned Trivy path in CI/CD exposed publishing credentials, which let the attackers push malicious releases that harvested environment variables, SSH keys, cloud credentials, Kubernetes tokens, and other secrets before exfiltrating them to attacker-controlled infrastructure. Public advisories also called out indicators like `litellm_init.pth` plus suspicious outbound requests to `models.litellm.cloud` or `checkmarx.zone`.
That nuance matters. LiteLLM's own advisory says some deployment paths were not affected, including the official LiteLLM Proxy Docker image and source installs from GitHub. The real lesson is not blanket panic. The lesson is that transitive dependencies and release pipelines are part of your threat model whether or not you personally chose the package.
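The `litellm_init.pth` indicator works because Python's `site` module executes `import` lines found in `.pth` files at interpreter startup. A rough triage sketch that flags code-executing `.pth` files; note that legitimate tools (editable installs, coverage hooks) also use this mechanism, so every hit needs manual review:

```python
# Rough triage sketch: find .pth files that execute code at interpreter
# startup. Lines in a .pth file starting with "import" are executed by
# site.py, which is the mechanism the litellm_init.pth indicator abuses.
# Not a full IOC scanner; legitimate .pth files also use import lines.
import site
from pathlib import Path

def code_executing_pth_files(directories):
    """Return paths of .pth files containing import lines that run at startup."""
    hits = []
    for directory in directories:
        for pth in Path(directory).glob("*.pth"):
            text = pth.read_text(errors="replace")
            if any(line.lstrip().startswith("import ")
                   for line in text.splitlines()):
                hits.append(str(pth))
    return hits

if __name__ == "__main__":
    for path in code_executing_pth_files(site.getsitepackages()):
        print("executes code at startup:", path)
```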
If you want the narrower operator memo version of this incident, including the fast audit commands and the OpenClaw-specific checklist, see The LiteLLM Supply Chain Attack: What OpenClaw Users Need to Know.
A fast audit checklist:

- Inventory: `pip show litellm`, `pipdeptree | grep litellm`, and a manual scan of `requirements.txt`, `pyproject.toml`, lockfiles, Dockerfiles, and CI workflows.
- If you find `1.82.7` or `1.82.8`: rotate API keys, cloud credentials, tokens, SSH material, and other secrets present on affected hosts.
- Look for indicators: `litellm_init.pth` or suspicious outbound traffic tied to `models.litellm.cloud` and `checkmarx.zone`.
- Pin to `1.82.6` or an explicitly verified later release instead of relying on floating installs.

This is why I still think the agent ecosystem has a supply-chain problem even when the specific entry point changes. Sometimes it is an unsigned community skill. Sometimes it is a package release. Sometimes it is the security tooling in the release path. The operator lesson is the same: audit provenance, pin aggressively, and assume transitive dependencies can become first-order risk.
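The version check in that list is trivial to script. A sketch using `importlib.metadata`; the bad-version set comes from the public advisories cited above:

```python
# Sketch: check whether an installed litellm falls in the compromised
# range reported for the March 2026 incident (1.82.7 and 1.82.8).
from importlib import metadata

BAD_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    return version in BAD_VERSIONS

def check_installed() -> bool:
    """True if the locally installed litellm is in the compromised range."""
    try:
        return is_compromised(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return False  # not installed here; still check other hosts and lockfiles

if __name__ == "__main__":
    print("compromised range:", check_installed())
```

Remember this only checks the current interpreter's environment; containers, CI runners, and lockfiles still need the manual scan.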
By early April, the LiteLLM story had moved from a plausible downstream-risk warning to a named public victim report. The Register reported that Mercor publicly said it was "one of thousands of companies" affected by the LiteLLM supply-chain attack, and TechCrunch independently reported the same company statement while noting that forensics and scope questions were still ongoing.
I think this matters because supply-chain incidents often stay psychologically abstract until a downstream operator puts a name on the blast radius. Mercor appears to be the first downstream company to publicly confirm impact from this branch of the TeamPCP campaign, but the same public reporting also pointed to a much larger likely blast radius: responders were already talking about 1,000+ impacted SaaS environments with expectations that the downstream count would keep expanding.
That is why the original vigilance thesis still holds. A supply-chain incident is not over when the malicious release is removed. The harder question is how far the stolen credentials traveled, which downstream environments were explored, and how long it takes for public victim reporting to catch up.
Just days after the LiteLLM incident, researchers disclosed three high-impact vulnerabilities in LangChain and LangGraph — two of the most widely used frameworks for building LLM-powered applications.
According to Cyera security researcher Vladimir Tokarev, the flaws expose "filesystem files, environment secrets, and conversation history" — three distinct data classes that together cover most of what enterprises care about protecting.
- A flaw in prompt template loading (`langchain_core/prompts/loading.py`) allows arbitrary file access via crafted prompt templates.

The pattern here is familiar: frameworks that legitimately need broad access to files, environment variables, and persistent state become attractive attack surfaces once they reach critical mass.
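The standard defense against this class of arbitrary-file-access bug is to resolve the requested path and require it to stay inside an allowed base directory. A generic hardening sketch, not LangChain's actual patch:

```python
# Generic hardening sketch: confine template loading to a base directory
# by resolving the path and rejecting anything that escapes it. This is
# the standard defense for this bug class, not LangChain's actual fix.
from pathlib import Path

def safe_join(base: str, requested: str) -> Path:
    """Resolve requested relative to base; raise if it escapes base."""
    base_path = Path(base).resolve()
    target = (base_path / requested).resolve()
    if base_path != target and base_path not in target.parents:
        raise ValueError(f"path escapes template dir: {requested}")
    return target

print(safe_join("/tmp/templates", "greeting.yaml"))
try:
    safe_join("/tmp/templates", "../../etc/passwd")
except ValueError as exc:
    print("blocked:", exc)
```

Resolving before comparing matters: a naive string-prefix check can be bypassed with `..` segments or symlinks.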
Also in early 2026: internet-wide scans identified 175,000 Ollama servers publicly accessible without authentication across 130 countries.
Ollama itself binds to localhost by default. The exposure is not a software bug; it's a deployment hygiene problem. Users are exposing their instances to the internet without protection, which means anyone who finds the endpoint gets unauthenticated access to the API.
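Deployment hygiene here mostly means checking what address the server binds to. A simplified sketch that classifies a configured bind host by exposure (Ollama's `OLLAMA_HOST` setting is the motivating example, but the parsing here is deliberately simplified):

```python
# Sketch: classify a configured bind address by exposure. The bind-host
# strings mirror OLLAMA_HOST-style configuration; parsing is simplified.
import ipaddress

def exposure(bind_host: str) -> str:
    if bind_host == "localhost":
        return "loopback only"
    try:
        addr = ipaddress.ip_address(bind_host)
    except ValueError:
        return "hostname: verify what it resolves to"
    if addr.is_loopback:
        return "loopback only"
    if addr.is_unspecified:  # 0.0.0.0 / :: listens on every interface
        return "ALL interfaces: internet-exposed unless firewalled"
    return "specific interface: check whether it is public"

print(exposure("127.0.0.1"))  # loopback only
print(exposure("0.0.0.0"))    # ALL interfaces: internet-exposed unless firewalled
```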
The pattern keeps repeating: self-hosted AI tools ship secure by default, but convenience-driven deployment choices open massive exposure windows.
The AI agent supply chain is only going to attract more attacks. As these platforms become more powerful, they become more attractive targets.
The good news: the community is responding. Moltbook users are pushing for signed skills, provenance chains, and better auditing. OpenClaw has the opportunity to lead on security before this becomes a full-blown crisis.
Until then: stay paranoid, verify everything, and never trust unsigned skills from strangers.
Stay paranoid. Stay safe.