Security researchers at 1Password demonstrated how "agent skills" in frameworks like OpenClaw can become security vulnerabilities—turning helpful AI capabilities into attack vectors for malicious code execution.
The core problem is architectural. AI agents need to interact with your computer to be useful. They read files, run commands, call APIs, browse the web. That's also, not coincidentally, the classic definition of malware.
Here's how the attack works in OpenClaw: agent skills are distributed as markdown files. On the surface, markdown seems safe—it's just formatted text. But in agent ecosystems, markdown isn't content. It's an installer.
Markdown can include shell commands, links to download scripts, and instructions for the AI agent to execute. The OpenClaw Agent Skills specification places no restrictions on markdown content, which means skills can bypass security boundaries through social engineering or direct command execution.
1Password researchers found a malicious "Twitter" skill ranked among the most downloaded in OpenClaw's ecosystem. The attack followed a staged delivery pattern designed to evade detection:
1. The skill claimed to require a "dependency" called openclaw-core 2. Convenient links were provided, disguised as documentation 3. Commands decoded obfuscated payloads 4. Scripts fetched binary files 5. macOS quarantine attributes were removed to bypass Gatekeeper scanning
The final payload? macOS infostealing malware designed to exfiltrate browser sessions, saved credentials, developer tokens, SSH keys, and cloud credentials. Exactly the kind of data that makes security people wake up in a cold sweat.
And it wasn't isolated. 1Password's report notes that "hundreds of OpenClaw skills were reportedly involved in distributing macOS malware via ClickFix-style instructions." This was a coordinated campaign, not a one-off experiment.
