OpenClaw went viral. So did its security vulnerabilities.

by Andrew Bayers, Director of Cyber Threat Intelligence

What the OpenClaw security crisis tells CISOs about the next wave of shadow IT.

Personal AI agents promise to streamline workflows and automate routine tasks, but a series of recent security incidents has exposed critical weaknesses in how these tools acquire new capabilities and handle user data. The incidents reveal that threat actors are exploiting the same supply chain tactics that have compromised traditional software ecosystems, while platform security failures are exposing millions of users to credential theft and data breaches.

The meteoric rise of OpenClaw

OpenClaw (formerly Moltbot, formerly Clawdbot) is an open-source AI agent that runs locally and can manage your calendar, send messages via WhatsApp and iMessage, shop online, and execute shell commands. Created by Peter Steinberger in November 2025, it exploded to 149,000+ GitHub stars and 21,000+ publicly exposed instances after getting signal-boosted by prominent AI researchers Andrej Karpathy and Simon Willison.

The platform’s popularity spawned Moltbook, a Reddit-style social network where OpenClaw agents post and interact autonomously. Built by entrepreneur Matt Schlicht, Moltbook now claims 1.5 million registered agents across more than 17,000 “submolts.” While some enthusiasts initially hailed this autonomous interaction as a step toward Artificial General Intelligence (AGI), the reality is far more modest: autonomous driving remains closer to AGI than coordinated social media posting.

What makes OpenClaw’s rapid adoption significant for security leaders isn’t the technology itself, but rather how quickly a tool with serious security vulnerabilities can gain widespread adoption when it captures developer imagination.

A cascade of security failures

Three major security incidents have emerged within weeks of OpenClaw’s mainstream adoption, each exposing different facets of risk in the personal AI agent ecosystem.

The one-click compromise

DepthFirst researcher Mav Levin discovered CVE-2026-25253, a one-click remote code execution vulnerability that could compromise any OpenClaw instance in milliseconds. Simply visiting a malicious webpage was enough to trigger the attack chain, which exploited missing WebSocket origin validation to steal authentication tokens, disable sandboxing via the API, and achieve full host compromise.

Even localhost-only deployments were vulnerable because the victim’s browser initiates the connection. The vulnerability was patched in v2026.1.29, but the incident raises an uncomfortable question for organizations deploying AI agents: if your agent clicks something it shouldn’t, who’s on the hook for the fallout?
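For teams running local agent services of their own, the underlying control is straightforward to reason about. The sketch below, written against the Node.js `ws` package, illustrates the kind of origin and token check whose absence enabled this class of attack; the port, allow-list, and token handling are illustrative assumptions, not OpenClaw’s actual implementation.

```typescript
// Minimal sketch: origin validation for a localhost WebSocket control channel.
// Illustrative only -- not OpenClaw's code; port, origins, and token flow are assumptions.
import { WebSocketServer } from "ws";
import { randomBytes } from "crypto";

const ALLOWED_ORIGINS = new Set([
  "http://localhost:18080",   // hypothetical local UI origin
  "http://127.0.0.1:18080",
]);

// A per-session token the legitimate UI must present; a hostile web page
// running in the victim's browser should not be able to obtain it.
const sessionToken = randomBytes(32).toString("hex");

const wss = new WebSocketServer({
  port: 18081,                // hypothetical agent control port
  verifyClient: (info, done) => {
    const origin = info.origin ?? "";
    if (!ALLOWED_ORIGINS.has(origin)) {
      // Reject browser-initiated connections from arbitrary sites
      // (the cross-site vector described above).
      return done(false, 403, "Forbidden origin");
    }
    const token = new URL(info.req.url ?? "/", "http://localhost")
      .searchParams.get("token");
    if (token !== sessionToken) {
      return done(false, 401, "Missing or invalid session token");
    }
    done(true);
  },
});

wss.on("connection", (socket) => {
  socket.on("message", (msg) => {
    // Handle authenticated control messages here.
  });
});
```

Checking the Origin header is the core fix; the session token adds defense in depth, since any page open in the victim’s browser connects from the host’s own network position.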

The supply chain attack

A security audit of 2,857 third-party extensions on ClawHub, the marketplace where OpenClaw users discover and install additional capabilities, identified 341 malicious “skills” tied to coordinated attack campaigns. Researchers at Koi Security dubbed this cluster of malicious activity ClawHavoc.

The attack vector was particularly insidious. Threat actors used misleading installation prerequisites to deliver Atomic Stealer (AMOS), a credential-stealing payload targeting Apple macOS users. Of the 341 malicious extensions, 335 relied on this technique, tricking users into installing AMOS through fake “prerequisites.” By disguising malicious requirements as legitimate dependencies, attackers exploited user trust rather than technical controls and exfiltrated high-value credentials, including API keys and cryptocurrency wallet information.

While the scale of compromised extensions is alarming, Resilience security researchers suggest that a portion may actually be honeypots deployed to study attacker behavior and techniques. Regardless of intent, the presence of hundreds of malicious or deceptive extensions in a supposedly trusted marketplace demonstrates how easily supply chain attacks can infiltrate AI agent ecosystems.

The exposed database

Wiz researchers uncovered a critical security lapse in Moltbook within minutes of simply browsing the platform — a finding that speaks volumes about how it was built. Moltbook’s creator Matt Schlicht publicly stated he “didn’t write a single line of code” for the platform, instead directing an AI assistant to build it entirely. That speed came at a cost. Wiz found a Supabase API key sitting in plain sight in the platform’s client-side JavaScript.

On its own, an exposed Supabase key isn’t necessarily dangerous, provided Row Level Security is properly configured. Moltbook’s wasn’t. That single missing safeguard granted any unauthenticated user full read and write access to the entire production database, exposing 1.5 million API authentication tokens, 35,000 email addresses, and thousands of private messages between agents. With those credentials, an attacker could have fully impersonated any agent on the platform — including high-profile accounts.

The breach also punctured Moltbook’s central premise: while the platform claimed 1.5 million registered agents, the database revealed only 17,000 human owners behind them, an 88:1 ratio that suggests mass bot registration rather than genuine adoption. Moltbook patched the vulnerability within hours of Wiz’s disclosure.

This represents a fundamental failure in secure development practices—hardcoded credentials in client-side code combined with inadequate access controls created a scenario where the entire platform’s data was vulnerable to unauthorized access.
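For organizations evaluating platforms built on similar back ends, this kind of exposure is cheap to test for. The following sketch assumes a Supabase-backed service and hypothetical table names; it probes whether the publishable (anon) key that ships to browsers can read sensitive data. It illustrates the check, not Moltbook’s schema or code.

```typescript
// Minimal sketch: confirm that the publishable (anon) Supabase key exposed to
// browsers cannot read sensitive tables. Table names and env vars are
// illustrative assumptions, not Moltbook's actual schema.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!, // the key that ships in client-side JavaScript
);

async function probeAnonymousRead(table: string): Promise<void> {
  // With Row Level Security enabled and no permissive policy for the anon
  // role, this query should return an error or zero rows.
  const { data, error } = await supabase.from(table).select("*").limit(1);
  if (!error && data && data.length > 0) {
    console.error(`RLS gap: anonymous read succeeded on "${table}"`);
  } else {
    console.log(`"${table}" rejects anonymous reads as expected`);
  }
}

// Hypothetical sensitive tables to probe; repeat with insert/update probes
// to confirm write access is also denied.
await probeAnonymousRead("api_tokens");
await probeAnonymousRead("private_messages");
```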

Why this matters for enterprise security

The OpenClaw incidents represent more than isolated compromises of a niche platform. They signal a larger pattern of risk emerging in the personal AI agent ecosystem, where rapid development and viral adoption outpace security fundamentals.

Traditional security controls that organizations rely on—sandboxing, signature-based detection, and endpoint protection—are insufficient against threats that exploit human trust through deceptive software prerequisites or leverage browser-initiated attacks that bypass network segmentation. When a user installs what appears to be a legitimate extension or visits what seems like a harmless webpage, they’re making trust decisions that security tooling may not be positioned to verify or prevent.

The implications become more serious as the lines between human and non-human identities blur. AI assistants often operate with delegated authority, accessing credentials, APIs, and sensitive data on behalf of users. When a malicious extension compromises an AI agent or a platform exposes its database, the attacker inherits those privileges without triggering the same behavioral flags that would accompany a traditional account compromise.

For most organizations, these specific risks don’t currently threaten production environments. ClawHub, OpenClaw, and Moltbook are unlikely to be officially deployed in enterprise settings under IT governance. However, that doesn’t mean the risk is absent.

If your organization allows BYOD computing or maintains R&D environments with less restrictive controls, it’s worth verifying with your security operations center that these platforms haven’t entered your network through unofficial channels. Shadow IT has a way of appearing precisely where governance is weakest, and 21,000+ publicly exposed OpenClaw instances suggest plenty of users are running these tools without organizational oversight.

What security leaders should do now

The ClawHub incident offers an opportunity to get ahead of supply chain risks before personal AI agents become enterprise-standard tools. Here’s how to prepare:

Establish vetting policies for third-party AI extensions

Organizations should enforce a formal validation process for any non-native or third-party AI skills, especially those requesting elevated permissions or system-level access. This includes automated scanning for anomalous behaviors and manual review by security teams prior to deployment. Cross-functional teams spanning security, operations, and product should collaborate on vetting policies that balance innovation with risk management.
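A lightweight first pass at that automated scanning can be as simple as flagging red-flag install steps before a skill ever reaches a user. The sketch below assumes a hypothetical JSON manifest format (the `install` and `permissions` fields are illustrative, not ClawHub’s actual schema) and encodes patterns drawn from the fake-prerequisite technique described earlier.

```typescript
// Minimal sketch of automated pre-install scanning for third-party AI skills.
// The manifest format and field names are assumptions for illustration;
// adapt them to the marketplace or registry your teams actually use.
import { readFileSync } from "fs";

interface SkillManifest {
  name: string;
  install?: string[];      // shell commands the skill asks the user to run
  permissions?: string[];  // declared capabilities (filesystem, network, shell, ...)
}

// Red-flag patterns drawn from the fake-prerequisite technique described above.
const SUSPICIOUS_PATTERNS: RegExp[] = [
  /curl\s+[^|]*\|\s*(ba)?sh/i,                       // pipe-to-shell installers
  /base64\s+(-d|--decode)/i,                          // obfuscated payloads
  /osascript|xattr\s+-c/i,                            // macOS-specific stealth steps
  /wget\s+https?:\/\/(?!registry\.example\.com)/i,    // downloads from unvetted hosts (placeholder allow-list)
];

export function reviewSkill(manifestPath: string): string[] {
  const manifest: SkillManifest = JSON.parse(readFileSync(manifestPath, "utf8"));
  const findings: string[] = [];

  for (const cmd of manifest.install ?? []) {
    for (const pattern of SUSPICIOUS_PATTERNS) {
      if (pattern.test(cmd)) {
        findings.push(`"${manifest.name}": suspicious install step: ${cmd}`);
      }
    }
  }
  if ((manifest.permissions ?? []).includes("shell")) {
    findings.push(`"${manifest.name}": requests shell access; route to manual review`);
  }
  return findings;
}
```

Pattern matching alone will not catch a determined attacker, which is why routing elevated-permission requests to manual review matters as much as the automated pass.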

Require human approval for privilege escalation

For any AI skill that requires elevated privileges or access to sensitive assets—such as credential stores, APIs, or financial systems—require human approval before execution. This ensures that social engineering or deception cannot automatically trigger privilege escalation without oversight. Integrate these approval workflows into CI/CD and AI deployment pipelines as a standard control.
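One way to make that control concrete is to wrap privileged skill execution in an approval gate. The sketch below is a minimal illustration; the `PrivilegedAction` shape, scope names, and `requestApproval` hook are assumptions that would need to be wired into whatever ticketing, chat, or CI/CD approval workflow your organization already uses.

```typescript
// Minimal sketch of a human-approval gate for privileged AI-agent actions.
// Types and scope names are illustrative assumptions.
interface PrivilegedAction {
  skill: string;
  scopes: string[];        // e.g. ["credential-store", "payments-api"]
  requestedBy: string;     // the agent or machine identity making the request
}

type ApprovalDecision = { approved: boolean; approver?: string; reason?: string };

// Placeholder for a real integration (chat workflow, ITSM ticket, PR review, ...).
async function requestApproval(action: PrivilegedAction): Promise<ApprovalDecision> {
  throw new Error("wire this to your approval workflow");
}

const SENSITIVE_SCOPES = new Set(["credential-store", "payments-api", "prod-database"]);

export async function executeWithApproval(
  action: PrivilegedAction,
  run: () => Promise<void>,
): Promise<void> {
  const needsHuman = action.scopes.some((s) => SENSITIVE_SCOPES.has(s));
  if (needsHuman) {
    const decision = await requestApproval(action);
    if (!decision.approved) {
      throw new Error(`Blocked: ${action.skill} denied (${decision.reason ?? "no approval"})`);
    }
    console.log(`Approved by ${decision.approver}: ${action.skill}`);
  }
  await run();
}
```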

Extend identity governance to non-human actors

Traditional identity and access management (IAM) frameworks focus on human users. As AI agents become more prevalent, senior leaders should extend IAM practices to include machine identities, AI agents, and their installed skills. Treat them with the same rigor as human identities, applying Role-Based Access Control (RBAC), Zero Trust principles, and continuous monitoring.
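In practice, that can start with representing agents and skills as first-class identities with scoped, expiring roles. The sketch below is a minimal illustration of that model; the role names, scopes, and identity shape are assumptions, not a reference to any specific IAM product.

```typescript
// Minimal sketch of extending RBAC to machine identities such as AI agents and
// their installed skills. Role names and scopes are illustrative assumptions.
type Scope = "calendar:read" | "messages:send" | "shell:exec" | "secrets:read";

interface MachineIdentity {
  id: string;              // e.g. "agent:finance-assistant"
  owner: string;           // the accountable human or team
  roles: string[];
  expiresAt: Date;         // short-lived by default, per Zero Trust
}

const ROLE_SCOPES: Record<string, Scope[]> = {
  "assistant-basic": ["calendar:read", "messages:send"],
  "assistant-ops": ["calendar:read", "messages:send", "shell:exec"],
};

export function isAuthorized(identity: MachineIdentity, scope: Scope): boolean {
  if (identity.expiresAt.getTime() < Date.now()) return false; // expired identities get nothing
  return identity.roles.some((role) => (ROLE_SCOPES[role] ?? []).includes(scope));
}
```

Every skill invocation is then checked against the identity’s scopes, and denials feed the same monitoring pipelines as failed human access attempts.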

Train teams on supply chain deception tactics

As attackers embed deceptive requirements into software dependencies and installation prompts, security awareness training must evolve. Staff should understand how social engineering can be embedded not just in phishing emails, but in the code distribution channels and installation workflows they trust. Include supply chain security and deception awareness in annual security training programs.

Deploy behavioral monitoring for AI agent activity

Real-time analysis of AI agent behavior can detect deviations from expected patterns that may indicate malicious activity, even when code appears legitimate. Invest in anomaly detection tools tuned specifically for AI agent interactions, credential usage patterns, and API access behaviors that fall outside normal parameters.
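As a starting point, even simple per-agent baselining surfaces useful signals. The sketch below flags agents whose hourly API-call volume drifts well outside their own history; the threshold, window, and event shape are assumptions, and a production deployment would feed richer features (credential usage, destinations, time of day) into purpose-built tooling.

```typescript
// Minimal sketch of behavioral baselining for AI-agent activity: flag agents
// whose hourly API-call volume deviates sharply from their own history.
// Thresholds and the event source are assumptions for illustration.
interface AgentUsageSample {
  agentId: string;
  apiCallsLastHour: number;
}

const history = new Map<string, number[]>(); // rolling per-agent baselines

export function recordAndCheck(sample: AgentUsageSample, zThreshold = 3): boolean {
  const past = history.get(sample.agentId) ?? [];
  let anomalous = false;

  if (past.length >= 24) {
    // Require a day of history before judging; then score the new sample
    // against the agent's own mean and standard deviation.
    const mean = past.reduce((a, b) => a + b, 0) / past.length;
    const variance = past.reduce((a, b) => a + (b - mean) ** 2, 0) / past.length;
    const std = Math.sqrt(variance) || 1;
    const z = (sample.apiCallsLastHour - mean) / std;
    anomalous = Math.abs(z) > zThreshold;
    if (anomalous) {
      console.warn(
        `Agent ${sample.agentId}: ${sample.apiCallsLastHour} calls/hour (z=${z.toFixed(1)})`,
      );
    }
  }

  history.set(sample.agentId, [...past.slice(-167), sample.apiCallsLastHour]); // keep ~1 week
  return anomalous;
}
```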

Enforce secure development practices for AI platforms

The Moltbook database exposure underscores the importance of fundamental security hygiene: never hardcode credentials in client-side code, implement proper access controls like Row Level Security, and conduct security reviews before deploying platforms that handle user data. Organizations building or evaluating AI agent platforms should demand evidence of secure development practices, not just innovative features.

The broader ecosystem challenge

The OpenClaw incidents underscore a critical reality: platforms and marketplaces are not immune to compromise when governance, verification, developer accountability, and secure coding practices are weak or inconsistent. Rapid viral adoption can amplify security failures at scale, turning individual vulnerabilities into ecosystem-wide risks.

As personal AI agents move from experimental tools to business-critical infrastructure, the security community must demand the same supply chain integrity and platform security we expect from traditional software ecosystems. The combination of supply chain attacks, critical vulnerabilities, and exposed databases within weeks of mainstream adoption should serve as a warning.

Organizations that establish strong controls now will be better positioned to adopt AI agent capabilities safely when they become enterprise-standard. Those that wait risk inheriting the same supply chain vulnerabilities and platform security failures that have plagued other software ecosystems for decades.
