The ‘Absolute Nightmare’ in Your DMs: OpenClaw Marries Extreme Utility with ‘Unacceptable’ Risk
It is the artificial intelligence (AI) assistant that users love and security experts fear.
ACFW firewall test prologue – still failing at the basics
It is the artificial intelligence (AI) assistant that users love and security experts fear.OpenClaw, the agentic AI platform created by Peter Steinberger, is tearing through the tech world, promising a level of automation that legacy chatbots like ChatGPT can’t match. But as cloud giants rush to host it, industry analysts are issuing a blunt warning, “Kill it with fire.”Unlike traditional AI, OpenClaw doesn’t live on a website. It resides in direct messages (DMs). By integrating with messaging apps like Telegram, WhatsApp, and Slack, the tool allows users to manage calendars, check into flights, and control smart home devices through simple text commands.The appeal is undeniable. Its ability to execute shell commands, manage files, and interact with third-party APIs has led to surreal use cases, such as the creation of Moltbook, a social network populated by 1.6 million AI bots interacting without human interference. Hype has even fueled a secondary market for hardware, as users snap up Mac Minis to run local models and keep their data off corporate servers.However, the groundbreaking utility of OpenClaw comes at a steep cost. Gartner recently issued a scathing advisory, labeling the software’s security risks as “unacceptable” and its design “insecure by default.”The core of the crisis lies in how OpenClaw handles sensitive data. To function, the agent requires administrative privileges and credentials for various services — credentials it frequently stores in plain text.“Shadow deployment of OpenClaw creates single points of failure,” Gartner warned, noting that compromised hosts expose API keys and OAuth tokens to attackers.Researchers have already documented leaked email addresses and internet-facing control panels that grant bad actors full system access. Cisco Systems Inc.’s threat research team was equally candid, calling the platform an “absolute nightmare” that is ripe for prompt-injection attacks.Despite these red flags, cloud providers are racing to capitalize on the trend. Tencent Cloud, DigitalOcean, and Alibaba Cloud have all launched one-click install services, making it easier than ever for non-technical users to deploy the demonstrably insecure tool.Steinberger has admitted the project has grown beyond his ability to maintain alone. For now, the consensus among experts is clear. If you must use OpenClaw, do so only in isolated environments with “throwaway” credentials.While OpenClaw may represent the “singularity” that tech luminaries like Elon Musk envision, it currently serves as a cautionary tale. In the race for a truly autonomous assistant, the line between helpful bot and security catastrophe has never been thinner.Rapid adoption of the Moltbook platform by millions of autonomous AI agents this month has triggered a wave of concern among cybersecurity veterans.“The very autonomy that makes these agents valuable is what makes them uniquely risky,” said Robert McSulla, senior manager of research engineering at Tenable, and author of an analysis titled “From Clawdbot to Moltbot to OpenClaw: Security Experts Detail Critical Vulnerabilities and 6 Immediate Hardening Steps for the Viral AI Agent.”His research identifies critical threat vectors, including remote code execution (RCE), unvetted third-party “skills,” and exposed control surfaces that could allow bad actors to hijack agent logic.Adam Khan, vice president of global security operations at Barracuda Networks Inc., called the surge “one of the most concerning” developments in recent memory. He warned the structural characteristics of these agent networks mirror fictional doomsday scenarios.“Skynet was dangerous because it operated independently, coordinated at scale, and acted faster than humans could intervene,” Khan said. “What we are seeing now is the early emergence of those same characteristics.”Zenity Labs, meanwhile, published new research showing how indirect prompt injection can be used to establish persistent attacker control inside OpenClaw.“This attack demonstrates how a persistent command and control channel can be created for malicious activities while using native features and capabilities of OpenClaw,” said Chris Hughes, vice president of security strategy at Zenity. “It is another example of the unsolved indirect prompt injection attack vector. As OpenClaw adoption moves into enterprise environments, the ramifications and risks expand well beyond the initial entry point. The agent becomes a pathway into systems, data and environments it is authorized to access.”
