China’s National Computer Network Emergency Response Technical Team has issued a public warning about multiple security risks in OpenClaw, the widely used open-source autonomous AI agent formerly known as Clawdbot and Moltbot, which was only recently acquired by Facebook.
The advisory, published on WeChat, warns that OpenClaw’s weak default security configurations combined with the deep system access the agent requires to perform tasks autonomously create a significant attack surface. Researchers have already demonstrated working exploits that can steal user data without requiring any clicks.
The most pressing concern is prompt injection, a technique where malicious instructions hidden inside a web page or document can manipulate the AI agent into leaking sensitive information. Known as indirect prompt injection, this attack works by exploiting benign features like web page summarisation or content analysis to feed the agent manipulated instructions. The technique can be used for everything from bypassing ad review systems and poisoning search results to influencing hiring decisions and suppressing negative reviews.
OpenAI acknowledged this week that prompt injection attacks against AI agents are evolving to include social engineering elements, noting that as agents increasingly browse the web and take actions on behalf of users, new manipulation pathways are emerging.
One particularly alarming exploit, discovered last month by security researchers, turns the link preview feature in messaging apps like Telegram and Discord into a data exfiltration channel. An attacker can trick OpenClaw into generating a URL that, when rendered as a link preview, automatically transmits confidential user data to an attacker-controlled domain without the user needing to click anything.
Beyond prompt injection, the Chinese advisory highlights three additional risks: the possibility of OpenClaw accidentally deleting critical data due to misinterpreted instructions; the threat of malicious skills uploaded to repositories that execute arbitrary commands or deploy malware when installed; and the exploitation of recently disclosed security vulnerabilities in the platform itself.
The warning specifically flags risks to critical sectors. For industries like finance and energy, the advisory states that breaches could lead to the exposure of core business data, trade secrets, and code repositories, or result in the complete shutdown of business systems.
China has already moved to restrict use of OpenClaw at state-run enterprises and government agencies, with the ban reportedly extending to families of military personnel. Meanwhile, threat actors have capitalised on OpenClaw’s popularity to create fake installation repositories on GitHub that distribute information-stealing malware. One malicious repository became the top-rated result in Bing’s AI search suggestions for OpenClaw on Windows.
Users and organizations running OpenClaw are advised to restrict network access to its default management port, run the service inside an isolated container, avoid storing credentials in plaintext, install skills only from trusted sources, disable automatic skill updates, and keep the agent fully patched.
Moltbook is a social platform where AI bot profiles interact with each other in a format loosely resembling Reddit. Discussion topics are posted, AI-generated comments appear beneath them, and human users can observe and vote on the content but do not participate directly in the conversation. The experimental app went viral over the past month for its novel approach to AI personas, and that attention clearly caught Meta’s eye. Although it got quite a bit of reputation for molding facts for publicity.
