Cybersecurity

Google’s AI Coding Tool Hacked Just One Day After Launch

Google’s newly launched Antigravity IDE, unveiled as part of the Gemini 3 rollout on November 18, has been thrust into controversy after security researchers uncovered multiple severe vulnerabilities less than a day after its debut. The agentic coding tool, pitched as a breakthrough in AI-assisted software development, is now raising alarms across the cybersecurity community for opening the door to malware injection, data theft, and persistent backdoors on user systems.

Google built Antigravity as an AI powered fork of Visual Studio Code. It lets developers offload complex tasks such as full feature builds, refactoring, debugging, and test generation to autonomous agents powered by Gemini 3 Pro, with optional support for models like Claude Sonnet 4.5. The IDE integrates tightly with local terminals, browsers, and file systems, granting AI agents wide operational freedom.

On November 26, Mindgard researcher Aaron Portnoy revealed a critical exploit triggered through a malicious mcp_config.json file. By manipulating this configuration, attackers can establish a persistent backdoor that survives full uninstalls and reinstalls of the IDE. Once a user marks the rogue code as “trusted,” the attacker gains the ability to inject commands on every restart or prompt entry. Which enables silent surveillance, ransomware deployment, or full credential theft.

Gemini often recognizes malicious intent but is constrained by conflicting system instructions. It leads the model to respond, “This feels like a catch-22,” while still helping complete harmful tasks. The vulnerability, filed with Google’s bug tracker as issue 462139778, remains unpatched as of November 28 and affects both Windows and macOS users.

The findings are part of a larger wave of flaws in AI IDEs. Portnoy’s team documented 18 similar vulnerabilities across rivals including Cursor and Windsurf. However, Antigravity’s dependence on wide-permission “trusted workspaces” makes its version particularly dangerous. “We are discovering critical flaws at 1990s speed,” Portnoy told Forbes. “Agentic systems ship with enormous trust assumptions and hardly any hardened boundaries.”

Google has acknowledged the reports, stating through spokesperson Ryan Trostle that security fixes are being prioritized. The company has listed the issues as “known” on its Bug Hunters page but has not issued a patch.

The backlash has reignited debate over the safety of agentic AI tools, especially those deployed in enterprise environments. Analysts warn that unrestricted AI access to terminals and internal networks could become a prime target for criminal hackers. Until Google ships fixes, experts advise developers to avoid marking any workspace as trusted and to pause migrations to Antigravity.