A major cybercrime group has used artificial intelligence (AI) to discover and exploit a previously unknown software vulnerability, Google Threat Intelligence Group confirmed.
The group targeted a widely used open-source system administration tool. Google blocked the attack before it could escalate into mass exploitation affecting users globally.
The company said this is the first recorded instance of attackers using AI to independently identify a new vulnerability and attempt to exploit it at scale.
John Hultquist, chief analyst at Google Threat Intelligence Group, stated that these findings likely represent only the tip of the iceberg regarding AI-driven hacking innovation.
Hackers are beginning to delegate portions of their cyber operations to AI, using it to autonomously hunt for software flaws and assist in building functional malware.
Researchers said this shift signals an early move toward fully autonomous cyber operations, where AI systems actively analyze targets, generate code, and make decisions with limited human oversight.
Governments worldwide are now grappling with how to regulate powerful AI models that could help hackers identify targets and exploit both known and newly discovered software flaws.
European financial regulators have warned that rapidly evolving AI models are accelerating the speed and scale of cyber risks during a period of heightened geopolitical tension.
Cybercriminal networks and state-linked hacking groups tied to China, Russia, and North Korea are already experimenting with integrating AI tools directly into their active attack workflows.
Google warned that while these AI-assisted techniques remain at an early development stage, they could significantly reduce the time and expertise required to launch complex cyberattacks.