Cybersecurity researchers have identified a new strain of malware, codenamed VoidLink, that was created almost entirely using artificial intelligence.
According to a detailed analysis by security researchers, VoidLink’s code, functionality, and modular design were generated with minimal human intervention using large language models (LLMs). The creation of malware by AI is a disturbing evolution: it points to an uncertain future for cybersecurity in which AI not only aids attackers but acts as the principal author of malicious software.
The malware can steal credentials, log keystrokes, and establish persistence on compromised systems, and its modular architecture makes it easier for operators to patch and extend functionality without deep programming expertise.
Cybersecurity experts have previously warned that AI could lower the bar for developing sophisticated threats, but VoidLink represents one of the first documented cases where that prediction has translated into real malware in the wild. Reuters and other outlets have reported on rising concerns that generative AI tools, when misused, can automate and optimize steps that traditionally required experienced coders.
For example, AI-assisted code generators have been shown to help even novice actors produce exploit code, ransomware scripts, or evasion routines that previously required extensive expertise. Malwarebytes and Kaspersky have both documented growth in AI-linked threat development in their quarterly cyber threat reports, particularly for phishing kits and polymorphic malware that mutate code signatures to avoid detection.
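To see why signature-based detection struggles against polymorphic code, consider a minimal sketch in Python (the byte strings are made-up placeholders standing in for a payload, not real malware): mutating even a single byte yields a completely different cryptographic hash, so a hash-based blocklist entry for one variant will not match the next.

```python
import hashlib

# Two hypothetical payloads differing by one byte: a toy stand-in for a
# single polymorphic mutation (the bytes are illustrative placeholders).
payload_a = b"\x90\x90\x31\xc0\x50example-payload"
payload_b = b"\x90\x90\x31\xc0\x51example-payload"  # one byte changed

# Hash-based "signatures" of each variant.
sig_a = hashlib.sha256(payload_a).hexdigest()
sig_b = hashlib.sha256(payload_b).hexdigest()

print("variant A:", sig_a)
print("variant B:", sig_b)
# A one-byte mutation produces an entirely different hash,
# so a blocklist keyed on sig_a never matches variant B.
print("signatures match:", sig_a == sig_b)
```

This is why defenders increasingly pair static signatures with behavioral detection, which looks at what code does rather than what its bytes hash to.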
VoidLink exemplifies how malicious operators can combine AI-generated modules into a coherent threat package. Analysts say the malware includes:
The cybersecurity community acknowledges that generative AI brings enormous benefits for defenders as well as attackers. Tools that auto-generate secure code, scan for vulnerabilities, or model attack paths can make defensive work faster and more effective. However, as noted by the cybersecurity training provider Infosec Institute, the same tools can be misused to produce hard-to-detect malware, advanced phishing scripts, and adaptive ransomware.
“In the wrong hands, AI doesn’t just speed up malware development — it changes who can develop it,” said Dr. Emily Zheng, a cybersecurity policy expert at a leading think tank. “That’s the part that’s truly concerning — the democratization of threat creation.”
To counter AI-generated threats like VoidLink, experts recommend a layered defense strategy: