By Abdul Wasay ⏐ 2 hours ago ⏐ 2 min read
AI Deepfakes

As artificial intelligence reshapes everything from daily life to global industries, a darker side is emerging: extremist groups, including the Islamic State (ISIS), are actively experimenting with AI tools to supercharge their propaganda and recruitment efforts. Recent intelligence assessments and cybersecurity reports paint a concerning picture: militant networks are leveraging generative AI to create hyper-realistic deepfakes, synthetic audio, and multilingual content, lowering the barrier to spreading ideology even for resource-strapped actors.

According to a major Associated Press investigation, ISIS supporters have already produced deepfake audio recordings of deceased leaders reciting scripture and used AI for rapid translations of messages into multiple languages. Researchers at SITE Intelligence Group, which monitors extremist activity, confirmed these developments, noting how AI helps decentralized groups pump out propaganda at unprecedented scale. A pro-ISIS forum post from November 2025 even urged followers to integrate AI into operations, calling it a “powerful and accessible tool” for amplifying reach.

Extremists are generating realistic images, videos, and audio that blur fact and fiction, often circulating them via encrypted apps and private channels. Past examples include AI-crafted propaganda videos released after attacks, designed to aid recruitment by exaggerating chaos and fear.

The implications go beyond propaganda. Analysts warn AI could personalize recruitment (targeting vulnerable individuals via algorithms), refine cyberattacks, or fabricate events to incite division. While ISIS lacks state-level AI sophistication, consumer tools like ChatGPT and image generators have dramatically narrowed the gap. A November 2025 post on a pro-ISIS site explicitly encouraged using AI to “make nightmares reality.”

Governments and tech companies face mounting pressure to respond. U.S. lawmakers, including Sen. Mark Warner, have called for better information-sharing on AI misuse, and experts urge global coordination on detecting synthetic content without curbing innovation.

As AI grows more powerful and accessible, its weaponization by extremists poses an evolving threat to security and society. Vigilant monitoring, ethical AI development, and cross-sector collaboration are essential to counter this digital jihad before it escalates further.