The Middle East conflict has unleashed a wave of AI-driven disinformation. But experts are increasingly alarmed by a category of manipulated content that sits in a grey zone between reality and fakery: genuine photographs that have been run through AI tools and emerged looking sharper, more vivid, and subtly different from what actually happened.
In one widely circulated example, a high-quality photograph appeared to show a kneeling US pilot being confronted by a Kuwaiti local moments after ejecting from his aircraft. The image was shared across social media and even picked up by news outlets. But AFP fact-checkers noticed that the pilot appeared to have only four fingers on each hand. AI detection tools confirmed the image carried a SynthID watermark, an invisible tag used to identify content generated with Google’s AI tools.
Yet the underlying event was real. A video of the same scene had circulated on social media since March 2, and satellite imagery verified the location. The incident corresponded with reports that Kuwait had mistakenly shot down three US warplanes that day. AFP also located an earlier, blurrier version of the photograph on Telegram that matched the enhanced image in every detail except the artificially sharpened features. AI verification tools confirmed this earlier version was authentic, suggesting it served as the basis for the manipulated one.
“AI-enhancement may subtly alter textures, faces, lighting, or background details, creating an image that looks more ‘real’ than the original,” said Evangelos Kanoulas, a professor in AI at the University of Amsterdam. “[This can] strengthen a particular narrative about an event — for example, making a protest appear more violent, making a crowd appear larger, making facial expressions more intense.”
In a separate case, an image of a massive blaze near Erbil airport in Iraq was widely shared after Iranian strikes on March 1. While the scene itself was real, the AI-enhanced version showed a dramatically larger fire, a bigger smoke column, and more vivid colours than the original photograph.
Experts warn that the boundary between enhancement and outright content generation is dangerously thin. James O’Brien, a professor of computer science at the University of California, Berkeley, cautioned that even small changes can end up telling a very different story, fundamentally altering how events are perceived. Generative AI tools are also prone to “hallucinating” elements that were never in the original image.
This dynamic already played out in January following the shooting of Alex Pretti by federal immigration agents in Minneapolis. An AI-enhanced version of a genuine video frame went viral, and in the sharpened image, some social media users mistakenly identified the phone in Pretti’s hand as a weapon.
As the war triggered by US-Israeli strikes on Iran continues, both researchers stress that without proper labelling, AI-enhanced images are steadily eroding the public's ability to distinguish truth from distortion. Experts also warn of a compounding effect: as this kind of content undermines trust in what people see, audiences are beginning to doubt authentic images as well.
