Iran and Israel Are Waging Another War Behind the Scenes: A Battle of Fake Images Created With AI Models

Confusion and misinformation reign supreme in the conflict between the two countries. AI has arrived to muddy the waters even further.

Fake images created with AI during the Iran and Israel conflict
John Tones, Writer
Adapted by: Karen Alfaro

At the user level, the popularization of AI systems has led to all kinds of everyday uses, from cheating on exams to organizing your day, and has sparked debates that affect us all over ecological, labor, and ethical issues. But governments also use AI, and when those governments come into conflict, it becomes a powerful weapon of soft warfare and propaganda on a global scale. The conflict between Iran and Israel offers a few examples.

These images aren’t authentic. An AI model created them. When the Gulf War broke out in 1991, many called it the first war we would watch live. Today, that idea has taken an extreme turn: information about the conflict is being manipulated with AI systems. Images and videos of live bombings circulate on social media platforms and messaging apps, but not everything is real. For example, some images show alleged F-35s shot down by the Iranian army, backed up by state media reports.

Iran and Israel AI images

But Israel quickly denied the claims. A glance at the photo above reveals the fake: the fighter jet appears absurdly large. The buildings in the corner look smaller than the people. That didn’t stop thousands from sharing the image and claiming it was legitimate.

Videos, too. Google’s powerful Veo 3 has helped generate videos that match these fake photos. The Tehran Times, an Iranian media outlet, has spread footage of giant missiles that don’t exist. How do we know? The videos still display the Veo watermark.

A similar case involved an alleged bombing in Tel Aviv, featuring images that are clearly AI-generated. The account behind it, 3amelyonn, describes itself as “Resistance with Artificial Intelligence” and shares its videos on Telegram.

With official permission. These posts aren’t always the work of anonymous disinformation agents. For instance, Iran’s Supreme Leader, Ayatollah Ali Khamenei, tweeted images of Iranian missiles heading toward Israel that were created with ChatGPT. Meanwhile, Israel’s defense minister churns out outright propaganda videos that flood Facebook and other platforms.

Leaders on both sides have joined this iconographic war. Propaganda like this damages the enemy’s reputation and fills the information gaps of citizens with limited access to other sources.

AI only, please. There’s another reason Iran and Israel favor AI-driven propaganda: when citizens share real images of bombings and killings, that footage can hand the enemy strategic intelligence. AI-generated videos avoid that risk.

According to 404Media, Israeli authorities issued a social media warning: “The enemy follows these documentations in order to improve its targeting abilities. Be responsible—do not share locations on the web!” In this context, AI serves as a tool to deliver propaganda without revealing tactical details.

How to identify AI-created content. Companies that develop generative AI tools carry the responsibility of flagging AI-created content. In addition to visible (and easily altered) watermarks, Google includes an invisible watermark called SynthID.

But SynthID has a practical problem: users must download the suspicious video, often posted on social media, and upload it to the SynthID detection platform. That takes minutes or even hours, while the fake may reach millions of people. In propaganda terms, that delay is critical.
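To make that friction concrete, here is a minimal sketch of the manual verification round trip in Python. It assumes the requests library; the video URL, the detection endpoint, and the watermark_detected response field are hypothetical placeholders, since Google’s SynthID Detector is a web portal rather than a documented public API.

```python
import requests

# Hypothetical sketch of the verification round trip described above.
# The URLs and the response field below are placeholders, not a real SynthID API.
SUSPECT_VIDEO_URL = "https://example.com/suspect_missile_video.mp4"  # placeholder
DETECTOR_ENDPOINT = "https://detector.example.com/api/check"  # placeholder


def fetch_video(url: str) -> bytes:
    """Download the suspect video, as a fact-checker would before verifying it."""
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    return response.content


def submit_for_detection(video_bytes: bytes) -> dict:
    """Upload the file to a (hypothetical) invisible-watermark detection service."""
    files = {"file": ("suspect.mp4", video_bytes, "video/mp4")}
    response = requests.post(DETECTOR_ENDPOINT, files=files, timeout=300)
    response.raise_for_status()
    return response.json()  # assumed to look like {"watermark_detected": true}


if __name__ == "__main__":
    video = fetch_video(SUSPECT_VIDEO_URL)
    result = submit_for_detection(video)
    print("Invisible watermark detected:", result.get("watermark_detected"))
```

Even in this simplified form, every step adds delay, which is exactly the window in which a fake can go viral.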

Images | iraninarabic_ir | 404Media

Related | AI Videos Have Broken Instagram and TikTok’s Algorithms. Welcome to Social Media’s ‘AI Slop’ Era
