At the user level, the popularization of AI systems has led to everyday uses, from cheating on exams to organizing your day, and it has sparked debates that affect us all on ecological, labor, and ethical issues. But governments use AI too, and when they come into conflict, it becomes a powerful weapon for soft warfare and propaganda on a global scale. The conflict between Iran and Israel offers a few examples.
These images aren’t authentic. An AI model created them. When the Gulf War broke out in 1991, many called it the first war we would watch live. Today, that idea has evolved into the extreme manipulation of information through AI systems. Images and videos of live bombings circulate on social media platforms and messaging apps, but not everything is real. For example, some images show alleged F-35s shot down by the Iranian army, backed up by state media reports.

But Israel quickly denied the claims. A glance at the photo above reveals the fake: the fighter jet appears absurdly large. The buildings in the corner look smaller than the people. That didn’t stop thousands from sharing the image and claiming it was legitimate.
Videos, too. Google’s powerful Veo 3 has helped generate videos that match these fake photos. The Tehran Times, an Iranian media outlet, has spread videos of giant missiles that don’t exist. How do we know? The videos still display the Veo watermark.
A similar case involved an alleged bombing in Tel Aviv, featuring images that are clearly AI-generated. The account behind it, 3amelyonn, describes itself as “Resistance with Artificial Intelligence” and shares its videos on Telegram.
With official permission. These posts aren’t always the work of anonymous disinformation agents. For instance, Iran’s Supreme Leader, Ayatollah Ali Khamenei, tweeted images of Iranian missiles heading toward Israel, images created with ChatGPT. Meanwhile, Israel’s defense minister produces outright propaganda videos that flood Facebook and other platforms.
Leaders on both sides have joined this iconographic war. Propaganda like this damages the enemy’s reputation and fills information gaps for citizens with limited access to reliable reporting.
Please, only AI. There’s another reason Iran and Israel favor AI-driven propaganda. When citizens share real images of bombings and killings, those posts can hand the enemy strategic intelligence. AI-generated videos avoid that risk.
According to 404Media, Israeli authorities issued a social media warning: “The enemy follows these documentations in order to improve its targeting abilities. Be responsible—do not share locations on the web!” In this context, AI serves as a tool to deliver propaganda without revealing tactical details.
How to identify AI-created content. Companies that develop generative AI tools bear the responsibility of flagging AI-created content. In addition to visible (and easily altered) watermarks, Google embeds an invisible watermark called SynthID in content generated by its models.
But SynthID poses a problem: users must download the suspicious video, often posted on social media, and upload it to Google’s SynthID Detector portal. That check takes minutes or even hours, while the fake video can reach millions of viewers. In propaganda terms, that delay is critical.
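To make that delay concrete, here is a minimal Python sketch of the download-then-verify workflow. It is purely illustrative: `DETECTOR_URL` is a hypothetical HTTP endpoint standing in for the SynthID Detector, which in reality is a web portal rather than a public API, and the video URL is a placeholder.

```python
import time
import requests

# Hypothetical endpoint standing in for Google's SynthID Detector.
# The real detector is a web portal, not a public API; this URL is an assumption.
DETECTOR_URL = "https://example.com/synthid/detect"


def check_video(video_url: str) -> None:
    """Time the two-step workflow: fetch a suspicious clip, then submit it for analysis."""
    start = time.monotonic()

    # Step 1: download the suspicious video from wherever it is circulating.
    video = requests.get(video_url, timeout=60)
    video.raise_for_status()

    # Step 2: upload the file for watermark analysis.
    response = requests.post(
        DETECTOR_URL,
        files={"file": ("clip.mp4", video.content, "video/mp4")},
        timeout=300,
    )
    response.raise_for_status()

    elapsed = time.monotonic() - start
    # Even an automated round trip takes time; the manual portal workflow
    # takes minutes or hours, during which the clip keeps spreading.
    print(f"Watermark verdict: {response.json()} (checked in {elapsed:.1f}s)")


if __name__ == "__main__":
    check_video("https://example.com/suspicious_clip.mp4")  # placeholder URL
```

Even if verification were fully automated like this, the structural problem remains: detection happens after publication, while sharing happens instantly.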
Images | iraninarabic_ir | 404Media
Related | AI Videos Have Broken Instagram and TikTok’s Algorithms. Welcome to Social Media’s ‘AI Slop’ Era