Since the outbreak of war in the Middle East on 28 February, more than 500 fact-checking reports have been published, revealing a widespread surge in misinformation. Analysts estimate that between 20% and 25% of this content has been generated using artificial intelligence—an unprecedented level compared to previous conflicts.
While earlier wars, such as the Russia–Ukraine conflict and the war in Gaza, witnessed waves of AI-generated material, the current conflict stands out for both the scale and realism of such content. Researchers attribute this shift to the availability of advanced, low-cost tools capable of producing highly convincing images, videos, and audio, often without the tell-tale signs of manipulation seen in earlier technologies.
This raises a critical question: how can authenticity be verified when the boundary between reality and fabrication has become increasingly blurred amid the fog of war?
A Landscape of Digital Chaos
Digital platforms are now saturated with what many experts describe as “information chaos.” Thomas Novotny, who leads an AI research group at the University of Sussex, argues that visual and audio content should now be treated with the same scepticism long applied to unverified rumours.
The sophistication of AI-generated media has reached a point where even genuine images and videos are subject to doubt, creating a broader crisis of trust. Constance de Saint Laurent of Maynooth University noted that “the problem with misinformation today is not that people believe it, but that they no longer trust even authentic information.”
The sheer volume of false content has overwhelmed fact-checkers, and even major media outlets have not been immune. The German magazine Der Spiegel recently withdrew images connected to Iran after determining they were likely AI-generated. Even after being debunked, such content often resurfaces—a phenomenon some researchers describe as “zombie misinformation.”
Billions of Views and Financial Incentives
Financial incentives are further fuelling the spread of misleading content. Social media platforms reward engagement, encouraging some influencers to circulate sensational or deceptive material regardless of its accuracy. According to the Institute for Strategic Dialogue, a network of accounts on X sharing AI-generated content about the conflict has amassed more than one billion views since the war began.
The issue extends beyond videos. Fabricated satellite imagery and falsified maps are increasingly being used to cast doubt on verified events. The monitoring group NewsGuard warns that misinformation now goes beyond creating false content—it also involves labelling genuine material as fake. This dual dynamic makes it easier to question everything, ultimately eroding the very notion of objective truth.
As AI continues to evolve, the information domain is becoming as contested as the physical battlefield—reshaping not only how wars are fought, but also how they are perceived and understood.