Iran's AI Lego Propaganda Targets Trump and Netanyahu in Slopaganda War

The Rise of Slopaganda: AI-Generated Propaganda Floods Digital Battlefield

In a bizarre escalation of digital warfare, Iranian propaganda operations have deployed artificial intelligence to create surreal videos featuring Lego figurines of prominent political figures. These AI-generated clips depict US President Donald Trump standing alongside Israeli Prime Minister Benjamin Netanyahu and a representation of Satan, marking what experts describe as a dangerous new frontier in information manipulation.

When Digital Warfare Meets Child's Play

The emergence of these Lego-themed propaganda videos represents just the latest development in what researchers have termed "slopaganda" – AI-generated content specifically designed for propagandistic purposes. This phenomenon has accelerated dramatically since early 2025, when the White House released its own mixed-media video combining real military strikes with clips from popular entertainment.

Iranian responses to US-Israeli military actions have included flooding social media platforms with outdated combat footage alongside completely fabricated AI-generated content showing attacks on Tel Aviv and American bases throughout the Persian Gulf region. The Lego videos represent a particularly creative, if disturbing, evolution of these tactics.


Understanding the Slopaganda Phenomenon

Researchers Mark Alfano and Michał Klincewicz, who coined the term "slopaganda" in a recent academic paper, define it as AI-generated content serving explicit propaganda purposes. This represents a dangerous fusion of generative artificial intelligence with traditional propaganda techniques aimed at manipulating beliefs, emotions, attention, and memory for political ends.

The situation has deteriorated more rapidly than experts anticipated. In October 2025, Donald Trump himself posted an AI-generated video showing him piloting a fighter jet while wearing a crown and dumping waste on American protesters. More recently, he shared another fabricated video envisioning his presidential library as an enormous, gaudy skyscraper complete with golden elevators.

How Slopaganda Breaks Through Our Defenses

Slopaganda operates through several distinct mechanisms that make it particularly effective in today's digital landscape:

  • Repeated Exposure: Through both traditional and social media channels, slopaganda penetrates mental defenses by being attention-grabbing and emotionally arresting, typically targeting distracted audiences scrolling through endless content streams.
  • Epistemic Dilution: This content effectively pollutes our information environment with falsehoods and half-truths, creating what philosophers describe as "machines for bullshit" – content fundamentally indifferent to factual accuracy.
  • Emotional Association: Rather than aiming for factual accuracy, much slopaganda works expressively, creating emotional associations between concepts and figures. The Iranian Lego videos, for instance, aim to associate political leaders with evil rather than convince viewers of literal partnerships.

The Threat to Shared Truth and Public Trust

Perhaps most dangerously, some slopaganda does contain genuinely misleading content, either by design or through "context collapse" where jokes or trolling escape their intended context and are misunderstood as serious information. During conflicts, crises, and emergencies – when authoritative sources are scarce but information demand is high – misleading slopaganda can spread rapidly with significant consequences.

Once misleading associations enter public consciousness, they prove remarkably difficult to dislodge. Even small persuasive effects, multiplied across large populations, can influence election outcomes, protest movements, and public sentiment about unpopular military engagements.

More insidiously still, the prevalence of slopaganda may create a climate of generalized doubt, in which people become so suspicious of misinformation that they begin to distrust genuinely trustworthy sources. This erosion of public confidence in institutions and experts threatens to produce what researchers describe as "nihilistic doubt" – a state in which people struggle to believe they can know anything with certainty.


Three Strategies for Combating the Slopaganda Threat

Researchers propose interventions at three distinct levels to address the growing slopaganda crisis:

  1. Individual Digital Literacy: Citizens must develop skills to identify telltale signs of AI-generated content in text, images, and video. This includes learning to verify sources rather than merely consuming headlines and developing habits of blocking consistently unreliable sources rather than evaluating each piece of content in isolation.
  2. Technological and Regulatory Solutions: Industry and government must collaborate on implementing technological fixes such as watermarking AI-generated content. Some platforms may need to remove particularly harmful slopaganda from spaces where people access news and important information.
  3. Corporate Accountability: Major technology companies like OpenAI, Google, and X must be held responsible for the tools they've created. This could involve taxation and other interventions to fund both regulatory efforts and comprehensive digital literacy education programs.

While slopaganda appears to be a permanent feature of our digital landscape, researchers remain cautiously optimistic that with sufficient foresight, courage, and coordinated effort, society can adapt to this new reality and potentially even control its most harmful effects. The battle for truth in the digital age has entered a new, more surreal phase, and our collective response will determine whether shared understanding can survive in an increasingly polarized world.