Did the ayatollah body-slam Trump? Negotiating social media during war
The war in Iran has transfixed the world, and a quick perusal of social media gives a concerning impression: The Iranians are outsmarting, outclassing and outperforming the U.S. and Israeli militaries—and by wide margins. Just look at the videos and images on social media.
One video found on X, formerly known as Twitter, purports to show an Iranian missile crashing into an American battleship. In another example, a video claimed to show the Iranians destroying the U.S. fleet at Bahrain. Two viral images on social media also showed several U.S. special forces members captured and restrained in Iran. “The messages convey resilience, presenting [Iran] as not only fighting back but winning,” The New York Times notes. Yikes!
The only problem is that none of this is true. It’s all artificial intelligence-generated nonsense. Upon closer examination, the “missile” that allegedly hit an American vessel is actually a Soyuz rocket, which Russia uses to shuttle cargo and cosmonauts to space, and the ship appears to be a World War II-era Japanese warship. To date, the Iranians have not laid waste to any American ship—let alone fleet—nor have they captured any special forces.
Despite these obvious realities, the posts—and many others like them—have made their way across the internet and racked up an untold number of impressions. Meanwhile, this disinformation is confusing unsuspecting social media users. The motivation behind these posts isn’t entirely clear. They could stem from Iranian operatives who hope to further sour the American people on the war, content creators pursuing revenue from viral posts, or both.
Regardless of their source and intent, “The scale is truly alarming and this war has made it impossible to ignore now,” according to the BBC. “What used to require professional video production can now be done in minutes with AI tools.”
Given the proliferation of AI-generated disinformation, expect the government to want to clamp down on social media disinformation and on the use of artificial intelligence. Such a crackdown could constrain free speech and make the government the arbiter of truth, a scary proposition. However, officials should tread carefully, especially when there are effective, less heavy-handed ways of policing online speech.
Legacy media is one useful filter for checking the veracity of posts. While the national media has long been the subject of public skepticism, reporters are meant to serve a public good—to cut through disinformation and inform readers of the facts. Some outlets are clearly better than others, but if the media isn’t reporting on a major development, like the Iranians sinking an entire American fleet, then it probably isn’t true.
There are also in-app tools to check for disinformation. Take the platform X, for instance, which is where I saw all the aforementioned media. While it isn’t immune from fair criticism, it has worked to counter some disinformation. “The platform X announced this week it will temporarily suspend creators from its monetization program if they post AI-generated videos of armed conflict without a label,” the BBC reported. So if revenue is the motive, X has an answer.
The AI chatbot Grok is also integrated into X, and anyone who questions a post can simply reply to it and ask @Grok whether it is legitimate. Is it perfectly accurate? Far from it; there have been some notable failures. But in my anecdotal experience, it has been an important weapon against disinformation.
Perhaps even more important are community notes. They “aim to create a better informed world by empowering people on X to collaboratively add context to potentially misleading posts. Contributors can leave notes on any post and if enough contributors from different points of view rate that note as helpful, the note will be publicly shown on a post,” according to X. In this way, informed users can more effectively expose falsehoods for all to see. Again, this isn’t a silver bullet, but it is a smart tool for users.
Unfortunately, disinformation will never go away, and as AI continues to improve, it will become more difficult to separate truth from fabrication just by viewing viral posts. While this will certainly tempt lawmakers to regulate social media and AI into oblivion, the government is not the answer, and there is no panacea. Rather, there is an array of tools already available to undermine disinformation.
So the next time your great-uncle Jed shares a viral video of the Ayatollah body-slamming Trump in the Roman Colosseum, pause and take a deep breath. You are well-equipped to deal with this. If you feel the need, verify it with legacy media, AI chatbots and/or community notes. It’s so easy that the government doesn’t even need to get involved.