In a society increasingly intertwined with digital technology, artificial intelligence (AI) has demonstrated its potential not only to innovate but also to disrupt. The recent case of a fake, AI-generated image of an explosion at the Pentagon, which briefly went viral, underscores this point. It served as a jarring wake-up call about the dangers of AI-generated misinformation and its potential to impact not just social discourse but even financial markets.

On Monday morning, an image began circulating on social media depicting an explosion on a grass lawn outside the Pentagon in Arlington, Virginia. The piece of misinformation spread across fringe Twitter pages just after 10 a.m.; the original post has since been removed. The Department of Defense confirmed the image was fake, while the Arlington Fire and EMS Department tweeted that there is “NO explosion or incident” and no “immediate danger or hazards to the public”【7†source】【8†source】.

The source of the image has not been determined, but it is believed to be AI-generated, aligning with a recent trend of remarkably lifelike “deepfakes”. These deepfakes have included a series depicting Pope Francis wearing a Balenciaga coat, AI-generated renderings of famous artwork, and realistic viral images of former President Donald Trump resisting authorities during a fake arrest【9†source】.

The AI-generated photo was shared by various social media accounts, including a Russian state media outlet, leading to mass confusion and a brief selloff in the US stock market. The Dow Jones Industrial Average dipped around 50 points before rebounding once the image was exposed as a hoax【17†source】【19†source】【20†source】.

The rise of artificial intelligence has prompted numerous warnings from government officials and major U.S. companies about the potential threats posed by unchecked AI. The Biden Administration recently unveiled a $140 million plan to create seven national research institutes to evaluate AI technology and drive responsible innovation. The plan highlights potential cybersecurity, biosecurity, and safety risks that could accompany this technology. Industry leaders, including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, have also called for caution in AI development, warning of an “out of control race” to create increasingly advanced technology【10†source】.

AI-generated “deepfakes” have spread rapidly on social media since the release of a series of powerful AI technologies, including OpenAI’s ChatGPT. These technologies can write poetry and college-level essays, and have even tricked researchers into believing AI-generated science papers are real. Major public school systems and companies have expressed concerns that they could be used for academic cheating or to leak sensitive internal information【11†source】.

Critics argue that advanced AI systems, such as those that can generate convincing deepfakes, provide new tools for bad actors to spread misinformation and sow chaos online. The recent Pentagon image hoax lends credence to these concerns. In March, Elon Musk and more than 1,000 experts called for a six-month pause on the development of advanced AI until proper safety guidelines were in place, citing risks including the spread of “propaganda and untruth,” job losses, and the potential for AI to “outsmart, obsolete, and replace us”【23†source】【24†source】.

Dr. Geoffrey Hinton, known as the “Godfather of AI”, recently quit his job at Google over concerns about the potential risks of AI. He warned that AI will become more dangerous in the future, with “bad actors” potentially exploiting advanced systems for harmful purposes that will be hard to prevent【25†source】【26†source】.

The fake Pentagon explosion image serves as a stark reminder of AI’s power to disrupt society and highlights the need for continued vigilance, research, and responsible innovation in the field of AI.

  • Forbes【7†source】【8†source】【9†source】【10†source】【11†source】
  • New York Post【17†source】【18†source】【19†source】【20†source】【21†source】【22†source】【23†source】【24†source】【25†source】【26†source】