You open social media to see a startling satellite image: a military base on fire. But is it real?
The rise of AI has made it easier than ever to generate fakes. Since the Cold War, images captured from space have served as a method of verification for media, governments, and the public. Now, technology has placed this once seemingly irrefutable source of truth under threat.
An AI-generated satellite image posted on social media is unlikely to spark a war on its own or dupe the military of a well-resourced country like the U.S., which can double-check any claims with its own fleet of satellites. But such images can nonetheless be potent tools for influencing public opinion in ways that undermine our information ecosystem.
There have been many cases of deepfaked satellite images this year alone.
In June, Ukraine’s Operation Spiderweb used drones to strike Russia’s prized long-range bombers. High-resolution photos of the aftermath spread rapidly across social media: multiple destroyed Russian bombers (and a transport plane) lying in scorched ruin. But the daring Ukrainian attack was also accompanied by fake satellite images that exaggerated its success; U.S. officials estimate that 10 Russian warplanes were actually destroyed.
Another case emerged later that month, following U.S. and Israeli strikes on facilities linked to Iran’s nuclear program. One fake image depicted a crowd gathered around a downed Israeli F-35 jet, while a deceptive video falsely claimed to show footage from an Iranian missile’s onboard sensors. Both suggested a more potent Iranian military response to the devastating strikes than Tehran was actually able to muster.
There were also fakes following the four-day India-Pakistan conflict in May. Both Indian and Pakistani users on social media shared fake satellite imagery suggesting that their respective militaries had inflicted more damage than they actually had.
With more than half the globe using social media, the reach of manipulated satellite images can be massive and their impact almost immediate. We have already seen previews of how an individual fake can impact the real world: when an image falsely depicted a fire near the Pentagon last year, for example, the stock market dipped until local authorities clarified it was a hoax.
And while those in the field have warned of the risks for years, today’s fakes are growing both more difficult to distinguish from reality and easier to make.
In years past, the models behind simple online tools could produce basic AI-generated satellite imagery. But they were limited, and the end products were blurry, zoomed-out photos. To make high-quality fakes today, all you need is free software and the ability to type a prompt guiding your AI of choice.
That is why the fight against fake satellite images should be a society-wide initiative. Governments and media outlets using imagery, alongside commercial providers, should help their audiences become attuned to indicators of deception.
In the media, outlets that rely on satellite images in their coverage should include or link to an explanation of how they verified them, a practice some already follow. Outlining how they match satellite imagery with on-the-ground details can help bolster reader trust in credible reporting.
For their part, commercial providers should, where possible, offer tools or teams that can confirm whether imagery claimed to come from them is genuine. Third-party software to detect AI-generated images exists, but it is imperfect and locked in an arms race against ever-improving models that can churn out hyper-realistic photos.
Government literature can also warn citizens about how malicious actors use deception. The Swedish government’s brochure, “In Case of Crisis or War,” describes how foreign powers may spread disinformation in times of conflict and offers advice on guarding against these efforts. Finland’s government has a guide with more detail on influence operations and tools for scrutinizing the photos and videos you see during times of crisis.
Other countries should follow suit. The U.S. Department of Defense’s Emergency Preparedness Guide devotes a few paragraphs to media awareness but falls short of describing the fakes that adversaries may create.
Clearly, misleading AI-generated content is just getting started, and satellite imagery will form a growing part of it. It’s time more of us took note of this type of mis- and disinformation.
