In March 2021, Tom Cruise appeared to join TikTok. To casual viewers scrolling through their For You page, it looked as though the Hollywood actor was using the app to post clips of himself golfing and performing magic tricks. This wasn’t outside the realm of possibility, as other celebrities such as Billie Eilish and Doja Cat had already signed up for the video sharing service.
Yet the man in the popular TikToks wasn’t actually Cruise. Rather, it was Tom Cruise impersonator Miles Fisher, with Cruise’s face superimposed over his own via deepfake technology. The account was upfront about the manipulation, and was even called DeepTomCruise. Even so, the videos were so convincing that many wondered whether it was Cruise pretending to be a deepfake rather than vice versa.
“I’d like to show people the technical possibilities of these things,” DeepTomCruise’s visual effects artist Christopher Ume told The Guardian. “I just want to show them what’s possible in a few years.”
Deepfake technology uses AI to manipulate media, creating images so realistic that the average observer can’t tell them from the genuine article. Depictions of people aren’t the only images deepfake technology can falsify, however. Geographical data such as satellite pictures can be altered using the same techniques to give a false account of a landscape. And just like deepfaked celebrities and politicians, deepfaked geography can have significant implications for national security.
“[W]e cannot ignore the appearance, or underestimate the development, of deepfake in satellite images or other types of geospatial data,” wrote researchers from the University of Washington, Seattle, US, in a recent paper published in Cartography and Geographic Information Science.
Focusing on increasing geographic data literacy, the researchers explored new ways of detecting deepfaked geography. This included developing software that can identify manipulated images through techniques such as colour distribution analysis.
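To make the idea concrete, here is a minimal sketch in Python of what a colour-distribution check could look like. It compares a candidate image’s colour histogram against trusted reference imagery using a chi-squared distance; the histogram binning, the distance measure and the threshold are illustrative assumptions, not the researchers’ actual software.

```python
import numpy as np

def colour_histogram(image, bins=32):
    """Normalised per-channel colour histogram of an RGB image (H x W x 3, values 0-255)."""
    hists = []
    for channel in range(3):
        counts, _ = np.histogram(image[..., channel], bins=bins, range=(0, 255))
        hists.append(counts / counts.sum())
    return np.concatenate(hists)

def chi_squared_distance(p, q, eps=1e-10):
    """Chi-squared distance between two histograms; larger means more dissimilar."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def looks_manipulated(candidate, references, threshold=0.25):
    """Flag a candidate whose colour distribution sits far from every trusted
    reference image of comparable terrain. The threshold is a hypothetical
    cut-off that would need tuning against real data."""
    cand = colour_histogram(candidate)
    distances = [chi_squared_distance(cand, colour_histogram(ref)) for ref in references]
    return min(distances) > threshold
```

In practice a single global histogram is a blunt instrument; real detectors typically combine many statistical cues, such as spatial texture and frequency-domain features, but the principle is the same: manipulated imagery tends to leave fingerprints that genuine imagery lacks.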
Deepfake geography is a relatively new technology, and thus still fairly uncommon. However, lead author Bo Zhao believes it’s important to develop the skills to identify such manipulations now, so we aren’t taken in when fakes become more prevalent.
“The main aim is to demystify the function of absolute reliability of satellite images and to raise public awareness of the potential influence of deepfake geography,” Zhao says.
Storm in a security teacup?
According to the US National Geospatial-Intelligence Agency, AI-manipulated satellite images can be a “meaningful” threat to national security. The NGA is the country’s lead federal agency for geospatial intelligence, providing geographic data to the US government, military, and first responders.
“The intelligence community is charged with providing a timely, objective and accurate representation of global events,” an NGA spokesperson tells Cosmos Weekly. “Anything that creates confusion or undermines trust in the information provided to national security leaders poses a threat to our decision advantage.”
Such confusion could have significant real-world impacts. Speaking at the 2019 Genius Machines summit, Todd Myers of the NGA presented the hypothetical scenario of false geographic information thwarting a military operation.
“So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there,” said Myers. “Then there’s a big surprise waiting for you.”
Even so, the likelihood of such incidents actually occurring is relatively small. False geographic information existed long before computer manipulation arrived on the scene, from fake maps created to mislead enemies to non-existent “paper towns” drawn to protect cartographers’ copyright. Deepfake geography is simply another form of misinformation that organisations such as the NGA must navigate and counter.
Sanjeev Kumar Srivastava, from the University of the Sunshine Coast, says that any risk deepfaked geography might pose to national security is significantly reduced by such countermeasures. A senior lecturer in geospatial analysis, Srivastava notes that government agencies and large private organisations gather geographic information from multiple sources; comparing these against each other would quickly reveal any discrepancies, thwarting attempts at deepfaked deception.
“Altogether, we are talking about petabytes of digital images,” says Srivastava. “One can modify a few images for a limited audience through modified products that can be placed on certain media, but modifying images from all sources will be almost impossible. Similarly, a country can modify satellite data owned by them – but this information can be cross-checked from other sources.”
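In spirit, that cross-checking can be reduced to a simple computation: align images of the same area from several providers and look for places where one source disagrees with the rest. The Python sketch below shows the idea; the per-pixel median consensus and the tolerance value are illustrative assumptions rather than any agency’s actual workflow.

```python
import numpy as np

def disagreement_mask(images, tolerance=20.0):
    """Given co-registered greyscale scenes of the same area from several
    providers (each an H x W array with values 0-255), mark pixels where a
    source strays from the per-pixel median of all sources. A tampered
    image should light up against the consensus."""
    stack = np.stack([img.astype(float) for img in images])  # shape: S x H x W
    consensus = np.median(stack, axis=0)                     # per-pixel consensus value
    return np.abs(stack - consensus) > tolerance             # boolean mask per source
```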
The NGA spokesperson affirms Srivastava’s point. “[Geospatial intelligence] practitioners have resources at their disposal such as historical information, an increasing diversity of information sources, and the confidence of a robust tradecraft to verify and validate our understanding of the Earth. The intelligence community is also very cognisant of the rapid evolution in technology and has made strong commitments and investments to keep pace with this ever-changing landscape.”
The NGA also notes that its experts analyse geographic data and use tailored technology to determine whether a satellite image is real or fake. These assessments include the use of systems such as those described in Zhao’s paper.
“What AI can create, AI can potentially detect,” the NGA spokesperson says. “We are also watching with interest developments in the commercial world targeting deepfake detection and advanced techniques for validating digital media.”
Public perception
A more complicated issue raised by deepfake geography is the impact false maps may have on public perception of military operations. For the NGA, this is a much harder problem to solve.
“Manipulated satellite images also provide opportunities for adversaries to create or influence public narratives to justify aggressive actions and put the US and our allies on the defensive, forcing us to refute false or misleading narratives,” says the spokesperson.
Members of the public are less likely to compare multiple data points when looking at satellite images, and so are more likely to be misled. Further, disproving incorrect conclusions can be difficult when people are predisposed to trust satellite images, and in some cases to distrust the government.
“For the general public, if they are looking at images that are modified by someone then this could be misleading,” says Srivastava. “For example, Google Earth uses satellite images from multiple sources and they potentially can do modifications.”
Zhao and his colleagues’ work using algorithms to detect fake satellite images may help prevent people from being misled. Still, Srivastava believes the best method of safeguarding yourself is to be critical, and cross-check your information with a variety of sources.
Alert but not alarmed
Zhao stresses that while deepfake technology can present a threat if misused, it can also provide substantial benefits.
“[D]eepfakes of satellite imagery can be misleading or even threatening to national security, but can also be very useful, such as in predicting land-use change scenarios, reconstructing and preserving historical scenes, and automated making of reference and topographic maps,” the researchers wrote.
Srivastava further notes that we already routinely manipulate geographic images for beneficial purposes, stitching together multiple satellite pictures to create clear, unobstructed images of an area.
“For example, Landsat 7 was launched in 1999 but developed a fault in 2003 that introduced blank stripes in images that keep changing their positions,” he says. “Multiple images are used to get rid of such stripes, which is an example of image manipulation for good.”
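A simplified version of that gap-filling is easy to express in code. The Python sketch below assumes the missing stripes are encoded as NaN and fills them with pixels from a second scene of the same area; an operational workflow would also correct for differences in lighting and atmosphere between the two acquisitions.

```python
import numpy as np

def fill_stripes(primary, secondary):
    """Fill missing-data stripes in one satellite scene (encoded here as NaN)
    with pixels from a second, co-registered scene of the same area taken on
    another date. Radiometric matching between the scenes is omitted."""
    filled = primary.copy()
    gaps = np.isnan(primary)
    filled[gaps] = secondary[gaps]
    return filled
```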
Though its power to distort reality may appear dangerous, our response to deepfake technology doesn’t have to be blanket distrust or fear. In fact, deepfakes can be a useful tool for education and creation, helping us engage with history and visualise possible futures. In 2019, an ad campaign for US nonprofit Malaria No More used deepfake technology to make footballer David Beckham speak nine different languages, reaching a global audience.
As such, rather than recoiling from deepfakes, it would be far more beneficial to focus on developing data literacy, familiarising ourselves with the technology, and ensuring it’s deployed responsibly.
“I do not think the social implication of AI-created satellite images is just a threat,” says Zhao. “It can be used for good purposes too. It is really a matter of how people plan to use it.”