The Newest AI-Enabled Weapon: ‘Deep-Faking’ Photos of the Earth
Step 1: Use AI to make undetectable changes to outdoor photos. Step 2: Release them into the open-source world and enjoy the chaos. Worries about deep fakes — machine-manipulated videos of celebrities and world leaders purportedly saying or doing things that they really didn’t — are quaint compared to a new threat: doctored images of the Earth itself.
China is the acknowledged leader in using an emerging technique called generative adversarial networks to trick computers into seeing objects in landscapes or in satellite images that aren’t there, says Todd Myers, automation lead for the CIO-Technology Directorate at the National Geospatial-Intelligence Agency. “The Chinese are well ahead of us. This is not classified info,” Myers said Thursday at the second annual Genius Machines summit, hosted by Defense One and Nextgov. “The Chinese have already designed; they’re already doing it right now, using GANs—which are generative adversarial networks—to manipulate scenes and pixels to create things for nefarious reasons.”
For example, Myers said, an adversary might fool your computer-assisted imagery analysts into reporting that a bridge crosses an important river at a given point. “So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” he said.
First described in 2014, GANs represent a big evolution in the way neural networks learn to see and recognize objects, and even to distinguish truth from fiction. Say you ask a conventional neural network to figure out which objects appear in satellite photos. The network breaks the image into multiple pieces, or pixel clusters, calculates how those pieces relate to one another, and then makes a determination about what it is looking at, or whether the photos are real or doctored. It’s all based on the experience of looking at lots of satellite photos.
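To make that classifier framing concrete, here is a minimal sketch in PyTorch of a conventional convolutional network that scores small satellite tiles as real or doctored. It is an assumed illustration, not anything described in the article: the name SatTileClassifier, the 64x64 tile size, and the layer sizes are all hypothetical.

```python
import torch
import torch.nn as nn

# Minimal sketch of a conventional classifier for satellite tiles.
# Convolutions look at local pixel clusters, deeper layers relate those
# pieces to one another, and a final layer emits a single
# "real vs. doctored" score. All names and sizes are hypothetical.
class SatTileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # local pixel clusters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # relations between clusters
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 input tiles
        )

    def forward(self, x):
        return self.head(self.features(x))  # raw logit: higher means "looks real"

if __name__ == "__main__":
    model = SatTileClassifier()
    tile = torch.randn(1, 3, 64, 64)   # stand-in for one 64x64 RGB satellite tile
    print(torch.sigmoid(model(tile)))  # probability the tile is "real"
```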
GANs reverse that process by pitting two networks against one another — hence the word “adversarial.” A conventional network might say, “The presence of x, y, and z in these pixel clusters means this is a picture of a cat.” But a GAN might say, “This is a picture of a cat, so x, y, and z must be present. What are x, y, and z, and how do they relate?” The adversarial network learns how to construct, or generate, x, y, and z in a way that convinces the first neural network, or the discriminator, that something is there when, perhaps, it is not. Many scholars have found GANs useful for spotting objects and sorting valid images from fake ones. In 2017, Chinese scholars used GANs to identify roads, bridges, and other features in satellite photos.
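A bare-bones sketch of that generator-versus-discriminator loop, in PyTorch, might look like the following. It trains on toy vectors rather than real satellite imagery; the shapes, learning rates, and training schedule are illustrative assumptions, not the systems Myers describes.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to produce 64-dimensional "image" vectors
# that the discriminator cannot tell apart from samples drawn from the
# real distribution. Real imagery GANs use convolutional networks; this
# keeps the adversarial structure visible with minimal code.
DIM, NOISE, BATCH = 64, 16, 32

generator = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, DIM))
discriminator = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=BATCH):
    # Stand-in for genuine data (e.g. features from authentic satellite tiles).
    return torch.randn(n, DIM) * 0.5 + 2.0

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(BATCH, NOISE)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake), torch.zeros(BATCH, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to produce samples the discriminator labels "real".
    fake = generator(torch.randn(BATCH, NOISE))
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

sample = generator(torch.randn(1, NOISE))
print("discriminator's belief that a generated sample is real:",
      torch.sigmoid(discriminator(sample)).item())
```

The same structure that lets a GAN learn what real roads and bridges look like is what makes it good at generating convincing fake ones: the generator only stops improving when the discriminator can no longer tell the difference.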
The concern, as AI technologists told Quartz last year, is that the same technique that can discern real bridges from fake ones can also help create fake bridges that AI can’t tell from the real thing. Myers worries that as the world comes to rely more and more on open-source images to understand the physical terrain, just a handful of expertly manipulated data sets entered into the open-source image supply line could create havoc. “Forget about the [Department of Defense] and the [intelligence community]. Imagine Google Maps being infiltrated with that, purposefully? And imagine five years from now when the Tesla [self-driving] semis are out there routing stuff?” he said.
When it comes to deep fake videos of people, biometric indicators like pulse and speech can expose the fake. But a faked landscape isn’t vulnerable to the same techniques. And even if GANs can be defeated, many image-recognition systems can be fooled by small visual changes to the physical objects in the environment themselves, such as stickers added to stop signs that are barely noticeable to human drivers but that can throw off machine-vision systems, as DARPA program manager Hava Siegelmann has demonstrated.
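The sticker attack Siegelmann points to happens in the physical world, but the underlying idea can be sketched digitally with a fast-gradient-sign perturbation: nudge each pixel a barely visible amount in the direction that most confuses the classifier. Everything below, including the untrained toy model and the epsilon value, is an assumed illustration, not her demonstration.

```python
import torch
import torch.nn as nn

# Digital analogue of a small adversarial change (fast gradient sign method):
# shift every pixel slightly in the direction that most increases the
# classifier's loss. A toy linear classifier stands in for a real detector;
# against a trained model, even this tiny shift often flips the label.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign" image
true_label = torch.tensor([0])

loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.03  # barely visible per-pixel change
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```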
https://www.defenseone.com/technology/2019/03/next-phase-ai-deep-faking-whole-world-and-china-ahead/155944/