Anonymous ID: 7581d1 March 31, 2019, 10:31 a.m. No.5992425   🗄️.is 🔗kun

An anon posted an article on Project Raven a few months ago, about ex-NSA types assisting the UAE and blurring the legal line by targeting Americans overseas.

 

Below is a link to a short article with similar information.

 

https://smallwarsjournal.com/jrnl/art/advent-digital-mercenaries

 

"Again, as the situational complexity surrounding this novel trend of employing foreign hired intelligence personnel is rather high. Despite the existence of well-developed American legal corpus dealing with export and transfer of military goods and services abroad, the incidents involving licensing of cyber know-how and capabilities in benefit of foreign intelligence service suggest that there might be gaps or at least a room for improvement in the existing legal base. Same seems to apply for the other respective Western governments that deal with such cyber outfits that operate in foreign environments. Indeed, the complexity and context vary widely, as the burgeoning private sector demand for specific skills and services pertaining to intrusion and influence operations is clearly on a rise. Such conclusion could be inferred by the cases of the now-defunct third party intelligence operators, such as Cambridge Analytica and PSY Group that have employed certain amount of cyber and traditional tradecraft in benefit to their private clients with significant amount of loud public controversy."

Anonymous ID: 7581d1 March 31, 2019, 10:54 a.m. No.5992695   🗄️.is 🔗kun   >>2889 >>3021

China is the acknowledged leader in using an emerging technique called generative adversarial networks to trick computers into seeing objects in landscapes or in satellite images that aren’t there, says Todd Myers, automation lead and Chief Information Officer in the Office of the Director of Technology at the National Geospatial-Intelligence Agency.

 

“The Chinese are well ahead of us. This is not classified info,” Myers said Thursday at the second annual Genius Machines summit, hosted by Defense One and Nextgov. “The Chinese have already designed; they’re already doing it right now, using GANs—which are generative adversarial networks—to manipulate scenes and pixels to create things for nefarious reasons.”

 

For example, Myers said, an adversary might fool your computer-assisted imagery analysts into reporting that a bridge crosses an important river at a given point.

 

“So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” he said.

 

First described in 2014, GANs represent a big evolution in the way neural networks learn to see and recognize objects, and even to distinguish truth from fiction.

 

Say you ask a conventional neural network to figure out which objects appear in satellite photos. The network will break the image into multiple pieces, or pixel clusters, calculate how those pieces relate to one another, and then make a determination about the final product, or about whether the photos are real or doctored, all based on the experience of looking at lots of satellite photos. A GAN pits two such networks against each other: a generator fabricates imagery while a discriminator tries to tell it from the real thing, so every round of the competition makes the forgeries more convincing.
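That adversarial game can be sketched in a few lines of numpy. This is a toy illustration only: a 1-D "scene statistic" stands in for satellite pixels, the generator is a linear map, the discriminator is a logistic score, and every number and name below is made up for the sketch rather than taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: a 1-D stand-in for a scene statistic, centred on 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator g(z) = a*z + b maps noise to fakes; discriminator D(x) =
# sigmoid(w*x + c) scores how "real" a sample looks. All toy numbers.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, steps, batch = 0.05, 3000, 64

for _ in range(steps):
    # Discriminator step (gradient ascent): push D(real) up, D(fake) down.
    xr, z = real_batch(batch), rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step (non-saturating loss): push D(fake) up by moving
    # the fakes toward whatever the discriminator currently calls real.
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 2000) + b
print(np.mean(fake))  # drifts from 0 toward the real mean of 4.0
```

The same dynamic, scaled up to deep convolutional networks and real pixels, is what lets a GAN paint a plausible bridge into a scene: the generator only stops improving when the discriminator can no longer tell its output from a genuine collect.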

 

Myers worries that as the world comes to rely more and more on open-source images to understand the physical terrain, just a handful of expertly manipulated data sets entered into the open-source image supply line could create havoc. “Forget about the [Department of Defense] and the [intelligence community]. Imagine Google Maps being infiltrated with that, purposefully? And imagine five years from now when the Tesla [self-driving] semis are out there routing stuff?” he said.

 

When it comes to deep fake videos of people, biometric indicators like pulse and speech can defeat the fake effect. But faked landscape isn’t vulnerable to the same techniques.

 

Even if you can defeat GANs, many image-recognition systems can be fooled by small visual changes to the physical objects in the environment themselves: stickers added to stop signs, barely noticeable to human drivers, can throw off machine vision systems, as DARPA program manager Hava Siegelmann has demonstrated.
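The stop-sticker trick rests on simple gradient arithmetic: nudge every pixel a tiny amount in the direction that most hurts the model's score. A hedged sketch of that fast-gradient-style attack, using a toy logistic "detector" with invented weights (not any real stop-sign or satellite model), where a per-pixel change of 0.02 flips a confident detection:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy stand-in "vision model": a logistic classifier over 1024 pixels.
# (A real detector would be a deep network; these weights are illustrative.)
n = 1024
w = rng.normal(0, 1, n)
predict = lambda x: sigmoid(x @ w)

# Build a scene the model confidently labels positive: shift a random
# image along w so its logit is exactly 2 (score about 0.88).
x = rng.normal(0, 1, n)
x = x + (2.0 - x @ w) / (w @ w) * w

# FGSM-style perturbation: step each pixel by eps against the gradient
# of the score; for a linear model that direction is simply sign(w).
eps = 0.02
x_adv = x - eps * np.sign(w)

print(predict(x))      # high: confidently "there"
print(predict(x_adv))  # near zero: same scene, confidently "not there"
```

The per-pixel change (0.02) is tiny next to pixel values of order 1, yet the logit drops by eps times the sum of |w|, which grows with the number of pixels; that accumulation across thousands of inputs is why imperceptible stickers can defeat machine vision while human drivers notice nothing.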

 

Myers says the military and intelligence community can defeat GANs, but it's time-consuming and costly, requiring multiple, duplicate collections of satellite images and other pieces of corroborating evidence. "For every collect, you have to have a duplicate collect of what occurred from different sources," he said. "Otherwise, you're trusting the one source."
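Myers's duplicate-collect defense amounts to pairwise corroboration: trust a scene only if independent sources agree on it. A rough sketch of the idea, in which the source names, the tolerance, and the mean-absolute-difference test are all illustrative assumptions rather than NGA practice:

```python
import numpy as np
from itertools import combinations

def corroborate(collects, tol=0.05):
    """Cross-check duplicate collects of the same scene.

    collects: dict of source name -> image array (same shape).
    Returns (trusted, suspects): trusted is True only when every pair of
    sources agrees to within mean absolute difference `tol`; suspects
    lists the disagreeing pairs."""
    suspects = []
    for (name_a, img_a), (name_b, img_b) in combinations(collects.items(), 2):
        if np.mean(np.abs(img_a - img_b)) > tol:
            suspects.append((name_a, name_b))
    return (not suspects, suspects)

rng = np.random.default_rng(2)
scene = rng.random((8, 8))                   # ground-truth scene
noise = lambda: rng.normal(0, 0.01, (8, 8))  # per-source sensor noise

honest = {"sat_a": scene + noise(), "sat_b": scene + noise()}
doctored = scene.copy()
doctored[2:5, 2:5] += 0.8                    # a "bridge" painted in
tampered = dict(honest, open_src=doctored)

print(corroborate(honest)[0], corroborate(tampered)[0])  # True False
```

Honest sources differ only by sensor noise and pass; the doctored open-source image disagrees with both satellites, so every pair it appears in is flagged, which is exactly the "otherwise, you're trusting the one source" point, paid for by collecting everything at least twice.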

 

But when it comes to protecting open-source data and images, used by everybody from news organizations to citizens to human rights groups to hedge funds to make decisions about what is real and what isn’t, the question of how to protect it is frighteningly open. The gap between the “truth” that the government can access and the “truth” that the public can access may soon become unbridgeable, which would further erode the public credibility of the national security community and the functioning of democratic institutions.

 

Andrew Hallman, who heads the CIA’s Digital Directorate, framed the question in terms of epic conflict. “We are in an existential battle for truth in the digital domain,” Hallman said. “That’s, again, where the help of the private sector is important and these data providers. Because that’s frankly the digital conflict we’re in, in that battle space…This is one of my highest priorities.”

 

When asked if he felt the CIA had a firm grasp of the challenge of fake information in the open-source domain, Hallman said, “I think we are starting to. We are just starting to understand the magnitude of the problem.”

 

https://www.defenseone.com/technology/2019/03/next-phase-ai-deep-faking-whole-world-and-china-ahead/155944/?oref=d-topstory