ID: 62f3e4 Sept. 18, 2020, 2:15 a.m. No.10692280   >>2303 >>2317 >>2326 >>2381 >>2515 >>2652 >>2800 >>2819 >>2924

Attacking AI with adversarial examples.

 

Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines. In this post we’ll show how adversarial examples work across different mediums, and will discuss why securing systems against them can be difficult.
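As a rough formal sketch (my own framing, not from the post): an adversarial example is a clean input x plus a small perturbation delta, chosen so that the model's prediction flips while the change stays under some perceptibility budget epsilon:

    find delta with ||delta||_inf <= epsilon such that f(x + delta) != f(x)

In practice the attacker usually searches for delta using the model's own gradients, which is what makes these attacks cheap to run once you have access to the model.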

 


Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks across various domains. However, the existence of adversarial examples has raised concerns about applying deep learning to safety-critical applications.

 

As a result, we have witnessed increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs, and text. It is therefore necessary to provide a systematic and comprehensive overview of the main threats posed by these attacks and of how well the corresponding countermeasures hold up. In this survey, we review the state-of-the-art algorithms for generating adversarial examples and the countermeasures against them for three popular data types: images, graphs, and text.

 

https://arxiv.org/pdf/1909.08072.pdf

 

In 2014, a group of researchers at Google and NYU found that it was far too easy to fool ConvNets with an imperceptible but carefully constructed nudge to the input. Let’s look at an example. We start with an image of a panda, which our neural network correctly recognizes as a “panda” with 57.7% confidence. Add a little bit of carefully constructed noise and the same neural network now thinks this is an image of a gibbon with 99.3% confidence! This is, clearly, an optical illusion, but for the neural network. You and I can clearly tell that both images look like pandas; in fact, we can’t even tell that some noise has been added to the original image to construct the adversarial example on the right!

 

https://towardsdatascience.com/breaking-neural-networks-with-adversarial-attacks-f4290a9a45aa
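The panda-to-gibbon example above is the classic demonstration of the fast gradient sign method (FGSM): take the gradient of the loss with respect to the input pixels, keep only its sign, and nudge every pixel by a tiny epsilon in that direction. A minimal PyTorch sketch, assuming a pretrained classifier `model`, an input batch `x` scaled to [0, 1], and true labels `y` (placeholder names, not from the post):

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.007):
    # Work on a detached copy so we can take gradients w.r.t. the input pixels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel a tiny step in the direction that most increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the result a valid image and drop the gradient history.
    return x_adv.clamp(0, 1).detach()

# Usage (with the assumed model, x, and y):
#   x_adv = fgsm_attack(model, x, y)
#   model(x_adv).argmax(dim=1)  # often differs from model(x).argmax(dim=1)

The sign step is what keeps the perturbation invisible: every pixel moves by at most epsilon, yet the combined nudge is enough to push the image across the model's decision boundary.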