ID: a2109e Sept. 11, 2020, 6:06 a.m. No.10602217

Depending on the image recognition system, it can be easy to evade detection. For a neural net trained to recognize a specific image, it's possible to craft an input that destroys the network's ability to recognize the selected image/text.

 

Adversarial examples are inputs to which a small amount of artificially crafted noise has been added so that the model misrecognizes the sample. An example image is shown in the article linked below.
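The post doesn't name a specific method, but one standard way to craft this kind of noise is the Fast Gradient Sign Method (FGSM). Below is a minimal PyTorch sketch, assuming a pretrained classifier `model`, a normalized input batch `image` in [0, 1], and true class indices `label` are already in scope (all hypothetical names):

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Track gradients with respect to the input pixels, not the weights.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the
    # loss; the result typically looks unchanged to a human but can flip
    # the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

Feeding the returned tensor back into `model` will often yield a different (wrong) class, even though the perturbation is imperceptible at small epsilon.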

 

https://medium.com/analytics-vidhya/introduction-of-adversarial-examples-improve-image-recognition-imagenet-sota-method-using-1fe981b303e#: