Anonymous ID: ffc63e Nov. 14, 2018, 5:08 a.m. No.3897939   >>7951 >>8256

Repost

 

HOW TO BREAK IMAGE CLASSIFIERS

 

simple, effective, simple, effective, enchanted anthropoids unchained, simple, effective, actually matters, disenchantment, effective, simple sabotage convolutional neural NPC networks, cnn, simple effective …beep, beep, beep

 

We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.
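
For anons who want to see what that actually looks like: below is a minimal sketch of the idea in the abstract, not the authors' released code. It assumes PyTorch and torchvision (>= 0.13), a pretrained ResNet-50 standing in for "any classifier", and a DataLoader named loader yielding images scaled to [0, 1]; the target class (859, ImageNet "toaster") and the patch size are illustrative choices, not from the post. The patch is trained by gradient ascent on the target-class log-probability while being pasted at random positions and rotations, which is what makes it universal (works on any scene) and robust (survives transformations).

import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from torchvision.models import resnet50

MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)  # ImageNet stats
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def apply_patch(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Paste the patch into each image at a random position and rotation,
    so the optimized patch stays effective under those transformations."""
    out = images.clone()
    _, _, h, w = images.shape
    for i in range(images.shape[0]):
        angle = float(torch.empty(1).uniform_(-45, 45))
        p = TF.rotate(patch, angle)                # random rotation
        ph, pw = p.shape[-2:]
        y = int(torch.randint(0, h - ph, (1,)))    # random location
        x = int(torch.randint(0, w - pw, (1,)))
        out[i, :, y:y + ph, x:x + pw] = p          # gradient flows into patch
    return out

model = resnet50(weights="IMAGENET1K_V1").eval()
for param in model.parameters():
    param.requires_grad_(False)                    # only the patch is trained

target = 859                                       # ImageNet class "toaster"
patch = torch.rand(3, 50, 50, requires_grad=True)  # illustrative patch size
opt = torch.optim.Adam([patch], lr=0.01)

for images, _ in loader:                           # loader: assumed DataLoader
    patched = apply_patch(images, patch)
    logits = model((patched - MEAN) / STD)         # classifier wants normalized input
    # Gradient ascent on the target-class log-probability: the patch learns
    # to dominate the classifier's output regardless of the rest of the scene.
    loss = -F.log_softmax(logits, dim=1)[:, target].mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)                     # keep the patch printable

After training, the patch tensor can be saved as an image and physically printed; the clamp to [0, 1] is what keeps it representable as a real printed sticker.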

 

Adversarial examples have been shown to generalize to the real world.

 

Last line is in red because it's important.