Anonymous ID: df9894 Sept. 18, 2018, 4:25 p.m. No.3078811   >>8884

JUST REMEMBERED

Anons, there is a way to blow neural nets' minds. Image recognition systems are highly vulnerable to a technical attack that involves embedding a crafted adversarial pattern in the image. As far as anon knows there is no reliable defense against it.
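A minimal sketch of the class of attack anon means (the fast gradient sign method, FGSM). A toy 3-"pixel" image and a linear classifier stand in for a real net; all numbers and names here are illustrative, not from any article:

```python
import numpy as np

# Toy "model": a linear classifier standing in for an image recognizer.
w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
x = np.array([0.5, -0.1, 0.2])   # hypothetical "image", true class = 1

def predict(img):
    # Class 1 if the score is positive, else class 0.
    return 1 if w @ img > 0 else 0

# Gradient of the logistic loss w.r.t. the INPUT (not the weights),
# for true label 1 -- this is what FGSM follows.
sigma = 1 / (1 + np.exp(-(w @ x)))
grad = (sigma - 1) * w

eps = 0.3                          # perturbation budget: max change per pixel
x_adv = x + eps * np.sign(grad)    # tiny step that maximally increases the loss

print(predict(x), predict(x_adv))  # prediction flips from 1 to 0
```

The point is that `x_adv` differs from `x` by at most 0.3 in any pixel, yet the classification flips. Real attacks do the same thing against deep nets, using the net's own gradients.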

The technical article anon read two months ago was concernfaggin about this. Should look into it for the FB situation.

No doubt Overwatch knows what I mean.

I'll have to go search for the article but will find it eventually. Possibly CM or helperanon could save us some time with a link, IDK.

 

Remember thinking, "That's easy, why publish it when we know ISIS can read?" Meh. Gone looking.

Anonymous ID: df9894 Sept. 18, 2018, 4:29 p.m. No.3078884   >>8942

>>3078811

This is not the specific article but it does describe the tactic.

We can put our images anywhere. Thanks, Wired.

 

https://www.wired.com/story/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter/

 


Anonymous ID: df9894 Sept. 18, 2018, 4:32 p.m. No.3078942

>>3078884

A method to blind image recognition:

 

From the Caltech of the east coast, MIT:

 

https://www.labsix.org/physical-objects-that-fool-neural-nets/