Anonymous ID: f9e6f5 July 20, 2018, 10:42 a.m. No.369   >>577 >>580

>>347

>Myth: No one knows whether the human brain makes similar mistakes.

 

>If our brains made the same kind of mistakes as machine learning models, then adversarial examples for machine learning models would be optical illusions for us,

 

Doesn't the second portion logically imply that optical illusions are the human-brain equivalent of adversarial examples? It's important to note the difference between human object recognition and machine-based object recognition, but I think it's plainly obvious that human object recognition is also prone to certain types of error.
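
For anons who haven't seen what "adversarial example" actually means in code, here's a minimal sketch of the fast gradient sign method (FGSM), one standard way such examples are generated for image classifiers. This assumes PyTorch/torchvision; the resnet18 model, the random stand-in "image," the label, and the epsilon value are placeholder assumptions for illustration only, not anything taken from the quoted article.

import torch
import torch.nn as nn
from torchvision import models

def fgsm_example(model, image, label, epsilon=0.01):
    # FGSM: nudge every pixel a small step in the direction that increases the loss.
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is tiny (epsilon per pixel), typically invisible to a human,
    # yet on real photos it is often enough to change the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

model = models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224)   # placeholder standing in for a real photo
label = torch.tensor([0])            # placeholder true-class label
adv = fgsm_example(model, image, label)
print(model(image).argmax(1).item(), model(adv).argmax(1).item())

The relevance to the quoted myth: a human looking at the perturbed image sees the same picture as before, while the model can see something entirely different. Whether optical illusions play the analogous role for us is exactly the question raised above.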

 

One personal anecdote that shows the weakness of human object recognition mechanisms (or at least mine) is my frequent inability to recognize a familiar object when looking for it in an unfamiliar place. Example: I need a specific piece of mail, so I start looking for it in my office mail pile. I'll spend 10 minutes looking all over the place only to give up, sit down at my desk, and realize it was sitting right there the whole time. My mind presumes a location for that object and combines the symbols of the envelope and the mail pile to form the expected image of what I'm looking for (that envelope buried in a pile of mail). However, if that presumption is wrong and the expected image is too strongly encoded in the mental process of searching, then I am unable to recognize the object independently of the expected image.

Basically, our brains (or at least mine) have developed highly efficient processes that let us identify objects based on logically deduced expectations of future events. This works great right up until the foundational "training data" upon which those expectations are based turns out to be fundamentally wrong. I also think this plays heavily into the trait of "adaptability": the deeper the logical level at which that foundational "training data" sits, the more flexible the process becomes.