>>577
>>580
Sorry for the double post, but this just reminded me of something. Necker cubes - an interesting phenomenon in human vision, no? Multistable perception, where a single input stimulus can be interpreted in multiple ways. Now, let's pretend one of those interpretations was effectively beaten into you - that'd be one reality tunnel (to use the Leary term). Minds capable of flicking to the other, equally valid percept can effectively jump reality tunnels.
See this for some science on it: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2749649/
Now here's my question - can any ANN do the same? I.e., can I train something like (say) a CNN to do the same thing? Just tried a bit of an experiment (rough sketch below) - once I've trained it to perceive the stimulus one way, it NEVER selects the other mode. Might experiment with similar known perception hacks and known human visual glitches.
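For anyone who wants to poke at it, here's roughly the kind of setup I mean - a minimal sketch, assuming PyTorch, where the ambiguous stimulus is a crude stand-in wireframe (the `ambiguous_stimulus` generator and `TinyCNN` are just placeholder names, not anything from the paper). The net is trained with the same ambiguous image always labelled as interpretation 0, then probed to see whether it ever flips to interpretation 1:

```python
# Sketch: train a tiny CNN where the SAME ambiguous stimulus is always
# labelled as interpretation 0, then probe whether the net ever flips to
# interpretation 1 at test time. The image generator is a stand-in for
# real Necker cube renderings.
import torch
import torch.nn as nn

torch.manual_seed(0)

def ambiguous_stimulus(batch, size=32):
    # Stand-in for a Necker cube: a fixed line drawing plus pixel noise,
    # so every sample is "the same" stimulus up to sensor noise.
    base = torch.zeros(1, size, size)
    base[0, 8:24, 8] = 1.0   # crude wireframe edges
    base[0, 8:24, 23] = 1.0
    base[0, 8, 8:24] = 1.0
    base[0, 23, 8:24] = 1.0
    return base.expand(batch, 1, size, size) + 0.05 * torch.randn(batch, 1, size, size)

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 2),  # 2 "interpretations"
        )

    def forward(self, x):
        return self.net(x)

model = TinyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: the ambiguous stimulus is always labelled interpretation 0
# (the perception that gets "beaten into" the net).
for step in range(200):
    x = ambiguous_stimulus(32)
    y = torch.zeros(32, dtype=torch.long)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Probe: over many noisy presentations, does interpretation 1 ever win?
with torch.no_grad():
    probs = torch.softmax(model(ambiguous_stimulus(1000)), dim=1)
    flips = (probs.argmax(dim=1) == 1).sum().item()
    print(f"flips to the other interpretation: {flips} / 1000")
    print(f"mean confidence in trained mode: {probs[:, 0].mean():.3f}")
```

In my runs the flip count stays at zero and confidence saturates, which is the "never selects the other mode" behaviour I was describing - the net collapses onto one reality tunnel and stays there.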
So: possible stuff to research looking for the big exploit - can two human actors pass information by agreeing to shift to a specific reality tunnel that's opaque to the NN (even given 100% of the same perceptual information)? Do any interpretations (reality tunnels) exist that a NN can't EVER be trained for? Can we use that to effectively encode your "New Emotion" or communicate it? After all, what are emotions if not responses to modes of perception? Just wondering aloud, brainstorming it out.