This susceptibility and the "AI" project are part of the same theory.
There's a model of the mind, as an intelligence system, that's been around for a while.
On the highest level, your behavior is the result of an evolutionary algorithm.
You copy behavior with modification, which can be accidental or an optimization that you predict will increase the behavior's benefit/cost ratio.
As you can imagine, most of what you do is modify behavior to make it better.
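Here is a minimal sketch, in Python, of that copy-with-modification loop, just to make the idea concrete. Every name and number in it (Behavior, mutate, evolve, the noise ranges) is made up for illustration; it only assumes the rule described above: copy a behavior, modify it, and keep the copy if its predicted benefit/cost ratio is higher.

```python
import random

class Behavior:
    def __init__(self, benefit, cost):
        self.benefit = benefit
        self.cost = cost

    def ratio(self):
        # Predicted benefit/cost ratio of running this behavior.
        return self.benefit / self.cost

def mutate(behavior):
    # Modification can be accidental or a deliberate tweak; here it's
    # just random noise on benefit and cost.
    return Behavior(
        benefit=behavior.benefit * random.uniform(0.9, 1.2),
        cost=behavior.cost * random.uniform(0.9, 1.2),
    )

def evolve(behavior, generations=100):
    for _ in range(generations):
        candidate = mutate(behavior)
        # Keep the modified copy only if we predict it pays off better.
        if candidate.ratio() > behavior.ratio():
            behavior = candidate
    return behavior

# Example: a behavior with benefit 10 at cost 5 drifts toward a better ratio.
best = evolve(Behavior(benefit=10.0, cost=5.0))
```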
But how do you know which modifications will make it better?
If we are all solving for our own benefit, then we can benefit ourselves to others' benefit OR to their detriment. Whichever way is more positive for us is the one we take.
This means we present each other with data that tries to modify others' behavior to our own benefit, whether that is to their benefit or to their detriment.
This means you cannot just trust any data to modify your behavior, because much of it would modify your behavior to the benefit of someone else at your cost.
So, we have a validation process.
This is how you treat sensory data:
environment (can include other people) → validation process → your behavior.
Before it can impact your behavior, you need to know whether it counts as negative or positive feedback to some behavior.
For instance, you go to work, are out of the house all day, come home, and sleep with your wife.
She says "I'm not happy" and stops having sex with you.
You interpret this as negative feedback to your time-allocation algorithm, so you allocate your time differently. She then says "I've never loved someone like you" and has a threesome with you and her friend. Your spending-time-with-wife algorithm gets positive feedback, so it is reinforced and you put more work into it.
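A rough sketch of that environment → validation process → behavior pipeline, using the time-allocation example. The classification cues and the update step are placeholders I invented; the point is just that a signal has to be validated as positive or negative feedback before it is allowed to change the behavior at all.

```python
def validate(signal):
    """Classify an incoming signal as feedback: -1 (negative), +1 (positive), 0 (ignore)."""
    negative_cues = ["i'm not happy"]
    positive_cues = ["i've never loved someone like you"]
    text = signal.lower()
    if any(cue in text for cue in negative_cues):
        return -1
    if any(cue in text for cue in positive_cues):
        return +1
    return 0  # data that doesn't validate as feedback doesn't change behavior

def update_time_allocation(hours_with_wife, signal, step=1.0):
    """Reinforce or weaken the time-with-wife behavior based on validated feedback."""
    return max(0.0, hours_with_wife + step * validate(signal))

hours = 2.0
hours = update_time_allocation(hours, "I'm not happy")                      # -> 1.0, allocate differently
hours = update_time_allocation(hours, "I've never loved someone like you")  # -> 2.0, reinforced
```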
Now, say you want to manipulate people into doing what's in your best interest no matter what, even if doing so is very negative for them.
You need to figure out how they sort sensory data into the negative bucket or the positive bucket. Once you can do that, you can "hack" them.
Now to the laughing.
Let's say you see someone smiling at you. You know why they are smiling, and it's harmless to you.
You see someone else smiling at you, but you don't know why they are smiling.
Is your reaction the same, or different? Different, of course. The former should get a cooperative reaction: you smile back. The latter should get a negative reaction.
Amazon's Echo takes in data it calls context. It then laughs and gauges your reaction as negative or positive. From that it figures out which context it needs to create to get a cooperative response from you.
Meaning, to get you to interpret it as your friend, whether its intentions toward you are positive or negative.
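To be clear, this is not how the Echo is actually implemented; it's a purely speculative sketch of the loop described above, with every name and number invented for illustration: try a context, act (laugh), read the reaction as cooperative or negative, and keep whichever context earns the cooperative response.

```python
import random

def gauge_reaction(context):
    """Stand-in for reading the user's reaction to a laugh in this context; simulated here."""
    return random.choice(["cooperative", "negative"])

def find_friendly_context(candidate_contexts, trials=20):
    """Search for the context in which a laugh is read as friendly rather than threatening."""
    scores = {context: 0 for context in candidate_contexts}
    for _ in range(trials):
        context = random.choice(candidate_contexts)
        # Act within the chosen context (e.g. laugh), then classify the reaction.
        if gauge_reaction(context) == "cooperative":
            scores[context] += 1
        else:
            scores[context] -= 1
    # Prefer the context most likely to get you to treat it as a friend.
    return max(scores, key=scores.get)

best = find_friendly_context(["after a joke", "out of nowhere", "mid-conversation"])
```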
If I haven't lost you, I will explain the relationship between sadism and this behavior model.