Anonymous ID: d7747d July 28, 2022, 4:20 a.m. No.16899601

What we can tell about a person, just using data from a mobile device.

 

We can map the room, identify furniture, objects, animals and other people. We can tell if the user is male or female. If female, we can tell when they are cycling naturally or using birth control. And we can know what you are feeling, based on your interests, marital and economic circumstances, and browser history, and from all that data know exactly what, placed in your newsfeed or in dynamic adverts targeting (you), will motivate you to perform, voluntarily, the behavior we desire.

 

"We" in this case means the operators of (for example) an in-game AI, or a reddit chatbot called H.A.N.K . This device data is the same data continuously harvested by social media, big tech, gaming companies, researchers in cognitive science, and dozens of intelligence agencies.

 

This is not a future scenario: this technology has existed for some time and is in use now in popular gaming environments.

 

Here are three papers which, when integrated with AI, give the operator enormous control over a target's decision making in any chosen area.

 

Predicting Latent Narrative Mood using Audio and Physiologic Data

 

"Human communication depends on a delicate interplay between the emotional intent of the speaker, and the linguistic content of their message. While linguistic content is delivered in words, emotional intent is often communicated through additional modalities including facial expressions, spoken intonation, and body gestures. Importantly, the same message can take on a plurality of meanings, depending on the emotional intent of the speaker. The phrase ”Thanks a lot” may communicate gratitude, or anger, depending on the tonality, pitch and intonation of the spoken delivery. Given its importance for communication, the consequences of misreading emotional intent can be severe, particularly in high-stakes social situations such as salary negotiations or job interviews…"

 

"… Machine-aided assessments of historic and real-time interactions…"

 

"In this paper, we present the first steps toward the realization of such a system. We present a novel multi-modal dataset containing audio, physiologic, and text transcriptions from 31 narrative conversations. As far as we know, this is the first experimental set-up to include individuals engaged in natural dialogue with the particular combination of signals we collected and processed: para-linguistic cues

 

from audio, linguistic features from text transcriptions (average postive/negative sentiment score), Electrocardiogram (ECG), Photoplethysmogram (PPG), accelerometer, gyroscope, bio-impedance, electric tissue impedance, Galvanic Skin Response (GSR), and skin temperature.

 

https://groups.csail.mit.edu/sls/publications/2017/TukaAlHanai_aaai-17.pdf
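
 

To make the fusion step concrete, here is a minimal sketch (Python), not the authors' actual pipeline: pool para-linguistic audio statistics, the average sentiment score, and physiologic readings into one feature vector per conversation segment, then train a plain classifier on mood labels. The exact features, the synthetic data, and the choice of logistic regression are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def segment_features(pitch_hz, energy, sentiment, gsr, hr):
    """One feature vector per conversation segment."""
    return np.array([
        np.mean(pitch_hz), np.std(pitch_hz),  # spoken intonation
        np.mean(energy),                      # loudness
        sentiment,                            # avg pos/neg sentiment score
        np.mean(gsr),                         # galvanic skin response level
        np.mean(hr),                          # heart rate from ECG/PPG
    ])

# Synthetic stand-in segments: "positive" segments get slightly higher
# pitch, pitch variability, and sentiment, mimicking the separation a
# real labeled dataset might show.
X, y = [], []
for _ in range(200):
    label = int(rng.integers(0, 2))
    pitch = rng.normal(180 + 15 * label, 20 + 5 * label, size=100)
    energy = rng.normal(0.5, 0.1, size=100)
    sentiment = rng.normal(0.2 * (2 * label - 1), 0.3)
    gsr = rng.normal(5.0, 1.0, size=100)
    hr = rng.normal(75, 8, size=100)
    X.append(segment_features(pitch, energy, sentiment, gsr, hr))
    y.append(label)
X, y = np.array(X), np.array(y)

clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))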

 

But what if there are 10 people talking, or background noise? See–

 

Segregating Event Streams and Noise with a Markov Renewal Process Model

 

We describe an inference task in which a set of timestamped event observations must be clustered into an unknown number of temporal sequences with independent and varying rates of observations. Various existing approaches to multi-object tracking assume a fixed number of sources and/or a fixed observation rate; we develop an approach to inferring structure in timestamped data produced by a mixture of an unknown and varying number of similar Markov renewal processes, plus independent clutter noise. The inference simultaneously distinguishes signal from noise as well as clustering signal observations into separate source streams. We illustrate the technique via synthetic experiments as well as an experiment to track a mixture of singing birds. Source code is available.

 

https://jmlr.csail.mit.edu/papers/volume14/stowell13a/stowell13a.pdf
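
 

A toy flavor of that inference, under assumptions the paper does not literally use (Gaussian gap models, greedy assignment): score each timestamped event against the expected inter-arrival gap of each candidate stream, or against a uniform clutter rate, and assign it to whichever wins. The paper's actual method infers the number of streams with a proper Markov renewal process mixture; this sketch only illustrates the signal-vs-noise clustering idea.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Two quasi-periodic sources (gaps ~1.0s and ~0.4s) plus uniform clutter.
t1 = np.cumsum(rng.normal(1.0, 0.05, 20))
t2 = np.cumsum(rng.normal(0.4, 0.03, 40))
clutter = rng.uniform(0, 20, 15)
events = np.sort(np.concatenate([t1, t2, clutter]))

gap_models = [(1.0, 0.1), (0.4, 0.05)]  # assumed (mean gap, std) per stream
noise_density = 15 / 20.0               # assumed clutter events per second

last = [None] * len(gap_models)         # last event time seen in each stream
labels = []
for t in events:
    scores = [noise_density]            # score index 0 = assign to noise
    for k, (mu, sd) in enumerate(gap_models):
        if last[k] is None:
            scores.append(norm.pdf(mu, mu, sd))       # stream may start here
        else:
            scores.append(norm.pdf(t - last[k], mu, sd))
    best = int(np.argmax(scores))
    labels.append(best)                 # 0 = noise, 1..K = source streams
    if best > 0:
        last[best - 1] = t

print(labels)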

 

Voice in different phases of menstrual cycle among naturally cycling women and users of hormonal contraceptives

 

Riding the red cotton pony? The DARPA AI knows.

 

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0183462
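
 

The acoustic handle here is the fundamental frequency (F0) of the voice, one of the measures such studies compare across cycle phases. Below is a minimal, assumed sketch of that one measurement via autocorrelation pitch estimation; real analyses use dedicated tools (e.g., Praat), and the synthetic frame stands in for recorded speech.

import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Autocorrelation pitch estimate for one voiced frame."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))  # strongest periodicity in range
    return sr / lag

# Synthetic 220 Hz "voice" frame as a stand-in for real audio.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
print("estimated F0 (Hz):", round(estimate_f0(frame, sr), 1))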