DEEP FAKE VIDEO FROM SINGLE PORTRAIT: SAMSUNG AI MOSCOW AND OTHER SPOOPY NERDS
TLDR: A newly published paper demonstrates how researchers can now generate a convincing full-motion talking-head video from a single clean photo of a human face, or even a painting… see the Mona Lisa come alive in the attached video.
Abstract from the attached PDF:
Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.
Sauce:
https://arxiv.org/pdf/1905.08233.pdf
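For the technically curious, the recipe in the abstract boils down to two stages: (1) meta-learn a generator/discriminator pair across videos of many different people, then (2) fine-tune both adversarially on just a few frames (or one frame) of a new person, starting from a person-specific initialization. Below is a minimal sketch of stage (2) only, assuming PyTorch; the toy MLP networks, the 68-point landmark vectors, and the random stand-in frames are hypothetical placeholders, not the architecture or data pipeline from the paper, so treat it as an illustration of the few-shot adversarial fine-tuning loop rather than Samsung's actual implementation.

```python
# Sketch only: few-shot adversarial fine-tuning of a (meta-learned)
# generator/discriminator on K frames of one previously unseen person.
# Toy MLPs and random tensors are placeholders for the paper's real
# networks, embedder, and landmark/frame data.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, landmark_dim=68 * 2, frame_dim=3 * 32 * 32):
        super().__init__()
        # Maps pose landmarks to a synthesized (flattened) frame.
        self.net = nn.Sequential(
            nn.Linear(landmark_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_dim), nn.Tanh(),
        )

    def forward(self, landmarks):
        return self.net(landmarks)

class Discriminator(nn.Module):
    def __init__(self, landmark_dim=68 * 2, frame_dim=3 * 32 * 32):
        super().__init__()
        # Scores (frame, landmarks) pairs for realism and pose consistency.
        self.net = nn.Sequential(
            nn.Linear(frame_dim + landmark_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, frame, landmarks):
        return self.net(torch.cat([frame, landmarks], dim=-1))

# Pretend these weights came out of the lengthy meta-learning stage and
# were given a person-specific initialization.
G, D = Generator(), Discriminator()

# K-shot data for one new person (random placeholders here).
K = 8
landmarks = torch.randn(K, 68 * 2)              # facial landmarks per frame
frames = torch.tanh(torch.randn(K, 3 * 32 * 32))  # ground-truth frames

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

for step in range(100):  # quick person-specific fine-tuning
    # Discriminator: real pairs -> 1, generated pairs -> 0.
    fake = G(landmarks).detach()
    d_loss = (bce(D(frames, landmarks), torch.ones(K, 1))
              + bce(D(fake, landmarks), torch.zeros(K, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool D and stay close to the few real frames.
    fake = G(landmarks)
    g_loss = (bce(D(fake, landmarks), torch.ones(K, 1))
              + 10.0 * l1(fake, frames))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After fine-tuning, new landmark sequences drive the talking head.
driving_landmarks = torch.randn(1, 68 * 2)
new_frame = G(driving_landmarks)
```

The point of the structure is in the abstract's last sentences: because both networks start from a meta-learned, person-specific initialization, this fine-tuning loop only needs a handful of frames and a short training run, even though millions of parameters are being tuned.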