A new GPU-based facial reenactment technique tracks the expressions of a source actor and transfers them to a target actor in real time, effectively letting one person control another's on-screen expressions. The project is a collaboration between researchers from Stanford University, the Max Planck Institute for Informatics, and the University of Erlangen-Nuremberg.
The novelty of the approach lies in transferring and photorealistically re-rendering facial deformations and detail into the target video, such that the newly synthesized expressions are virtually indistinguishable from a real recording.
The video demo, captured on a setup with a GeForce GTX 980 GPU, is definitely worth watching. It's only a matter of time before Disney adopts this technology!
You can read more about the project in their paper, titled “Real-time Expression Transfer for Facial Reenactment.”