Real-time Facial Expression Transfer

A new GPU-based facial reenactment technique tracks the expression of a source actor and transfers it to a target actor in real time, effectively letting one person control another's on-screen expressions. The project is a collaboration between researchers from Stanford University, the Max Planck Institute for Informatics, and the University of Erlangen-Nuremberg.

The novelty of the approach lies in transferring and photorealistically re-rendering facial deformations and detail into the target video, so that the newly synthesized expressions are virtually indistinguishable from a real video.
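At a high level, reenactment systems of this kind separate who a person is from what expression they are making, so that a source actor's expression parameters can drive the target actor's face. The sketch below is only an illustration of that idea under the assumption of a linear parametric face model (mean shape plus identity and expression bases); the function names and dimensions are hypothetical, not the authors' implementation.

```python
# Illustrative sketch, not the authors' code: assumes a linear parametric
# face model where identity and expression live in separate subspaces.
import numpy as np

def reconstruct_face(mean_shape, id_basis, expr_basis, id_coeffs, expr_coeffs):
    """Return flattened 3D face vertices for given identity/expression coefficients."""
    return mean_shape + id_basis @ id_coeffs + expr_basis @ expr_coeffs

def transfer_expression(source_expr_coeffs, target_id_coeffs,
                        mean_shape, id_basis, expr_basis):
    """Drive the target actor's identity with the source actor's expression."""
    return reconstruct_face(mean_shape, id_basis, expr_basis,
                            target_id_coeffs, source_expr_coeffs)

# Toy example with random bases; a real system fits these to face scans/video.
n_vertices, n_id, n_expr = 3 * 1000, 80, 76   # hypothetical dimensions
rng = np.random.default_rng(0)
mean_shape = rng.standard_normal(n_vertices)
id_basis = rng.standard_normal((n_vertices, n_id))
expr_basis = rng.standard_normal((n_vertices, n_expr))

source_expr = rng.standard_normal(n_expr)     # tracked from the source actor
target_id = rng.standard_normal(n_id)         # fitted to the target actor

reenacted = transfer_expression(source_expr, target_id,
                                mean_shape, id_basis, expr_basis)
print(reenacted.shape)  # (3000,) -> x/y/z per vertex, ready for re-rendering
```

The re-rendered geometry would then be composited back into the target video with the target's appearance and lighting, which is where the photorealism of the published method comes from.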

The video demo, recorded on a system with a GeForce GTX 980 GPU, is definitely worth watching; it's only a matter of time before Disney adopts this technology!


You can read more about the project in their paper titled "Real-time Expression Transfer for Facial Reenactment."

About Brad Nemire

Brad Nemire is on the Developer Marketing team and loves reading about all of the fascinating research being done by developers using NVIDIA GPUs. Reach out to Brad on Twitter @BradNemire and let him know how you're using GPUs to accelerate your research. Brad graduated from San Diego State University and currently resides in San Jose, CA.