Researchers from Korea University, Clova AI Research (NAVER), The College of New Jersey, and Hong Kong University of Science & Technology developed a Generative Adversarial Networks (GAN)-based approach that transforms the facial expressions of still images.
Using an NVIDIA Tesla GPU and the cuDNN-accelerated PyTorch deep learning framework, the team trained their models on the CelebFaces Attributes (CelebA) dataset and the Radboud Faces Database (RaFD), which includes a variety of facial expressions. Their framework, named StarGAN, performs multi-domain image-to-image translation on the CelebA dataset by transferring the knowledge it learned from the RaFD dataset. This means it can take an input image of a neutral celebrity face and synthesize facial expressions such as angry, happy, and fearful.
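A key idea behind this kind of multi-domain translation is that a single generator is conditioned on a target-domain label, which is spatially replicated and concatenated with the input image's channels before being fed to the network. Below is a minimal NumPy sketch of that conditioning step only (function and variable names are hypothetical, and the shapes are illustrative, not taken from the paper):

```python
import numpy as np

def concat_domain_label(image, label):
    """Replicate a one-hot target-domain label across the spatial
    dimensions and concatenate it to the image channels, so a single
    generator can be steered toward any domain.
    image: array of shape (C, H, W); label: one-hot vector (num_domains,)."""
    c, h, w = image.shape
    # Broadcast the label into a (num_domains, H, W) stack of constant maps.
    label_maps = np.tile(label[:, None, None], (1, h, w))
    # The generator would receive this (C + num_domains, H, W) tensor.
    return np.concatenate([image, label_maps], axis=0)

# Example: a 3-channel 4x4 image and 5 domains (e.g. hair colors / expressions).
img = np.random.rand(3, 4, 4)
target = np.eye(5)[2]          # select domain index 2 as the target
x = concat_domain_label(img, target)
print(x.shape)                 # (8, 4, 4)
```

In the actual framework the concatenated tensor is consumed by a convolutional generator; this sketch only shows how one network can serve every domain by reading the appended label maps.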
The researchers claim this work is the first to successfully perform multi-domain image translation across different datasets.
The framework can also transfer facial attributes to an input image, automatically aging a face or changing the color of someone's hair.