
GauGAN Debuts at the Ars Electronica Center in Europe

GauGAN, NVIDIA’s generative adversarial network that converts segmentation maps into lifelike images, is being shown publicly for the first time in the “Understanding AI” exhibition at the brand new Ars Electronica Center in Linz, Austria.

Developed by NVIDIA researchers earlier this year, GauGAN is the first semantic image synthesis model that can produce complex, photorealistic images, including landscapes and street scenes, from only a few brushstrokes.
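
Under the hood, the generator is conditioned on a semantic segmentation map: a per-pixel label image in which each value marks a class such as sky, water, or rock, so a brushstroke is simply a patch of labels. The snippet below is a minimal PyTorch sketch, not NVIDIA’s released code, of the spatially-adaptive normalization idea behind GauGAN, in which the segmentation map predicts per-pixel scale and shift parameters that modulate the generator’s features. The class count, layer widths, and tensor sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyAdaptiveNorm(nn.Module):
    """Simplified spatially-adaptive normalization: the segmentation map
    predicts a per-pixel scale (gamma) and shift (beta) for normalized features."""
    def __init__(self, num_features, num_classes, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(num_classes, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, features, seg_onehot):
        # Resize the one-hot segmentation map to match the feature resolution.
        seg = F.interpolate(seg_onehot, size=features.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(features) * (1 + self.gamma(h)) + self.beta(h)

# Toy usage: a 256x256 label map with 10 hypothetical classes (sky, water, rock, ...).
num_classes = 10
labels = torch.randint(0, num_classes, (1, 256, 256))            # painted "brushstrokes"
seg_onehot = F.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()
features = torch.randn(1, 64, 64, 64)                            # intermediate generator features
out = SpatiallyAdaptiveNorm(64, num_classes)(features, seg_onehot)
print(out.shape)  # torch.Size([1, 64, 64, 64])
```

Because the modulation is computed per pixel, every brushstroke in the label map directly shapes the corresponding region of the synthesized image.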

“It’s much easier to brainstorm designs with simple sketches, and this technology is able to convert sketches into highly realistic images,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.

To develop the algorithm, the researchers trained a GAN on a million real landscape images using an NVIDIA DGX-1, equipped with eight NVIDIA V100 GPUs, and the cuDNN-accelerated PyTorch deep learning framework.
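
The article does not include the training code, but the overall recipe is standard adversarial training conditioned on segmentation maps. The sketch below, with toy stand-in networks and random data rather than NVIDIA’s actual models or the million-image dataset, shows what one such training step might look like in PyTorch: the discriminator learns to separate real (photo, map) pairs from generated ones, while the generator learns to fool it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in networks for illustration only; GauGAN's real generator and
# multi-scale discriminator are far larger and use spatially-adaptive norms.
class ToyGenerator(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, seg):          # one-hot segmentation map -> RGB image
        return self.net(seg)

class ToyDiscriminator(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, img, seg):     # score an (image, segmentation map) pair
        return self.net(torch.cat([img, seg], dim=1))

def train_step(gen, disc, opt_g, opt_d, seg, real):
    # Discriminator: push real pairs toward 1 and generated pairs toward 0.
    fake = gen(seg).detach()
    d_real, d_fake = disc(real, seg), disc(fake, seg)
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator score generated pairs as real.
    g_score = disc(gen(seg), seg)
    loss_g = F.binary_cross_entropy_with_logits(g_score, torch.ones_like(g_score))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy batch: 5-class segmentation maps paired with random 64x64 "photos".
gen, disc = ToyGenerator(5), ToyDiscriminator(5)
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
seg = F.one_hot(torch.randint(0, 5, (2, 64, 64)), 5).permute(0, 3, 1, 2).float()
real = torch.rand(2, 3, 64, 64) * 2 - 1
print(train_step(gen, disc, opt_g, opt_d, seg, real))
```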

“This technology is not just stitching together pieces of other images, or cutting and pasting textures,” Catanzaro said. “It’s actually synthesizing new images, very similar to how an artist would draw something.”

Attendees at the Ars Electronica Center in Linz, Austria can try out GauGAN for themselves in the “Understanding AI” exhibition, with an interactive demo running on an NVIDIA RTX 2080 Ti GPU.

