
Edit Photos with GANs

In machine learning, a generative model learns to produce new samples that look as if they were drawn from the training dataset. Generative Adversarial Networks (GANs) are currently one of the hottest topics in machine learning. A typical GAN comprises two agents:

  • a Generator G that produces samples, and
  • a Discriminator D that receives samples from both G and the dataset.

G and D have competing goals (hence the term “adversarial” in Generative Adversarial Networks): D must learn to distinguish between its two sources, while G must learn to make D believe that the samples it generates come from the dataset. A commonly used analogy for GANs is that of a forger (G) who must learn to manufacture counterfeit goods and an expert (D) who must learn to identify them as such.
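
To make the two roles concrete, here is a minimal sketch of a generator and a discriminator sized for 28×28 MNIST-style images, written with TensorFlow's Keras API. This is not the code from Greg's post; the layer sizes and the LATENT_DIM noise dimension are illustrative assumptions.

    import tensorflow as tf
    from tensorflow.keras import layers

    LATENT_DIM = 100  # size of the random noise vector fed to G (illustrative choice)

    def build_generator():
        # G: maps a noise vector to a 28x28x1 image with pixel values in [-1, 1].
        return tf.keras.Sequential([
            tf.keras.Input(shape=(LATENT_DIM,)),
            layers.Dense(256, activation="relu"),
            layers.Dense(512, activation="relu"),
            layers.Dense(28 * 28, activation="tanh"),
            layers.Reshape((28, 28, 1)),
        ])

    def build_discriminator():
        # D: maps an image to a single logit -- large for "real", small for "generated".
        return tf.keras.Sequential([
            tf.keras.Input(shape=(28, 28, 1)),
            layers.Flatten(),
            layers.Dense(512, activation="relu"),
            layers.Dense(256, activation="relu"),
            layers.Dense(1),  # raw logit; the sigmoid is applied inside the loss
        ])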

Illustration of the GAN framework: D (Discriminator) is alternately presented with images from G (Generator) and from the dataset, and must distinguish between the two sources. The problem is formulated as a minimax game: D tries to minimize the number of errors it makes, while G tries to maximize the number of errors D makes on generated samples. The curvy arrows represent the backpropagation of gradients into the target set of parameters.
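
The minimax game translates directly into two losses. The sketch below shows a single adversarial training step, reusing the hypothetical build_generator, build_discriminator, and LATENT_DIM from the block above; it uses the commonly implemented non-saturating variant, in which G directly maximizes the probability that D labels generated samples as real.

    import tensorflow as tf

    generator = build_generator()
    discriminator = build_discriminator()
    g_opt = tf.keras.optimizers.Adam(1e-4)
    d_opt = tf.keras.optimizers.Adam(1e-4)
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

    @tf.function
    def train_step(real_images):
        batch = tf.shape(real_images)[0]
        noise = tf.random.normal([batch, LATENT_DIM])
        with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
            fake_images = generator(noise, training=True)
            real_logits = discriminator(real_images, training=True)
            fake_logits = discriminator(fake_images, training=True)
            # D's goal: label real samples 1 and generated samples 0.
            d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                     bce(tf.zeros_like(fake_logits), fake_logits)
            # G's goal: make D output "real" (1) for generated samples.
            g_loss = bce(tf.ones_like(fake_logits), fake_logits)
        d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                                  discriminator.trainable_variables))
        g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                                  generator.trainable_variables))
        return d_loss, g_loss

Fed batches of real MNIST images (scaled to [-1, 1] to match the generator's tanh output) in a loop, train_step gradually teaches G to produce digit-like images.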

A new Parallel Forall blog post by NVIDIA’s Greg Heinrich explores various ways of using a GAN to create previously unseen images. The first post of this two-part series provides an introduction to GANs and shows how they can be trained on the MNIST dataset of handwritten digits to generate new digit images. Greg provides source code in TensorFlow, along with a modified version of DIGITS, that you are free to use if you wish to try it out yourself.
