In machine learning, a generative model learns to produce samples that have a high probability of being mistaken for real samples drawn from the training dataset. Generative Adversarial Networks (GANs) are among the most actively researched topics in machine learning. A typical GAN comprises two agents:
- a Generator G that produces samples, and
- a Discriminator D that receives samples from both G and the dataset.
G and D have competing goals (hence the term “adversarial” in Generative Adversarial Networks): D must learn to distinguish between its two sources, while G must learn to make D believe that the samples it generates come from the dataset. A common analogy is that of a forger (G) learning to manufacture counterfeit goods and an expert (D) trying to identify them as such.
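This two-player game can be sketched in a few lines of NumPy. The setup below is purely illustrative and is not the architecture from the post: the generator is a linear map of noise, the discriminator is logistic regression, and the target distribution, learning rate, and batch size are arbitrary assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: an illustrative 1-D target distribution (assumption).
def sample_real(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator G: a linear map of noise, g(z) = a*z + b (toy stand-in for a network).
a, b = 1.0, 0.0
# Discriminator D: logistic regression, d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    # Gradients of -[log d(real) + log(1 - d(fake))] w.r.t. w and c
    grad_w = np.mean(-(1 - p_real) * x_real + p_fake * x_fake)
    grad_c = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push d(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    # Gradients of -log d(fake) w.r.t. a and b
    grad_a = np.mean(-(1 - p_fake) * w * z)
    grad_b = np.mean(-(1 - p_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.normal(size=1000) + b
```

Note how the two updates pull in opposite directions: D's step lowers `d(fake)`, while G's step raises it, which is exactly the adversarial game described above.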
A new Parallel Forall blog post by NVIDIA’s Greg Heinrich explores various ways of using a GAN to create previously unseen images. The first post of this two-part series provides an introduction to GANs and shows how they can be trained on the MNIST dataset of handwritten digits to generate new images of handwritten digits. Greg provides source code in TensorFlow and a modified version of DIGITS, so you can try it out yourself.