Today at NIPS 2017, researchers from Microsoft Research and ETH Zurich present their work on making GAN models more robust and practically useful.
“In the past, one challenge in using GAN models has been the difficulty of training them reliably; our work represents a step forward in making GANs simpler to train.” ~ Thomas Hofmann, Professor, ETH Zurich.
In research presented at NIPS last year, Microsoft researchers generalized this interpretation of GANs to a broader class of games, providing a deeper theoretical understanding of the GAN learning objective and enabling GANs to be applied to other machine learning problems, such as probabilistic inference.
Problems in training GANs

Despite the excitement around GANs, back at NIPS 2016 the situation looked bleak: GANs remained notoriously difficult to train, and the reasons for these difficulties were not fully clear.
One key difficulty in training GANs is the issue of dimensional misspecification: the generator maps a low-dimensional input continuously into a high-dimensional space. As a result, a GAN summarizes the training distribution via a low-dimensional manifold and does not accurately capture the full distribution.
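The manifold issue can be illustrated with a minimal sketch (all sizes and the two-layer generator here are hypothetical, not the architecture from the paper): a continuous generator mapping a 2-dimensional latent input to a 784-dimensional output can only produce samples lying on a surface of at most 2 dimensions, which a finite-difference check of the Jacobian's rank makes visible.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, data_dim = 2, 784  # hypothetical sizes for illustration
W1 = rng.normal(size=(64, latent_dim))
W2 = rng.normal(size=(data_dim, 64))

def generator(z):
    # A continuous map from low-dimensional z to high-dimensional output.
    h = np.tanh(W1 @ z)
    return W2 @ h

z = rng.normal(size=latent_dim)
x = generator(z)          # x lives in R^784 ...

# ... but the Jacobian of the generator has rank at most latent_dim,
# so the generator's outputs lie on a manifold of dimension <= 2,
# far thinner than the ambient 784-dimensional data space.
eps = 1e-6
J = np.stack(
    [(generator(z + eps * e) - generator(z - eps * e)) / (2 * eps)
     for e in np.eye(latent_dim)],
    axis=1,
)
jacobian_rank = np.linalg.matrix_rank(J)
```

However expressive the nonlinearity, no continuous map can raise that rank above the latent dimension, which is why the generated distribution cannot fill out a genuinely high-dimensional training distribution.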
The work presented today is a milestone because it addresses this key open problem in the line of research on generalizing GANs with a simple and principled solution.