A pinboard by
Xudong Mao

PhD Student, City University of Hong Kong


An artificial intelligence algorithm used in unsupervised learning

GANs are used to learn an implicit distribution (e.g., a distribution over images) from data. The basic idea of GANs is to train a discriminator and a generator simultaneously: the discriminator aims to distinguish between real samples and generated samples, while the generator tries to produce fake samples that look as realistic as possible, making the discriminator believe that the fake samples come from the real data. GANs have demonstrated impressive performance on various computer vision tasks such as image generation, image super-resolution, and semi-supervised learning.
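The adversarial objective described above can be sketched as a pair of loss functions over the discriminator's outputs. This is a minimal NumPy illustration (not the full training loop; the logit values below are hypothetical stand-ins for discriminator outputs on a batch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss_gan(d_real, d_fake):
    # Discriminator: maximize log D(x) + log(1 - D(G(z)))
    # (equivalently, minimize the negated sum; inputs are raw logits)
    return -(np.log(sigmoid(d_real)) + np.log(1.0 - sigmoid(d_fake))).mean()

def g_loss_gan(d_fake):
    # Generator (non-saturating form): maximize log D(G(z)),
    # i.e., fool the discriminator into scoring fakes as real
    return -np.log(sigmoid(d_fake)).mean()

# Hypothetical discriminator logits for a batch of real and fake samples
d_real = np.array([2.0, 1.5, 3.0])    # scored confidently as "real"
d_fake = np.array([-2.0, -1.0, -3.0]) # scored confidently as "fake"

# A confident discriminator has low loss; the generator's loss is then
# high, which is the pressure that drives it to improve its samples.
print(d_loss_gan(d_real, d_fake))
print(g_loss_gan(d_fake))
```

In actual training, the two losses are minimized in alternation with stochastic gradient steps on the discriminator's and generator's parameters.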

In spite of the great progress of GANs in image generation, two problems remain. First, the quality of generated images is still limited for some realistic tasks. Second, the learning process of GANs can be unstable. We found that these problems are partially caused by the loss function of the original GANs, which treat the discriminator as a classifier with the sigmoid cross-entropy loss function; this may lead to vanishing gradients during learning. To overcome this problem, we propose the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator. We show that minimizing the LSGAN objective is equivalent to minimizing the Pearson chi-squared divergence.

LSGANs offer two benefits over the original GANs. First, LSGANs generate higher-quality images. Second, LSGANs are more stable during training. We evaluate LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than those generated by the original GANs. We also conduct two comparison experiments between LSGANs and the original GANs to illustrate the stability of LSGANs.
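The least squares loss, and why it avoids the saturation problem, can be sketched numerically. The snippet below uses the 0-1 coding (fake label a = 0, real label b = target c = 1) and compares the generator's gradient under the two losses for a single hypothetical discriminator output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss_lsgan(d_real, d_fake):
    # Discriminator: pull real scores toward 1, fake scores toward 0
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def g_loss_lsgan(d_fake):
    # Generator: pull the scores of fake samples toward the real label 1
    return 0.5 * ((d_fake - 1.0) ** 2).mean()

def g_grad_sigmoid(d_fake):
    # d/dx of -log(sigmoid(x)): goes to 0 once x is large, so a fake
    # sample already scored far on the "real" side gives no learning signal
    return -(1.0 - sigmoid(d_fake))

def g_grad_lsgan(d_fake):
    # d/dx of 0.5 * (x - 1)^2: grows with the distance from the target,
    # so samples far from the decision boundary are still penalized
    return d_fake - 1.0

x = 8.0  # a fake sample the discriminator scores far on the "real" side
# the sigmoid-loss gradient is nearly zero here, while the least squares
# gradient stays proportional to (x - 1)
print(g_grad_sigmoid(x), g_grad_lsgan(x))
```

This is the intuition behind both claimed benefits: samples that sit far from the decision boundary keep receiving gradients, which improves sample quality and stabilizes training.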