Adversarial Ladder Networks

Research paper by Juan Maroñas Molano, Alberto Albiol Colomer, Roberto Paredes Palacios

Indexed on: 07 Nov '16 · Published on: 07 Nov '16 · Published in: arXiv - Computer Science - Neural and Evolutionary Computing



Abstract

The use of unsupervised data in addition to supervised data when training discriminative neural networks has improved the performance of this classification scheme. However, the best results were achieved with a training process divided into two stages: first, an unsupervised pre-training step initializes the weights of the network, and then these weights are refined using supervised data. Separately, adversarial noise has improved the results of classical supervised learning. Recently, a new neural network topology called the Ladder Network, whose key idea is based on properties of hierarchical latent variable models, has been proposed as a technique to train a neural network on supervised and unsupervised data at the same time, in what is called semi-supervised learning. This technique has reached state-of-the-art classification performance. In this work we add adversarial noise to the Ladder Network and obtain state-of-the-art classification, together with several important conclusions on how adversarial noise can help and new possible lines of investigation. We also propose an alternative way of adding adversarial noise to unsupervised data.
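The abstract does not specify how the adversarial noise is computed; a common choice for supervised data is an FGSM-style perturbation along the sign of the loss gradient (Goodfellow et al.). The sketch below is an illustrative assumption of that idea, not the authors' implementation; the model, `epsilon`, and the toy data are placeholders.

```python
# Minimal sketch of FGSM-style adversarial noise (an assumption, not the
# paper's exact method): perturb the input in the direction that increases
# the supervised loss, with step size epsilon.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Return x plus a small adversarial perturbation of magnitude epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient to increase the loss.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a small classifier on random data (hypothetical shapes).
model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
x_adv = fgsm_perturb(model, x, y, epsilon=0.1)
```

For unlabeled data no true label `y` is available, which is why the paper proposes an alternative way of adding adversarial noise to unsupervised data; the sketch above covers only the labeled case.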