Max-Margin Deep Generative Models for (Semi-)Supervised Learning.

Research paper by Chongxuan Li, Jun Zhu, Bo Zhang

Indexed on: 11 Jul '18
Published on: 11 Jul '18
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence



Abstract

Deep generative models (DGMs) can effectively capture the underlying distributions of complex data by learning multilayered representations and performing inference. However, DGMs on their own are relatively weak at discriminative tasks. This paper presents max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs), which exploit the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining the generative capability. In semi-supervised learning, we use the predictions of a max-margin classifier as the missing labels instead of performing full posterior inference, for efficiency; we also introduce additional max-margin and label-balance regularization terms on unlabeled data, for effectiveness. We develop an efficient doubly stochastic subgradient algorithm for the piecewise-linear objectives that arise in the different settings. Empirical results on various datasets demonstrate that: (1) max-margin learning can significantly improve the prediction performance of DGMs while retaining their generative ability; (2) in supervised learning, mmDGMs are competitive with the best fully discriminative networks when employing convolutional neural networks as the generative and recognition models; and (3) in semi-supervised learning, mmDCGMs can perform efficient inference and achieve state-of-the-art classification results on several benchmarks.
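To make the "subgradient algorithm for piecewise-linear objectives" concrete, the following is a minimal sketch of a stochastic subgradient step on a multiclass hinge (max-margin) loss, the piecewise-linear classification term the abstract refers to. This is an illustrative toy on a linear classifier with synthetic data, not the paper's actual mmDGM objective: the full algorithm is *doubly* stochastic because it also samples the latent variables of the generative model, which is omitted here. All names (`multiclass_hinge_subgrad`, the toy data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiclass_hinge_subgrad(W, x, y, margin=1.0):
    """Subgradient of the multiclass hinge loss
    max_k (margin * [k != y] + w_k . x) - w_y . x,
    which is piecewise linear in W."""
    costs = W @ x + margin
    costs[y] -= margin            # the true class earns no margin bonus
    k = int(np.argmax(costs))     # most-violating class
    g = np.zeros_like(W)
    if k != y:                    # zero subgradient if the margin is met
        g[k] += x
        g[y] -= x
    return g

# Toy data: two well-separated Gaussian classes in 2-D.
X = np.vstack([rng.normal(+2.0, 1.0, (50, 2)),
               rng.normal(-2.0, 1.0, (50, 2))])
Y = np.array([0] * 50 + [1] * 50)

# Stochastic subgradient descent: one sampled data point per step.
# (The mmDGM algorithm adds a second source of stochasticity by also
# sampling latent variables of the generative model.)
W = np.zeros((2, 2))
for _ in range(500):
    i = rng.integers(len(X))
    W -= 0.1 * multiclass_hinge_subgrad(W, X[i], Y[i])

acc = np.mean(np.argmax(X @ W.T, axis=1) == Y)
```

On this separable toy problem the learned linear classifier attains high training accuracy; in mmDGMs the same hinge term is coupled to the generative (variational) objective and optimized jointly.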