Learning Compositional Representations for Few-Shot Recognition

Research paper by Pavel Tokmakov, Yu-Xiong Wang, Martial Hebert

Indexed on: 21 Dec '18 · Published on: 21 Dec '18 · Published in: arXiv - Computer Science - Computer Vision and Pattern Recognition



Abstract

One of the key limitations of modern deep-learning-based approaches lies in the amount of data required to train them. Humans, on the other hand, can learn to recognize novel categories from just a few examples. Instrumental to this rapid learning ability is the compositional structure of concept representations in the human brain, something that deep learning models lack. In this work we take a step towards bridging this gap between human and machine learning by introducing a simple regularization technique that encourages the learned representation to be decomposable into parts. We evaluate the proposed approach on three datasets: CUB-200-2011, SUN397, and ImageNet, and demonstrate that our compositional representations require fewer examples to learn classifiers for novel categories, outperforming state-of-the-art few-shot learning approaches by a significant margin.
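The abstract does not spell out the regularizer, so the following is only a minimal sketch of one plausible reading: penalize the distance between an image's feature vector and the sum of embeddings of the parts (attributes) annotated for that image. All names here (`compositional_penalty`, `part_embeddings`, `part_indicators`) are hypothetical illustrations, not the authors' actual formulation.

```python
import numpy as np

def compositional_penalty(feature, part_embeddings, part_indicators):
    """Squared L2 distance between an image feature and the sum of the
    embeddings of its annotated parts (hypothetical formulation)."""
    # Sum the embeddings of the parts present in the image.
    reconstruction = part_indicators @ part_embeddings
    diff = feature - reconstruction
    return float(diff @ diff)

# Toy example: 4-dim features, 3 possible parts.
rng = np.random.default_rng(0)
parts = rng.normal(size=(3, 4))          # one embedding per part
indicators = np.array([1.0, 0.0, 1.0])   # image annotated with parts 0 and 2
feat = parts[0] + parts[2]               # perfectly decomposable feature
assert compositional_penalty(feat, parts, indicators) < 1e-12
```

In training, a term like this would be added to the classification loss, nudging features toward a part-wise additive structure so that novel categories can reuse already-learned parts.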