
From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation

Research paper by Zhangxuan Gu, Siyuan Zhou, Li Niu, Zihan Zhao, Liqing Zhang

Indexed on: 28 Sep '20 | Published on: 25 Sep '20 | Published in: arXiv - Computer Science - Computer Vision and Pattern Recognition



Abstract

Zero-shot learning has been actively studied for the image classification task to relieve the burden of annotating image labels. Interestingly, although semantic segmentation requires even more labor-intensive pixel-wise annotation, zero-shot semantic segmentation has attracted only limited research interest. Thus, we focus on zero-shot semantic segmentation, which aims to segment unseen objects given only category-level semantic representations for the unseen categories. In this paper, we propose a novel Context-aware feature Generation Network (CaGNet), which synthesizes context-aware pixel-wise visual features for unseen categories based on category-level semantic representations and pixel-wise contextual information. The synthesized features are used to finetune the classifier so that it can segment unseen objects. Furthermore, we extend pixel-wise feature generation and finetuning to patch-wise feature generation and finetuning, which additionally considers inter-pixel relationships. Experimental results on Pascal-VOC, Pascal-Context, and COCO-stuff show that our method significantly outperforms existing zero-shot semantic segmentation methods.
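
To make the pipeline described above concrete, here is a minimal PyTorch sketch of the general idea, not the authors' CaGNet implementation: a generator maps a category-level semantic embedding plus a pixel-wise contextual latent code to a synthetic pixel feature, and those synthetic features for unseen categories are used to finetune a 1x1-convolution classifier. All names, dimensions, losses, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn

EMBED_DIM, LATENT_DIM, FEAT_DIM, NUM_CLASSES = 300, 64, 256, 21  # assumed sizes

class PixelFeatureGenerator(nn.Module):
    """Synthesizes a visual feature for one pixel from its category's
    semantic embedding and a contextual latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMBED_DIM + LATENT_DIM, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, FEAT_DIM),
        )

    def forward(self, word_embedding, contextual_latent):
        # word_embedding: (N, EMBED_DIM), contextual_latent: (N, LATENT_DIM)
        return self.net(torch.cat([word_embedding, contextual_latent], dim=1))

# Pixel-wise classifier: a 1x1 convolution over the feature map, equivalent
# to a linear classifier applied independently to each pixel's feature.
classifier = nn.Conv2d(FEAT_DIM, NUM_CLASSES, kernel_size=1)
generator = PixelFeatureGenerator()

# --- Finetuning on synthesized features for unseen categories ---
unseen_class_ids = torch.tensor([15, 16, 17])        # hypothetical unseen classes
word_vectors = torch.randn(NUM_CLASSES, EMBED_DIM)   # stand-in for word embeddings

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for _ in range(100):  # finetuning iterations
    # Sample unseen categories and contextual latent codes (the paper conditions
    # on pixel-wise contextual information; random codes are a stand-in here).
    labels = unseen_class_ids[torch.randint(len(unseen_class_ids), (128,))]
    latents = torch.randn(128, LATENT_DIM)
    fake_feats = generator(word_vectors[labels], latents)   # (128, FEAT_DIM)

    # Treat each synthesized feature as a 1x1 "image" so the conv classifier
    # can score it, then update the classifier on the fake features.
    logits = classifier(fake_feats.view(128, FEAT_DIM, 1, 1)).view(128, NUM_CLASSES)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After finetuning, the classifier has seen (synthetic) evidence for the unseen categories and can assign their labels to real pixel features at test time; the patch-wise extension mentioned in the abstract would generate small spatial blocks of features instead of independent pixels, so that inter-pixel relationships are also modeled.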