Data Augmentation using Pre-trained Transformer Models

Research paper by Varun Kumar, Ashutosh Choudhary, Eunah Cho

Indexed on: 09 Mar '20
Published on: 04 Mar '20
Published in: arXiv - Computer Science - Computation and Language


Pre-trained language models such as BERT have provided significant gains across different NLP tasks. In this paper, we study different types of pre-trained transformer-based models, namely auto-regressive models (GPT-2), auto-encoder models (BERT), and seq2seq models (BART), for conditional data augmentation. We show that prepending class labels to text sequences provides a simple yet effective way to condition pre-trained models for data augmentation. On three classification benchmarks, the pre-trained seq2seq model outperforms the other models. Further, we explore how data augmentation with different pre-trained models differs in terms of data diversity, and how well such methods preserve class-label information.
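The label-prepending idea described in the abstract can be sketched in a few lines: each training example is turned into a single sequence of the form "label [SEP] text", which a pre-trained model can then be fine-tuned on so that generation can later be conditioned on a chosen label. The function name, the `[SEP]` separator token, and the toy examples below are illustrative assumptions, not the paper's exact implementation:

```python
def prepend_label(label: str, text: str, sep: str = "[SEP]") -> str:
    """Condition a text sequence on its class label by prepending the label.

    Note: `prepend_label` and the `[SEP]` separator are hypothetical choices
    for illustration; the paper's actual tokenization may differ.
    """
    return f"{label} {sep} {text}"


# Toy labeled examples (illustrative only).
train_pairs = [
    ("positive", "A wonderful, heartfelt film."),
    ("negative", "The plot drags and the ending falls flat."),
]

# Sequences a pre-trained model would be fine-tuned on for
# conditional data augmentation.
conditioned = [prepend_label(label, text) for label, text in train_pairs]
```

At augmentation time, one would feed only the prefix (e.g. `"positive [SEP]"`) to the fine-tuned model and let it generate the remainder as a new synthetic example for that class.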