Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation

Research paper by Yige Xu, Xipeng Qiu, Ligao Zhou, Xuanjing Huang

Indexed on: 25 Feb '20 · Published on: 24 Feb '20 · Published in: arXiv - Computer Science - Computation and Language



Abstract

Fine-tuning pre-trained language models like BERT has become an effective approach in NLP and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-training tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. Experiments on text classification and natural language inference tasks show that our proposed methods can significantly improve the adaptation of BERT without any external data or knowledge.
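
The abstract only names the two mechanisms, so the sketch below is an illustrative reading of them rather than the paper's exact recipe: it assumes the self-ensemble is a running parameter average of the student model acting as a frozen teacher, and that self-distillation adds an MSE term between the student's logits and the teacher's. The averaging coefficient `ema_decay`, the distillation weight `lambda_sd`, and the small linear classifier standing in for BERT are all assumptions made to keep the example self-contained and runnable.

```python
# Hedged sketch of self-ensemble + self-distillation fine-tuning.
# The parameter-averaged teacher, the MSE distillation loss, and all
# hyperparameters below are illustrative assumptions, not the paper's
# exact formulation. A linear layer stands in for a BERT classifier.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Linear(768, 2)        # stand-in for a BERT classification head
teacher = copy.deepcopy(student)   # self-ensemble: averaged copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)
ema_decay = 0.999   # assumed averaging coefficient for the self-ensemble
lambda_sd = 1.0     # assumed weight on the self-distillation term

def training_step(features, labels):
    """One fine-tuning step: task loss plus self-distillation loss."""
    student_logits = student(features)
    with torch.no_grad():
        teacher_logits = teacher(features)

    task_loss = F.cross_entropy(student_logits, labels)
    # Self-distillation: pull the student toward its own ensembled past self.
    distill_loss = F.mse_loss(student_logits, teacher_logits)
    loss = task_loss + lambda_sd * distill_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Self-ensemble: update the teacher as a moving average of the student.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)
    return loss.item()

# Toy usage with random tensors in place of BERT features.
x = torch.randn(8, 768)
y = torch.randint(0, 2, (8,))
print(training_step(x, y))
```

The appeal of this setup, as the abstract suggests, is that the teacher is derived entirely from the model being fine-tuned, so no external data, knowledge, or separately pre-trained teacher is required.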