Tensorized Spectrum Preserving Compression for Neural Networks

Research paper by Jiahao Su, Jingling Li, Bobby Bhattacharjee, Furong Huang

Indexed on: 25 May '18 · Published on: 25 May '18 · Published in: arXiv - Statistics - Machine Learning


Modern neural networks can have tens of millions of parameters, and are often ill-suited for smartphones or IoT devices. In this paper, we describe an efficient mechanism for compressing large networks by "tensorizing" network layers: i.e., mapping layers onto high-order matrices, for which we introduce new tensor decomposition methods. Compared to previous compression methods, some of which use tensor decomposition, our techniques preserve more of the networks' invariance structure. Coupled with a new data reconstruction-based learning method, we show that tensorized compression outperforms existing techniques for both convolutional and fully-connected layers on state-of-the-art networks.
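To illustrate the general idea of tensorized compression (not the paper's specific decomposition), the sketch below reshapes a layer's weight array into a high-order tensor and factorizes it with a standard tensor-train (TT) decomposition via sequential truncated SVDs. The function names and the choice of TT over the authors' method are assumptions for illustration; storing the small TT cores in place of the full weight array is what yields the compression.

```python
import numpy as np

def tensor_train(tensor, max_rank):
    """Illustrative TT decomposition: factor a high-order tensor into a
    chain of small 3-way cores using sequential truncated SVDs.
    (A generic stand-in for the paper's decomposition, not its method.)"""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(S))          # truncate to the rank budget
        U, S, Vt = U[:, :r_new], S[:r_new], Vt[:r_new]
        cores.append(U.reshape(rank, shape[k], r_new))
        rank = r_new
        mat = (np.diag(S) @ Vt).reshape(rank * shape[k + 1], -1)
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])

# Example: tensorize a 256x256 fully-connected weight matrix into a
# 4th-order tensor, then compress it by keeping only the TT cores.
W = np.random.randn(256, 256)
T = W.reshape(16, 16, 16, 16)
cores = tensor_train(T, max_rank=8)
compressed_params = sum(c.size for c in cores)
print(f"original: {W.size} params, compressed: {compressed_params} params")
```

With a small rank budget the cores hold far fewer parameters than the original matrix, at the cost of approximation error; setting `max_rank` high enough makes the reconstruction exact.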