

CURATOR
A pinboard by
Julian Faraone

PhD, University of Sydney

PINBOARD SUMMARY

Software optimizations of Deep Learning algorithms for efficient hardware implementations.

Artificial intelligence is a growing field and is becoming a core part of everyday work and life. From powering the Google search engine to medical imaging systems, these algorithms have a wide range of applications, and in the last few years several breakthroughs have demonstrated superhuman performance on certain tasks. Although these algorithms are very useful, they require large amounts of data and computational power to build.

These algorithms can be redesigned by reducing the number of parameters or by changing the representations of the parameters to be more amenable to computer hardware, which allows for more efficient implementations. Redesigning and compressing the algorithms in this way not only broadens their applicability to new application areas, but also leads to a significant reduction in power consumption, making it possible to run them on embedded devices such as drones and mobile phones. The reduced power consumption also has positive environmental impacts: much of this AI computation happens in data centers, and if data centers were a country they would be the fifth-largest consumer of electricity in the world. This type of research therefore has the potential to reduce greenhouse gas emissions significantly.
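As a rough illustration of what "changing the representations of the parameters" can mean, the sketch below (my own toy example, not taken from any of the pinned papers) quantizes a float32 weight matrix to 8-bit integers plus a single scale factor. Each parameter then needs 8 bits of storage instead of 32, at the cost of a small approximation error; the function names and the symmetric int8 scheme are illustrative choices.

# Minimal sketch, assuming symmetric linear quantization to int8.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a scale factor (largest magnitude -> 127)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for computation."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)   # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())    # small error, 4x less storage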

4 ITEMS PINNED

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization and Huffman coding, which work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has a 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.

Pub.: 15 Feb '16, Pinned: 30 Aug '17
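The sketch below is a loose, simplified rendering of the first two stages the abstract describes: magnitude pruning followed by k-means weight sharing. The function names, the 10% keep ratio, and the 32-cluster (5-bit) codebook are illustrative assumptions, not the authors' implementation, which also retrains the pruned and shared weights and Huffman-codes the stored indices.

# Rough sketch of deep-compression-style pruning and weight sharing
# (illustrative only; scikit-learn's KMeans stands in for the clustering step).
import numpy as np
from sklearn.cluster import KMeans

def prune_by_magnitude(weights: np.ndarray, keep_ratio: float = 0.1):
    """Zero out all but the top `keep_ratio` fraction of weights by magnitude."""
    threshold = np.quantile(np.abs(weights), 1.0 - keep_ratio)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def share_weights(weights: np.ndarray, mask: np.ndarray, n_clusters: int = 32):
    """Cluster the surviving weights; each one is replaced by its cluster centroid,
    so it can be stored as a 5-bit index (32 clusters) plus a small codebook."""
    survivors = weights[mask].reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(survivors)
    shared = weights.copy()
    shared[mask] = km.cluster_centers_[km.labels_].ravel()
    return shared, km.cluster_centers_

w = np.random.randn(512, 512).astype(np.float32)
pruned, mask = prune_by_magnitude(w, keep_ratio=0.1)        # ~10x fewer connections
shared, codebook = share_weights(pruned, mask, n_clusters=32)
# Huffman coding of the cluster indices (not shown) would compress storage further.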