Designing Interpretable Recurrent Neural Networks for Video Reconstruction via Deep Unfolding

Research paper by Huynh Van Luong, Boris Joukovsky, Nikos Deligiannis

Indexed on: 07 Apr '21
Published on: 03 Apr '21
Published in: IEEE Transactions on Image Processing: a publication of the IEEE Signal Processing Society



Abstract

Deep unfolding methods design deep neural networks as learned variants of optimization algorithms by unrolling their iterations. These networks have been shown to achieve faster convergence and higher accuracy than the original optimization methods. In this line of research, this paper presents novel interpretable deep recurrent neural networks (RNNs), designed by unfolding iterative algorithms that solve the task of sequential signal reconstruction (in particular, video reconstruction). The proposed networks are designed by accounting for the fact that patches of video frames have a sparse representation and that the temporal difference between consecutive representations is also sparse. Specifically, we design an interpretable deep RNN (coined reweighted-RNN) by unrolling the iterations of a proximal method that solves a reweighted version of the ℓ1-ℓ1 minimization problem. Due to the underlying minimization model, our reweighted-RNN has a different thresholding function (that is, a different activation function) for each hidden unit in each layer. In this way, it has higher network expressivity than existing deep unfolding RNN models. We also present the derivative ℓ1-ℓ1-RNN model, which is obtained by unfolding a proximal method for the ℓ1-ℓ1 minimization problem. We apply the proposed interpretable RNNs to the task of reconstructing video frames from low-dimensional measurements, that is, sequential video frame reconstruction. The experimental results on various datasets demonstrate that the proposed deep RNNs outperform various RNN models.