Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

Research paper by Rick Archibald, Anne Gelb, Rodrigo B. Platte

Indexed on: 05 Sep '15 | Published on: 05 Sep '15 | Published in: Journal of Scientific Computing



Abstract

Fourier samples are collected in a variety of applications, including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, \(l^1\) regularization has received considerable attention in the design of image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have a sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more \(l^1\) regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when total variation (TV) is used as the \(l^1\) regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well-known drawback of using TV as an \(l^1\) regularization term is that the reconstruction tends to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher-order sparsifying transform, coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm to the case when the PA transform is used as the \(l^1\) regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular approach that combines total generalized variation (TGV) with shearlets.
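To make the setup concrete, the sketch below is a minimal one-dimensional illustration (not the authors' implementation) of the split Bregman iteration applied to the model problem \(\min_f \|Lf\|_1 + \frac{\mu}{2}\|M\mathcal{F}f - \hat{f}\|_2^2\), where \(\mathcal{F}\) is the unitary discrete Fourier transform, \(M\) is a 0/1 sampling mask, and \(L\) is a second-order periodic finite difference used here as a stand-in for an order-2 PA-type sparsifying transform on a uniform grid. The function names, the periodic boundary treatment, and the parameter values `mu` and `lam` are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def shrink(x, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def D2(f):
    """Second-order periodic finite difference; a hypothetical stand-in
    for an order-2 PA-type transform on a uniform grid. Symmetric, so
    it serves as its own adjoint."""
    return np.roll(f, -1) - 2.0 * f + np.roll(f, 1)

def split_bregman(y, mask, mu=50.0, lam=1.0, iters=200):
    """Solve  min_f ||D2 f||_1 + (mu/2)||mask * FFT(f) - y||_2^2
    by the split Bregman iteration.

    y    : length-n Fourier data (unitary FFT), zero off the mask
    mask : 0/1 array; assumed to include the DC coefficient so the
           f-subproblem's Fourier-domain denominator never vanishes
    """
    n = y.size
    # Eigenvalues of the circulant operator D2 (FFT of its first column).
    stencil = np.zeros(n)
    stencil[0], stencil[1], stencil[-1] = -2.0, 1.0, 1.0
    d2_hat = np.fft.fft(stencil)

    d = np.zeros(n)
    b = np.zeros(n)
    f = np.fft.ifft(y, norm="ortho").real  # zero-filled initial guess
    for _ in range(iters):
        # f-subproblem: quadratic, solved exactly in the Fourier domain
        # since both the sampling operator and D2 diagonalize there.
        rhs_hat = mu * y + lam * np.fft.fft(D2(d - b), norm="ortho")
        f = np.fft.ifft(rhs_hat / (mu * mask + lam * np.abs(d2_hat) ** 2),
                        norm="ortho").real
        # d-subproblem: soft-threshold the transformed image.
        Df = D2(f)
        d = shrink(Df + b, 1.0 / lam)
        # Bregman variable update.
        b = b + Df - d
    return f
```

On a piecewise linear test signal with only a fraction of the Fourier coefficients retained (DC included), a second-order regularizer of this kind should recover the linear ramps without the staircasing that a first-order TV penalty tends to introduce; this is the behavior the PA transform is designed to deliver at higher orders.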