
CURATOR
A pinboard by
Bernhard Geiger

I'm a researcher at Graz University of Technology working in applied information theory (with a data science flavor).

I love math, all things entropy, and working in small teams. Networking is the best part of academic conferences - I'm a people person!

PINBOARD SUMMARY

Measuring the information content of time series

The information dimension of a stochastic process is a measure of the information content of a (random) time series. This quantity, in turn, characterizes fundamental limits of analog compression and sampling: the number of measurements you need to take to reconstruct the time series is proportional to its information dimension. Information dimension has also been shown to be connected to the bandwidth of a (Gaussian) process and to the rate-distortion dimension.
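To make the notion concrete, here is a minimal Monte Carlo sketch (my own illustration, not taken from the pinned papers) of Rényi's information dimension for a discrete-continuous mixture. For a source that is 0 with probability 1 − γ and Uniform(0,1) otherwise, the information dimension equals γ, and it appears as the slope of the quantized entropy H(⌊mX⌋) against log₂ m. The helper names `sample_mixture` and `empirical_entropy_bits` are mine:

```python
import math
import random
from collections import Counter

random.seed(0)

def sample_mixture(gamma):
    """Discrete-continuous mixture: 0 w.p. 1-gamma, else Uniform(0,1).
    Its Renyi information dimension equals gamma."""
    return random.random() if random.random() < gamma else 0.0

def empirical_entropy_bits(values):
    """Plug-in entropy estimate (in bits) of a list of discrete values."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

gamma, N = 0.5, 200_000
xs = [sample_mixture(gamma) for _ in range(N)]

# The information dimension is the slope of H(floor(2^k * X)) versus k;
# taking a difference of two resolutions cancels the O(1) offset term.
k1, k2 = 6, 12
H1 = empirical_entropy_bits([math.floor(x * 2**k1) for x in xs])
H2 = empirical_entropy_bits([math.floor(x * 2**k2) for x in xs])
id_estimate = (H2 - H1) / (k2 - k1)
print(f"estimated information dimension: {id_estimate:.3f} (true value {gamma})")
```

The slope-based estimate should land near γ = 0.5, up to Monte Carlo error.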

4 ITEMS PINNED

Compression-Based Compressed Sensing

Abstract: Modern compression algorithms exploit complex structures that are present in signals to describe them very efficiently. On the other hand, the field of compressed sensing is built upon the observation that "structured" signals can be recovered from their under-determined set of linear projections. Currently, there is a large gap between the complexity of the structures studied in the area of compressed sensing and those employed by the state-of-the-art compression codes. Recent results in the literature on deterministic signals aim at bridging this gap through devising compressed sensing decoders that employ compression codes. This paper focuses on structured stochastic processes and studies the application of rate-distortion codes to compressed sensing of such signals. The performance of the formerly proposed compressible signal pursuit (CSP) algorithm is studied in this stochastic setting. It is proved that in the very low distortion regime, as the blocklength grows to infinity, the CSP algorithm reliably and robustly recovers $n$ instances of a stationary process from random linear projections as long as their count is slightly more than $n$ times the rate-distortion dimension (RDD) of the source. It is also shown that under some regularity conditions, the RDD of a stationary process is equal to its information dimension (ID). This connection establishes the optimality of the CSP algorithm at least for memoryless stationary sources, for which the fundamental limits are known. Finally, it is shown that the CSP algorithm combined with a family of universal variable-length fixed-distortion compression codes yields a family of universal compressed sensing recovery algorithms.

Pub.: 07 Jan '16, Pinned: 27 Sep '17
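The headline of the abstract above is that slightly more than $n$ times the rate-distortion dimension of the source suffices for recovery. As a toy illustration of why roughly $n \cdot d$ measurements is the relevant count (this is not the CSP algorithm; the decoder here is handed the support as an oracle, and all sizes are my own assumptions), consider a spike-and-slab source with information dimension $d$:

```python
import random

random.seed(1)

n, d = 12, 0.25  # signal length, information dimension of the source
# Spike-and-slab source: each coordinate is 0 w.p. 1-d, else standard Gaussian.
while True:
    x = [random.gauss(0, 1) if random.random() < d else 0.0 for _ in range(n)]
    S = [i for i, v in enumerate(x) if v != 0.0]  # support (oracle-given here)
    if S:
        break
m = len(S)  # about n*d measurements, far fewer than n

# Random Gaussian sensing matrix and measurements y = A x.
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

# Solve the square system A_S z = y by Gaussian elimination w/ partial pivoting.
M = [[A[i][j] for j in S] + [y[i]] for i in range(m)]
for c in range(m):
    p = max(range(c, m), key=lambda r: abs(M[r][c]))
    M[c], M[p] = M[p], M[c]
    for r in range(c + 1, m):
        f = M[r][c] / M[c][c]
        for k in range(c, m + 1):
            M[r][k] -= f * M[c][k]
z = [0.0] * m
for r in range(m - 1, -1, -1):
    z[r] = (M[r][m] - sum(M[r][k] * z[k] for k in range(r + 1, m))) / M[r][r]

# Re-embed the solved coefficients into the full-length signal.
x_hat = [0.0] * n
for pos, s in enumerate(S):
    x_hat[s] = z[pos]
err = max(abs(a - b) for a, b in zip(x, x_hat))
print(f"{m} measurements for n = {n}; max reconstruction error: {err:.2e}")
```

The hard part, which CSP and the universal decoders in these papers address, is achieving this measurement count without being told the support or the source distribution.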

Universal Compressed Sensing

Abstract: In this paper, the problem of developing universal algorithms for compressed sensing of stochastic processes is studied. First, Rényi's notion of information dimension (ID) is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then a minimum entropy pursuit (MEP) optimization approach is proposed, and it is proven that it can reliably recover any stationary process satisfying some mixing constraints from a sufficient number of randomized linear measurements, without having any prior information about the distribution of the process. It is proved that a Lagrangian-type approximation of the MEP optimization problem, referred to as the Lagrangian-MEP problem, is identical to a heuristic implementable algorithm proposed by Baron et al. It is shown that for the right choice of parameters the Lagrangian-MEP algorithm, in addition to having the same asymptotic performance as MEP optimization, is also robust to the measurement noise. For memoryless sources with a discrete-continuous mixture distribution, the fundamental limits of the minimum number of measurements required by a non-universal compressed sensing decoder are characterized by Wu et al. For such sources, it is proved that there is no loss in universal coding, and both MEP and Lagrangian-MEP asymptotically achieve the optimal performance.

Pub.: 26 Jan '16, Pinned: 27 Sep '17