CURATOR
A pinboard by
Lin Zhou

Ph.D. candidate, National University of Singapore

PINBOARD SUMMARY

Mathematical theory for communication systems with multiple terminals and lossy recovery

I work in information theory, the study of communication systems with mathematical tools. The field was founded by Claude Shannon, who first proposed to analyze communication systems mathematically and thereby laid the foundations of the area. This line of research is closely tied to daily life: ideas from information theory are already used successfully in the modern communication systems that provide our cellular and network services.

More specifically, my work focuses on how to compress information efficiently and reliably, and on the fundamental limits one can achieve. Take video compression as an example: a high-quality video file is typically large, but if we can tolerate some distortion, we can compress it and save considerable space. Motivated by such everyday examples, I derive the optimal performance of communication systems that compress information in a lossy manner, and I provide insights for the design of practical systems.
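As a concrete illustration of such a fundamental limit, the rate-distortion function of a Bernoulli source under Hamming distortion has a well-known closed form, R(D) = H_b(p) - H_b(D). The short Python sketch below (the function names are my own, for illustration; this is not code from the pinned papers) shows how tolerating more distortion lowers the minimum compression rate.

```python
import math

def binary_entropy(q: float) -> float:
    """Binary entropy H_b(q) in bits, with H_b(0) = H_b(1) = 0."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def bernoulli_rate_distortion(p: float, d: float) -> float:
    """Rate-distortion function R(D) = H_b(p) - H_b(D) of a
    Bernoulli(p) source under Hamming distortion, in bits/symbol.
    The rate is 0 once D >= min(p, 1 - p)."""
    if d >= min(p, 1 - p):
        return 0.0
    return binary_entropy(p) - binary_entropy(d)

# Tolerating 10% bit errors on a fair-coin source lowers the rate
# from 1 bit/symbol to about 0.531 bits/symbol.
print(bernoulli_rate_distortion(0.5, 0.1))
```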

5 ITEMS PINNED

Second-Order and Moderate Deviation Asymptotics for Successive Refinement

Abstract: We derive the optimal second-order coding region and moderate deviations constant for successive refinement source coding with a joint excess-distortion probability constraint. We consider two scenarios: (i) a discrete memoryless source (DMS) and arbitrary distortion measures at the decoders and (ii) a Gaussian memoryless source (GMS) and quadratic distortion measures at the decoders. For a DMS with arbitrary distortion measures, we prove an achievable second-order coding region, using type covering lemmas by Kanlis and Narayan and by No, Ingber and Weissman. We prove the converse using the perturbation approach by Gu and Effros. When the DMS is successively refinable, the expressions for the second-order coding region and the moderate deviations constant are simplified and easily computable. For this case, we also obtain new insights on the second-order behavior compared to the scenario where separate excess-distortion probabilities are considered. For example, we describe a DMS, for which the optimal second-order region transitions from being characterizable by a bivariate Gaussian to a univariate Gaussian, as the distortion levels are varied. We then consider a GMS with quadratic distortion measures. To prove the direct part, we make use of the sphere covering theorem by Verger-Gaugry, together with appropriately-defined Gaussian type classes. To prove the converse, we generalize Kostina and Verdú's one-shot converse bound for point-to-point lossy source coding. We remark that this proof is applicable to general successively refinable sources. In the proofs of the moderate deviations results for both scenarios, we follow a strategy similar to that for the second-order asymptotics and use the moderate deviations principle.

Pub.: 18 Jan '16, Pinned: 01 Sep '17
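For readers new to second-order asymptotics, the point-to-point analogue may help fix ideas. The paper above characterizes a two-rate region for successive refinement; in the single-decoder setting of Kostina and Verdú (2012), the corresponding expansion is the scalar one sketched below (a background formula, not a result of the pinned paper).

```latex
% Point-to-point second-order expansion (Kostina and Verdu, 2012):
% the minimum rate at blocklength n under an excess-distortion
% probability constraint \varepsilon satisfies
\[
  R(n, \varepsilon, D) \;=\; R(D)
  \;+\; \sqrt{\frac{V(D)}{n}}\, Q^{-1}(\varepsilon)
  \;+\; O\!\left(\frac{\log n}{n}\right),
\]
% where R(D) is the rate-distortion function, V(D) is the rate
% dispersion, and Q^{-1} is the inverse of the Gaussian
% complementary CDF.
```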

Moderate Deviations Asymptotics for Streaming Compression of Correlated Sources

Abstract: In this paper, we consider the problem of blockwise streaming compression of a pair of correlated sources, which we term streaming Slepian-Wolf coding. We study the moderate deviations regime in which the rate pairs of a sequence of codes converge, along a straight line, to various points on the boundary of the Slepian-Wolf region at a speed slower than the inverse square root of the blocklength $n$, while the error probability decays subexponentially fast in $n$. Our main result focuses on directions of approaches to corner points of the Slepian-Wolf region. It states that for each correlated source and all corner points, there exists a non-empty subset of directions of approaches such that the moderate deviations constant (the constant of proportionality for the subexponential decay of the error probability) is enhanced (over the non-streaming case) by at least a factor of $T$, the block delay of decoding symbol pairs. We specialize our main result to the setting of lossless streaming source coding and generalize this result to the setting where we have different delay requirements for each of the two source blocks. The proof of our main result involves the use of various analytical tools and amalgamates several ideas from the recent information-theoretic streaming literature. We adapt the so-called truncated memory idea from Draper and Khisti (2011) and Lee, Tan and Khisti (2015) to ensure that the effect of error accumulation is nullified in the limit of large blocklengths. We also adapt the use of the so-called minimum empirical suffix entropy decoder which was used by Draper, Chang and Sahai (2014) to derive achievable error exponents for streaming Slepian-Wolf coding.

Pub.: 25 Apr '16, Pinned: 01 Sep '17
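As background for this abstract, the Slepian-Wolf region and the moderate deviations scaling it refers to can be summarized as follows (a standard sketch; only the factor-of-$T$ enhancement is a result of the pinned paper).

```latex
% Slepian-Wolf region for separate lossless encoding of a
% correlated pair (X_1, X_2):
\[
  R_1 \ge H(X_1 \mid X_2), \qquad
  R_2 \ge H(X_2 \mid X_1), \qquad
  R_1 + R_2 \ge H(X_1, X_2).
\]
% In the moderate deviations regime the rate pair backs off from
% the boundary by \rho_n, with n^{-1/2} \ll \rho_n \ll 1, and the
% error probability behaves as \exp(-n \rho_n^2 \nu) for a moderate
% deviations constant \nu; the paper shows that streaming with
% block delay T enhances \nu by at least a factor of T at corner
% points.
```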

Refined Asymptotics for Rate-Distortion using Gaussian Codebooks for Arbitrary Sources

Abstract: The rate-distortion saddle-point problem considered by Lapidoth (1997) consists in finding the minimum rate to compress an arbitrary ergodic source when one is constrained to use a random Gaussian codebook and minimum (Euclidean) distance encoding is employed. We extend Lapidoth's analysis in several directions in this paper. Firstly, we consider refined asymptotics. In particular, when the source is stationary and memoryless, we establish the second-order, moderate, and large deviation asymptotics of the problem. Secondly, by "random Gaussian codebook", Lapidoth refers to a collection of random codewords, each of which is drawn independently and uniformly from the surface of an $n$-dimensional sphere. To be more precise, we term this as a spherical Gaussian codebook. We also consider i.i.d. Gaussian codebooks in which each random codeword is drawn independently from a product Gaussian distribution. We derive the second-order, moderate, and large deviation asymptotics when i.i.d. Gaussian codebooks are employed. Interestingly, in contrast to the recent work on the channel coding counterpart by Scarlett, Tan and Durisi (2017), the dispersions for spherical and i.i.d. Gaussian codebooks are identical. Our bounds on the optimal error exponent for the spherical case coincide on a non-empty interval of rates above the rate-distortion function. The optimal error exponent for the i.i.d. case is established for all rates.

Pub.: 16 Aug '17, Pinned: 01 Sep '17
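For orientation, the first-order answer to the saddle-point problem this paper refines is, to the best of my recollection of Lapidoth's result, the Gaussian rate-distortion function evaluated at the source variance; a sketch:

```latex
% Lapidoth (1997), first-order result: with a spherical Gaussian
% codebook and minimum Euclidean distance encoding, the minimum
% rate to compress any ergodic source of variance \sigma^2 to
% quadratic distortion D is the Gaussian rate-distortion function
\[
  R^{\mathrm{G}}(\sigma^2, D) = \frac{1}{2} \log \frac{\sigma^2}{D},
  \qquad 0 < D \le \sigma^2.
\]
% The pinned paper refines this first-order result to second-order,
% moderate, and large deviation asymptotics for both spherical and
% i.i.d. Gaussian codebooks.
```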

Non-Asymptotic Converse Bounds and Refined Asymptotics for Two Lossy Source Coding Problems

Abstract: In this paper, we revisit two multi-terminal lossy source coding problems: the lossy source coding problem with side information available at the encoder and one of the two decoders, which we term the Kaspi problem (Kaspi, 1994), and the multiple description coding problem with one semi-deterministic distortion measure, which we refer to as the Fu-Yeung problem (Fu and Yeung, 2002). For the Kaspi problem, we first present the properties of optimal test channels. Subsequently, we generalize the notion of the distortion-tilted information density for the lossy source coding problem to the Kaspi problem and prove a non-asymptotic converse bound using the properties of optimal test channels and the well-defined distortion-tilted information density. Finally, for discrete memoryless sources, we derive refined asymptotics, which include the second-order, large and moderate deviations asymptotics. In the converse proof of second-order asymptotics, we apply the Berry-Esseen theorem to the derived non-asymptotic converse bound. The achievability proof follows by first proving a type-covering lemma tailored to the Kaspi problem, then properly Taylor expanding the well-defined distortion-tilted information densities and finally applying the Berry-Esseen theorem. We then generalize the methods used in the Kaspi problem to the Fu-Yeung problem. As a result, we obtain the properties of optimal test channels for the minimum sum-rate function, a non-asymptotic converse bound and refined asymptotics for discrete memoryless sources. Since the successive refinement problem is a special case of the Fu-Yeung problem, as a by-product, we obtain a non-asymptotic converse bound for the successive refinement problem, which is a strict generalization of the non-asymptotic converse bound for successively refinable sources (Zhou, Tan and Motani, 2017).

Pub.: 17 Aug '17, Pinned: 01 Sep '17
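For context, the point-to-point distortion-tilted information density that this paper generalizes to the Kaspi and Fu-Yeung settings is the standard notion of Kostina and Verdú (2012); the multi-terminal generalizations themselves are in the paper. A sketch of the standard definition:

```latex
% Point-to-point d-tilted information (Kostina and Verdu, 2012).
% Let Y* attain the rate-distortion function R(D) of the source X
% and let \lambda^* = -R'(D). Then, for each source realization x,
\[
  \jmath_X(x, D) \;=\; \log
  \frac{1}{\mathbb{E}\!\left[\exp\!\left(\lambda^* D
    - \lambda^* d(x, Y^*)\right)\right]},
\]
% where the expectation is over the unconditional distribution of
% Y*. It satisfies E[\jmath_X(X, D)] = R(D), and it is the quantity
% to which the Berry-Esseen theorem is applied in second-order
% proofs such as those described in the abstract.
```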