Distributed Kalman Filtering over Massive Data Sets: Analysis Through Large Deviations of Random Riccati Equations

Research paper by Di Li, Soummya Kar, José M. F. Moura, H. Vincent Poor, Shuguang Cui

Indexed on: 12 Jan '15 · Published on: 12 Jan '15 · Published in: Computer Science - Information Theory


This paper studies the convergence of the estimation error process and the characterization of the corresponding invariant measure in distributed Kalman filtering for potentially unstable and large linear dynamic systems. A gossip network protocol termed Modified Gossip Interactive Kalman Filtering (M-GIKF) is proposed, in which sensors exchange their filtered states (estimates and error covariances) and propagate their observations via inter-sensor communications at rate $\overline{\gamma}$, where $\overline{\gamma}$ is defined as the average number of inter-sensor message passages per signal evolution epoch. The filtered states are interpreted as stochastic particles swapped through local interaction. The paper shows that the conditional estimation error covariance sequence at each sensor under M-GIKF evolves as a random Riccati equation (RRE) with Markov-modulated switching. By formulating the RRE as a random dynamical system, it is shown that the network achieves weak consensus, i.e., the conditional estimation error covariance at a randomly selected sensor converges weakly (in distribution) to a unique invariant measure. Further, it is proved that as $\overline{\gamma} \rightarrow \infty$ this invariant measure satisfies Large Deviation (LD) upper and lower bounds, implying that it converges exponentially fast (in probability) to the Dirac measure $\delta_{P^*}$, where $P^*$ is the stable error covariance of the centralized (Kalman) filtering setup. The LD results answer a fundamental question: they quantify the rate at which the distributed scheme approaches the centralized performance as the inter-sensor communication rate increases.
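The qualitative behavior described above can be illustrated with a toy simulation. The sketch below is not the paper's M-GIKF protocol; it is a minimal scalar analogue in which the Riccati measurement update fires intermittently with an i.i.d. probability `p_obs`, a crude stand-in for the gossip rate $\overline{\gamma}$ (all symbols `a`, `q`, `r`, `p_obs` are hypothetical choices, not taken from the paper). It shows the two facts the abstract asserts: the random covariance sequence dominates the centralized fixed point $P^*$, and it concentrates near $P^*$ as the observation (communication) rate grows.

```python
import random

def riccati_update(P, a, q, r, observed):
    """One step of a scalar Riccati recursion.

    The time update always runs; the measurement update runs only when
    the sensor has (direct or gossiped) access to the observation.
    """
    P_pred = a * a * P + q  # time update: P -> a^2 P + Q
    if observed:
        # scalar Kalman measurement update
        P_pred -= P_pred * P_pred / (P_pred + r)
    return P_pred

def centralized_fixed_point(a, q, r, iters=500):
    """Approximate P*, the stable covariance of the centralized filter,
    by iterating the Riccati map with the update always available."""
    P = 1.0
    for _ in range(iters):
        P = riccati_update(P, a, q, r, observed=True)
    return P

def simulate_rre(a, q, r, p_obs, steps, seed=0):
    """Random Riccati equation: the measurement update fires i.i.d. with
    probability p_obs. Returns the covariance trajectory, started at P*."""
    rng = random.Random(seed)
    P = centralized_fixed_point(a, q, r)
    traj = [P]
    for _ in range(steps):
        P = riccati_update(P, a, q, r, rng.random() < p_obs)
        traj.append(P)
    return traj

if __name__ == "__main__":
    a, q, r = 1.2, 1.0, 1.0  # unstable dynamics (|a| > 1)
    P_star = centralized_fixed_point(a, q, r)
    for p_obs in (0.6, 0.9, 0.99):
        traj = simulate_rre(a, q, r, p_obs, steps=2000)
        mean_excess = sum(x - P_star for x in traj) / len(traj)
        print(f"p_obs={p_obs:.2f}  mean excess over P*: {mean_excess:.4f}")
```

Because the Riccati map is monotone and the unobserved step strictly inflates the covariance, the trajectory never drops below $P^*$, and raising `p_obs` shrinks the average excess, mirroring (loosely) the exponential concentration toward $\delta_{P^*}$ proved in the paper as $\overline{\gamma} \rightarrow \infty$.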