Reducing the State Space while Preserving "Information"

Markov chains are attractive models for real-world random processes such as speech, natural language, chemical reactions, and the behavior of gene regulatory networks. Sometimes, though, the state space (i.e., the set of states the process can move through) is too large to admit simulation, model estimation, or model analysis: just think of the number of words in an English dictionary! One way to simplify these problems is state space aggregation, i.e., clustering states (such as words) that are similar in a well-defined sense. With information-theoretic cost functions used as measures of similarity, you arrive at the core task of information processing: remove all the information you don't need, and keep only what's relevant. Aside from solving the aggregation problem, these methods and cost functions can be used for clustering (i.e., grouping data points based on their pairwise distances/similarities) and co-clustering (i.e., grouping two sets of data points simultaneously based on their relationships).
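As a minimal sketch of what state space aggregation means in practice, the snippet below lumps the states of a small Markov chain according to a given partition and computes the transition matrix of the aggregated chain, weighting states by the stationary distribution. The 4-state transition matrix and the two-cluster partition are hypothetical examples, not from the original text, and this is only one standard way to aggregate a chain, not the specific information-theoretic method the paragraph alludes to.

```python
import numpy as np

# Hypothetical 4-state Markov chain: two loosely coupled pairs of states.
P = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.30, 0.60, 0.05, 0.05],
    [0.05, 0.05, 0.60, 0.30],
    [0.05, 0.05, 0.20, 0.70],
])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Example partition of the state space: {0, 1} and {2, 3}.
partition = [np.array([0, 1]), np.array([2, 3])]

# Aggregated transition probabilities:
# Q[A, B] = sum_{i in A} pi_i * sum_{j in B} P[i, j] / pi(A)
k = len(partition)
Q = np.zeros((k, k))
for a, A in enumerate(partition):
    pi_A = pi[A].sum()
    for b, B in enumerate(partition):
        Q[a, b] = (pi[A] @ P[np.ix_(A, B)].sum(axis=1)) / pi_A

print(Q)  # 2x2 stochastic matrix of the aggregated chain
```

An information-theoretic aggregation method would search over such partitions to minimize a cost, e.g., the information lost about the chain's future when states are lumped together; the computation above only shows what a single candidate aggregation looks like.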
