We present an informal review of recent work on the asymptotics of
Approximate Bayesian Computation (ABC). In particular, we focus on how the
ABC posterior, or point estimates obtained by ABC, behave in the limit as the
amount of data increases. The results we review show that ABC can perform well in terms
of point estimation, but standard implementations will over-estimate the
uncertainty about the parameters. If we use the regression correction of
Beaumont et al., then ABC can also accurately quantify this uncertainty. The
theoretical results also have practical implications for how to implement ABC.