Bayesian Iterative Simulation Methods
A useful alternative approach to Maximum Likelihood (ML) methods, particularly when the sample size is small, is to include a reasonable prior distribution for the parameters and compute the posterior distribution of the parameters of interest. The posterior distribution for a model with ignorable missingness is

$$
p(\theta \mid Y_{obs}) \propto p(\theta) \, L(\theta \mid Y_{obs}),
$$

where $p(\theta)$ is the prior distribution and $L(\theta \mid Y_{obs}) = \int f(Y_{obs}, Y_{mis} \mid \theta) \, dY_{mis}$ is the observed-data likelihood.
Data Augmentation
Data Augmentation (Tanner and Wong (1987)), or DA, is an iterative method of simulating the posterior distribution of $\theta$. Starting from an initial draw $\theta^{(0)}$ from an approximation to the posterior distribution, iteration $t+1$ consists of two steps:

Draw $Y_{mis}^{(t+1)}$ with density $p(Y_{mis} \mid Y_{obs}, \theta^{(t)})$ (I step).
Draw $\theta^{(t+1)}$ with density $p(\theta \mid Y_{obs}, Y_{mis}^{(t+1)})$ (P step).
The procedure is motivated by the fact that the distributions in these two steps are often much easier to draw from than either of the posteriors $p(Y_{mis} \mid Y_{obs})$ or $p(\theta \mid Y_{obs})$.
Bivariate Normal Data Example
Suppose we have a sample $y_i = (y_{i1}, y_{i2})$, $i = 1, \dots, n$, from the bivariate normal distribution with mean $\mu = (\mu_1, \mu_2)$ and covariance matrix $\Sigma$, where values of $y_1$ and/or $y_2$ may be missing and the missingness mechanism is ignorable. At iteration $t$, the I step draws each missing component from its conditional normal distribution given the observed component of the same unit and the current parameters $\theta^{(t)} = (\mu^{(t)}, \Sigma^{(t)})$, for example

$$
y_{i2} \mid y_{i1}, \theta^{(t)} \sim N\!\left(\mu_2^{(t)} + \frac{\sigma_{12}^{(t)}}{\sigma_{11}^{(t)}}\big(y_{i1} - \mu_1^{(t)}\big),\; \sigma_{22}^{(t)} - \frac{\big(\sigma_{12}^{(t)}\big)^2}{\sigma_{11}^{(t)}}\right),
$$

and the P step draws $\theta^{(t+1)} = (\mu^{(t+1)}, \Sigma^{(t+1)})$ from the complete-data posterior, which under the conventional Jeffreys prior $p(\mu, \Sigma) \propto |\Sigma|^{-3/2}$ is normal-inverse-Wishart:

$$
\Sigma^{(t+1)} \sim \text{Inv-Wishart}\big(n-1,\; S\big), \qquad \mu^{(t+1)} \mid \Sigma^{(t+1)} \sim N\!\left(\bar y,\; \Sigma^{(t+1)}/n\right),
$$

where $\bar y$ and $S$ denote the mean vector and sum-of-squares matrix of the data completed at the current I step.
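A minimal Python sketch of this scheme is given below, assuming for simplicity that only the second variable contains missing values and that the Jeffreys prior $p(\mu, \Sigma) \propto |\Sigma|^{-3/2}$ is used; the function name and the choice of starting values are illustrative rather than taken from the example above.

```python
import numpy as np
from scipy.stats import invwishart

def data_augmentation(y, n_iter=2000, seed=0):
    """DA for a bivariate normal sample with NaNs in the second column only."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    n = y.shape[0]
    miss = np.isnan(y[:, 1])
    obs = ~miss
    # theta^(0): complete-case estimates as a rough approximation to the posterior
    mu = y[obs].mean(axis=0)
    Sigma = np.cov(y[obs].T)
    draws = []
    for _ in range(n_iter):
        # I step: draw missing y2 from p(y2 | y1, theta^(t))
        beta = Sigma[0, 1] / Sigma[0, 0]
        cond_mean = mu[1] + beta * (y[miss, 0] - mu[0])
        cond_var = Sigma[1, 1] - beta * Sigma[0, 1]
        y[miss, 1] = cond_mean + np.sqrt(cond_var) * rng.standard_normal(miss.sum())
        # P step: draw theta^(t+1) from the complete-data posterior (Jeffreys prior)
        ybar = y.mean(axis=0)
        S = (y - ybar).T @ (y - ybar)
        Sigma = invwishart(df=n - 1, scale=S).rvs(random_state=rng)
        mu = rng.multivariate_normal(ybar, Sigma / n)
        draws.append((mu, Sigma))
    return draws
```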
The Gibbs’ Sampler
The Gibbs’ sampler is an iterative simulation method designed to yield draws from the joint posterior distribution in the case of a general pattern of missingness, and it provides a Bayesian analogue of the Expectation Conditional Maximisation (ECM) algorithm for ML estimation. The Gibbs’ sampler eventually generates a draw from the joint distribution $p(x_1, \dots, x_q)$ of a set of random variables $x_1, \dots, x_q$, rather than an approximation to it. Given an initial set of values $x_1^{(0)}, \dots, x_q^{(0)}$, iteration $t+1$ cycles through the full conditional distributions, drawing $x_1^{(t+1)}$ from $p(x_1 \mid x_2^{(t)}, \dots, x_q^{(t)})$, then $x_2^{(t+1)}$ from $p(x_2 \mid x_1^{(t+1)}, x_3^{(t)}, \dots, x_q^{(t)})$, and so on up to $x_q^{(t+1)}$, drawn from $p(x_q \mid x_1^{(t+1)}, \dots, x_{q-1}^{(t+1)})$.
It can be shown that, under quite general conditions, the sequence of iterates $x_1^{(t)}, \dots, x_q^{(t)}$ converges in distribution to a draw from the joint distribution of $x_1, \dots, x_q$ as $t \to \infty$. In missing-data problems the sampler is applied with $x = (\theta, Y_{mis})$, alternating draws of $Y_{mis}^{(t+1)}$ from $p(Y_{mis} \mid Y_{obs}, \theta^{(t)})$ and of $\theta^{(t+1)}$ from $p(\theta \mid Y_{obs}, Y_{mis}^{(t+1)})$, such that one run of the sampler converges to a draw from the posterior predictive distribution of $Y_{mis}$, $p(Y_{mis} \mid Y_{obs})$, and the posterior distribution of $\theta$, $p(\theta \mid Y_{obs})$.
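The cycling structure can be written down in a few lines. The sketch below is a generic single-component Gibbs sweep, assuming the user supplies one sampling function per full conditional; the names `gibbs` and `full_conditionals` are illustrative.

```python
import numpy as np

def gibbs(full_conditionals, x0, n_iter=1000, seed=0):
    """full_conditionals[j](x, rng) must return a draw of x[j] from
    p(x_j | x_{-j}), evaluated at the most recent values of the other components."""
    rng = np.random.default_rng(seed)
    x = list(x0)
    draws = []
    for _ in range(n_iter):
        for j, sample_j in enumerate(full_conditionals):
            x[j] = sample_j(x, rng)   # uses updated x[0..j-1] and old x[j+1..q-1]
        draws.append(list(x))
    return draws

# Example: standard bivariate normal with correlation rho, whose full
# conditionals are x1 | x2 ~ N(rho*x2, 1 - rho^2) and symmetrically for x2.
rho = 0.8
conditionals = [
    lambda x, rng: rng.normal(rho * x[1], np.sqrt(1 - rho**2)),
    lambda x, rng: rng.normal(rho * x[0], np.sqrt(1 - rho**2)),
]
draws = gibbs(conditionals, x0=[0.0, 0.0])
```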
Assessing Convergence
Assessing convergence of the sequence of draws to the target distribution is more difficult than assessing convergence of an EM-type algorithm, because there is no single target quantity to monitor like the maximum value of the likelihood. Methods have been proposed to assess convergence of a single sequence (Geyer (1992)), but a more reliable approach is to simulate $m > 1$ sequences with starting values dispersed throughout the parameter space and, for each scalar estimand $\psi$, compare the between-sequence variance $B$ and the within-sequence variance $W$:

$$
B = \frac{n}{m-1} \sum_{j=1}^{m} \big(\bar\psi_{\cdot j} - \bar\psi_{\cdot\cdot}\big)^2,
\qquad
W = \frac{1}{m} \sum_{j=1}^{m} s_j^2,
$$

where $n$ is the length of each sequence after discarding the burn-in, $\bar\psi_{\cdot j}$ and $s_j^2$ are the mean and variance of the draws in sequence $j$, and $\bar\psi_{\cdot\cdot}$ is the overall mean. The marginal posterior variance of $\psi$ can then be estimated by

$$
\widehat{\operatorname{var}}^{+}(\psi \mid Y_{obs}) = \frac{n-1}{n} W + \frac{1}{n} B,
$$

which will overestimate the marginal posterior variance assuming the starting distribution is appropriately over-dispersed, but is unbiased under stationarity (starting distribution equals the target distribution). For any finite $n$, the within-sequence variance $W$ will instead underestimate it, because the individual sequences have not yet ranged over the whole target distribution. Convergence can therefore be monitored through the estimated potential scale reduction factor

$$
\hat R = \sqrt{\frac{\widehat{\operatorname{var}}^{+}(\psi \mid Y_{obs})}{W}},
$$

which declines to 1 as $n \to \infty$.
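A compact implementation of this diagnostic, assuming the draws for a scalar estimand are arranged as an $m \times n$ array (one row per sequence, burn-in already discarded), might look as follows; the function name is illustrative.

```python
import numpy as np

def potential_scale_reduction(chains):
    """Estimated potential scale reduction R-hat for a scalar estimand."""
    chains = np.asarray(chains)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-sequence variance
    W = chains.var(axis=1, ddof=1).mean()      # within-sequence variance
    var_plus = (n - 1) / n * W + B / n         # overestimate of var(psi | Y_obs)
    return np.sqrt(var_plus / W)               # declines to 1 as n grows
```

Values of $\hat R$ close to 1 (e.g. below 1.1) for all monitored estimands are commonly taken as a sign that further simulation is unlikely to change the inference.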
Other Simulation Methods
When draws from the sequence of conditional distributions forming the Gibbs’ sampler are not easy to obtain, other simulation approaches are needed. Among these are Sequential Imputation (Kong, Liu, and Wong (1994)), Sampling Importance Resampling (Gelfand and Smith (1990)), and Rejection Sampling (Von Neumann (1951)). Another alternative is the class of Metropolis-Hastings algorithms (Metropolis et al. (1953)), of which the Gibbs’ sampler is a particular case; these constitute the so-called Markov Chain Monte Carlo (MCMC) algorithms, since the sequence of iterates forms a Markov chain (Gelman et al. (2013)).
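As an illustration of the general idea, here is a minimal random-walk Metropolis sketch (a special case of Metropolis-Hastings with a symmetric proposal, so the Hastings correction cancels); the target, step size, and function name are illustrative.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_iter=5000, step=1.0, seed=0):
    """Propose x' ~ N(x, step^2) and accept with probability min(1, p(x')/p(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    draws = []
    for _ in range(n_iter):
        proposal = x + step * rng.standard_normal()
        # accept/reject on the log scale for numerical stability
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        draws.append(x)
    return draws

# Example: sampling from a standard normal target density
samples = random_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0)
```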