This session will be held in the Erskine Building, Room 446
13:10 — 13:30
Dr Nazim Khan
University of Western Australia
The EM algorithm is a powerful tool for parameter estimation when data are missing or incomplete. In most applications it is easy to implement: the mathematics involved is, in principle, not very demanding, and the method does not require second derivatives. This latter feature is at once an attraction of the algorithm and one of its shortcomings, since standard errors are not automatically generated during the EM computations. Various methods have been proposed for obtaining standard errors when using the EM algorithm. Louis (1982) obtained the observed information matrix using the "missing information principle" of Orchard and Woodbury. However, the exact observed information cannot be computed by this method when the data are not independent, as is the case, for example, in hidden Markov models. Hughes (1997) used Louis' idea to approximate the observed information for hidden Markov models.
We present a general algorithm for obtaining the exact observed information within the EM framework. The algorithm is simple, and the computations can be performed in the last cycle of the EM algorithm. Examples using mixture models are given, and some comparisons are made with the work of Louis. Finally, some simulation results and a data analysis are presented in the context of hidden Markov models and ion channel data.
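As background to the mixture-model examples mentioned above, the basic E- and M-steps of the EM algorithm can be sketched for a two-component Gaussian mixture. This is a minimal illustration under assumed unit component variances, not the speaker's algorithm for the observed information.

```python
import math
import random

def em_gaussian_mixture(data, iters=200):
    # EM for a two-component Gaussian mixture with unit variances
    # (hypothetical illustration). Parameters: mixing weight pi and
    # component means mu1, mu2, initialised from the data range.
    pi, mu1, mu2 = 0.5, min(data), max(data)
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point.
        resp = []
        for x in data:
            a = pi * math.exp(-0.5 * (x - mu1) ** 2)
            b = (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(a / (a + b))
        # M-step: responsibility-weighted updates of the parameters.
        s = sum(resp)
        pi = s / len(data)
        mu1 = sum(r * x for r, x in zip(resp, data)) / s
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - s)
    return pi, mu1, mu2

# Simulated data: 40/60 mixture of N(0, 1) and N(4, 1).
random.seed(1)
data = [random.gauss(0.0, 1.0) if random.random() < 0.4 else random.gauss(4.0, 1.0)
        for _ in range(2000)]
pi, mu1, mu2 = em_gaussian_mixture(data)
```

Each iteration computes expected complete-data quantities (E-step) and then maximises the complete-data likelihood (M-step); no second derivatives are needed, which is exactly the feature the abstract highlights.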
13:30 — 13:50
Jason Phillip Bentley
University of Canterbury
Bayesian variable selection (BVS) typically requires Markov chain Monte Carlo (MCMC) exploration of large sample spaces. MCMC methods provide samples distributed approximately according to the stationary distribution of a Markov chain. Coupling from the past (CFTP), proposed by Propp and Wilson (1996), provides a framework for exact MCMC methods. We investigate the use of an exact Gibbs sampler for BVS in linear regression models using a posterior distribution proposed by Celeux et al. (2006), and we consider this within the wider context of Bayesian analysis of linear regression models. We use simulated and real data studies to assess performance and inference, and we consider methods proposed by Huang and Djuric (2002) and Corcoran and Schneider (2004). We find that the CFTP Gibbs sampler provides exact samples, while the monotone version provides only approximately exact samples. We conclude that, where they are available, exact MCMC methods benefit the accuracy of inference in Bayesian analysis of linear regression.
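The CFTP idea referred to above can be sketched on a small finite Markov chain: run coupled chains from every state using shared randomness from time −T to 0, and if all copies coalesce, the common value is an exact draw from the stationary distribution. This is a toy grand-coupling sketch of Propp and Wilson's scheme, not the BVS Gibbs sampler from the talk.

```python
import random

def step(P, s, u):
    # Deterministic update function: inverse-CDF of row s of the
    # transition matrix P, driven by the shared uniform u.
    c = 0.0
    for j, p in enumerate(P[s]):
        c += p
        if u < c:
            return j
    return len(P[s]) - 1

def cftp_sample(P, rng=random):
    # Coupling from the past for a small finite chain (grand coupling
    # over all start states; feasible only for tiny state spaces).
    n = len(P)
    T = 1
    randoms = []  # randoms[t] drives the step ending at time -t
    while True:
        # Extend the *reused* randomness further into the past.
        while len(randoms) < T:
            randoms.append(rng.random())
        states = list(range(n))
        for t in range(T - 1, -1, -1):  # run from time -T forward to 0
            u = randoms[t]
            states = [step(P, s, u) for s in states]
        if all(s == states[0] for s in states):
            return states[0]  # all start states coalesced: exact sample
        T *= 2  # not coalesced: restart further back in the past

# Two-state example; its stationary distribution is (5/6, 1/6).
random.seed(0)
P = [[0.9, 0.1], [0.5, 0.5]]
draw = cftp_sample(P)
```

Crucially, the random numbers near time 0 are reused when T doubles; redrawing them would bias the sampler. Monotone CFTP avoids tracking all start states by coupling only the minimal and maximal states under a partial order.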
13:50 — 14:10
Jeffrey J Hunter
Massey University Auckland
The properties of the time to coupling and the time to mixing in Markov chains are explored. In particular, the expected time to coupling is compared with the expected time to mixing, as introduced by the presenter in "Mixing times with applications to perturbed Markov chains", Linear Algebra Appl., 417, 108-123 (2006). Comparisons in some special cases, as well as some general results, are presented.
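To make the notion of expected coupling time concrete, it can be computed exactly for a small chain by first-step analysis on the product chain: the expected time for two independent copies started in distinct states to first meet solves a linear system. This is an independent-coupling illustration under the assumption that coupling occurs with probability one (irreducible, aperiodic chain); it is not the specific coupling or the mixing-time formula from the talk.

```python
def expected_coupling_time(P):
    # Expected meeting times h(i, j) for two independent copies of the
    # chain with transition matrix P, solving
    #   h(i, j) = 1 + sum_{k != l} P[i][k] * P[j][l] * h(k, l),  h(i, i) = 0,
    # by Gaussian elimination over the off-diagonal product states.
    n = len(P)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    idx = {p: r for r, p in enumerate(pairs)}
    m = len(pairs)
    A = [[0.0] * m for _ in range(m)]
    b = [1.0] * m
    for (i, j), r in idx.items():
        A[r][r] += 1.0
        for k in range(n):
            for l in range(n):
                if k != l:
                    A[r][idx[(k, l)]] -= P[i][k] * P[j][l]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    h = [0.0] * m
    for r in range(m - 1, -1, -1):
        h[r] = (b[r] - sum(A[r][c] * h[c] for c in range(r + 1, m))) / A[r][r]
    return {p: h[idx[p]] for p in pairs}

# Symmetric two-state chain: the copies meet with probability 1/2 each
# step, so the expected coupling time from distinct states is 2.
h = expected_coupling_time([[0.5, 0.5], [0.5, 0.5]])
```

Quantities of this kind are what the comparison with expected mixing times is about: coupling gives a coupling-inequality bound on the distance to stationarity, while mixing times measure convergence directly.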