Double seminar: Geoff Nicholls (University of Oxford) & Idris Eckley (Lancaster University)

Geoff Nicholls (Department of Statistics, University of Oxford) and Idris Eckley (Mathematics and Statistics, Lancaster University) will both give a talk on September 17th, at 13:45 and 14:45, respectively, in the Seminar Room 819, Niels Henrik Abels hus, 8th floor.


Title: Calibrating approximate Bayes coverage

Abstract: Consider Bayesian inference and suppose the prior and likelihood are both exactly correct: that is, nature really does draw the parameter theta according to pi(theta) and the data y according to p(y|theta). If we make a Bayesian credible set C_y(alpha) of level 1-alpha from the posterior in the usual way, then it covers the true value of the parameter theta with probability 1-alpha. In this ideal setting we have exact coverage.
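This exact-coverage property is easy to verify by simulation in a conjugate model. The sketch below is our own illustration (not material from the talk): in a normal-normal pair the posterior is available in closed form, so we can draw many (theta, y) pairs from nature's joint distribution and count how often the central 95% credible interval covers theta.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Nature draws theta from the prior, then y from the likelihood.
theta = rng.normal(0.0, 1.0, size=n)   # prior: theta ~ N(0, 1)
y = rng.normal(theta, 1.0)             # likelihood: y | theta ~ N(theta, 1)

# Conjugate posterior: theta | y ~ N(y / 2, 1 / 2).
post_mean = y / 2.0
post_sd = np.sqrt(0.5)

# Central 95% credible interval from the posterior.
z = 1.959963984540054                  # Phi^{-1}(0.975)
lo, hi = post_mean - z * post_sd, post_mean + z * post_sd

coverage = np.mean((lo <= theta) & (theta <= hi))
print(f"empirical coverage: {coverage:.3f}")  # close to 0.95
```

With the prior and likelihood exactly right, the empirical coverage matches the nominal 1-alpha up to Monte Carlo error.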

In a paper from 2004 with the snappy title "Getting it Right", Geweke uses this property to test an MCMC sampler targeting a given posterior distribution. By "correct" here we mean that the Monte Carlo sampler generates samples distributed according to the intended posterior distribution; in the case of MCMC this means the algorithm is both correctly implemented and the "burn-in" is long enough to give representative samples from the target. The test is usually made in the non-parametric form suggested by Cook in 2006. This is not simply a test for MCMC stationarity but for the MCMC target distribution itself. Passing the test doesn't guarantee the samples have the correct distribution, only that we cannot detect any departure. In practice, however, it seems a fairly stringent check.
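The mechanics of the non-parametric check can be sketched as follows: repeatedly draw theta from the prior and y from the likelihood, run the sampler under test on y, and record the rank of the true theta among the posterior draws; under a correct sampler these ranks are uniform. The toy version below is our own illustration, with the "sampler under test" replaced by exact posterior draws in a conjugate normal model, so the uniformity check should pass.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_rep, m = 2000, 99   # replications, posterior draws per replication

ranks = np.empty(n_rep, dtype=int)
for i in range(n_rep):
    theta = rng.normal()                 # theta ~ N(0, 1) prior
    y = rng.normal(theta)                # y | theta ~ N(theta, 1)
    # Stand-in for the sampler under test: here, exact posterior draws.
    draws = rng.normal(y / 2.0, np.sqrt(0.5), size=m)
    ranks[i] = np.sum(draws < theta)     # rank of theta among the draws

# Under a correct sampler, ranks are uniform on {0, ..., m}.
u = (ranks + rng.uniform(size=n_rep)) / (m + 1)   # randomised PIT values
print(stats.kstest(u, "uniform"))
```

A buggy implementation, or a sampler targeting a perturbed posterior, shows up as non-uniform ranks, which a standard goodness-of-fit test can flag.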

When we use Monte Carlo methods to fit complex models to large data sets we may need to make additional approximations. This may involve approximating the likelihood as well as making other alterations to the algorithm itself. The Monte Carlo algorithm is then a black box: it takes data as input and returns samples distributed approximately according to the posterior. Recently, several papers have suggested that the Geweke test might be used to quantify the damage done by these additional approximations: can we detect a difference between the Monte Carlo output and the true target distribution? If not, that is surely a good thing.

We review this recent work, and suggest some additional methods for quantifying the bias introduced by approximations made within Monte Carlo algorithms, focusing on calibrating the damage done to the exact Bayes coverage.

Download the flyer here.

Title: A wavelet-based framework for modelling non-stationary multivariate time series

Abstract: This talk will consider the problem of estimating time-localised cross-dependence in a collection of non-stationary signals. We develop a multivariate locally stationary wavelet framework that provides a time-scale decomposition of the second-order structure within multivariate series, thus naturally capturing the time-evolving cross-dependence between components of the series. Under the proposed model, we rigorously define and estimate two forms of cross-dependence measures: the wavelet coherence and the wavelet partial coherence, which respectively measure indirect and direct linear associations between a pair of series. The talk will conclude by describing recent work exploring how this framework might be applied to impute missing data within multivariate time series.
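To give a flavour of the idea, the sketch below is our own minimal illustration, not the authors' multivariate locally stationary wavelet estimator: it computes non-decimated Haar detail coefficients of two series, smooths the resulting auto- and cross-periodograms over time, and forms a time-localised coherence at each scale. All function names and smoothing choices here are ours.

```python
import numpy as np

def haar_details(x, level):
    """Non-decimated Haar detail coefficients at scales 1..level."""
    out = []
    for j in range(1, level + 1):
        h = 2 ** (j - 1)
        filt = np.concatenate([np.full(h, 1.0), np.full(h, -1.0)]) / np.sqrt(2 * h)
        out.append(np.convolve(x, filt, mode="same"))
    return np.array(out)                       # shape (level, len(x))

def smooth(a, width):
    """Moving-average smoothing of each row over time."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, a)

def wavelet_coherence(x, y, level=4, width=64):
    dx, dy = haar_details(x, level), haar_details(y, level)
    s_xx = smooth(dx * dx, width)
    s_yy = smooth(dy * dy, width)
    s_xy = smooth(dx * dy, width)
    return s_xy / np.sqrt(s_xx * s_yy)         # in [-1, 1], per scale and time

# Two signals whose dependence switches on halfway through.
rng = np.random.default_rng(2)
n = 1024
x = rng.normal(size=n)
y = np.where(np.arange(n) < n // 2, rng.normal(size=n), x) + 0.1 * rng.normal(size=n)

rho = wavelet_coherence(x, y)
print(rho[0, : n // 2].mean(), rho[0, n // 2 :].mean())  # near 0, then near 1
```

Because the coherence is estimated locally in time, it tracks the switch in dependence between the two halves of the series, which a single global correlation would average away.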



Tags: Seminar Series in Statistics and Biostatistics
Published Aug. 7, 2018 4:54 PM - Last modified Sep. 4, 2018 1:21 PM