Adaptive MCMC algorithms are designed to self-tune for optimal sampling performance: they change the transition kernel of the underlying Markov chain as the simulation progresses and characteristics of the target distribution are revealed. This strategy breaks Markovianity, and while hugely successful in applications, such algorithms are notoriously difficult to analyse theoretically. We introduce a class of Adapted Increasingly Rarely Markov Chain Monte Carlo (AirMCMC) algorithms, in which the underlying Markov kernel may be changed based on the whole available chain output, but only at special time points separated by an increasing number of iterations. The main motivation is the ease of analysis of such algorithms. Under the assumption of either a simultaneous or a (weaker) local simultaneous geometric drift condition, or of a simultaneous polynomial drift condition, we prove Mean Squared Error (MSE) convergence, Strong and Weak Laws of Large Numbers (SLLN, WLLN), and a Central Limit Theorem (CLT), and discuss how our approach extends existing results. We argue that many known Adaptive MCMC algorithms may be transformed into an Air version, and provide empirical evidence that the performance of the Air version stays virtually the same.
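To make the increasingly-rare adaptation scheme concrete, here is a minimal sketch (my own illustration, not the authors' implementation) of an Air-style random-walk Metropolis sampler. The adapted quantity, the proposal scale, and the particular schedule of adaptation times with gaps 1, 2, 3, ... are assumptions chosen for simplicity; between consecutive adaptation times the chain is a genuine Markov chain with a fixed kernel.

```python
import math
import random


def air_rwm(log_target, x0, n_iters, scale0=1.0, seed=0):
    """Air-style random-walk Metropolis (illustrative sketch).

    The proposal scale is adapted only at rare times t_1 < t_2 < ...
    whose gaps t_{k+1} - t_k increase with k, so adaptations become
    increasingly rare as the simulation progresses.
    """
    rng = random.Random(seed)
    x, scale = x0, scale0
    samples, accepts = [], 0
    next_adapt, gap = 1, 1
    for t in range(1, n_iters + 1):
        # Symmetric Gaussian proposal; standard Metropolis accept/reject.
        y = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x, accepts = y, accepts + 1
        samples.append(x)
        if t == next_adapt:
            # Rare adaptation: nudge the scale toward a target
            # acceptance rate of 0.44 (1-d rule of thumb).
            acc_rate = accepts / t
            scale *= math.exp(acc_rate - 0.44)
            gap += 1              # gaps between adaptations grow: 1, 2, 3, ...
            next_adapt += gap
    return samples, scale


# Example: sample from a standard normal target.
samples, final_scale = air_rwm(lambda z: -0.5 * z * z, 0.0, 20000)
```

Any adaptive sampler with a tunable kernel parameter could be "Air-ified" in the same way, by gating its existing adaptation step behind such a schedule.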
For the final part of the talk, I will advertise a new open-source statistical library, TensorFlow Probability, an ongoing and quickly growing project at Google. I will cover the library's available features and its prospects. Contributions are welcome!
C. Chimisov, K. Latuszynski, and G. Roberts. Air Markov Chain Monte Carlo. arXiv:1801.09309, 2018.