Master Class: Leveraging Optimization Techniques to Scale Bayesian Inference (Emily Fox)
Affiliation: University of Washington
Date: Thursday, 02 Jul 2015
Time: 16:00 - 17:00
Location: Room 1.02, Malet Place Engineering Building
Event series: Master Class: Carlos Guestrin (2-3 July 2015)
Data streams of increasing complexity are being collected in fields ranging from neuroscience, genomics, and environmental monitoring to e-commerce, enabled by technologies and infrastructures previously unavailable. With the advent of Markov chain Monte Carlo (MCMC) and the computational power to implement such algorithms, deploying increasingly expressive models has been a focus of recent decades. Unfortunately, traditional algorithms for Bayesian inference in these models, such as MCMC and variational inference, do not typically scale to the large datasets encountered in practice. Nor are they applicable to the increasingly common situation where an unbounded amount of data arrives as a stream and inferences must be made on the fly. In this talk, we will present a series of algorithms (stochastic gradient Hamiltonian Monte Carlo, HMM stochastic variational inference, and streaming Bayesian nonparametric inference) that address various aspects of the challenge of scaling Bayesian inference; our algorithms focus on deploying stochastic gradients and working within an optimization framework. We demonstrate our methods on a variety of applications, including online movie recommendations, segmenting a human chromatin data set with 250 million observations, and clustering a stream of New York Times documents.
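To give a feel for the kind of update rule the talk describes, below is a minimal sketch of the stochastic gradient Hamiltonian Monte Carlo (SGHMC) step from Chen, Fox & Guestrin (2014), where a friction term counteracts the noise introduced by minibatch gradient estimates. The target distribution, step size, friction value, and the choice of a zero gradient-noise estimate are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

def sghmc(grad_log_post, theta0, n_steps, eps=0.01, friction=1.0, rng=None):
    """Draw samples using the SGHMC discretized dynamics.

    grad_log_post(theta) returns a (possibly minibatch-noisy) estimate
    of the gradient of the log posterior at theta.
    """
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)  # auxiliary momentum variable
    # Injected-noise scale assumes the gradient-noise estimate B-hat is 0,
    # a common simplification when the true noise level is unknown.
    noise_scale = np.sqrt(2.0 * friction * eps)
    samples = []
    for _ in range(n_steps):
        theta = theta + eps * v
        # Friction (-friction * v) damps the momentum to offset the
        # extra stochasticity of the noisy gradient.
        v = (v + eps * grad_log_post(theta)
               - eps * friction * v
               + noise_scale * rng.standard_normal(theta.shape))
        samples.append(theta.copy())
    return np.array(samples)

# Usage: sample a standard normal target, whose log-posterior gradient
# is simply -theta; after burn-in the draws should look like N(0, 1).
draws = sghmc(lambda th: -th, theta0=np.array([3.0]), n_steps=20000)
burned = draws[5000:]
print(burned.mean(), burned.var())  # roughly 0 and 1
```

In a real large-data setting, `grad_log_post` would subsample a minibatch of observations and rescale, which is exactly the stochastic-gradient ingredient the abstract highlights.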