Master Class: Tensor Decompositions for Learning Latent Variable Models (Part 2)

Speaker: Sham Kakade
Affiliation: Microsoft Research, New England
Date: Tuesday, 04 Nov 2014
Time: 11:30 - 12:30
Location: 1.03, Malet Place Engineering Building
Event series: Master Class: Sham Kakade (3-5 November 2014)
Description

In many applications, we face the challenge of modeling the interactions between multiple observations. A popular and successful approach in machine learning and AI is to hypothesize the existence of certain latent (or hidden) causes which help to explain the correlations in the observed data. The (unsupervised) learning problem is to accurately estimate a model using only samples of the observed variables. For example, in document modeling, we may wish to characterize the correlational structure of the "bag of words" in documents. Here, a standard model is to posit that documents are about a few topics (the hidden variables) and that each active topic determines the occurrence of words in the document. The learning problem is, using only the observed words in the documents (and not the hidden topics), to estimate the topic probability vectors (i.e., to discover the strength with which words tend to appear under different topics). In practice, a broad class of latent variable models is most often fit with either local search heuristics (such as the EM algorithm) or sampling-based approaches.
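To make the document-modeling setup concrete, here is a minimal sketch (not taken from the lecture) that generates documents from a single-topic bag-of-words model and fits it with the EM algorithm mentioned above; all variable names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model sizes: k topics, d vocabulary words.
k, d, n_docs, doc_len = 3, 20, 2000, 50
topic_prior = rng.dirichlet(np.ones(k))          # P(topic)
word_given_topic = rng.dirichlet(np.ones(d), k)  # row i: P(word | topic i)

# Generate documents: each draws one hidden topic, then words i.i.d. from it.
topics = rng.choice(k, size=n_docs, p=topic_prior)
docs = np.stack([rng.multinomial(doc_len, word_given_topic[t]) for t in topics])

# EM on the observed word counts alone (the hidden topics are not used).
est = rng.dirichlet(np.ones(d), k)               # random initialization
weights = np.full(k, 1.0 / k)
for _ in range(200):
    # E-step: posterior over each document's hidden topic.
    log_post = docs @ np.log(est).T + np.log(weights)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and topic probability vectors.
    weights = post.mean(axis=0)
    est = post.T @ docs + 1e-12
    est /= est.sum(axis=1, keepdims=True)

print("true mixing weights:     ", np.sort(topic_prior))
print("estimated mixing weights:", np.sort(weights))
```

Because EM is a local search, the recovered topics depend on the random initialization; this sensitivity is one motivation for the moment-based algorithms the lectures survey.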

The first two lectures survey algorithms for learning latent variable models based on the method-of-moments, focusing on algorithms based on low-rank decompositions of higher-order tensors. The target audiences of the tutorial include (i) users of latent variable models in applications, and (ii) researchers developing techniques for learning latent variable models. The only prior knowledge expected of the audience is a familiarity with simple latent variable models (e.g., mixtures of Gaussians), and rudimentary linear algebra and probability. The audience will learn about new algorithms for learning latent variable models, techniques for developing new learning algorithms based on spectral decompositions, and analytical techniques for understanding the aforementioned models and algorithms. Advanced topics such as learning overcomplete representations may also be discussed.
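As a rough sketch of the tensor machinery (my reading of the abstract, not the lecture's actual code), the snippet below forms the exact second- and third-order moments of a small mixture model, whitens the third-order tensor, and recovers the components by the tensor power method with deflation; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
k, d = 3, 8
w = rng.dirichlet(np.ones(k))    # mixing weights
mu = rng.normal(size=(k, d))     # component means (assumed generic)

# Exact low-order moments of the model; in practice these would be
# estimated from samples of the observed variables.
M2 = np.einsum('i,ia,ib->ab', w, mu, mu)
M3 = np.einsum('i,ia,ib,ic->abc', w, mu, mu, mu)

# Whiten: choose W with W^T M2 W = I, which turns M3 into an
# orthogonally decomposable k x k x k tensor.
vals, vecs = np.linalg.eigh(M2)
vals, vecs = vals[-k:], vecs[:, -k:]             # top-k eigenpairs
W = vecs / np.sqrt(vals)
T = np.einsum('abc,ai,bj,ck->ijk', M3, W, W, W)

# Tensor power method with deflation: each fixed point yields one
# component; its weight is 1/lambda^2 and its mean is lambda * pinv(W^T) v.
for _ in range(k):
    v = rng.normal(size=k)
    v /= np.linalg.norm(v)
    for _ in range(100):
        v = np.einsum('ijk,j,k->i', T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)
    print("weight:", 1.0 / lam**2)
    print("mean:  ", lam * np.linalg.pinv(W.T) @ v)
    T -= lam * np.einsum('i,j,k->ijk', v, v, v)  # deflate the found component
```

This whitening-plus-power-iteration pattern is a standard route from moment tensors back to the latent parameters when the component means are linearly independent.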
