CSML Master Classes

The CSML Master Class Series invites distinguished speakers from all over the world to spend several days at CSML to present their work in depth during seminars and meetings.

The CSML Master Class Series is sponsored by Google DeepMind (www.deepmind.com). Google DeepMind is an ambitious London-based startup building general-purpose learning algorithms, with initial product applications in mobile social gaming.

iCalendar URL for all master classes: www.csml.ac.uk/ics/type/4

Upcoming Master Classes

Previous Master Classes

Master Class: Tamara Broderick (4-6 June 2018)

Bayesian methods exhibit a number of desirable properties for modern data analysis---including (1) coherent quantification of uncertainty, (2) a modular modeling framework able to capture complex phenomena, (3) the ability to incorporate prior information from an expert source, and (4) interpretability. In practice, though, Bayesian inference necessitates approximation of a high-dimensional integral, and some traditional algorithms for this purpose can be slow---notably at data scales of current interest. The tutorial will cover modern tools for fast, approximate Bayesian inference at scale. One increasingly popular framework is provided by "variational Bayes" (VB), which formulates Bayesian inference as an optimization problem. We will examine key benefits and pitfalls of using VB in practice, with a focus on the widespread "mean-field variational Bayes" (MFVB) subtype. We will highlight properties that anyone working with VB, from the data analyst to the theoretician, should be aware of.
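To make the optimization view of VB concrete, here is a minimal sketch of mean-field variational Bayes via coordinate-ascent updates (CAVI) for a simple conjugate model: observations x_i ~ N(mu, 1/tau) with a Normal-Gamma prior mu | tau ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0). The model, hyperparameter values, and function name are illustrative choices, not taken from the tutorial itself.

```python
import numpy as np

def mfvb_normal(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=50):
    """Mean-field VB (CAVI) for a Normal model with Normal-Gamma prior.

    Factorizes q(mu, tau) = q(mu) q(tau) with q(mu) = N(mu_n, 1/lam_n)
    and q(tau) = Gamma(a_n, b_n), and cycles the coordinate updates.
    Illustrative sketch; hyperparameter defaults are arbitrary.
    """
    n, xbar = len(x), np.mean(x)
    a_n = a0 + (n + 1) / 2.0        # this update has no free quantities
    e_tau = a0 / b0                 # initialize E[tau] from the prior
    for _ in range(iters):
        # Update q(mu): its precision depends on the current E[tau]
        mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam_n = (lam0 + n) * e_tau
        # Update q(tau): needs E[mu] and E[mu^2] under the current q(mu)
        e_mu, e_mu2 = mu_n, mu_n**2 + 1.0 / lam_n
        b_n = b0 + 0.5 * (np.sum(x**2) - 2 * e_mu * np.sum(x) + n * e_mu2
                          + lam0 * (e_mu2 - 2 * mu0 * e_mu + mu0**2))
        e_tau = a_n / b_n
    return mu_n, lam_n, a_n, b_n

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)
mu_n, lam_n, a_n, b_n = mfvb_normal(x)
```

Because the two factors are coupled only through expectations, each update is closed-form here; the well-known MFVB pitfall that the tutorial touches on is visible even in this toy case, where the factorized q typically underestimates posterior uncertainty relative to the exact joint posterior.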

About the Speaker: Tamara is ITT Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT). She is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the MIT Statistics and Data Science Center, and the Institute for Data, Systems, and Society (IDSS). Tamara completed her Ph.D. in Statistics at the University of California, Berkeley in 2014. Previously, she received an AB in Mathematics from Princeton University (2007), a Master of Advanced Study for completion of Part III of the Mathematical Tripos from the University of Cambridge (2008), an MPhil by research in Physics from the University of Cambridge (2009), and an MS in Computer Science from the University of California, Berkeley (2013). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning---especially Bayesian nonparametrics.

We are grateful to our generous sponsor, Microsoft.

Master Class: Carlos Guestrin (2-3 July 2015)

Carlos Guestrin is the Amazon Professor of Machine Learning at the Computer Science & Engineering Department of the University of Washington. He is also a co-founder and CEO of GraphLab Inc., focusing on large-scale machine learning and graph analytics. His previous positions include the Finmeccanica Associate Professorship at Carnegie Mellon University and senior researcher at the Intel Research Lab in Berkeley. Carlos received his PhD and Master's degrees from Stanford University, and a Mechatronics Engineer degree from the University of Sao Paulo, Brazil. Carlos' work has been recognized by awards at a number of conferences and two journals: KDD 2007 and 2010, IPSN 2005 and 2006, VLDB 2004, NIPS 2003 and 2007, UAI 2005, ICML 2005, AISTATS 2010, JAIR in 2007 & 2012, and JWRPM in 2009. He is also a recipient of the ONR Young Investigator Award, NSF CAREER Award, Alfred P. Sloan Fellowship, IBM Faculty Fellowship, the Siebel Scholarship and the Stanford Centennial Teaching Assistant Award. Carlos was named one of the 2008 'Brilliant 10' by Popular Science Magazine, and received the IJCAI Computers and Thought Award and the Presidential Early Career Award for Scientists and Engineers (PECASE). He is a former member of the Information Sciences and Technology (ISAT) advisory group for DARPA.

Carlos will give two talks during his visit; an additional talk will be given by Emily Fox (University of Washington).

Please register to attend at http://carlosmasterclass.eventbrite.co.uk

Master Class: Sham Kakade (3-5 November 2014)

Sham Kakade is a principal research scientist at Microsoft Research, New England, a lab in Cambridge, MA. Previously, he was an associate professor at the Department of Statistics, Wharton, University of Pennsylvania (from 2010-2012), and was an assistant professor at the Toyota Technological Institute at Chicago. Sham did a postdoc in the Computer and Information Science department at the University of Pennsylvania under the supervision of Michael Kearns, and completed his PhD at the Gatsby Unit where his advisor was Peter Dayan.

The focus of Sham's work is on designing (and implementing) both statistically and computationally efficient algorithms for machine learning, statistics, and artificial intelligence.

You can view videos of the 3 lectures at the following links:

Tensor Decompositions for Learning Latent Variable Models (Part 1)
Tensor Decompositions for Learning Latent Variable Models (Part 2)
How much computation is required in order to achieve statistical efficiency?

CSML Master Classes are sponsored by Google DeepMind.

Master Class: Shai Ben-David (21 - 23 July 2014)

Shai Ben-David grew up in Jerusalem, Israel. He attended the Hebrew University studying physics, mathematics and psychology. He received his PhD under the supervision of Saharon Shelah and Menachem Magidor for a thesis in set theory. Professor Ben-David was a postdoctoral fellow at the University of Toronto in the Mathematics and the Computer Science departments, and in 1987 joined the faculty of the CS Department at the Technion (Israel Institute of Technology). He held visiting faculty positions at the Australian National University in Canberra (1997-8) and at Cornell University (2001-2004). In August 2004 he joined the School of Computer Science at the University of Waterloo.

Shai's research interests span a wide spectrum of topics in the foundations of computer science and its applications, with a particular emphasis on statistical and computational machine learning. The common thread throughout his research is the aim to provide mathematical formulation and understanding of real-world problems. In particular, he has been looking at popular machine learning and data mining paradigms that seem to lack clear theoretical justification.

You can view videos of the 3 lectures at the following links:

Day 1: Clustering
Day 2: Use of unlabeled and weakly labeled data
Day 3: Efficient computations on well behaved inputs

CSML Master Classes are sponsored by Google DeepMind.

Master Class: Andrew Gelman (14 - 16 April 2014)

Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University. He has received the Outstanding Statistical Application award from the American Statistical Association, the award for best article published in the American Political Science Review, and the Council of Presidents of Statistical Societies award for outstanding contributions by a person under the age of 40. His books include Bayesian Data Analysis (with John Carlin, Hal Stern, and Don Rubin), Teaching Statistics: A Bag of Tricks (with Deb Nolan), Data Analysis Using Regression and Multilevel/Hierarchical Models (with Jennifer Hill), Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do (with David Park, Boris Shor, Joe Bafumi, and Jeronimo Cortina), and A Quantitative Tour of the Social Sciences (co-edited with Jeronimo Cortina).

Andrew has done research on a wide range of topics, including: why it is rational to vote; why campaign polls are so variable when elections are so predictable; why redistricting is good for democracy; reversals of death sentences; police stops in New York City, the statistical challenges of estimating small effects; the probability that your vote will be decisive; seats and votes in Congress; social network structure; arsenic in Bangladesh; radon in your basement; toxicology; medical imaging; and methods in surveys, experimental design, statistical inference, computation, and graphics.

The slides for the master class are available at the following links:

Master Class: Little Data: How traditional statistical ideas remain relevant in a big-data world
Parameterization and Bayesian modeling
Weakly informative priors

CSML Master Classes are sponsored by Google DeepMind.

Master Class: Yoshua Bengio (21 Oct - 24 Oct 2013)

Yoshua Bengio (CS PhD, McGill University, 1991) was a post-doc with Michael Jordan at MIT and worked at AT&T Bell Labs before becoming a professor at U. Montreal. He has written two books and around 200 papers, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning, NLP and manifold learning. One of the most cited Canadian computer scientists, he has sat on the editorial boards of top ML journals and on the board of the NIPS foundation, holds a Canada Research Chair and an NSERC chair, is a Fellow of CIFAR, and has been program/general chair for NIPS. He is driven by his quest for AI through machine learning, involving fundamental questions on deep learning of representations, the geometry of generalization in high dimension, manifold learning, biologically inspired learning, and challenging applications of ML. He was one of the founders of the area of deep learning in 2006. He was awarded the Urgel-Archambault prize in 2009 and a Canada Research Chair in 2000 (tier 2) and 2006 (tier 1). As of August 2013, Google Scholar finds almost 14,000 citations to his work, yielding an h-index of 51.

Registration for the Master Class should be done here.

You can find videos of the masterclasses below:

Oct 21: Deep Learning of Representations

Oct 22: Non-local Manifold Learning by Regularized Auto-encoders

Oct 23: Generative Stochastic Networks: How to Get Rid of Approximate Inference over Latent Variables

This event is funded by DeepMind Technologies.

Master Class: Martin Wainwright (25 Feb - 1 Mar 2013)

Martin Wainwright is currently a professor at the University of California, Berkeley, with a joint appointment between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences (EECS). He received a Bachelor's degree in Mathematics from the University of Waterloo, Canada, and a Ph.D. degree in EECS from the Massachusetts Institute of Technology (MIT). His research interests include high-dimensional statistics, statistical machine learning, information theory and statistical signal processing. He is currently serving as an associate editor for the Annals of Statistics, the Journal of Machine Learning Research, and Information and Inference.

He has been awarded the George M. Sprowls Prize for his dissertation research (MIT), an Alfred P. Sloan Foundation Fellowship, Best Paper Awards from the IEEE Signal Processing Society (2008) and the IEEE Communications Society (2010), and the Joint Paper Prize (2012) from the IEEE Information Theory and Communications Societies, and was named a Medallion Lecturer (2013) by the Institute of Mathematical Statistics.

The slides for the talks are available here: Lecture 1, Lecture 2 (paper), Lecture 3 (paper)

Master Class: Arnaud Doucet (1-5 Oct 2012)

Arnaud Doucet (Professor of Statistics, Department of Statistics, Oxford University) will be the second speaker in the CSML Master Class Series.

Arnaud Doucet obtained his PhD degree from University Paris XI in 1997. He previously held faculty positions at the University of Melbourne, the University of Cambridge, and the Institute of Statistical Mathematics in Tokyo, and was a Canada Research Chair at the University of British Columbia. He joined the Department of Statistics of the University of Oxford in 2011, where he is currently Professor. He is an associate editor of the Annals of Statistics and ACM Transactions on Modeling and Computer Simulation. His research areas include Monte Carlo methods, Bayesian statistics, dynamic models and their applications.

The slides for the talks are available here: ucl_1.pdf, ucl_2.pdf, ucl_3.pdf

Master Class: Larry Wasserman (6-11 June 2012)

Larry Wasserman is Professor of Statistics at Carnegie Mellon University. He is also a member of the Center for Automated Learning and Discovery in the School of Computer Science. His research areas include nonparametric inference, asymptotic theory, causality, and applications to astrophysics, bioinformatics, and genetics. He is the 1999 winner of the Committee of Presidents of Statistical Societies Presidents' Award and the 2002 winner of the Centre de recherches mathématiques de Montréal Statistical Society of Canada Prize in Statistics. He is an associate editor of the Journal of the American Statistical Association and the Annals of Statistics. He is a fellow of the American Statistical Association and of the Institute of Mathematical Statistics.

The slides for the talks are available here: Structure.pdf, Manifolds.pdf, Graphs.pdf