Department of Engineering
Taught By: Professor Zoubin Ghahramani
Teaching Assistants: Dr David Barrett, Alex Navarro, Marina Riabiz, and Je Hyeong (John) Hong
Code and Term: 4F13 Michaelmas term
Year: 4th year (part IIB) Engineering and MPhil in Machine Learning and Speech Technology; also open to MPhil and PhD students in any Department.
Structure & Assessment: 14 lectures, 2 coursework revision sessions, 3 pieces of coursework. Assessment is by coursework only; all three pieces of coursework carry equal weight. There is no final exam.
Time: Friday 2pm to 4pm
Location: LT2 (weeks 7-8 in LT0), Inglis Building, Department of Engineering, Trumpington Street (map)
Prerequisites: A good background in statistics, calculus, linear algebra, and computer science. 3F3 Signal and Pattern Processing. You should thoroughly review the maths in the following cribsheet [pdf] [ps] before the start of the course. The following Matrix Cookbook is also a useful resource. If you want to do the optional coursework you need to know Matlab or Octave, or be willing to learn it on your own. Any student or researcher at Cambridge meeting these requirements is welcome to attend the lectures. Students wishing to take it for credit should consult with the course lecturers.
Textbook: There is no required textbook. However, the material covered is treated in several excellent recent textbooks:
Kevin P. Murphy Machine Learning: a Probabilistic Perspective, the MIT Press (2012).
David Barber Bayesian Reasoning and Machine Learning, Cambridge University Press (2012), available freely on the web.
Christopher M. Bishop Pattern Recognition and Machine Learning. Springer (2006)
David J.C. MacKay Information Theory, Inference, and Learning Algorithms, Cambridge University Press (2003), available freely on the web.
NOTE: If you want to see lecture slides from a similar but not identical course taught last year, click on the Lent 2015 course website, but be warned that the slides will change slightly this year.
These sessions will be led by the Teaching Assistants. The purpose of these sessions is to provide a forum for questions on the course material.
27th Oct in LR10, 3pm-4pm
4th Nov in LR5, 3pm-4pm
25th Nov in LR5, 3pm-4pm
NEWS: We now have an FAQ site! Please note, if you email us questions, they may appear on this site (anonymised of course!).
This year, the exposition of the material will be centred around three specific machine learning areas: 1) supervised non-parametric probabilistic inference using Gaussian processes, 2) the TrueSkill ranking system and 3) the Latent Dirichlet Allocation (LDA) model for unsupervised learning in text.
Introduction to Machine Learning (2L):
the concept of a model; linear-in-the-parameters regression: making predictions, least squares fit, overfitting
likelihood and the concept of noise: Gaussian iid noise, maximum likelihood fitting, equivalence to least squares, motivation for inference with multiple hypotheses
probability basics: medical diagnosis example; joint, conditional and marginal probability; the two rules: sum and product; and Bayes' rule
Bayesian inference and prediction: likelihood and prior, posterior and predictive distribution
Marginal Likelihood: Bayesian model selection, example: How Bayes avoids overfitting
Lecture 1 and 2 slides
Oct 16, 23
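The ideas above can be illustrated with a short sketch (in Python/NumPy rather than the Matlab/Octave used for the coursework; the basis functions, noise level and prior precision are illustrative choices, not course specifics). It shows that the least squares fit equals the maximum likelihood fit under Gaussian iid noise, and how a Gaussian prior on the weights yields a Bayesian posterior:

```python
import numpy as np

# Illustrative linear-in-the-parameters model: y = Phi(x) w + Gaussian iid noise.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
w_true = np.array([0.5, -1.0, 2.0])
Phi = np.vander(x, 3, increasing=True)          # basis functions 1, x, x^2
y = Phi @ w_true + 0.1 * rng.standard_normal(x.size)

# Least-squares fit: under Gaussian iid noise this is also the ML estimate.
w_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Bayesian treatment with prior w ~ N(0, alpha^-1 I) and noise variance sigma^2:
# the posterior over w is Gaussian with the precision and mean below.
alpha, sigma2 = 1.0, 0.1**2
A = Phi.T @ Phi / sigma2 + alpha * np.eye(3)    # posterior precision
w_post = np.linalg.solve(A, Phi.T @ y / sigma2) # posterior mean
```

With plenty of data relative to the prior strength, the posterior mean is close to the least-squares weights; the prior matters most when data are scarce, which is where Bayes avoids overfitting.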
Gaussian Processes (3L):
Distributions over parameters and over functions: motivation: representation of multiple hypotheses; concepts of a prior over functions and a prior over parameters; inference; priors over functions are priors over long vectors
Gaussian process priors: from finite multi-variate Gaussians to Gaussian processes, GP definition, conditional generation and joint generation
Lecture 3 and 4 slides
Lecture 5 slides
Oct 23, 30, Nov 6
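A minimal sketch of these ideas (illustrative only; the squared-exponential kernel, lengthscale and observation points are assumptions, not course material): a GP prior evaluated at finitely many inputs is just a multivariate Gaussian over a "long vector", and conditional generation is ordinary Gaussian conditioning:

```python
import numpy as np

def sq_exp_kernel(a, b, ell=0.5):
    # squared-exponential covariance k(a,b) = exp(-(a-b)^2 / (2 ell^2))
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(1)
xs = np.linspace(0, 1, 50)
K = sq_exp_kernel(xs, xs) + 1e-9 * np.eye(xs.size)  # jitter for stability

# Joint generation: one sample path of the prior is a draw from N(0, K).
f = rng.multivariate_normal(np.zeros(xs.size), K)

# Conditional generation: condition the joint Gaussian on observations.
x_obs, y_obs = np.array([0.2, 0.8]), np.array([1.0, -1.0])
K_oo = sq_exp_kernel(x_obs, x_obs) + 1e-9 * np.eye(2)
K_so = sq_exp_kernel(xs, x_obs)
mean_cond = K_so @ np.linalg.solve(K_oo, y_obs)     # conditional mean at xs
```

The conditional mean interpolates the observed values and falls back to the prior mean far from the data, which is the defining behaviour of GP regression.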
Probabilistic Ranking (4L):
Factor graphs and probabilistic graphical models: message passing algorithms for computing marginals and conditionals efficiently (i.e. the sum-product algorithm, belief propagation)
Approximating messages by moment matching: the Expectation Propagation (EP) algorithm; applications to the TrueSkill ranking problem
Lecture 6 and 7 slides
Lecture 8 and 9 slides
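A tiny sum-product sketch (illustrative toy factors, not the TrueSkill model): on a three-variable chain with pairwise factors, the marginal of the middle variable is the normalised product of the messages arriving from each side, and it agrees with brute-force summation over the joint:

```python
import numpy as np

# Chain factor graph x1 - f12 - x2 - f23 - x3 with binary variables.
f12 = np.array([[0.9, 0.1], [0.2, 0.8]])   # factor over (x1, x2)
f23 = np.array([[0.7, 0.3], [0.4, 0.6]])   # factor over (x2, x3)

# Sum-product messages into x2: sum each factor over its other variable.
m_f12_x2 = f12.sum(axis=0)                 # message from f12 into x2
m_f23_x2 = f23.sum(axis=1)                 # message from f23 into x2

# Marginal of x2 = normalised product of incoming messages.
p_x2 = m_f12_x2 * m_f23_x2
p_x2 /= p_x2.sum()

# Brute-force check: sum the full joint over x1 and x3.
joint = f12[:, :, None] * f23[None, :, :]  # shape (x1, x2, x3)
p_x2_brute = joint.sum(axis=(0, 2))
p_x2_brute /= p_x2_brute.sum()
```

On trees this message-passing scheme computes all marginals exactly; EP extends the idea to continuous, non-Gaussian messages (as in TrueSkill) by projecting each message onto a Gaussian via moment matching.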
Nov 6, 13, 20, 27
Text and Discrete Distributions:
Motivation: unsupervised learning on text corpora, discrete distributions, Bernoulli, Binomial, multinomial, categorical and Dirichlet distributions
Mixture models for text and the Expectation Maximization (EM) algorithm.
Latent Dirichlet Allocation (LDA) model
Lecture 10 and 11 slides
Lecture 12 slides
Lecture 13 and 14 slides
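As a sketch of the mixture-model ideas above (illustrative vocabulary, component parameters and document counts are all assumptions): EM for a two-component mixture of multinomials over word counts alternates between soft document-to-component assignments (E-step) and re-estimating the mixing weights and word distributions (M-step):

```python
import numpy as np

# Illustrative data: 60 documents of 50 words over a 5-word vocabulary,
# generated from two multinomial components.
rng = np.random.default_rng(2)
V, K = 5, 2
true_beta = np.array([[0.60, 0.20, 0.10, 0.05, 0.05],
                      [0.05, 0.05, 0.10, 0.20, 0.60]])
docs = np.vstack([rng.multinomial(50, true_beta[0], size=30),
                  rng.multinomial(50, true_beta[1], size=30)])

pi = np.full(K, 1.0 / K)                    # mixing weights
beta = rng.dirichlet(np.ones(V), size=K)    # word distributions, random init
for _ in range(50):
    # E-step: responsibilities from each document's log-likelihood per component
    log_r = np.log(pi) + docs @ np.log(beta).T
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and word distributions
    pi = r.mean(axis=0)
    beta = r.T @ docs + 1e-6                # small constant avoids log(0)
    beta /= beta.sum(axis=1, keepdims=True)
```

LDA goes one step further than this mixture: instead of assigning each whole document to a single component, it lets every document mix topics, with Dirichlet priors over both the topic proportions and the word distributions.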
Coursework is to be submitted on Moodle no later than 16:00 on the due date; as a last resort it can be handed in to the Division F office in Baker BNO-37. Each of the three pieces of coursework carries an equal weight in the evaluation. The coursework will be similar, but not identical, to last year's, and will be posted shortly on this web site. The due dates this year are:
Coursework #1