Machine Learning 2006 Course Web Page

Department of Engineering
University of Cambridge

Keywords: Machine learning, probabilistic modelling, graphical models, approximate inference, Bayesian statistics

For a summary of the topics covered in this module, you can read the following chapter:

Ghahramani, Z. (2004) Unsupervised Learning. In Bousquet, O., Raetsch, G. and von Luxburg, U. (eds),
Advanced Lectures on Machine Learning, LNAI 3176. Springer-Verlag.

Taught By: Professor Zoubin Ghahramani

Code and Term: 4F13 Michaelmas (Fall) term

Year: 4th year (part IIB) Engineering, also open to MPhil and PhD students

Time: 12 noon to 1pm on Wednesdays, and 11am to 12 noon on Thursdays

Location: Lecture Room 6, First floor, Engineering Building, Trumpington Street, Cambridge

Prerequisites: A good background in statistics, calculus, linear algebra, and computer science. 3F3 Signal and Pattern Processing would be useful. You should thoroughly review the maths in the following cribsheet [pdf] [ps] before the start of the course. The following Matrix Cookbook is also a useful resource. If you want to do the optional coursework, you need to know Matlab or Octave, or be willing to learn one of them on your own. Any student or researcher at Cambridge meeting these requirements is welcome to attend the lectures. Students wishing to take the course for credit should consult the course lecturer (email:

Textbook: There is no required textbook. However, I recommend the following two textbooks as excellent sources for many of the topics covered here, and I will occasionally assign reading from them:

Christopher M. Bishop (2006) Pattern Recognition and Machine Learning. Springer

David J.C. MacKay (2003) Information Theory, Inference, and Learning Algorithms, Cambridge University Press. (also available online)

NOTE: If you want to see lecture slides from a similar but not identical course I taught last year, click on the 2005 course website; be warned that the slides will change this year.

NEW: SAMPLE EXAM PAPER


LECTURE SYLLABUS

Oct 5, 11 Introduction to Machine Learning (2L): review of probabilistic models, relation to coding; terminology: Bayes rule; supervised, unsupervised and reinforcement learning (see sketch below). Lectures 1-2 Slides
Oct 12, 18, 19 Unsupervised Learning (3L): factor analysis, PCA, Independent Component Analysis (ICA), the EM algorithm (see sketch below). Lectures 3-4 Slides
Lecture 5 Slides
Oct 25, 26 Bayesian Networks (2L): conditional independence, belief propagation (see sketch below). Lectures 6-7 Slides
Belief Propagation Demo
Factor Graph Propagation
Nov 1, 2 Monte Carlo Methods (2L): simple Monte Carlo, rejection sampling, importance sampling, Metropolis, Gibbs sampling, auxiliary variables (lectures given by Iain Murray; see sketch below). Lecture 8 Slides (Monte Carlo)
Lecture 9 Slides (MCMC)
Nov 8 Variational Approximations (1L): KL divergences, mean field, expectation propagation. Lecture 10 Slides
Nov 9 Model Comparison (1L): Bayes factors, Occam's razor, BIC, Laplace approximations (see sketch below). Lecture 11 Slides
Nov 15, 16, 22 Reinforcement Learning, Decision Making and MDPs (3L): value functions, value iteration, policy iteration, Bellman equations, Q-learning, Bayesian decision theory (see sketch below). Lectures 12-14 Slides
Nov 23, 29 Discussion and Review (2L)
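
OPTIONAL CODE SKETCHES

The short Octave/MATLAB fragments below are not part of the official course materials and are not the implementations used in the lectures. They are rough, self-contained sketches of a few of the algorithms named in the syllabus, with made-up numbers throughout, intended only as starting points if you are learning Matlab or Octave for the optional coursework.

Bayes rule (lectures 1-2). A minimal worked example for a binary hypothesis H and a binary observation D; the probabilities are invented for illustration:

  prior     = 0.01;                                  % P(H): prior probability of the hypothesis
  p_pos_h   = 0.95;                                  % P(D | H): likelihood of the data given H
  p_pos_noh = 0.05;                                  % P(D | not H)
  evidence  = p_pos_h*prior + p_pos_noh*(1 - prior); % P(D), by the sum rule
  posterior = p_pos_h*prior / evidence;              % P(H | D), by Bayes rule
  fprintf('P(H | D) = %.3f\n', posterior);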
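
EM for a mixture of Gaussians (lectures 3-5). A sketch of EM for a two-component, one-dimensional Gaussian mixture on synthetic data; the initialisation and the number of iterations are arbitrary choices, not taken from the course:

  x  = [randn(100,1) - 2; 0.5*randn(100,1) + 2];  % synthetic 1-D data from two clusters
  N  = length(x);
  mu = [-1; 1]; s2 = [1; 1]; w = [0.5; 0.5];      % initial means, variances, mixing weights
  R  = zeros(N, 2);                               % responsibilities

  for iter = 1:50
    % E-step: posterior probability that each point came from each component
    for k = 1:2
      R(:,k) = w(k) * exp(-(x - mu(k)).^2 / (2*s2(k))) / sqrt(2*pi*s2(k));
    end
    R = R ./ repmat(sum(R, 2), 1, 2);
    % M-step: re-estimate the parameters from the responsibility-weighted data
    Nk = sum(R, 1)';
    w  = Nk / N;
    for k = 1:2
      mu(k) = sum(R(:,k) .* x) / Nk(k);
      s2(k) = sum(R(:,k) .* (x - mu(k)).^2) / Nk(k);
    end
  end
  fprintf('means %.2f %.2f, variances %.2f %.2f, weights %.2f %.2f\n', mu, s2, w);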
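
Belief propagation (lectures 6-7). Sum-product message passing on a chain of three binary variables with made-up potentials, checked against brute-force enumeration; this is far simpler than the linked demo but shows the same message-passing idea:

  phi1  = [0.6; 0.4];              % unary potential on x1
  psi12 = [0.8 0.2; 0.3 0.7];      % pairwise potential on (x1, x2)
  psi23 = [0.5 0.5; 0.1 0.9];      % pairwise potential on (x2, x3)

  m12 = psi12' * phi1;             % message x1 -> x2
  m23 = psi23' * m12;              % message x2 -> x3
  m32 = psi23  * [1; 1];           % message x3 -> x2
  m21 = psi12  * m32;              % message x2 -> x1

  b2 = m12 .* m32;  b2 = b2 / sum(b2);   % marginal of x2 from its incoming messages

  p2 = zeros(2, 1);                % brute-force check by summing the joint
  for x1 = 1:2, for x2 = 1:2, for x3 = 1:2
    p2(x2) = p2(x2) + phi1(x1) * psi12(x1,x2) * psi23(x2,x3);
  end, end, end
  fprintf('p(x2=1): BP %.3f, enumeration %.3f\n', b2(1), p2(1)/sum(p2));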
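
Metropolis sampling (lectures 8-9). A bare-bones random-walk Metropolis sampler for an invented one-dimensional target (a mixture of two Gaussians); the step size and run length are arbitrary:

  % unnormalised log-density of the target: a mixture of two unit-variance Gaussians
  logp = @(x) log(0.3*exp(-0.5*(x + 2).^2) + 0.7*exp(-0.5*(x - 2).^2));

  T = 10000; step = 2.0; x = 0;
  samples = zeros(T, 1);
  for t = 1:T
    xprop = x + step*randn;                 % symmetric random-walk proposal
    if log(rand) < logp(xprop) - logp(x)    % accept with probability min(1, p(x')/p(x))
      x = xprop;
    end
    samples(t) = x;                         % on rejection the old state is kept
  end
  fprintf('sample mean %.2f (true mean is 0.3*(-2) + 0.7*2 = 0.8)\n', mean(samples));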
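
Model comparison with BIC (lecture 11). Scoring polynomial fits of different degrees to the same noisy data; lower BIC is better. The data and the degrees tried are invented for the example:

  x = linspace(-1, 1, 30)';  y = 2*x - x.^2 + 0.1*randn(30, 1);   % synthetic regression data
  n = length(y);
  for d = 1:4
    p  = polyfit(x, y, d);                  % maximum-likelihood fit (least squares)
    r  = y - polyval(p, x);
    s2 = mean(r.^2);                        % ML estimate of the noise variance
    loglik = -0.5*n*(log(2*pi*s2) + 1);     % Gaussian log-likelihood at the ML parameters
    k  = d + 2;                             % d+1 coefficients plus the noise variance
    fprintf('degree %d: BIC = %.1f\n', d, -2*loglik + k*log(n));
  end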
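
Value iteration (lectures 12-14). Value iteration on a tiny made-up MDP with three states and two actions; it is not one of the examples used in the course:

  P = zeros(3, 3, 2);                                    % P(s, s', a): transition probabilities
  P(:,:,1) = [0.9 0.1 0.0; 0.1 0.8 0.1; 0.0 0.1 0.9];    % action 1: tend to stay put
  P(:,:,2) = [0.1 0.8 0.1; 0.0 0.2 0.8; 0.0 0.0 1.0];    % action 2: drift towards state 3
  R   = [0; 0; 1];                                       % reward for being in each state
  gam = 0.9;                                             % discount factor

  V = zeros(3, 1);  Q = zeros(3, 2);
  for iter = 1:200
    for a = 1:2
      Q(:,a) = R + gam * (P(:,:,a) * V);                 % Bellman backup for each action
    end
    [V, policy] = max(Q, [], 2);                         % greedy values and greedy policy
  end
  fprintf('V = %.2f %.2f %.2f,  policy = %d %d %d\n', V, policy);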