Probabilistic Machine Learning chunks

The topics and concepts taught in the Probabilistic Machine Learning course are broken down into a number of chunks, which are detailed on this page. The goal of this organisation is to help students identify and find material. Chunks are designed to be concise, fairly self-contained, and clearly labeled with their content, prerequisites and relationships to other chunks.

The entire course falls naturally into three parts: Gaussian processes, probabilistic ranking and text modeling.

Part I: Supervised non-parametric probabilistic inference using Gaussian processes

In a nutshell, Part I is concerned with the following (a short illustrative code sketch follows the list):

  1. Modelling data
  2. Linear-in-the-parameters regression
  3. Likelihood and the concept of noise
  4. Probability fundamentals
  5. Bayesian inference and prediction with finite regression models
  6. Background: Some useful Gaussian and Matrix equations
  7. Marginal likelihood
  8. Distributions over parameters and over functions
  9. Gaussian process
  10. Gaussian processes and data
  11. Gaussian process marginal likelihood and hyperparameters
  12. Correspondence between linear-in-the-parameters models and GPs
  13. Covariance functions
  14. The gpml toolbox
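
As a rough illustration of the material in this part, here is a minimal sketch of Gaussian process regression with a squared-exponential covariance function in plain Python/numpy, rather than the gpml toolbox used in the course. The toy data and hyperparameter values are illustrative assumptions, not material from the course; the sketch computes the predictive mean and variance at test inputs and the log marginal likelihood used for setting hyperparameters.

```python
# A minimal, illustrative sketch of GP regression (not the course's gpml toolbox).
# Toy data and hyperparameter values below are assumptions for illustration only.
import numpy as np

def sq_exp_cov(a, b, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance k(a, b) = s^2 exp(-(a - b)^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.array([-1.5, -0.5, 0.3, 1.2])      # toy training inputs
y = np.sin(x)                             # toy training targets
xs = np.linspace(-3.0, 3.0, 101)          # test inputs
noise_var = 0.1

# Posterior predictive mean and variance of the GP at the test inputs.
K = sq_exp_cov(x, x) + noise_var * np.eye(len(x))
Ks = sq_exp_cov(x, xs)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha                                        # predictive mean
v = np.linalg.solve(L, Ks)
var = np.diag(sq_exp_cov(xs, xs)) - np.sum(v * v, axis=0)  # predictive variance

# Log marginal likelihood, the quantity used to set hyperparameters in the course.
log_ml = -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) - 0.5 * len(x) * np.log(2 * np.pi)
print("log marginal likelihood:", log_ml)
```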

Part II: Ranking

  1. Ranking: motivation and tennis example
  2. Gibbs sampling
  3. Gibbs sampling in TrueSkill (a short sketch follows this list)
  4. Representing distributions using factor graphs
  5. Message passing in TrueSkill
  6. Approximate messages using moment matching
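
As a rough illustration of Gibbs sampling in a TrueSkill-style model, the sketch below alternates between sampling each game's latent performance difference (a truncated Gaussian, given the observed outcome) and the players' skills (a multivariate Gaussian, given those differences). The toy game results, prior variance and iteration counts are illustrative assumptions, not the course's settings.

```python
# A minimal sketch of Gibbs sampling in a TrueSkill-style skill model.
# Toy games, prior variance and iteration counts are illustrative assumptions.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

games = [(0, 1), (0, 1), (1, 0)]        # (winner, loser) indices for each game
n_players, prior_var = 2, 1.0

w = np.zeros(n_players)                 # current skill sample
samples = []
for it in range(2000):
    # 1. Sample each game's performance difference t_g | w, outcome:
    #    t_g ~ N(w_winner - w_loser, 1), truncated to t_g > 0.
    t = np.empty(len(games))
    for g, (i, j) in enumerate(games):
        mu = w[i] - w[j]
        t[g] = truncnorm.rvs(-mu, np.inf, loc=mu, scale=1.0, random_state=rng)

    # 2. Sample skills w | t: the conditional is a multivariate Gaussian.
    prec = np.eye(n_players) / prior_var
    lin = np.zeros(n_players)
    for g, (i, j) in enumerate(games):
        a = np.zeros(n_players)
        a[i], a[j] = 1.0, -1.0
        prec += np.outer(a, a)          # each game adds a rank-one precision term
        lin += a * t[g]
    cov = np.linalg.inv(prec)
    w = rng.multivariate_normal(cov @ lin, cov)
    samples.append(w.copy())

samples = np.array(samples[500:])       # discard burn-in
print("posterior mean skills:", samples.mean(axis=0))
```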

Part III: Modeling text

  1. Modeling text
  2. Discrete distributions on binary variables (tossing coins)
  3. Discrete distributions over multiple outcomes
  4. Document models
  5. The Expectation Maximization (EM) algorithm (a short sketch follows this list)
  6. Gibbs sampling for the Bayesian mixture model
  7. Latent Dirichlet Allocation topic models
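
As a rough illustration of document models and the EM algorithm, the sketch below fits a mixture-of-multinomials document model (a "mixture of unigrams") to a toy term-count matrix. The corpus, number of components and smoothing constant are illustrative assumptions, not material from the course.

```python
# A minimal sketch of EM for a mixture-of-multinomials document model.
# The toy counts, number of components and smoothing are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Toy corpus: each row is a document's word-count vector over a 4-word vocabulary.
counts = np.array([[5, 1, 0, 0],
                   [4, 2, 1, 0],
                   [0, 0, 5, 3],
                   [0, 1, 4, 4]], dtype=float)
D, V = counts.shape
K = 2                                         # number of mixture components

pi = np.full(K, 1.0 / K)                      # mixing proportions
theta = rng.dirichlet(np.ones(V), size=K)     # per-component word distributions

for it in range(50):
    # E-step: responsibilities r[d, k] proportional to
    #         pi[k] * prod_w theta[k, w] ** counts[d, w], computed in log space.
    log_r = np.log(pi)[None, :] + counts @ np.log(theta).T
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate mixing proportions and word distributions.
    pi = r.mean(axis=0)
    theta = r.T @ counts + 1e-3               # small smoothing to avoid zeros
    theta /= theta.sum(axis=1, keepdims=True)

print("mixing proportions:", pi)
print("word distributions:\n", theta)
```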