24 November 2021
Natural-Gradient Variational Inference 2: ImageNet-scale
Having derived a natural-gradient variational inference algorithm, we now turn our attention to scaling it all the way to ImageNet. By borrowing tricks developed for Adam, we can get fast convergence, good performance, and reasonable uncertainties.
21 July 2021
Bayesian Deep Learning via Subnetwork Inference
Bayesian inference has the potential to address shortcomings of deep neural networks (DNNs) such as poor calibration. However, scaling Bayesian methods to modern DNNs is challenging. This blog post describes subnetwork inference, a method that tackles this issue by doing inference over only a small, carefully selected subset of the DNN weights.
30 April 2021
Reinforcement Learning for 3D Molecular Design
Automating the design of molecules with desirable properties can greatly accelerate the search for novel drugs and materials. However, to make further progress we need to go beyond graph-based approaches. In this blog post, we use ideas from reinforcement learning and quantum chemistry to take a first step towards 3D molecular design.
13 April 2021
Natural-Gradient Variational Inference 1: The Maths
What does it mean to combine variational inference with natural gradients? Can this scale to neural networks? What kind of approximations do we need to make? We take a detailed look at the mathematical derivations of such algorithms.
31 March 2021
What Keeps a Bayesian Awake At Night? Part 2: Night Time
The theory of subjective probability describes ideally consistent behaviour and ought not, therefore, be taken too literally.
— Leonard Jimmie Savage (1917–1971)
31 March 2021
What Keeps a Bayesian Awake At Night? Part 1: Day Time
The theory of probabilities is at bottom nothing but common sense reduced to calculus;
it enables us to appreciate with exactness that which accurate minds feel with a sort of instinct for which ofttimes they are unable to account.
— Pierre-Simon Laplace (1749–1827)