I am a PhD student in the Machine Learning Group, supervised by José Miguel Hernández-Lobato since October 2020. I am primarily interested in developing and analysing data compression algorithms and applying them to neural data compression. I am particularly interested in exploring simulation methods for point processes, which have deep connections with data compression algorithms. My broader interests include generative and probabilistic modelling, e.g. using variational autoencoders and diffusion models, as well as Bayesian optimisation. I hold an MPhil in Machine Learning and Machine Intelligence from the University of Cambridge and a Joint BSc (Hons) in Mathematics and Computer Science from the University of St Andrews.
Publications
Fast Relative Entropy Coding with A* Coding
Gergely Flamich, Stratis Markou, José Miguel Hernández-Lobato, 2022. (In 39th International Conference on Machine Learning).
Relative entropy coding (REC) algorithms encode a sample from a target distribution Q using a proposal distribution P, such that the expected codelength is 𝒪(D_KL[Q||P]). REC can be seamlessly integrated with existing learned compression models since, unlike entropy coding, it does not assume discrete Q or P, and does not require quantisation. However, general REC algorithms require an intractable Ω(e^D_KL[Q||P]) runtime. We introduce AS* and AD* coding, two REC algorithms based on A* sampling. We prove that, for continuous distributions over ℝ, if the density ratio is unimodal, AS* has 𝒪(D_∞[Q||P]) expected runtime, where D_∞[Q||P] is the Rényi ∞-divergence. We provide experimental evidence that AD* also has 𝒪(D_∞[Q||P]) expected runtime. We prove that AS* and AD* achieve an expected codelength of 𝒪(D_KL[Q||P]). Further, we introduce DAD*, an approximate algorithm based on AD* which retains its favourable runtime and has bias similar to that of alternative methods. Focusing on VAEs, we propose the IsoKL VAE (IKVAE), which can be used with DAD* to further improve compression efficiency. We evaluate A* coding with (IK)VAEs on MNIST, showing that it can losslessly compress images near the theoretically optimal limit.
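The core idea of REC (the encoder and decoder share a proposal P and a common random seed, and only the index of an accepted candidate is transmitted) can be illustrated with a basic rejection-based scheme. This is a minimal hypothetical sketch, not the paper's AS*/AD* algorithms: the names `rec_encode`/`rec_decode`, the rejection constant `M`, and the Gaussian example are my own. It is exactly the kind of general scheme whose runtime blows up with D_KL[Q||P], motivating the faster methods in the paper.

```python
import numpy as np
from scipy.stats import norm

def rec_encode(q, p, M, seed, max_steps=100_000):
    """Draw candidates x_1, x_2, ... from P with a shared seed and return
    the index of the first one accepted under the ratio q(x)/(M p(x)).

    M must upper-bound the density ratio q(x)/p(x); only the integer
    index needs to be transmitted, since the decoder can replay the
    same pseudo-random candidate stream.
    """
    rng = np.random.default_rng(seed)
    for i in range(max_steps):
        x = p.rvs(random_state=rng)        # candidate from the proposal
        u = rng.uniform()                  # acceptance coin flip
        if u < q.pdf(x) / (M * p.pdf(x)):
            return i
    raise RuntimeError("no candidate accepted within max_steps")

def rec_decode(p, index, seed):
    """Regenerate the accepted sample from the index and shared seed."""
    rng = np.random.default_rng(seed)
    x = None
    for _ in range(index + 1):
        x = p.rvs(random_state=rng)
        rng.uniform()  # consume the acceptance draw to stay in sync
    return x

# Example: target Q = N(1, 1), proposal P = N(0, 2); the density ratio
# is bounded, so M = 3 is a valid rejection constant here.
q, p = norm(1.0, 1.0), norm(0.0, 2.0)
idx = rec_encode(q, p, M=3.0, seed=0)
x_decoded = rec_decode(p, idx, seed=0)
```

Note that the expected index (and hence runtime) of such a scheme scales with the rejection constant rather than with D_KL[Q||P], which is why general-purpose REC is exponentially slow and why the 𝒪(D_∞[Q||P])-runtime A* variants above are of interest.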