Alessandro completed a BA in Philosophy before moving on to a Master’s degree in Machine Learning at UCL. Continuing the work of his Master’s thesis, he spent one year at the Gatsby Computational Neuroscience Unit working with Professor Sahani on latent variable dynamical systems. Prior to joining the Cambridge-Tübingen Machine Learning group in October 2016, he worked as a Data Scientist at a London startup. His research interests lie in time-series analysis (especially state-space models), reinforcement learning and approximate inference methods. He is funded by the Cambridge-Tübingen Fellowship and the Qualcomm Innovation Fellowship.

Publications

Evaluating Model-Based Planning and Planner Amortization for Continuous Control

Arunkumar Byravan, Leonard Hasenclever, Piotr Trochim, Mehdi Mirza, Alessandro Davide Ialongo, Yuval Tassa, Jost Tobias Springenberg, Abbas Abdolmaleki, Nicolas Heess, Josh Merel, Martin Riedmiller, 2022. (In 10th International Conference on Learning Representations).

Abstract

There is a widespread intuition that model-based control methods should be able to surpass the data efficiency of model-free approaches. In this paper we attempt to evaluate this intuition on various challenging locomotion tasks. We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning; the learned policy serves as a proposal for MPC. We show that MPC with learned proposals and models (trained on the fly or transferred from related tasks) can significantly improve performance and data efficiency with respect to model-free methods. However, we find that well-tuned model-free agents are strong baselines even for high DoF control problems. Finally, we show that it is possible to distil a model-based planner into a policy that amortizes the planning computation without any loss of performance.
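As a concrete illustration of the hybrid approach described above, the sketch below shows random-shooting MPC in which a learned policy supplies the proposal for candidate action sequences. This is a minimal sketch, not the paper's planner: `policy`, `dynamics_model` and `reward_model` are placeholder callables, and the paper's proposal distributions, planning algorithm and training setup are more sophisticated.

```python
import numpy as np

def mpc_with_policy_proposal(state, policy, dynamics_model, reward_model,
                             horizon=10, n_samples=64, noise_std=0.1,
                             rng=np.random.default_rng(0)):
    """Random-shooting MPC: sample action sequences around a learned policy's
    proposals, roll them out through a learned dynamics model, and return the
    first action of the highest-return sequence (receding-horizon control)."""
    best_return, best_first_action = -np.inf, None
    for _ in range(n_samples):
        s, total_return, actions = state, 0.0, []
        for _ in range(horizon):
            a_prop = policy(s)                          # amortized proposal
            a = a_prop + noise_std * rng.standard_normal(np.shape(a_prop))
            s = dynamics_model(s, a)                    # learned one-step model
            total_return += reward_model(s, a)
            actions.append(a)
        if total_return > best_return:
            best_return, best_first_action = total_return, actions[0]
    return best_first_action                            # execute, then replan
```

Distilling the planner into a policy, as in the paper, roughly amounts to training `policy` to imitate the actions the planner ends up selecting, so that the planning computation is amortized away at deployment time.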

Quantum machine learning: a classical perspective

Carlo Ciliberto, Mark Herbster, Alessandro Davide Ialongo, Massimiliano Pontil, Andrea Rocchetto, Simone Severini, Leonard Wossnig, 2018. (In Proc. R. Soc. A). DOI: 10.1098/rspa.2017.0551.

Abstract

Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets are motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.

Variational Inference in Dynamical Systems

Alessandro Davide Ialongo, 2022. PhD thesis, University of Cambridge, Department of Engineering, Cambridge, UK. DOI: 10.17863/CAM.91368.

Abstract

Dynamical systems are a powerful formalism to analyse the world around us. Many datasets are sequential in nature, and can be described by a discrete time evolution law. We are interested in approaching the analysis of such datasets from a probabilistic perspective. We would like to maintain justified beliefs about quantities which, though useful in explaining the behaviour of a system, may not be observable, as well as about the system’s evolution itself, especially in regimes we have not yet observed in our data. The framework of statistical inference gives us the tools to do so, yet, for many systems of interest, performing inference exactly is not computationally or analytically tractable. The contribution of this thesis, then, is twofold: first, we uncover two sources of bias in existing variational inference methods applied to dynamical systems in general, and state space models whose transition function is drawn from a Gaussian process (GPSSM) in particular. We show bias can derive from assuming posteriors in non-linear systems to be jointly Gaussian, and from assuming that we can sever the dependence between latent states and transition function in state space model posteriors. Second, we propose methods to address these issues, undoing the resulting biases. We do this without compromising on computational efficiency or on the ability to scale to larger datasets and higher dimensions, compared to the methods we rectify. One method, the Markov Autoregressive Flow (Markov AF), addresses the Gaussian assumption by providing a more flexible class of posteriors, based on normalizing flows, which can be easily evaluated, sampled, and optimised. The other method, Variationally Coupled Dynamics and Trajectories (VCDT), tackles the factorisation assumption, leveraging sparse Gaussian processes and their variational representation to reintroduce dependence between latent states and the transition function at no extra computational cost. Since the objective of inference is to maintain calibrated beliefs, if we employed approximations which are significantly biased in non-linear, noisy systems, or when there is little data available, we would have failed in our objective, as those are precisely the regimes in which uncertainty quantification is all the more important. Hence we think it is essential, if we wish to act optimally on such beliefs, to uncover, and, if possible, to correct, all sources of systematic bias in our inference methods.
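To give a flavour of the Markov AF construction, the sketch below draws a latent trajectory from an autoregressive flow with Markov structure and accumulates its log-density via the change of variables formula. The affine conditionals and the callables `mu` and `log_sigma` are illustrative stand-ins, not the thesis's exact parameterisation.

```python
import numpy as np

def sample_and_logprob(mu, log_sigma, x0, T, rng=np.random.default_rng(0)):
    """Sample a trajectory x_{1:T} from a Markov autoregressive flow
    q(x_{1:T}) = prod_t q(x_t | x_{t-1}), where each conditional is an affine
    transformation of standard Gaussian noise, and accumulate its log-density.
    `mu` and `log_sigma` are placeholder callables (e.g. small neural nets)."""
    x, log_q, traj = x0, 0.0, []
    for _ in range(T):
        eps = rng.standard_normal(np.shape(x))
        m, ls = mu(x), log_sigma(x)
        x = m + np.exp(ls) * eps                    # invertible affine step
        # Change of variables: log q(x_t | x_{t-1}) = log N(eps; 0, I) - sum(ls)
        log_q += -0.5 * np.sum(eps**2 + np.log(2 * np.pi)) - np.sum(ls)
        traj.append(x)
    return np.stack(traj), log_q
```

Because both the sample and its log-density are available in closed form, a posterior of this kind can be plugged directly into a stochastic variational objective and optimised with standard gradient methods.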

Non-Factorised Variational Inference in Dynamical Systems

Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen, December 2018. (In First Symposium on Advances in Approximate Bayesian Inference). Montreal.

Abstract

We focus on variational inference in dynamical systems where the discrete time transition function (or evolution rule) is modelled by a Gaussian process. The dominant approach so far has been to use a factorised posterior distribution, decoupling the transition function from the system states. This is not exact in general and can lead to an overconfident posterior over the transition function as well as an overestimation of the intrinsic stochasticity of the system (process noise). We propose a new method that addresses these issues and incurs no additional computational costs.
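Written out, with notation chosen here rather than quoted from the paper, the contrast is between a posterior that severs the coupling between the transition function f and the latent states, and one that retains it:

```latex
% Factorised posterior: transition function decoupled from the states
q(f, x_{0:T}) = q(f)\, q(x_{0:T})
% Non-factorised posterior: states depend on the sampled transition function
q(f, x_{0:T}) = q(f)\, q(x_0) \prod_{t=1}^{T} q(x_t \mid f, x_{t-1})
```

Only the second family lets uncertainty in f propagate into the state trajectories, which is what counteracts the overconfidence in the transition function and the inflated process noise described above.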

Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models

Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen, June 2019. (In 36th International Conference on Machine Learning). Long Beach.

Abstract

We identify a new variational inference scheme for dynamical systems whose transition function is modelled by a Gaussian process. Inference in this setting has either employed computationally intensive MCMC methods, or relied on factorisations of the variational posterior. As we demonstrate in our experiments, the factorisation between latent system states and transition function can lead to a miscalibrated posterior and to learning unnecessarily large noise terms. We eliminate this factorisation by explicitly modelling the dependence between state trajectories and the Gaussian process posterior. Samples of the latent states can then be tractably generated by conditioning on this representation. The method we obtain (VCDT: variationally coupled dynamics and trajectories) gives better predictive performance and more calibrated estimates of the transition function, yet maintains the same time and space complexities as mean-field methods. Code is available at: https://github.com/ialong/GPt.
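A minimal sketch of the conditioning step is given below: a single sample of inducing outputs `u` at inducing inputs `Z` fixes a representation of the transition function, and latent states are rolled forward conditioned on it. For brevity the sketch propagates only the GP's conditional mean plus additive process noise; the method in the paper (and the linked GPt code) uses the full conditional distribution, and the kernel, dimensions and noise level here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of a and b."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sample_trajectory(x0, Z, u, T, process_noise=0.05,
                      rng=np.random.default_rng(0)):
    """Roll latent states forward by conditioning the GP transition on one
    sample of inducing outputs u (shape [M, D]) at inducing inputs Z
    (shape [M, D]); the initial state x0 has shape [D]."""
    Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))       # jitter for stability
    alpha = np.linalg.solve(Kzz, u)               # K_ZZ^{-1} u
    x, traj = x0, [x0]
    for _ in range(T):
        kxz = rbf(x[None, :], Z)                  # cross-covariance k(x, Z)
        f_mean = (kxz @ alpha).ravel()            # GP conditional mean at x
        x = f_mean + process_noise * rng.standard_normal(x.shape)
        traj.append(x)
    return np.stack(traj)
```

Because the states are generated conditionally on the sampled inducing outputs, their dependence on the transition function is preserved, while the cost per step stays linear in the number of inducing points, as in mean-field schemes.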

Closed-form Inference and Prediction in Gaussian Process State-Space Models

Alessandro Davide Ialongo, Mark van der Wilk, Carl Edward Rasmussen, December 2017. (In NIPS Time Series Workshop 2017). Long Beach.

Abstract

We examine an analytic variational inference scheme for the Gaussian Process State Space Model (GPSSM), a probabilistic model for system identification and time-series modelling. Our approach performs variational inference over both the system states and the transition function. We exploit Markov structure in the true posterior, as well as an inducing point approximation to achieve linear time complexity in the length of the time series. In contrast to previous approaches, no Monte Carlo sampling is required: inference is cast as a deterministic optimisation problem. In a number of experiments, we demonstrate the ability to model non-linear dynamics in the presence of both process and observation noise as well as to impute missing information (e.g. velocities from raw positions through time), to de-noise, and to estimate the underlying dimensionality of the system. Finally, we also introduce a closed-form method for multi-step prediction, and a novel criterion for assessing the quality of our approximate posterior.
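For reference, a generic Gaussian Process State-Space Model of the kind targeted throughout this line of work can be written as follows; the notation is chosen for this sketch, and the papers' exact emission models and noise parameterisations may differ.

```latex
f \sim \mathcal{GP}\big(0,\, k(\cdot,\cdot)\big), \qquad
x_{t+1} = f(x_t) + \varepsilon_t, \quad \varepsilon_t \sim \mathcal{N}(0, Q), \qquad
y_t = g(x_t) + \nu_t, \quad \nu_t \sim \mathcal{N}(0, R).
```

Inference then targets the joint posterior over the latent states x_{0:T} and the transition function f given the observations y_{1:T}.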

Iterative Amortized Policy Optimization

Joseph Marino, Alexandre Piché, Alessandro Davide Ialongo, Yisong Yue, 2021. (In Advances in Neural Information Processing Systems 34). Edited by M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, J. Wortman Vaughan. Curran Associates, Inc.

Abstract

Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high-value actions. From the variational inference perspective on RL, policy networks, when used with entropy or KL regularization, are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly. However, direct amortized mappings can yield suboptimal policy estimates and restricted distributions, limiting performance and exploration. Given this perspective, we consider the more flexible class of iterative amortized optimizers. We demonstrate that the resulting technique, iterative amortized policy optimization, yields performance improvements over direct amortization on benchmark continuous control tasks.
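The sketch below contrasts the two amortization schemes in the simplest terms: a direct amortized mapping produces the policy parameters in one pass, while an iterative amortized optimizer refines them over several gradient-informed steps. `encoder`, `grad_objective` and `amortizer` are placeholder callables, and the paper's estimator of the objective and its learned update rule are not reproduced here.

```python
import numpy as np

def direct_policy(state, encoder):
    """Direct amortization: a single feed-forward pass maps the state to the
    policy distribution's parameters (e.g. a Gaussian mean and log-std)."""
    return encoder(state)

def iterative_policy(state, init_params, grad_objective, amortizer, n_steps=5):
    """Iterative amortization: a learned update rule repeatedly refines the
    policy parameters using gradients of the (estimated) objective, rather
    than predicting them in one pass."""
    params = np.asarray(init_params, dtype=float)
    for _ in range(n_steps):
        g = grad_objective(state, params)   # grad of estimated return + regularizer
        params = amortizer(params, g)       # learned refinement step
    return params
```

The extra refinement steps are what allow the iterative scheme to escape the restricted, potentially suboptimal estimates of a purely direct mapping.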
