Publications
Bayesian Lipschitz Constant Estimation and Quadrature
Jan-Peter Calliess, December 2015. (In Workshop on Probabilistic Integration, NIPS). Montreal, Canada.
Abstract:
Lipschitz quadrature methods provide an approach to one-dimensional numerical integration on bounded domains. On the basis of the assumption that the integrand is Lipschitz continuous with a known Lipschitz constant, these quadrature rules can provide a tight error bound around their integral estimates and utilise the Lipschitz constant to guide exploration in the context of adaptive quadrature. In this paper, we outline our ongoing work on extending this approach to settings where the Lipschitz constant is probabilistically uncertain. As the key component, we introduce a Bayesian approach for updating a subjectively probabilistic belief of the Lipschitz constant. Combined with any Lipschitz quadrature rule, we obtain an approach for translating a sample into an integral estimate with probabilistic uncertainty intervals. The paper concludes with an illustration of the approach followed by a discussion of open issues and future work.
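For illustration, here is a minimal sketch (not code from the paper) of the deterministic Lipschitz quadrature rule the paper generalises: with a known constant L, the samples induce ceiling and floor envelopes whose integrals bracket the true integral. The grid-based integration below is an illustrative simplification; the piecewise-linear envelopes could also be integrated exactly.

import numpy as np

def lipschitz_quadrature(xs, fs, L, a, b, grid=10001):
    """Integral estimate and hard error bound on [a, b] for a function
    assumed L-Lipschitz, given samples fs taken at locations xs."""
    t = np.linspace(a, b, grid)
    # Ceiling and floor envelopes implied by the Lipschitz assumption.
    upper = np.min(fs[:, None] + L * np.abs(t[None, :] - xs[:, None]), axis=0)
    lower = np.max(fs[:, None] - L * np.abs(t[None, :] - xs[:, None]), axis=0)
    hi, lo = np.trapz(upper, t), np.trapz(lower, t)
    return 0.5 * (hi + lo), 0.5 * (hi - lo)   # estimate, worst-case error

# Example: integrate sin over [0, pi] (true value 2) from five samples.
xs = np.linspace(0.0, np.pi, 5)
estimate, bound = lipschitz_quadrature(xs, np.sin(xs), L=1.0, a=0.0, b=np.pi)

The paper's probabilistic extension replaces the fixed L by a posterior belief, yielding probabilistic rather than hard uncertainty intervals around the estimate.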
Lazily Adapted Constant Kinky Inference for Nonparametric Regression and Model-Reference Adaptive Control
Jan-Peter Calliess, 2016. (arXiv).
Abstract:
Techniques known as Nonlinear Set Membership prediction, Lipschitz Interpolation or Kinky Inference are approaches to machine learning that utilise presupposed Lipschitz properties to compute inferences over unobserved function values. Provided a bound on the true best Lipschitz constant of the target function is known a priori, they offer convergence guarantees as well as bounds around the predictions. Considering a more general setting that builds on Hölder continuity relative to pseudo-metrics, we propose a method for estimating the Hölder constant online from function value observations that are possibly corrupted by bounded observational errors. Utilising this to compute adaptive parameters within a kinky inference rule gives rise to a nonparametric machine learning method, for which we establish strong universal approximation guarantees. That is, we show that our prediction rule can learn any continuous function in the limit of increasingly dense data, to within a worst-case error bound that depends on the level of observational uncertainty. We apply our method in the context of nonparametric model-reference adaptive control (MRAC). Across a range of simulated aircraft roll-dynamics and performance metrics, our approach outperforms recently proposed alternatives based on Gaussian processes and RBF neural networks. For discrete-time systems, we provide stability guarantees for our learning-based controllers in both the batch and the online learning settings.
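For readers unfamiliar with the prediction rule, a minimal one-dimensional sketch of kinky inference follows, assuming a given Hölder constant L and exponent p (the paper's contribution is to estimate L online rather than presuppose it):

import numpy as np

def kinky_inference(x, X, y, L, p=1.0):
    """Predict f(x) as the midpoint of the ceiling and floor functions
    implied by Hoelder continuity with constant L and exponent p."""
    d = L * np.abs(X - x) ** p      # Hoelder pseudo-metric on scalar inputs
    ceiling = np.min(y + d)         # largest value consistent with the data
    floor = np.max(y - d)           # smallest value consistent with the data
    return 0.5 * (ceiling + floor)

# Example: predict f(0.5) from samples of f(x) = |x|, assuming L = 1, p = 1.
X = np.array([-1.0, 0.0, 1.0])
prediction = kinky_inference(0.5, X, np.abs(X), L=1.0)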
Lipschitz Optimisation for Lipschitz Interpolation
Jan-Peter Calliess, May 2017. (In 2017 American Control Conference (ACC 2017)). Seattle, WA, USA.
Abstract:
Techniques known as Nonlinear Set Membership prediction, Kinky Inference or Lipschitz Interpolation are fast and numerically robust approaches to nonparametric machine learning that have been proposed for use in system identification and learning-based control. They utilise presupposed Lipschitz properties to compute inferences over unobserved function values. Unfortunately, most of these approaches rely on exact knowledge of the input-space metric as well as of the Lipschitz constant. Furthermore, existing techniques for estimating the Lipschitz constant from data are either not robust to noise or seem ad hoc, and are typically decoupled from the ultimate learning and prediction task. To overcome these limitations, we propose an approach for optimising the parameters of the presupposed metric by minimising validation-set prediction errors. To avoid poor performance due to local minima, we propose to utilise Lipschitz properties of the optimisation objective itself to ensure global optimisation success. The resulting approach is a new, flexible method for nonparametric black-box learning. We illustrate its competitiveness on a set of benchmark problems.
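A minimal sketch of the idea, not the paper's implementation: tune a hyper-parameter of the interpolant (here simplified to the scalar constant L rather than the full metric parameters) by globally minimising validation error with the classic Piyavskii-Shubert rule, which exploits an assumed Lipschitz bound Lg on the objective itself. All data and bounds below are illustrative.

import numpy as np

def lip_interp(x, X, y, L):
    # Lipschitz interpolation prediction (midpoint of ceiling and floor).
    d = L * np.abs(X - x)
    return 0.5 * (np.min(y + d) + np.max(y - d))

def shubert_minimise(g, lo, hi, Lg, iters=40):
    """Piyavskii-Shubert global minimisation of a Lipschitz objective g on
    [lo, hi]: repeatedly evaluate g at the minimiser of the piecewise-linear
    lower envelope, which cannot miss the global minimum."""
    pts, vals = [lo, hi], [g(lo), g(hi)]
    for _ in range(iters):
        order = np.argsort(pts)
        best_x, best_lb = None, np.inf
        for i, j in zip(order[:-1], order[1:]):
            # Intersection of the two downward cones over [pts[i], pts[j]].
            x = 0.5 * (pts[i] + pts[j]) + (vals[i] - vals[j]) / (2 * Lg)
            lb = 0.5 * (vals[i] + vals[j]) - 0.5 * Lg * (pts[j] - pts[i])
            if lb < best_lb:
                best_x, best_lb = x, lb
        pts.append(best_x)
        vals.append(g(best_x))
    k = int(np.argmin(vals))
    return pts[k], vals[k]

# Hyper-parameter training: pick the L that minimises validation error.
Xtr = np.linspace(0.0, 1.0, 8); ytr = np.sin(3.0 * Xtr)
Xv = np.linspace(0.05, 0.95, 7); yv = np.sin(3.0 * Xv)
g = lambda L: float(np.mean([abs(lip_interp(x, Xtr, ytr, L) - t)
                             for x, t in zip(Xv, yv)]))
L_opt, val_err = shubert_minimise(g, 0.1, 10.0, Lg=5.0)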
A Distributed Mechanism for Multi-Agent Convex Optimisation and Coordination with No-Regret Learners
Jan-Peter Calliess, Nathan Korda, Geoffrey J. Gordon, December 2016. (In Workshop on Learning, Inference and Control of Multi-Agent Systems, NIPS). Barcelona, Spain.
Abstract:
We develop an indirect mechanism for coordinated, distributed multi-agent optimisation and decision-making. Our approach extends previous work on no-regret-learning-based mechanism design and renders it applicable to partial-information settings. We consider planning problems that can be stated as a collection of single-agent convex programmes coupled by common soft constraints. A key idea is to recast the joint optimisation problem as distributed learning in a repeated game between the original agents and a newly introduced group of adversarial agents who influence prices for decisions and facilitate coordination. Under the weak behavioural assumption that all agents employ selfish, sub-linear-regret algorithms in the course of the repeated game, we guarantee that our mechanism can achieve design goals such as social optimality (efficiency) and Nash-equilibrium convergence to within an error which approaches zero as the agents gain experience. Our error bounds are deterministic or probabilistic, depending on the nature of the regret bounds available for the algorithms employed by the agents. We illustrate our method in an emissions market application.
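A toy instance in the spirit of the mechanism, not the paper's construction: two agents with quadratic private costs share a soft capacity constraint (an emissions cap); each agent and an adversarial price player run projected online gradient descent/ascent, a standard sub-linear-regret algorithm, and average play approaches the socially optimal coordinated solution. All numbers are illustrative.

import numpy as np

a = np.array([0.9, 0.7])   # agents' privately preferred decisions
C = 1.0                    # shared soft cap on x1 + x2 (an emissions budget)
x = np.zeros(2)            # agents' decisions, kept in [0, 1]
lam = 0.0                  # price charged for the shared constraint
avg_x = np.zeros(2)
for t in range(1, 2001):
    eta = 1.0 / np.sqrt(t)
    # Agents: gradient step on private cost (x_i - a_i)^2 plus price lam * x_i.
    x = np.clip(x - eta * (2.0 * (x - a) + lam), 0.0, 1.0)
    # Adversarial price player: gradient ascent on lam * (sum(x) - C).
    lam = float(np.clip(lam + eta * (x.sum() - C), 0.0, 10.0))
    avg_x += (x - avg_x) / t   # running time-average of play

# Here avg_x approaches the efficient allocation (0.6, 0.4) with price 0.6.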
Nonlinear Set Membership Regression with Adaptive Hyper-Parameter Estimation for Online Learning and Control
Jan-Peter Calliess, Stephen Roberts, Carl Edward Rasmussen, Jan Maciejowski, 2018. (In Proceedings of the European Control Conference).
Abstract:
Methods known as Lipschitz Interpolation or Nonlinear Set Membership regression have become established tools for nonparametric system identification and data-based control. They utilise presupposed Lipschitz properties to compute inferences over unobserved function values. Unfortunately, they rely on a priori knowledge of a Lipschitz constant of the underlying target function, which serves as a hyperparameter. We propose a closed-form estimator of the Lipschitz constant that is robust to bounded observational noise in the data. Merging Lipschitz Interpolation with the new hyperparameter estimator yields a new nonparametric machine learning method, for which we derive online learning convergence guarantees. Furthermore, we apply our learning method to model-reference adaptive control and provide a convergence guarantee on the closed-loop dynamics. In a simulated flight manoeuvre control scenario, we compare the performance of our approach to recently proposed alternative learning-based controllers.
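One natural closed form consistent with this description, shown as a hedged sketch (the paper's exact estimator may differ in detail): take the largest pairwise difference quotient after discounting the worst case in which two observational errors, each bounded by the known noise level, inflate the function-value difference.

import numpy as np

def lipschitz_estimate(X, y, noise_bound):
    """Closed-form Lipschitz constant estimate from noisy samples: the
    largest pairwise slope, with 2 * noise_bound subtracted from each
    value difference to guard against noise-inflated slopes."""
    D = np.abs(X[:, None] - X[None, :])              # pairwise input distances
    F = np.abs(y[:, None] - y[None, :]) - 2.0 * noise_bound
    mask = D > 0.0
    return float(max(0.0, np.max(F[mask] / D[mask])))

# Example: samples of f(x) = 2x with a bounded disturbance of size 0.05.
X = np.linspace(0.0, 1.0, 20)
y = 2.0 * X + 0.05 * np.sin(37.0 * X)   # deterministic stand-in for bounded noise
L_hat = lipschitz_estimate(X, y, noise_bound=0.05)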
Lazily Adapted Constant Kinky Inference for non-parametric regression and model-reference adaptive control
Jan-Peter Calliess, Stephen J. Roberts, Carl Edward Rasmussen, Jan Maciejowski, 2020. (Automatica). DOI: 10.1016/j.automatica.2020.109216.
Abstract:
Techniques known as Nonlinear Set Membership prediction or Lipschitz Interpolation are approaches to supervised machine learning that utilise presupposed Lipschitz properties to perform inference over unobserved function values. Provided a bound on the true best Lipschitz constant of the target function is known a priori, they offer convergence guarantees, as well as bounds around the predictions. Considering a more general setting that builds on Lipschitz continuity, we propose an online method for estimating the Lipschitz constant from function value observations that are possibly corrupted by bounded noise. Utilising this as a data-dependent hyper-parameter gives rise to a nonparametric machine learning method, for which we establish strong universal approximation guarantees. That is, we show that our prediction rule can learn any continuous function on compact support in the limit of increasingly dense data, up to a worst-case error that can be bounded by the level of observational error. We also consider applications of our nonparametric regression method to learning-based control. For a class of discrete-time settings, we establish convergence guarantees on the closed-loop tracking error of our online learning-based controllers. To provide evidence that our method can be beneficial not only in theory but also in practice, we apply it in the context of nonparametric model-reference adaptive control (MRAC). Across a range of simulated aircraft roll-dynamics and performance metrics, our approach outperforms recently proposed alternatives based on Gaussian processes and RBF neural networks.
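To make the MRAC application concrete, a minimal sketch (not the paper's controller) of online kinky-inference learning inside a model-reference loop, assuming a scalar plant x+ = f(x) + u with known additive input gain and noise-free observations; the dynamics and gains are illustrative.

import numpy as np

f = lambda x: 0.5 * np.sin(2.0 * x)     # unknown plant nonlinearity (simulation)
a, L = 0.7, 1.0                         # reference-model pole; assumed Lipschitz bound
X, Y = [0.0], [f(0.0)]                  # growing data set (online learning)

def f_hat(x):
    # Kinky-inference prediction of f(x) from the data gathered so far.
    d = L * np.abs(np.array(X) - x)
    y = np.array(Y)
    return 0.5 * (np.min(y + d) + np.max(y - d))

x, x_ref = 1.0, 0.0
for t in range(50):
    r = np.sin(0.2 * t)                 # reference input
    u = a * x + r - f_hat(x)            # cancel the learned nonlinearity
    x_next = f(x) + u                   # plant step (known additive input gain)
    X.append(x); Y.append(x_next - u)   # recover f(x) exactly (noise-free case)
    x, x_ref = x_next, a * x_ref + r    # tracking error x - x_ref shrinks

The closed-loop error obeys e+ = a e + (f - f_hat)(x), so as the data densify and f_hat approaches f, the tracking error contracts at the reference-model rate.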
Learning-based Nonlinear Model Predictive Control
Daniel Limon, Jan-Peter Calliess, Jan Maciejowski, July 2017. (In IFAC 2017 World Congress). Toulouse, France. DOI: 10.1016/j.ifacol.2017.08.1050.
Abstract:
This paper presents stabilizing Model Predictive Controllers (MPC) in which the prediction models are inferred from experimental input-output data of the plant. Using a nonparametric machine learning technique called LACKI, the estimated (possibly nonlinear) model function is provided together with an estimate of the Hölder constant. Based on these, a number of predictive controllers with stability guaranteed by design are proposed. First, the case in which the prediction model is estimated offline is considered, and robust stability and recursive feasibility are ensured by using tightened constraints in the optimisation problem. This controller is then extended to the more interesting and complex case of online learning of the model, in which new data collected from feedback are added to enhance the prediction model. An online learning MPC based on a double sequence of predictions is proposed, and its stability is proved. These controllers are illustrated by simulations.
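A rough sketch of the constraint-tightening idea (loosely inspired by the paper; no stability claim is made for this toy): a kinky-inference (LACKI-style) model supplies both one-step predictions and worst-case error bounds, and the state constraint is tightened along the horizon by the accumulated, Lipschitz-propagated model error. The dynamics, horizon and grids below are illustrative.

import numpy as np
from itertools import product

f_true = lambda x, u: 0.8 * np.sin(x) + u        # unknown plant (for simulation)
rng = np.random.default_rng(0)
Xd = rng.uniform(-2.0, 2.0, (60, 2))             # sampled (state, input) pairs
yd = np.array([f_true(x, u) for x, u in Xd])     # observed successor states

L = 1.0                                          # assumed constant (1-norm metric)
def lacki(x, u):
    # Kinky-inference prediction and worst-case error bound at (x, u).
    d = L * np.abs(Xd - np.array([x, u])).sum(axis=1)
    ceiling, floor = np.min(yd + d), np.max(yd - d)
    return 0.5 * (ceiling + floor), 0.5 * (ceiling - floor)

X_MAX, U_GRID, N = 2.0, np.linspace(-1.0, 1.0, 7), 3
def mpc_step(x0):
    best_u, best_cost = 0.0, np.inf              # fall back to u = 0 if infeasible
    for seq in product(U_GRID, repeat=N):
        x, cost, tight, feasible = x0, 0.0, 0.0, True
        for u in seq:
            x, e = lacki(x, u)
            tight = L * tight + e                # propagate and add model error
            cost += x * x + 0.1 * u * u
            if abs(x) > X_MAX - tight:           # tightened state constraint
                feasible = False
                break
        if feasible and cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

x = 1.5
for _ in range(10):                              # receding-horizon closed loop
    x = f_true(x, mpc_step(x))

In the online-learning variant the loop would also append each observed transition to (Xd, yd), shrinking the error bounds and hence the tightening over time.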