Listed below are the primary bits of code that I’ve used throughout my research, with more recent work near the top.
pybo. A modular approach to Bayesian optimization. This builds on reggie for modelling and is used inside benchfunk for benchmarking various approaches.
reggie. A set of routines for Bayesian regression, used heavily within pybo. It is based on our earlier pygp code and plays the same role within pybo that pygp once did.
benchfunk. Code for benchmarking Bayesian optimization algorithms. This includes a collection of benchmark functions as well as tools to run and collect the results of multiple solvers.
mwhutils. Various machine learning and linear algebra utility functions that are used across my different projects. This has mostly been superseded by the utilities inside reggie.
pygp. This implements code for inference with Gaussian processes, although it has largely been replaced by reggie.
pychud. This implements efficient rank-one updates and downdates to the Cholesky decomposition, based on simplified wrappers around the LINPACK implementations of these procedures.
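(pychud itself wraps the LINPACK routines; purely to illustrate the underlying technique, here is a pure-numpy sketch of a rank-one Cholesky update. The function name and interface are my own for this example, not pychud’s API.)

```python
import numpy as np

def chol_update(L, v):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky
    factor of A + v @ v.T using a sequence of Givens-style rotations.
    This costs O(n^2), versus O(n^3) for refactorizing from scratch."""
    L = L.copy()
    v = v.copy()
    n = len(v)
    for k in range(n):
        # Rotate so that v[k] is zeroed out against the diagonal entry.
        r = np.hypot(L[k, k], v[k])
        c = r / L[k, k]
        s = v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            # Apply the same rotation to the rest of column k and to v.
            L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
            v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
    return L
```

A downdate (the factor of A - v v^T) follows the same pattern with hyperbolic rather than circular rotations, which is why dedicated, numerically careful LINPACK code is worth wrapping.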
rl demos. Some rather crude reinforcement learning and dynamic programming demos, in Python, that I wrote for a tutorial on reinforcement learning. These are not particularly efficient, but they’re simple and have no external requirements other than numpy, so if you’re new to the area they might prove useful in illustrating a few things.
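To give a flavour of the dynamic programming side, here is a minimal value-iteration routine in numpy. The array layout and names are assumptions of mine for this sketch, not the demos’ actual code.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve a small tabular MDP by value iteration.

    P: (A, S, S) array of transition probabilities P[a, s, s'].
    R: (A, S) array of expected rewards for taking action a in state s.
    Returns the optimal value function V (S,) and a greedy policy (S,).
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Bellman optimality backup: Q[a, s] = R[a, s] + gamma * E[V(s')].
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, Q.argmax(axis=0)
```

Since the Bellman backup is a gamma-contraction in the sup norm, the loop converges geometrically regardless of the starting values.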