Adrian Weller
November 7, 2017

Adrian Weller is a senior research fellow in machine learning at the University of Cambridge, in the Computational and Biological Learning Lab.

Adrian has broad interests across machine learning and artificial intelligence, their applications, and their implications for society, including: scalability, reliability, interpretability, fairness, privacy, ethics, law, social policy, finance, networks, deep learning, reinforcement learning, music, safety and methods for big data.

Adrian is the David MacKay Newton research fellow at Darwin College; a Turing fellow at the Alan Turing Institute leading a group on Fairness, Transparency and Privacy; and a senior research fellow at the Leverhulme Centre for the Future of Intelligence (CFI) leading work on Trust and Transparency. He is an advisor to the Centre for Science and Policy (CSaP), and the Centre for the Study of Existential Risk (CSER).

Previously, Adrian held senior positions in financial trading. He continues to be active as an angel investor and advisor.

Contact: first name (dot) last name (at) eng.cam.ac.uk


NEWS:

Accepting PhD applications; for general information, see here

With great co-authors: three papers accepted to NIPS 2017, and a paper accepted to AAAI 2018

Co-organizing the NIPS 2017 Symposium on Kinds of Intelligence

With Matej Balog, Nilesh Tripuraneni and Zoubin Ghahramani, best paper award honourable mention at ICML 2017

Best paper runner-up at the ICML 2017 Workshop on Human Interpretability in Machine Learning


Co-organized the following events:

ICML 2017 Workshop on Human Interpretability in Machine Learning

ICML 2017 Workshop on Reliable Machine Learning in the Wild

CFI Conference 2017 day on Trust in the Age of Intelligent Machines Video

Turing Gateway 2017 Workshop on Algorithmic Trading: Perspectives from Mathematical Modelling

NIPS 2016 Symposium on Machine Learning and the Law

NIPS 2016 Workshop on Reliable Machine Learning in the Wild

ICML 2016 Workshop on Reliable Machine Learning in the Wild

DALI 2016 Workshop on Machine Learning and Society

NIPS 2015 Symposium on Algorithms Among Us: the Societal Impacts of Machine Learning


Publications

Full conference papers (refereed and archived)

N. Grgic-Hlaca, M. Zafar, K. P. Gummadi and A. Weller. Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning. To appear in the Association for the Advancement of Artificial Intelligence conference (AAAI), 2018.

M. Zafar, I. Valera, M. Rodriguez, K. P. Gummadi and A. Weller. From parity to preference: Learning with cost-effective notions of fairness. To appear in Neural Information Processing Systems (NIPS), 2017.

K. Choromanski*, M. Rowland* and A. Weller. The unreasonable effectiveness of structured random orthogonal embeddings. To appear in Neural Information Processing Systems (NIPS), 2017 [*equal contribution].

M. Rowland* and A. Weller*. Uprooting and rerooting higher-order graphical models. To appear in Neural Information Processing Systems (NIPS), 2017 [*equal contribution].

M. Balog, N. Tripuraneni, Z. Ghahramani and A. Weller. Lost relatives of the Gumbel trick. In the International Conference on Machine Learning (ICML), 2017 [best paper award honourable mention].
Slides Poster Code

R. McAllister, Y. Gal, A. Kendall, M. van der Wilk, A. Shah, R. Cipolla and A. Weller. Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning. In the International Joint Conference on Artificial Intelligence (IJCAI), 2017.

M. Rowland, A. Pacchiano and A. Weller. Conditions beyond treewidth for tightness of higher-order LP relaxations. In Artificial Intelligence and Statistics (AISTATS), 2017.
Poster
Also presented by Aldo Pacchiano at the International Conference on Principles and Practice of Constraint Programming (CP), 2017.

A. Weller. Characterizing tightness of LP relaxations by forbidding signed minors. In Uncertainty in Artificial Intelligence (UAI), 2016.
Poster

A. Weller. Uprooting and rerooting graphical models. In the International Conference on Machine Learning (ICML), 2016.
Slides Poster Video

O. Meshi, M. Mahdavi, A. Weller and D. Sontag. Train and test tightness of LP relaxations in structured prediction. In the International Conference on Machine Learning (ICML), 2016.

A. Weller, M. Rowland and D. Sontag. Tightness of LP relaxations for almost balanced models. In Artificial Intelligence and Statistics (AISTATS), 2016 [selected for oral presentation].
Slides Poster
Also presented by Mark Rowland at the International Conference on Principles and Practice of Constraint Programming (CP), 2016.
Slides

A. Weller* and J. Domke*. Clamping improves TRW and mean field approximations. In Artificial Intelligence and Statistics (AISTATS), 2016 [*equal contribution].
Poster

A. Weller. Bethe and related pairwise entropy approximations. In Uncertainty in Artificial Intelligence (UAI), 2015.
Poster

A. Weller. Revisiting the limits of MAP inference by MWSS on perfect graphs. In Artificial Intelligence and Statistics (AISTATS), 2015.
Poster
Also presented at the International Conference on Principles and Practice of Constraint Programming (CP), 2015.
Slides

A. Weller and T. Jebara. Clamping variables and approximate inference. In Neural Information Processing Systems (NIPS), 2014 [selected for oral presentation].
Slides Poster Video

A. Weller and T. Jebara. Approximating the Bethe partition function. In Uncertainty in Artificial Intelligence (UAI), 2014.
Poster

A. Weller, K. Tang, D. Sontag and T. Jebara. Understanding the Bethe approximation: When and how can it go wrong? In Uncertainty in Artificial Intelligence (UAI), 2014.
Poster

A. Weller and T. Jebara. On MAP inference by MWSS on perfect graphs. In Uncertainty in Artificial Intelligence (UAI), 2013 [selected for oral presentation].

A. Weller and T. Jebara. Bethe bounds and approximating the global optimum. In Artificial Intelligence and Statistics (AISTATS), 2013.

Workshop papers (refereed)

T. Speicher, M. Zafar, K. P. Gummadi, A. Singla and A. Weller. Reliable learning by subsuming a trusted model: Safe exploration of the space of complex models. In the Workshop on Reliable Machine Learning in the Wild at ICML, August 2017.

A. Weller. Challenges for Transparency. In the Workshop on Human Interpretability in Machine Learning (WHI) at ICML, August 2017 [best paper runner-up].
Slides

M. Zafar, I. Valera, M. Rodriguez, K. P. Gummadi and A. Weller. From parity to preference-based notions of fairness in classification. In the FAT/ML Workshop at KDD, August 2017 [selected for oral presentation].

N. Grgic-Hlaca, M. Zafar, K. P. Gummadi and A. Weller. On fairness, diversity and randomness in algorithmic decision making. In the FAT/ML Workshop at KDD, August 2017.

N. Grgic-Hlaca, M. Zafar, K. P. Gummadi and A. Weller. The case for process fairness in learning: feature selection for fair decision making. In NIPS Symposium on Machine Learning and the Law, December 2016 [CFI-Clifford Chance notable paper award].

B. London, O. Meshi and A. Weller. Bounding the integrality distance of LP relaxations for structured prediction. In NIPS Workshop on Optimization for Machine Learning, December 2016.

K. Tang, A. Weller and T. Jebara. Network ranking with Bethe pseudomarginals. In NIPS Workshop on Discrete Optimization in Machine Learning, December 2013.

PhD thesis

A. Weller. Methods for Inference in Graphical Models. Columbia University, 2014.

Earlier work

A. Weller, D. Ellis and T. Jebara. Structured Prediction Models for Chord Transcription of Music Audio. International Conference on Machine Learning and Applications (ICMLA), December 2009.

These methods provided a slight improvement to Dan Ellis' existing, powerful approach to chord transcription, which led to our submitting the best entry to the MIREX open competition that year; see results here. A brief description of the overall 2010 LabROSA chord recognition system is given here.

Selected presentations

MLSALT4 graphical models lecture 1: Directed and Undirected Graphical Models, Feb 24

MLSALT4 graphical models lecture 2: An introduction to LP relaxations for MAP inference, Feb 27

MLSALT4 graphical models lecture 3: Junction tree algorithm, belief propagation and variational methods, Mar 3

Clamping variables and approximate inference, Microsoft Research, March 2016 Video (starting around 1:13:15)

Cambridge CBL tea talk on Penney's game, Nov 2015