October 7, 2024

[Photo: Some of our group, June 2022]
Adrian Weller is a Director of Research in machine learning (ML) at the University of Cambridge. He has broad interests across machine learning and artificial intelligence (AI), their applications, and helping to ensure beneficial outcomes for society, including explainability, fairness, robustness, scalability, privacy, safety, ethics and finance. He holds a Turing AI Fellowship for Trustworthy Machine Learning. Adrian is head of Safe and Ethical AI at The Alan Turing Institute (the UK national institute for data science and AI), where he is also a Turing Fellow. He is a Director of Research and Programme Director for Trust and Society at the Leverhulme Centre for the Future of Intelligence (CFI); an advisor to the Centre for Science and Policy (CSaP) and the Centre for the Study of Existential Risk (CSER); and a fellow at Sidney Sussex College. Adrian serves on the boards of several organizations. He is a member of the World Economic Forum Global Future Council on the Future of AI, and is co-director of the European Laboratory for Learning and Intelligent Systems (ELLIS) programme on Human-centric Machine Learning. Adrian previously served as Programme Director for AI at the Turing Institute, on the advisory board of the Centre for Data Ethics and Innovation (CDEI), and in the UNESCO ad hoc expert group on the ethics of AI. Earlier, Adrian held senior positions in finance; he continues to be an angel investor and advisor. In 2022, Adrian received an MBE for services to digital innovation in the Queen's Birthday Honours.

Contact: first name (dot) last name (at) eng.cam.ac.uk

NEWS
Accepting PhD applications for 2025.
For recently accepted papers with great co-authors, please see below.
Glad to serve as Specialist Adviser to the Lords select committee on AI in Weapon Systems.
Discussing AI developments on Radio 4 PM from 39:50.
Delighted to be part of the new CSIS AI Council to work on corporate responsibility, international collaboration and effective governance of AI.
Great fun to talk on Radio 4 with Hannah Fry, Adam Rutherford and Brian Christian about AI alignment and Stuart Russell's Reith lectures.

PhD Students
Matthew Ashman
Weiyang Liu (co-supervised with Bernhard Schölkopf at MPI-IS Tübingen)
Juyeon Heo
Yanzhi Chen
Katie Collins
Isaac Reid
Yichao Liang

Postdocs
Miri Zilka
Pingfan Song
Vihari Piratla

Former group members
Julius von Kügelgen, now at ETHZ
Umang Bhatt, now at NYU
Valerii Likhosherstov, now at Waymo

Co-organized the following events
2024 International Conference on Machine Learning (ICML 2024), served as Program Chair along with Zico Kolter, Katherine Heller and Nuria Oliver
2024 Workshop on Responsible Data Science and AI Research
ELSA 2024 Workshop on Generative AI and Creative Arts
ICAIF 2023 Workshop on Explainable AI in Finance
Alan Turing Institute 2023 Workshop on Privacy and Fairness in AI for Health
ICAIF 2022 Workshop on Explainable AI in Finance
ICML 2022 Workshop on Human-Machine Collaboration and Teaming
Alan Turing Institute + ICO 2021-22 Workshops on Data Protection, AI and Fairness
NeurIPS 2021 Workshop on Human-Centered AI
NeurIPS 2021 Workshop on AI for Science: Mind the Gaps
NeurIPS 2021 Workshop on Privacy in Machine Learning
Alan Turing Institute 2021 Workshop on Interpretability, Safety, and Security in AI
ELLIS 2021 Workshop on Human-centric Machine Learning (HCML)
NeurIPS 2020 Workshop on Privacy Preserving Machine Learning
ICML 2020 Workshop on Human Interpretability in Machine Learning (WHI 2020)
CHI 2020 Workshop on Human-Centered Approaches to Fair and Responsible AI
Royal Society 2020 Workshop on Data governance from principles to practice: Civil society, volunteer data science skills, and open datasets
NeurIPS 2019 Workshop on Human-centric Machine Learning (HCML)
NeurIPS 2019 Workshop on Privacy in Machine Learning (PriML)
DALI/ELLIS 2019 Workshop on Human-centric Machine Learning (HCML)
ICML 2019 Workshop on Human in the Loop Learning (HILL)
CIFAR-UKRI-CNRS 2019 Workshop on AI & Society: from principles to practice
ICLR 2019 Workshop on Safe Machine Learning
DALI 2019 Workshop on Security and Robustness
NeurIPS 2018 Workshop on Privacy Preserving Machine Learning
Royal Society 2018 Workshop on AI and data governance from principles to practice: auto insurance
BBC 2018 Conference on Artificial Intelligence, Society and the Media
Alan Turing Institute 2018 Workshop on Data Trusts
ICML/IJCAI (FAIM) 2018 Workshop on Human Interpretability in Machine Learning
Consultations in 2018 and 2019 on Faith and AI with religious leaders at St George's House
UN AI for Good 2018 Track on Trust in AI
DALI 2018 Workshop on Fairness, Transparency and Privacy in AI
Alan Turing Institute 2018 Symposium on The GDPR and Beyond: Privacy, Transparency and the Law
Alan Turing Institute 2018 Symposium on AI for Social Good at the Royal Society
NeurIPS 2017 Symposium on Kinds of Intelligence
Alan Turing Institute 2017 Workshop on Interpretability of Algorithmic Systems
ICML 2017 Workshop on Human Interpretability in Machine Learning
ICML 2017 Workshop on Reliable Machine Learning in the Wild
CFI Conference 2017 day on Trust in the Age of Intelligent Machines Video
Turing Gateway 2017 Workshop on Algorithmic Trading: Perspectives from Mathematical Modelling
NeurIPS 2016 Symposium on Machine Learning and the Law
NeurIPS 2016 Workshop on Reliable Machine Learning in the Wild
ICML 2016 Workshop on Reliable Machine Learning in the Wild
DALI 2016 Workshop on Machine Learning and Society
NeurIPS 2015 Symposium on Algorithms Among Us: the Societal Impacts of Machine Learning

Publications
Google Scholar

Journal articles (refereed and archived)
K. Collins, A. Jiang, S. Frieder et al. Evaluating language models for mathematics through interactions. Proceedings of the National Academy of Sciences (PNAS), 2024.
I. Inuwa-Dutse, A. Toniolo, A. Weller and U. Bhatt. Algorithmic loafing and mitigation strategies in Human-AI teams. Computers in Human Behavior: Artificial Humans, 2023.
V. Chen, U. Bhatt, H. Heidari, A. Weller and A. Talwalkar. Perspectives on incorporating expert feedback into model updates. Patterns, 2023.
L. Yu, T. Hu, L. Hong, Z. Liu, A. Weller and W. Liu. Continual Learning by Modeling Intra-Class Variation. Transactions on Machine Learning Research (TMLR), 2023.
V. Likhosherstov, A. Arnab, K. Choromanski et al. PolyViT: Co-training Vision Transformers on Images, Videos and Audio. Transactions on Machine Learning Research (TMLR), 2023.
C. Knoll, A. Weller and F. Pernkopf. Self-Guided Belief Propagation – A Homotopy Continuation Method. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022.
W. Liu, Y. Wen, B. Raj, R. Singh and A. Weller. SphereFace Revived: Unifying Hyperspherical Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022. Preprint
J. Zerilli, U. Bhatt and A. Weller. How transparency modulates trust in artificial intelligence. Patterns, 2022.
S. Avin, H. Belfield, M. Brundage et al. Filling gaps in trustworthy development of AI. Science, 2021. arXiv version for personal use
X. Bai, H. Wang, L. Ma et al. Advancing COVID-19 diagnosis with privacy-preserving collaboration in artificial intelligence. Nature Machine Intelligence, 2021. Cambridge research highlight
S. Latif, M. Usman, S. Manzoor et al. Leveraging Data Science to Combat COVID-19: A Comprehensive Review. IEEE Transactions on AI, 2020. Preprint
D. Coyle and A. Weller. "Explaining" machine learning reveals policy challenges. Science, 2020.
S. Ahn, M. Chertkov, A. Weller and J. Shin. Bucket renormalization for approximate inference. Journal of Statistical Mechanics, 2019.
O. Meshi, B. London, A. Weller and D. Sontag. Train and Test Tightness of LP Relaxations in Structured Prediction. Journal of Machine Learning Research (JMLR), 2019.
S. Cave, R. Nyrup, K. Vold and A. Weller. Motivations and Risks of Machine Ethics. Proceedings of the IEEE, 2019.

Full conference papers (refereed and archived)
M. Ashman, C. Diaconu, A. Weller, W. Bruinsma and R. Turner. Approximately Equivariant Neural Processes. Neural Information Processing Systems (NeurIPS), 2024.
S. Kapoor, N. Gruver, M. Roberts et al. Large Language Models Must Be Taught to Know What They Don't Know. Neural Information Processing Systems (NeurIPS), 2024.
K. Collins, N. Kim, Y. Bitton et al. Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation. AAAI/ACM Conference on AI, Ethics and Society (AIES), 2024.
M. Ashman, C. Diaconu, A. Weller and R. Turner. In-Context In-Context Learning with Transformer Neural Processes. Symposium on Advances in Approximate Bayesian Inference (AABI), 2024.
M. Babaei, B. Mirzasoleiman, J. Joo and A. Weller. Towards Balanced Information Propagation in Social Media. French Regional Conference on Complex Systems (FRCCS), 2024 [selected for oral presentation].
K. Choromanski, S. Li, V. Likhosherstov et al. Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers. International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
A. Shamsabadi, G. Tan et al. Confidential-DPproof: Confidential Proof of Differentially Private Training. International Conference on Learning Representations (ICLR), 2024 [selected for spotlight].
L. Yu, W. Jiang et al. MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models. International Conference on Learning Representations (ICLR), 2024 [selected for spotlight].
W. Liu, Z. Qiu et al. Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization. International Conference on Learning Representations (ICLR), 2024.
I. Reid, E. Berger, K. Choromanski and A. Weller. Repelling Random Walks. International Conference on Learning Representations (ICLR), 2024.
I. Reid, K. Choromanski, E. Berger and A. Weller. General Graph Random Features. International Conference on Learning Representations (ICLR), 2024.
I. Reid, A. Weller and K. Choromanski. Quasi-Monte Carlo Graph Random Features. Neural Information Processing Systems (NeurIPS), 2023 [selected for spotlight].
M. Zarlenga, K. Collins et al. Learning to Receive Help: Intervention-Aware Concept Embedding Models. Neural Information Processing Systems (NeurIPS), 2023 [selected for spotlight].
Z. Qiu, W. Liu et al. Controlling Text-to-Image Diffusion by Orthogonal Finetuning. Neural Information Processing Systems (NeurIPS), 2023.
V. Likhosherstov, K. Choromanski et al. Dense-Exponential Random Features: Sharp Positive Estimators of the Gaussian Kernel. Neural Information Processing Systems (NeurIPS), 2023.
V. Nanda, T. Speicher, J. Dickerson, K. Gummadi, S. Feizi and A. Weller. Diffused Redundancy in Pre-trained Representations. Neural Information Processing Systems (NeurIPS), 2023.
J. Heo, V. Piratla, M. Wicker and A. Weller. Use perturbations when learning from explanations. Neural Information Processing Systems (NeurIPS), 2023.
M. Wicker, V. Piratla and A. Weller. Certification of Distributional Individual Fairness. Neural Information Processing Systems (NeurIPS), 2023.
M. Barker, K. Collins, Dj Dvijotham, A. Weller and U. Bhatt. Selective Concept Models: Permitting Stakeholder Customisation at Test-Time. AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2023.
C. Ashurst and A. Weller. Fairness Without Demographic Data: A Survey of Approaches. ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2023 [selected for oral presentation].
M. Barker, E. Kallina, D. Ashok, K. Collins et al. FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines. ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2023.
H. Camilleri, C. Ashurst, N. Jaisankar, A. Weller and M. Zilka. Media Coverage of Predictive Policing: Bias, Police Engagement, and the Future of Transparency. ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2023.
Y. Wen, W. Liu, Y. Feng et al. Pairwise Similarity Learning is SimPLE. International Conference on Computer Vision (ICCV), 2023.
J. Heo*, P. Song*, W. Liu and A. Weller. Physics-based decoding improves magnetic resonance fingerprinting. International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2023 [selected for oral presentation].
K. Collins, U. Bhatt, W. Liu, V. Piratla, I. Sucholutsky, B. Love and A. Weller. Human-in-the-Loop Mixup. Uncertainty in Artificial Intelligence (UAI), 2023 [selected for oral presentation].
I. Sucholutsky, R. Battleday, K. Collins et al. On the Informativeness of Supervision Signals. Uncertainty in Artificial Intelligence (UAI), 2023.
A. S. Shamsabadi, J. Hayes, B. Balle and A. Weller. Mnemonist: Locating Model Parameters that Memorize Training Examples. Uncertainty in Artificial Intelligence (UAI), 2023.
K. Collins, M. Barker, M. Zarlenga et al. Human Uncertainty in Concept-Based AI Systems. AAAI/ACM Conference on AI, Ethics and Society (AIES), 2023.
I. Reid, K. Choromanski, V. Likhosherstov and A. Weller. Simplex Random Features. International Conference on Machine Learning (ICML), 2023 [selected for oral presentation].
Y. Chen, M. Gutmann and A. Weller. Is Learning Summary Statistics Necessary for Likelihood-free Inference? International Conference on Machine Learning (ICML), 2023.
K. Choromanski, A. Sehanobish, H. Lin et al. Efficient Graph Field Integrators Meet Point Clouds. International Conference on Machine Learning (ICML), 2023.
M. Zilka, R. Fogliato, J. Hron, B. Butcher, C. Ashurst and A. Weller. The Progression of Disparities within the Criminal Justice System: Differential Enforcement and Risk Assessment Instruments. ACM Conference on Fairness, Accountability and Transparency (FAccT), 2023.
A. Chan, R. Salganik et al. Harms from Increasingly Agentic Algorithmic Systems. ACM Conference on Fairness, Accountability and Transparency (FAccT), 2023.
A. S. Shamsabadi, S. Wyllie et al. Confidential-PROFITT: Confidential PROof of FaIr Training of Trees. International Conference on Learning Representations (ICLR), 2023 [notable top 5% of accepted papers].
M. Wicker, J. Heo, L. Costabello and A. Weller. Robust Explanation Constraints for Neural Networks. International Conference on Learning Representations (ICLR), 2023.
W. Liu, L. Yu, A. Weller and B. Schölkopf. Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap. International Conference on Learning Representations (ICLR), 2023.
Z. Qiu, W. Liu, T. Xiao, Z. Liu, U. Bhatt, Y. Luo, A. Weller and B. Schölkopf. Iterative Teaching by Data Hallucination. International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
J. Abad Martinez, U. Bhatt, A. Weller and G. Cherubin. Approximating Full Conformal Prediction at Scale via Influence Functions. Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI), 2023.
V. Likhosherstov, K. Choromanski and A. Weller. On the Expressive Flexibility of Self-Attention Matrices. Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI), 2023.
V. Nanda, A. Majumdar, C. Kolling et al. Do Invariances in Deep Neural Networks Align with Human Perception? Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI), 2023.
S. Joo, S. Jeong, J. Heo, A. Weller and T. Moon. Towards More Robust Interpretation via Local Gradient Alignment. Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI), 2023.
M. Espinoza Zarlenga, P. Barbiero, Z. Shams et al. Towards Robust Metrics for Concept Representation Evaluation.
Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI), 2023.
M. Zilka, B. Butcher and A. Weller. A Survey and Datasheet Repository of Publicly Available US Criminal Justice Datasets. Datasets and Benchmarks track at Neural Information Processing Systems (NeurIPS), 2022. Repository for criminal justice datasets
Y. Chen, W. Sun, Y. Li and A. Weller. Scalable Infomin Learning. Neural Information Processing Systems (NeurIPS), 2022.
V. Likhosherstov, K. Choromanski, A. Dubey, F. Liu, T. Sarlos and A. Weller. Chefs' Random Tables: Non-Trigonometric Random Features. Neural Information Processing Systems (NeurIPS), 2022. Preprint
M. Zarlenga, P. Barbiero, G. Ciravegna et al. Concept Embedding Models. Neural Information Processing Systems (NeurIPS), 2022. Preprint
K. Collins, U. Bhatt and A. Weller. Eliciting and Learning with Soft Labels from Every Annotator. AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2022. Preprint Code+Data
N. Grgic-Hlaca, G. Lima, A. Weller and E. Redmiles. Dimensions of Diversity in Human Perceptions of Algorithmic Fairness. ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2022 [new horizons award]. Preprint
W. Liu, Z. Liu, L. Paull, A. Weller and B. Schölkopf. Structural Causal 3D Reconstruction. European Conference on Computer Vision (ECCV), 2022. Preprint
V. Nanda, T. Speicher, C. Kolling, J. Dickerson, K. Gummadi and A. Weller. Measuring Representational Robustness of Neural Networks through Shared Invariances. International Conference on Machine Learning (ICML), 2022 [selected for long presentation]. Preprint Code
K. Choromanski, H. Lin et al. From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers. International Conference on Machine Learning (ICML), 2022. Preprint
V. Babbar, U. Bhatt and A. Weller. On the Utility of Prediction Sets in Human-AI Teams. International Joint Conference on Artificial Intelligence (IJCAI), 2022 [selected for long presentation]. Preprint
B. Butcher, C. Robinson, M. Zilka, R. Fogliato, C. Ashurst and A. Weller. Racial Disparities in the Enforcement of Marijuana Violations in the US. AAAI/ACM Conference on AI, Ethics and Society (AIES), 2022. Preprint
M. Zilka, H. Sargeant and A. Weller. Transparency, Governance and Regulation of Algorithmic Tools Deployed in the Criminal Justice System: a UK Case Study. AAAI/ACM Conference on AI, Ethics and Society (AIES), 2022. Preprint
I. Chien, N. Deliu, R. Turner, A. Weller, S. Villar and N. Kilbertus. Multi-disciplinary fairness considerations in machine learning for clinical trials. ACM Conference on Fairness, Accountability and Transparency (FAccT), 2022. Preprint
H. Zhang, Y. Zhang, W. Liu, A. Weller, B. Schölkopf and E. Xing. Towards Principled Disentanglement for Domain Generalization. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022 [selected for oral presentation]. Preprint
Y. Wen, W. Liu, A. Weller, B. Raj and R. Singh. SphereFace2: Binary Classification is All You Need for Deep Face Recognition. International Conference on Learning Representations (ICLR), 2022 [selected for spotlight presentation].
K. Choromanski, H. Lin, H. Chen et al. Hybrid Random Features. International Conference on Learning Representations (ICLR), 2022.
L. Viano, Y. Huang, P. Kamalaruban, C. Innes, S. Ramamoorthy and A. Weller. Robust Learning from Observation with Model Misspecification. International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022. Preprint
D. Ley, U. Bhatt and A. Weller. Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates. Association for the Advancement of Artificial Intelligence Conference (AAAI), 2022. Preprint
J. von Kügelgen, A. Karimi, U. Bhatt, I. Valera, A. Weller and B. Schölkopf. On the Fairness of Causal Algorithmic Recourse. Association for the Advancement of Artificial Intelligence Conference (AAAI), 2022. Preprint
A. Khajehnejad, M. Khajehnejad, M. Babaei, K. P. Gummadi, A. Weller and B. Mirzasoleiman. CrossWalk: Fairness-enhanced Node Representation Learning. Association for the Advancement of Artificial Intelligence Conference (AAAI), 2022. Preprint
W. Liu, Z. Liu, H. Wang, L. Paull, B. Schölkopf and A. Weller. Iterative Teaching by Label Synthesis. Neural Information Processing Systems (NeurIPS), 2021 [selected for spotlight presentation].
L. Viano, Y. Huang, P. Kamalaruban, A. Weller and V. Cevher. Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch. Neural Information Processing Systems (NeurIPS), 2021.
V. Likhosherstov, K. Choromanski, J. Davis, X. Song and A. Weller. Sub-Linear Memory: How to Make Performers SLiM. Neural Information Processing Systems (NeurIPS), 2021. Preprint
O. Thomas, M. Zilka, A. Weller and N. Quadrianto. An Algorithmic Framework for Positive Action. ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2021 [selected for oral presentation].
V. Likhosherstov, X. Song, K. Choromanski, J. Davis and A. Weller. Debiasing a First-order Heuristic for Approximate Bi-level Optimization. International Conference on Machine Learning (ICML), 2021. Preprint
J. Simons, S. Adams Bhatti and A. Weller. Machine Learning and the Meaning of Equal Treatment. Artificial Intelligence, Ethics, and Society (AIES), 2021.
U. Bhatt, J. Antoran et al. Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. Artificial Intelligence, Ethics, and Society (AIES), 2021. Preprint
M. Bakker, D. P. Tu, K. P. Gummadi, A. Pentland, K. Varshney and A. Weller. Beyond Reasonable Doubt: Improving Fairness in Budget-Constrained Decision Making Using Confidence Thresholds. Artificial Intelligence, Ethics, and Society (AIES), 2021.
W. Liu, R. Lin, Z. Liu, J. Rehg, L. Paull, L. Xiong, L. Song and A. Weller. Orthogonal Over-Parameterized Training. Computer Vision and Pattern Recognition (CVPR), 2021 [selected for oral presentation]. Preprint
V. Likhosherstov, J. Davis, K. Choromanski and A. Weller. CWY Parametrization: a Solution for Parallelized Optimization of Orthogonal and Stiefel Matrices. Artificial Intelligence and Statistics (AISTATS), 2021. Preprint
W. Liu, R. Lin, Z. Liu, L. Xiong, B. Schölkopf and A. Weller. Learning with Hyperspherical Uniformity. Artificial Intelligence and Statistics (AISTATS), 2021. Preprint
K. Choromanski, V. Likhosherstov et al. Rethinking Attention with Performers. International Conference on Learning Representations (ICLR), 2021 [selected for oral presentation]. Preprint. Google's blog post. Video of talk by Krzysztof at LightOn AI. Explanation video by Yannic Kilcher. Blog post by Johannes Brandstetter and Hubert Ramsauer: Looking at the Performer from a Hopfield point of view.
J. Antoran, U. Bhatt et al. Getting a CLUE: A method for explaining uncertainty estimates. International Conference on Learning Representations (ICLR), 2021 [selected for oral presentation]. Preprint
K. Choromanski, J. Davis, V. Likhosherstov et al. Ode to an ODE. Neural Information Processing Systems (NeurIPS), 2020. Slides
K. Choromanski, D. Cheikhi, J. Davis, V. Likhosherstov et al. Stochastic Flows and Geometric Optimization on the Orthogonal Group. International Conference on Machine Learning (ICML), 2020. Slides
U. Bhatt, A. Weller and J. Moura. Evaluating and Aggregating Feature-based Model Explanations. International Joint Conference on Artificial Intelligence (IJCAI), 2020. Poster Video by Umang Slides
M. Khajehnejad, A. Rezaei, M. Babaei, J. Hoffmann, M. Jalili and A. Weller. Adversarial Graph Embeddings for Fair Influence Maximization over Social Networks. International Joint Conference on Artificial Intelligence (IJCAI), 2020.
B. Dimanov, U. Bhatt, M. Jamnik and A. Weller.
You shouldn't trust me: Learning models which conceal unfairness from multiple explanation methods. European Conference on Artificial Intelligence (ECAI), 2020.
U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. Moura and P. Eckersley. Explainable Machine Learning in Deployment. ACM Conference on Fairness, Accountability, and Transparency (FAT*), 2020. Blog post by PAI Video by Umang Slides
Y. Teng, W. Gao, F. Chalus, A. Choromanska, D. Goldfarb and A. Weller. Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models. Neural Information Processing Systems (NeurIPS), 2019. Poster
N. Kilbertus, P. Ball, M. Kusner, A. Weller and R. Silva. The Sensitivity of Counterfactual Fairness to Unmeasured Confounding. Uncertainty in Artificial Intelligence (UAI), 2019.
T. Adel and A. Weller. TibGM: A Transferable and Information-Based Graphical Model Approach for Reinforcement Learning. International Conference on Machine Learning (ICML), 2019.
K. Choromanski*, M. Rowland*, W. Chen and A. Weller. Unifying Orthogonal Monte Carlo Methods. International Conference on Machine Learning (ICML), 2019 [*equal contribution]. Poster
M. Rowland*, J. Hron*, Y. Tang*, K. Choromanski, T. Sarlos and A. Weller. Orthogonal Estimation of Wasserstein Distances. Artificial Intelligence and Statistics (AISTATS), 2019 [*equal contribution]. Poster
T. Adel, I. Valera, Z. Ghahramani and A. Weller. One-network Adversarial Fairness. Association for the Advancement of Artificial Intelligence conference (AAAI), 2019. Poster
M. Rowland*, K. Choromanski*, F. Chalus, A. Pacchiano, T. Sarlos, R. Turner and A. Weller. Geometrically Coupled Monte Carlo Sampling. Neural Information Processing Systems (NeurIPS), 2018 [selected for spotlight presentation] [*equal contribution]. Poster
N. Kilbertus, A. Gascon, M. Kusner, M. Veale, K. P. Gummadi and A. Weller. Blind Justice: Fairness with Encrypted Sensitive Attributes. International Conference on Machine Learning (ICML), 2018. Poster
T. Adel, Z. Ghahramani and A. Weller. Discovering interpretable representations for both deep generative and discriminative models. International Conference on Machine Learning (ICML), 2018.
K. Choromanski*, M. Rowland*, V. Sindhwani, R. Turner and A. Weller. Structured evolution with compact architectures for scalable policy optimization. International Conference on Machine Learning (ICML), 2018 [*equal contribution].
S. Ahn, M. Chertkov, A. Weller and J. Shin. Bucket renormalization for approximate inference. International Conference on Machine Learning (ICML), 2018.
T. Speicher, H. Heidari, N. Grgić-Hlača, K. P. Gummadi, A. Singla, A. Weller and M. Zafar. A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices. Knowledge Discovery and Data Mining (KDD), 2018 [selected for long presentation].
K. Choromanski*, M. Rowland*, T. Sarlos, V. Sindhwani, R. Turner and A. Weller. The geometry of random features. Artificial Intelligence and Statistics (AISTATS), 2018 [*equal contribution].
S. Ahn, M. Chertkov, J. Shin and A. Weller. Gauged mini-bucket elimination for approximate inference. Artificial Intelligence and Statistics (AISTATS), 2018.
N. Grgić-Hlača, E. Redmiles, K. P. Gummadi and A. Weller. Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. The Web Conference (WWW), 2018.
M. Babaei, J. Kulshrestha, A. Chakraborty, F. Benevenuto, K. P. Gummadi and A. Weller. Purple Feed: Identifying High Consensus News Posts on Social Media. AAAI/ACM conference on Artificial Intelligence, Ethics and Society (AIES), 2018 [selected for oral presentation].
N. Grgić-Hlača, M. Zafar, K. P. Gummadi and A. Weller. Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning. Association for the Advancement of Artificial Intelligence conference (AAAI), 2018.
S. Bhatnagar, A. Alexandrova, S. Avin, S. Cave, L. Cheke, M. Crosby, J. Feyereisl, M. Halina, B. S. Loe, F. Martínez-Plumed, H. Price, H. Shevlin, A. Weller, A. Winfield and J. Hernández-Orallo. Mapping Intelligence: Requirements and Possibilities. Philosophy and Theory of Artificial Intelligence (PT-AI), 2017.
M. Zafar, I. Valera, M. Rodriguez, K. P. Gummadi and A. Weller. From parity to preference-based notions of fairness in classification. Neural Information Processing Systems (NeurIPS), 2017. Poster
K. Choromanski*, M. Rowland* and A. Weller. The unreasonable effectiveness of structured random orthogonal embeddings. Neural Information Processing Systems (NeurIPS), 2017 [*equal contribution]. Poster
M. Rowland* and A. Weller*. Uprooting and rerooting higher-order graphical models. Neural Information Processing Systems (NeurIPS), 2017 [*equal contribution]. Poster
M. Balog, N. Tripuraneni, Z. Ghahramani and A. Weller. Lost relatives of the Gumbel trick. International Conference on Machine Learning (ICML), 2017 [best paper award honourable mention]. Slides Poster Code
R. McAllister, Y. Gal, A. Kendall, M. van der Wilk, A. Shah, R. Cipolla and A. Weller. Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning. International Joint Conference on Artificial Intelligence (IJCAI), 2017.
M. Rowland, A. Pacchiano and A. Weller. Conditions beyond treewidth for tightness of higher-order LP relaxations. Artificial Intelligence and Statistics (AISTATS), 2017. Poster Also presented by Aldo Pacchiano at the International Conference on Principles and Practice of Constraint Programming (CP), 2017.
A. Weller. Characterizing tightness of LP relaxations by forbidding signed minors. Uncertainty in Artificial Intelligence (UAI), 2016. Poster
A. Weller. Uprooting and rerooting graphical models. International Conference on Machine Learning (ICML), 2016. Slides Poster Video
O. Meshi, M. Mahdavi, A. Weller and D. Sontag. Train and test tightness of LP relaxations in structured prediction. International Conference on Machine Learning (ICML), 2016.
A. Weller, M. Rowland and D. Sontag. Tightness of LP relaxations for almost balanced models. Artificial Intelligence and Statistics (AISTATS), 2016 [selected for oral presentation]. Slides Poster Also presented by Mark Rowland at the International Conference on Principles and Practice of Constraint Programming (CP), 2016. Slides
A. Weller* and J. Domke*. Clamping improves TRW and mean field approximations. Artificial Intelligence and Statistics (AISTATS), 2016 [*equal contribution]. Poster
A. Weller. Bethe and related pairwise entropy approximations. Uncertainty in Artificial Intelligence (UAI), 2015. Poster
A. Weller. Revisiting the limits of MAP inference by MWSS on perfect graphs. Artificial Intelligence and Statistics (AISTATS), 2015. Poster Also presented at the International Conference on Principles and Practice of Constraint Programming (CP), 2015. Slides
A. Weller and T. Jebara. Clamping variables and approximate inference. Neural Information Processing Systems (NeurIPS), 2014 [selected for oral presentation]. Slides Poster Video
A. Weller and T. Jebara. Approximating the Bethe partition function. Uncertainty in Artificial Intelligence (UAI), 2014. Poster
A. Weller, K. Tang, D. Sontag and T. Jebara. Understanding the Bethe approximation: When and how can it go wrong? Uncertainty in Artificial Intelligence (UAI), 2014. Poster
A. Weller and T. Jebara. On MAP inference by MWSS on perfect graphs. Uncertainty in Artificial Intelligence (UAI), 2013 [selected for oral presentation].
A. Weller and T. Jebara. Bethe bounds and approximating the global optimum. Artificial Intelligence and Statistics (AISTATS), 2013.

Book chapters
J. Zerilli and A. Weller. The Technology. Chapter 2 in The Law of Artificial Intelligence, Sweet and Maxwell, edited by Matthew Lavy and Matt Hervey, 2020.
A. Weller. Transparency: Motivations and Challenges. Chapter 2 in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Springer Lecture Notes in Computer Science, vol 11700, edited by W. Samek, G. Montavon, A. Vedaldi, L. Hansen and K. Muller, 2019. Preprint Extended version of the workshop paper Challenges for Transparency (see below).

Workshop papers (refereed)
K. Chandra, K. Collins, W. Crichton et al. WatChat: Explaining perplexing programs by debugging mental models. ACL Workshop on Natural Language Reasoning and Structured Explanations, 2024 [best paper award].
J. Heo, K. Lee, H. Joh, U. Bhatt and A. Weller. Hypothesis- and Structure-based prompting for medical and business diagnosis. AAAI Workshop on Collaborative AI and Modeling of Humans (CAIHu), 2024.
C. Zhang, K. Collins, A. Weller and J. Tenenbaum. AI for Mathematics: A Cognitive Science Perspective. NeurIPS Workshop on Mathematical Reasoning and AI (MATH-AI), 2023.
V. Piratla, J. Heo, S. Singh and A. Weller. Estimation of Concept Explanations Should be Uncertainty Aware. NeurIPS Workshop on Explainable AI in Action (XAIA), 2023.
M. Ashman, T. Rochussen and A. Weller. Amortised Inference in Neural Networks for Small-Scale Probabilistic Meta-Learning. Symposium on Advances in Approximate Bayesian Inference (AABI), 2023.
L. Ying, K. Collins, M. Wei, C. Zhang, T. Zhi-Xuan, A. Weller, J. Tenenbaum and L. Wong. The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling probabilistic social inferences from linguistic inputs. ICML Workshop on Theory of Mind in Communicating Agents (ToM), 2023.
Dj Dvijotham, S. Omidshafiei, K. Lee, K. Collins, D. Ramachandran, A. Weller, M. Ghavamzadeh, M. Nasr, Y. Fan and J. Liu. Algorithms for Optimal Adaptation of Diffusion Models to Reward Functions. ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems (Frontiers4LCD), 2023.
M. Wei, I. Jonusaite, L. Wong, K. Collins, A. Weller and J. Tenenbaum. Structured and Creative: Combining LLMs with Probabilistic Programs to Model Human-Like Causal Explanations. CogSci, 2023.
V. Palaniappan, M. Ashman, K. Collins, J. Heo, A. Weller and U. Bhatt. GeValDi: Generative Validation of Discriminative Models. ICLR Workshop on Pitfalls of Limited Data and Computation for Trustworthy ML, 2023. Short version also accepted as a Tiny Paper at ICLR 2023.
K. Collins, U. Bhatt, W. Liu, V. Piratla, B. Love and A. Weller. Human-in-the-Loop Mixup. AAAI Workshop on Representation Learning for Responsible Human-Centric AI (R2HCAI), 2023 [selected for oral presentation].
V. Babbar, U. Bhatt, M. Zilka and A. Weller. Conformal Prediction for Resource Prioritisation in Predicting Rare and Dangerous Outcomes. NeurIPS Workshop on Human in the Loop Learning, 2022.
K. Collins, U. Bhatt, W. Liu, B. Love and A. Weller. Human-in-the-Loop Mixup. Works-in-Progress track of AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2022 [award for best poster/demo].
T. Macdonald, V. Sounderajah, H. Unsworth et al. Creating an evidence standards framework for artificial intelligence enabled digital health technologies. Clinical Abstract at Machine Learning for Healthcare (MLHC), 2022.
M. Zilka, C. Ashurst, R. Fogliato and A. Weller. Differential Enforcement and the Progression of Disparities within the Criminal Justice System. Poster at ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2022.
B. Butcher, M. Zilka and A. Weller. Racial Disparities in Arrests for Drug Violations in the US: What Can We Learn from Publicly Available Data? Poster at ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2022.
M. Babaei, B. Mirzasoleiman, J.
Joo and A. Weller. Towards Balanced Information Propagation in Social Media. Poster at ACM conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2022. B. Butcher, M. Zilka and A. Weller. A Framework for Human-in-the-Loop High-Precision Information Extraction from Text Documents. ICML Workshop on Human-Machine Collaboration and Teaming, 2022. V. Chen, U. Bhatt, H. Heidari, A. Weller and A. Talwalkar. Perspectives on Incorporating Expert Feedback into Model Updates. ICML Workshop on Human-Machine Collaboration and Teaming, 2022. Also accepted at ICML Workshop on Responsible Decision Making in Dynamic Environments, and at ICML Workshop on Updatable Machine Learning, 2022. D. Denipitiyage, T. Ajanthan, P. Kamalaruban and A. Weller. Provable Defense Against Clustering Attacks on 3D Point Clouds. AAAI Workshop on Adversarial Machine Learning and Beyond, 2022. U. Bhatt, A. Weller and G. Cherubin. Fast conformal classification using influence functions. In the Tenth Symposium on Conformal and Probabilistic Prediction and Applications (COPA), 2021. A. Khajehnejad, M. Khajehnejad, M. Babaei et al. CrossWalk: Fairness-enhanced Node Representation Learning. In the ICML Workshop on Socially Responsible Machine Learning, 2021. H. Zhang, Y. Zhang, W. Liu, A. Weller, B. Schölkopf and E. Xing. Towards Principled Disentanglement for Domain Generalization. In the ICML Workshop on Machine Learning for Data: Automated Creation, Privacy, Bias, 2021. J. von Kügelgen, A. Karimi, U. Bhatt et al. Updated version of On the Fairness of Causal Algorithmic Recourse. In the ICML Workshop on Algorithmic Recourse, 2021 [selected for spotlight presentation]. D. Ley, U. Bhatt and A. Weller. Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates. In the ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI, 2021. 
Also accepted at the ICML 2021 Workshops on Algorithmic Recourse, and Socially Responsible Machine Learning. M. Babaei, B. Mirzasoleiman, J. Joo and A. Weller. Partisan Media and News Consumption: Towards Balanced Information Propagation in Social Media. In the IJCAI Workshop on Mining Actionable Insights from Social Networks (MAISoN) - Special Edition on Responsible Social Media Mining (Res-AI), 2021. D. Ley, U. Bhatt and A. Weller. δ-CLUE: Diverse Sets of Explanations for Uncertainty Estimates. In the ICLR Workshop on Security and Safety in Machine Learning Systems, 2021. Also accepted at the ICLR 2021 Workshops on Robust and Reliable ML in the Real World, and Responsible AI. D. Kazhdan, B. Dimanov, H. Terre, P. Lio, M. Jamnik and A. Weller. Is disentanglement all you need? Comparing concept-based and disentanglement approaches. In the ICLR Workshop on Weakly Supervised Learning, 2021. Also accepted at the ICLR 2021 Workshops on Robust and Reliable ML in the Real World, and Responsible AI. U. Bhatt, M. Ashman, A. Margeloiu et al. Do concept bottleneck models learn as intended? In the ICLR Workshop on Responsible AI, 2021. T. Shaikhina, U. Bhatt, R. Zhang, K. Georgatzis, A. Xiang and A. Weller. Effects of Uncertainty on the Quality of Feature Importance Estimates. In the AAAI Workshop on Explainable Agency in AI, 2021. J. von Kügelgen, U. Bhatt et al. On the Fairness of Causal Algorithmic Recourse. In the NeurIPS Workshop on Algorithmic Fairness through the Lens of Causality and Interpretability, 2020. A. Margeloiu, N. Simidjevski, M. Jamnik and A. Weller. Improving Interpretability in Medical Imaging Diagnosis using Adversarial Training. In the NeurIPS Workshop on Medical Imaging, 2020. D. Kazhdan, B. Dimanov, M. Jamnik, P. Lio and A. Weller. Now You See Me (CME): Concept-based Model Extraction. In the CIKM Workshop on Advances in Interpretable Machine Learning and AI (AIMLAI), 2020. U. Bhatt, M. Andrus, A. Weller and A. Xiang. 
Machine Learning Explainability for External Stakeholders. In the ICML Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers, 2020. Also selected for spotlight presentation at the ICML Workshop on Human Interpretability (WHI), 2020, And accepted at the IJCAI 2020 Workshop on Explainable AI. J. Davis, K. Choromanski, et al. Time Dependence in Non-Autonomous Neural ODEs. In the ICLR Workshop on Integration of Deep Neural Models and Differential Equations (ODE/PDE+DL), 2020. J. Antoran, U. Bhatt, T. Adel, A. Weller and J. M. Hernandez-Lobato. Getting a CLUE: A Method for Explaining Uncertainty Estimates. In the ICLR Workshop on Machine Learning in Real Life (ML-IRL), 2020 [selected for oral presentation]. U. Bhatt, M. B. Zafar, K. P. Gummadi and A. Weller. Counterfactual accuracies for alternative models. In the ICLR Workshop on Machine Learning in Real Life (ML-IRL), 2020. M. Bakker, H. Valdes, D. P. Tu, K. P. Gummadi, K. Varshney, A. Weller and A. Pentland. Fair Enough: Improving Fairness in Budget-Constrained Decision Making Using Confidence Thresholds. In the AAAI Workshop on Safe AI, 2020 [selected for oral presentation, contender for best paper]. B. Dimanov, U. Bhatt, M. Jamnik and A. Weller. Models can be learned to conceal unfairness from explanation methods. In the AAAI Workshop on Safe AI, 2020 [selected for oral presentation]. J. von Kügelgen, P. Rubenstein, B. Schölkopf and A. Weller. Optimal experimental design via Bayesian optimization: Active causal structure learning for Gaussian process networks. In the NeurIPS Workshop "Do the right thing": machine learning and causal inference for improved decision making, 2019. U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. Moura and P. Eckersley. Explainable Machine Learning in Deployment. In the NeurIPS Workshop on Human-centric Machine Learning, 2019. M. Bakker, P. Tu, H. Riveron, K. P. Gummadi, K. Varney, A. Weller and A. Pentland. 
DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning. In the NeurIPS Workshop on Human-centric Machine Learning, 2019. H. Wang, N. Grgić-Hlača, P. Lahoti, K. P. Gummadi and A. Weller. An Empirical Study on Learning Fairness Metrics for COMPAS Data with Human Supervision. In the NeurIPS Workshop on Human-centric Machine Learning, 2019. A. Kattamis, T. Adel and A. Weller. Exploring Properties of the Deep Image Prior. In the NeurIPS Workshop on Solving inverse problems with deep networks, 2019. N. Grgić-Hlača, A. Weller and E. Redmiles. Dimensions of Diversity in Human Perceptions of Algorithmic Fairness. In the Workshop on Learning from Team and Group Diversity at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), 2019. N. Kilbertus, A. Gascon, M. Kusner, M. Veale, K. Gummadi and A. Weller. Blind Justice: Fairness with Encrypted Sensitive Attribute. In the FAT/ML Workshop at the International Conference on Machine Learning (ICML), 2018 [selected for oral presentation]. See above for the main conference version. Also presented at the FAIM 2018 Workshop on Privacy in Machine Learning and Artificial Intelligence (PiMLAI18). T. Speicher, M. Zafar, K. P. Gummadi, A. Singla and A. Weller. Reliable learning by subsuming a trusted model: Safe exploration of the space of complex models. In the Workshop on Reliable Machine Learning in the Wild at ICML, August 2017. A. Weller. Challenges for Transparency. In the Workshop on Human Interpretability in Machine Learning (WHI) at ICML, August 2017 [best paper runner up]. Slides M. Zafar, I. Valera, M. Rodriguez, K. P. Gummadi and A. Weller. From parity to preference-based notions of fairness in classification. In the FAT/ML Workshop at KDD, August 2017 [selected for oral presentation]. N. Grgić-Hlača, M. Zafar, K. P. Gummadi and A. Weller. On fairness, diversity and randomness in algorithmic decision making. In the FAT/ML Workshop at KDD, August 2017. N. 
Grgić-Hlača, M. Zafar, K. P. Gummadi and A. Weller. The case for process fairness in learning: feature selection for fair decision making. In NeurIPS Symposium on Machine Learning and the Law, December 2016 [CFI-Clifford Chance notable paper award]. B. London, O. Meshi and A. Weller. Bounding the integrality distance of LP relaxations for structured prediction. In NeurIPS Workshop on Optimization for Machine Learning, December 2016. K. Tang, A. Weller and T. Jebara. Network ranking with Bethe pseudomarginals. In NeurIPS Workshop on Discrete Optimization in Machine Learning, December 2013. Technical Reports / Preprints M. Ashman, T. Bui, C. Nguyen, S. Markou, A. Weller, S. Swaroop and R. Turner. Partitioned Variational Inference: A Framework for Probabilistic Federated Learning. J. Abad, U. Bhatt, A. Weller and G. Cherubin. Approximating Full Conformal Prediction at Scale via Influence Functions. V. Nanda, A. Majumdar, C. Kolling, J. P. Dickerson, K. P. Gummadi, B. C. Love and A. Weller. Exploring Alignment of Representations with Human Perception. V. Likhosherstov, A. Arnab, K. Choromanski et al. PolyViT: Co-training Vision Transformers on Images, Videos and Audio. V. Likhosherstov, K. Choromanski and A. Weller. On the Expressive Power of Self-Attention Matrices. K. Choromanski, D. Jain, J. Parker-Holder et al. Unlocking Pixels for Reinforcement Learning via Implicit Attention. K. Choromanski, V. Likhosherstov et al. Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers. J. Davis, K. Choromanski et al. Time Dependence in Non-Autonomous Neural ODEs. M. Brundage, S. Avin, J. Wang, H. Belfield, G. Krueger et al. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. F. Doshi-Velez, M. Kortz et al. Accountability of AI under the law: The role of explanation. PhD thesis A. Weller. Methods for Inference in Graphical Models. Columbia University, 2014. Earlier work A. Weller, D. Ellis and T. Jebara. 
Structured Prediction Models for Chord Transcription of Music Audio. International Conference on Machine Learning and Applications (ICMLA), December 2009. These methods were used to provide a slight improvement to Dan Ellis' existing, powerful approach to chord transcription, which led to us submitting the best entry to the MIREX open competition that year, see results here. A brief description of the overall 2010 LabROSA chord recognition system is given here. Selected presentations Keynote talk on fairness beyond group statistical parity at the first FORC Symposium, 2020 Elson ethics lecture at St George's on Trust, Transparency and AI, 2018. MLSALT4 graphical models lecture 1: Directed and Undirected Graphical Models, Feb 14 2018 MLSALT4 graphical models lecture 2: An introduction to LP relaxations for MAP inference, Feb 16 2018 MLSALT4 graphical models lecture 3: Junction tree algorithm, belief propagation and variational methods, Feb 21 2018 Clamping variables and approximate inference, Microsoft Research, March 2016 Video (starting around 1:13:15) Cambridge CBL tea talk on Penney's game, Nov 2015 |