My research lies at the interface between computer perception (which builds artificial systems for understanding images, sounds and videos), neuroscience (which tries to understand the brain) and machine learning (which provides a theoretical framework for learning from data). The goal is to develop systems that solve important perceptual problems, drawing inspiration from the brain. One example is determining how many sound sources are present in an acoustic scene and separating out the individual contribution of each source. This work has medical and engineering applications, such as in cochlear implants for the deaf. Importantly, the behaviour of these systems can also be compared with neural processing in the brain, helping us to better understand what the brain itself is doing.
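The source-separation problem mentioned above can be illustrated with a minimal sketch. This is not my own method, just a toy demonstration using independent component analysis (via scikit-learn's `FastICA`) on two hypothetical synthetic signals mixed by an assumed mixing matrix:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two hypothetical "sources": a sine tone and a square-wave-like signal.
s1 = np.sin(2 * np.pi * t)
s2 = np.sign(np.sin(3 * np.pi * t))
S = np.c_[s1, s2]
S += 0.05 * rng.standard_normal(S.shape)  # small observation noise

# Mix the sources with an assumed (normally unknown) mixing matrix,
# simulating two microphones that each hear a blend of both sources.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
X = S @ A.T

# Recover the individual contributions without knowing A (blind separation).
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # one column per estimated source
print(S_est.shape)
```

ICA recovers the sources only up to permutation, sign and scale, which is why evaluation typically matches each estimated component to the true source it correlates with most strongly.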