I am a Research Scientist at DeepMind and part of the Science team. I did my PhD at the Applied AI Lab (A2I), supervised by Professor Ingmar Posner. Other recent adventures include a research sabbatical in 2020 at the BCAI, collaborating with Max Welling’s lab at the University of Amsterdam, and an internship at DeepMind supervised by Adam Santoro in 2021.
My PhD topic was learning invariant and equivariant representations, which remains a keen interest of mine. Simply put: whereas most of deep learning is concerned with finding the important information in an input, I focussed on ignoring harmful or irrelevant information. This is tantamount to leveraging symmetries and can be important for counteracting biases or for better exploiting structure in the data. While almost all machine learning tasks have some symmetries (which are often leveraged, e.g., by CNNs being translation equivariant), they become particularly prevalent at the length scales of molecules and below. I argue that, if we want to make machine-learning-enabled breakthroughs in fields like biochemistry and materials science, we need to become good at leveraging symmetries.
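To make the equivariance/invariance distinction above concrete, here is a minimal, illustrative sketch (not code from my research): a circular 1D convolution commutes with translation (equivariance), and adding a global sum-pool on top discards the translation entirely (invariance).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # toy input signal
k = rng.normal(size=3)   # toy convolutional filter


def circ_conv(x, k):
    """Circular cross-correlation: output i aggregates x in a window at i."""
    n = len(x)
    return np.array(
        [sum(k[j] * x[(i + j) % n] for j in range(len(k))) for i in range(n)]
    )


shift = 3
x_shifted = np.roll(x, shift)

# Equivariance: convolving a shifted input equals shifting the convolved output.
assert np.allclose(circ_conv(x_shifted, k), np.roll(circ_conv(x, k), shift))

# Invariance: a global sum-pool after the convolution ignores the shift.
assert np.isclose(circ_conv(x_shifted, k).sum(), circ_conv(x, k).sum())
```

The same principle scales up: building the symmetry into the architecture means the network never has to waste capacity learning it from data.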
I am originally from Southern Germany. Before switching to machine learning for my PhD, I studied physics at the University of Erlangen, Heidelberg University, and Imperial College London. In my spare time, I enjoy rock climbing, tennis, playing the guitar, and the occasional carpentry project.