I’m interested in reducing the size of the decision space associated with designing deep neural networks. Any machine learning engineer exploring a neural-network-based solution to a practical problem faces a large number of design decisions. For example: what should the architecture of the network be? Given that architecture, what’s an appropriate regulariser? Or optimiser? And how should the parameters of the network be initialised? All these decisions cost valuable time to explore. AutoML and neural architecture search are one approach to alleviating this problem. Another is to advocate a more principled design process, in which decisions are guided by theoretical developments. The latter is what I’m currently pursuing.