Most approaches to neural architecture search navigate search spaces built from a few well-established primitives for narrowly scoped domains such as vision and text. In contrast, Roberts et al. propose automating the design of neural networks for new tasks by expanding the set of operations itself. They make progress towards a more general search space by using an expressive family of efficient linear transforms to define Expressive Diagonalization (XD) operations, which subsume a far larger set of standard neural operations. Their method starts from a convolutional neural network, replaces its operations with XD-operations, and then searches over the resulting space with simple weight-sharing algorithms.
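To give a rough sense of the diagonalization idea behind XD-operations, the sketch below (a minimal NumPy illustration, not the authors' implementation) uses the convolution theorem: circular convolution can be written as K⁻¹ diag(K w) K x, where K is the discrete Fourier transform. XD-operations relax such fixed transforms K to searchable efficient matrices, so convolution becomes one point in a much larger operation space.

```python
import numpy as np

def circular_conv(x, w):
    """Direct circular convolution: y[i] = sum_j x[(i - j) % n] * w[j]."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * w[j] for j in range(n))
                     for i in range(n)])

def diagonalized_conv(x, w):
    """Diagonalization view of the same operation: K^{-1} diag(K w) K x,
    with K fixed to the DFT. XD-operations replace this fixed K with
    learnable, efficiently parameterized matrices found during search."""
    K, Kinv = np.fft.fft, np.fft.ifft
    return np.real(Kinv(K(w) * K(x)))

rng = np.random.default_rng(0)
x, w = rng.standard_normal(8), rng.standard_normal(8)
# Both formulations agree, confirming convolution is one instance
# of the diagonalized operation family.
assert np.allclose(circular_conv(x, w), diagonalized_conv(x, w))
```

In this view, weight-sharing search over XD-operations amounts to optimizing the transforms around the diagonal jointly with the ordinary weights, rather than choosing among a fixed menu of primitives.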