Encoder-decoder architectures are the standard choice for image-to-image translation tasks such as colorization and rotation, but they suffer from a strong locality bias (i.e., they are not robust to non-local transformations such as flipping a face upside down). Richardson et al. propose a method for learning unsupervised image-to-image translations under the assumption that the transformation mapping is linear and orthogonal. On local transformations, their method achieves results comparable to state-of-the-art architectures with much lower training time; on non-local transformations, it succeeds where the state of the art fails.
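To make the linearity and orthogonality assumption concrete, here is a minimal sketch of recovering an orthogonal linear map between flattened images via the closed-form orthogonal Procrustes solution. This is an illustration of the modeling assumption only, not the authors' actual (unsupervised) training procedure; the toy "flip" transformation and the use of paired data are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: X holds flattened "source" images, Y the transformed
# versions. The true transformation here is a pixel reversal (a flip),
# which is a permutation and hence an orthogonal matrix.
n, d = 200, 64
X = rng.standard_normal((n, d))
flip = np.eye(d)[::-1]          # reverses pixel order; orthogonal
Y = X @ flip

# Orthogonal Procrustes: argmin_W ||X W - Y||_F  s.t.  W^T W = I,
# solved in closed form from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(W.T @ W, np.eye(d)))   # W is orthogonal
print(np.allclose(X @ W, Y))             # W recovers the flip
```

Because the search is restricted to orthogonal matrices, the fit has a closed form and no locality bias: a global pixel permutation like a flip is representable exactly, which is precisely the kind of transformation where locality-biased encoder-decoders struggle.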