Some researchers believe that human and animal intelligence arises from a small set of first principles which, if discovered, could enable us to design intelligent machines. If this is true, studying the inductive biases that guide human and animal decision-making could lead to the discovery of these principles and, ultimately, allow researchers to bridge the gap between deep learning and human cognition. In this paper, Anirudh Goyal and Yoshua Bengio consider a broad set of key inductive biases, including those exploited by today's deep learning models as well as those that could be applied to improve systematic generalization and transfer learning. They also argue that systematic generalization could potentially be achieved by decomposing knowledge into smaller representational pieces that can be recombined in novel ways.