Since the Transformer architecture was proposed in 2017, researchers have applied Transformers to use cases ranging from natural language translation to code recommendations. However, previous attempts to use Transformers to solve advanced mathematical problems have failed. Here, Drori et al. demonstrate that neural networks pretrained on text and fine-tuned on code can solve problems from university-level math courses. The authors also describe a methodology for rephrasing problems (by adding context, tidying the text, and interactively refining them) as programming tasks that a neural network can solve.
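To make the rephrasing idea concrete, a course question such as "evaluate the derivative of x³·eˣ at x = 1" can be recast as a short symbolic program whose output is the answer. The sketch below is purely illustrative of this program-as-answer framing (using SymPy); it is not the authors' actual pipeline or generated code.

```python
# Illustrative sketch: a university-level math question rephrased as a
# programming task, in the spirit of Drori et al.'s methodology.
# Original problem (hypothetical example): "Evaluate the derivative of
# x**3 * exp(x) at x = 1."
import sympy as sp

x = sp.symbols("x")
f = x**3 * sp.exp(x)           # the function from the problem statement
derivative = sp.diff(f, x)     # symbolic differentiation
answer = sp.simplify(derivative.subs(x, 1))
print(answer)  # 4*E
```

A model fine-tuned on code can emit such a program directly, so executing the program yields the exact answer (here 4e) rather than a free-text guess.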