Although Transformer models have been applied successfully in many domains, they are rarely used for reasoning tasks. To address this gap, and to test whether language models can produce original mathematical content as humans do, Polu and Sutskever apply Transformers to automated theorem proving. They find that pre-training on mathematical data and iteratively training a value function on proof statements generated by the language model both improve prover performance, and they confirm a positive correlation between performance and model size. They present GPT-f, an automated prover and proof assistant that achieves state-of-the-art results on the Metamath library. This work demonstrates that Transformer architectures can be applied to formal reasoning.
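The core loop described above (a value function scoring candidate proof states to guide search, with successful attempts feeding back into training) can be sketched abstractly. This is a minimal illustrative toy, not GPT-f's actual implementation: the "proof" task is reaching a target integer, `sample_steps` stands in for the language model's step proposals, and `value` stands in for the learned value function. All names are hypothetical.

```python
def value(state, goal):
    """Stand-in for the learned value function: scores how promising
    a partial proof state looks (here, closeness to the goal)."""
    return -abs(goal - state)

def sample_steps(state, goal):
    """Stand-in for the language model proposing candidate proof steps."""
    return [s for s in (state + 1, state + 2, state + 3) if s <= goal]

def attempt_proof(goal, budget=50):
    """Best-first search over proof states, guided by the value function."""
    frontier = [0]
    for _ in range(budget):
        if not frontier:
            return False
        # Expand the state the value function rates most promising.
        frontier.sort(key=lambda s: value(s, goal), reverse=True)
        state = frontier.pop(0)
        if state == goal:
            return True
        frontier.extend(sample_steps(state, goal))
    return False

# In the iterative scheme, statements solved in one round would be added
# to the training set and the value function refit (retraining omitted here).
solved = [g for g in range(1, 10) if attempt_proof(g)]
print(solved)
```

The key design point mirrored here is that the value function prioritizes which partial proofs to expand, so a better-trained value function wastes less of the search budget on dead ends.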