Finite state automata provide a computational model that enables model developers and researchers to reason about recurrent neural networks (RNNs): comparing architecture variants, envisioning how information flows, and examining how a model solves formal tasks. No comparable abstraction previously existed for Transformers. In this paper, Weiss et al. propose a computational model for Transformers in the form of a programming language, RASP (Restricted Access Sequence Processing Language). RASP maps attention and feed-forward computation onto simple primitives, and can be used to program solutions to tasks that a Transformer could learn.
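
To make the abstraction concrete, here is a minimal Python sketch of RASP's two core attention-like primitives, `select` and `aggregate`, used to write the paper's sequence-reversal program. The primitive names and the reverse program follow the paper; the helper implementations below are an illustrative assumption, not the official RASP interpreter.

```python
def select(keys, queries, predicate):
    # Boolean selection matrix: entry [q][k] is True when
    # predicate(keys[k], queries[q]) holds -- RASP's abstraction
    # of an attention pattern.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values):
    # Average the selected values at each query position -- RASP's
    # abstraction of attention-weighted value mixing. When exactly one
    # key is selected, the "average" is just that value, so the same
    # primitive can also route tokens.
    out = []
    for row in selector:
        picked = [v for v, sel in zip(values, row) if sel]
        if len(picked) == 1:
            out.append(picked[0])
        elif picked:
            out.append(sum(picked) / len(picked))
        else:
            out.append(None)  # no key selected at this position
    return out

# The paper's reverse program, roughly:
#   flip    = select(indices, length - indices - 1, ==)
#   reverse = aggregate(flip, tokens)
tokens = list("hello")
length = len(tokens)
indices = list(range(length))

flip = select(indices, [length - 1 - i for i in indices], lambda k, q: k == q)
print("".join(aggregate(flip, tokens)))  # -> "olleh"
```

Each `select`/`aggregate` pair corresponds to one attention head's work, which is what lets a RASP program suggest how many layers and heads a Transformer might need for a given task.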