Model explanations are critical for unlocking several safety-critical use cases and for building better tools for monitoring and debugging. However, many approaches to interpretability are difficult to verify. In contrast, the Circuits approach provides verifiable explanations of slices of a neural network by studying individual neurons and weights – but many question whether this approach can scale. In this paper, Cammarata et al. apply the Circuits approach to reverse-engineer curve detectors. They find that the 50,000+ parameter curve detector circuit implements a simple, interpretable algorithm. More generally, they observe that neuron families and motifs, like the equivariance motif, can simplify circuits, enabling the Circuits approach to scale.
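
As a rough illustration of how the equivariance motif can simplify analysis (not taken from the paper), the sketch below checks whether one candidate curve detector's weights are approximately a rotated copy of another's. The function name `rotational_similarity` and the toy kernels are hypothetical; the actual circuits work inspects InceptionV1 weight slices rather than random arrays.

```python
import numpy as np
from scipy.ndimage import rotate

def rotational_similarity(kernel_a, kernel_b, angle_deg):
    """Normalized correlation between kernel_a and kernel_b rotated by angle_deg.

    Under the equivariance motif, curve detectors tuned to different
    orientations should have weights that are (approximately) rotated
    copies of one another, so this similarity should peak near the
    angular offset between the two detectors.
    """
    rotated = rotate(kernel_b, angle_deg, reshape=False, order=1)
    a = kernel_a.ravel() - kernel_a.mean()
    b = rotated.ravel() - rotated.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy check: kernel_b is kernel_a rotated by -45 degrees, so rotating it
# back by +45 degrees should line it up well with kernel_a.
rng = np.random.default_rng(0)
kernel_a = rng.normal(size=(9, 9))
kernel_b = rotate(kernel_a, -45, reshape=False, order=1)
scores = {angle: round(rotational_similarity(kernel_a, kernel_b, angle), 2)
          for angle in (0, 45, 90)}
print(scores)  # similarity should be highest at 45 degrees
```

In practice, one would apply a check like this to the 2-D weight slices connecting a shared upstream neuron to each curve detector in a family, rather than to synthetic kernels.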