Machine learning model developers need interpretability methods to debug models, evaluate fairness and compliance, and meet other needs. However, most existing interpretability methods either propose ad hoc explanations based on model inputs or use probing datasets to assign atomic concept labels as explanations, which may oversimplify neuron behavior. Moreover, the latter technique cannot characterize neurons that learn compositional concepts (e.g., dog faces, cat bodies). Mu and Andreas instead propose a procedure that explains neurons in terms of compositional logical concepts. They find that while compositional explanations can help predict model accuracy, interpretability is not always correlated with model performance. Nonetheless, these explanations can be used to manipulate model behavior.
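The core idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes boolean concept masks from a probing dataset, thresholds a neuron's activations into a mask, and searches shallow logical compositions (AND, OR, AND NOT) of atomic concepts, scoring each candidate formula by intersection-over-union (IoU) with the neuron mask. The concept names and the depth-2 search are hypothetical simplifications.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def best_compositional_explanation(neuron_mask, concepts):
    """Search depth-2 logical forms (C1, C1 AND C2, C1 OR C2,
    C1 AND NOT C2) over atomic concept masks; return the
    highest-IoU formula and its score."""
    candidates = dict(concepts)  # start with atomic concepts
    names = list(concepts)
    for i in names:
        for j in names:
            if i == j:
                continue
            candidates[f"({i} AND {j})"] = concepts[i] & concepts[j]
            candidates[f"({i} OR {j})"] = concepts[i] | concepts[j]
            candidates[f"({i} AND NOT {j})"] = concepts[i] & ~concepts[j]
    return max(((f, iou(neuron_mask, m)) for f, m in candidates.items()),
               key=lambda x: x[1])

# Toy example with hypothetical concepts: a neuron that fires
# exactly on inputs containing both "dog" and "face".
rng = np.random.default_rng(0)
dog = rng.random(1000) < 0.4
face = rng.random(1000) < 0.4
water = rng.random(1000) < 0.4
neuron_mask = dog & face  # stands in for a thresholded activation map
formula, score = best_compositional_explanation(
    neuron_mask, {"dog": dog, "face": face, "water": water})
print(formula, round(score, 2))  # → (dog AND face) 1.0
```

In this toy case the search recovers the exact formula with IoU 1.0; on real networks the best formula typically has IoU well below 1, and the actual procedure uses a beam search over deeper logical forms.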