Most methods for generating explanations of deep learning models give practitioners no clear way to act on an explanation to improve model performance or interpretability. Rieger et al. argue that explainable AI must be actionable for practitioners, and propose a method called "contextual decomposition explanation penalization" (CDEP) that lets users encode domain knowledge as a penalty on both a model's predictions and its corresponding explanations. Contextual decomposition, which captures individual feature importances as well as interactions between features, can be used to ensure that the model does not arrive at correct predictions by relying on spurious features or relationships. Moreover, because contextual decompositions are computed in the forward pass alone, they require far less computation and memory than gradient-based explanation methods.
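The core idea can be sketched as an augmented training objective: the usual prediction loss plus a term penalizing the importance the explanation assigns to features a practitioner has flagged as spurious. The sketch below is a minimal, hypothetical illustration (the function name, the simple mask-based penalty, and the NumPy setting are assumptions, not the authors' implementation, which uses contextual decomposition scores inside a neural network's training loop):

```python
import numpy as np

def explanation_penalized_loss(pred_loss, importances, spurious_mask, lam=1.0):
    """Hypothetical sketch of a CDEP-style objective.

    pred_loss     : scalar prediction loss (e.g., cross-entropy).
    importances   : per-feature importance scores, as contextual
                    decomposition would attribute in the forward pass.
    spurious_mask : 1 for features the practitioner deems spurious, else 0.
    lam           : weight trading off accuracy against the explanation prior.
    """
    # Penalize any importance the model places on flagged features.
    penalty = np.sum(np.abs(importances * spurious_mask))
    return pred_loss + lam * penalty

# Example: the model leans on feature 1, which is flagged as spurious.
loss = explanation_penalized_loss(
    pred_loss=0.5,
    importances=np.array([0.2, 0.8]),
    spurious_mask=np.array([0.0, 1.0]),
    lam=1.0,
)
```

Minimizing this combined loss pushes the model toward predictions that remain accurate while attributing little importance to the flagged features, which is what makes the explanation actionable.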