Recently, several research teams have proposed approaches in which explanations (formulated as additional information or further task specifications) are used to improve model performance. However, these approaches may not be well suited to large language models, which store knowledge in their parameters and infer the task at test time from the input. Hase and Bansal evaluate where and how explanations can be helpful in practice, using a specially designed synthetic task and three existing datasets. They recommend retrieving relevant past explanations and supplying them as additional inputs to the model at prediction time.
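A minimal sketch of this retrieve-then-prompt idea follows. It is not the authors' implementation: the bag-of-words embedding, the top-k cosine retrieval, and the prompt format are all illustrative stand-ins (a real system would use a learned encoder and pass the assembled prompt to a language model).

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a stand-in for a learned text encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_explanations(query: str, store: list[str], k: int = 2) -> list[str]:
    # Rank stored explanations by similarity to the new input and keep the top k.
    q = embed(query)
    return sorted(store, key=lambda e: cosine(q, embed(e)), reverse=True)[:k]

def build_prompt(query: str, retrieved: list[str]) -> str:
    # Prepend the retrieved explanations so they condition the model's prediction.
    context = "\n".join(f"Explanation: {e}" for e in retrieved)
    return f"{context}\nInput: {query}\nAnswer:"

if __name__ == "__main__":
    explanation_store = [
        "The label is positive when the review praises the plot.",
        "Mentions of refunds usually indicate a complaint.",
        "Short reviews with exclamation marks tend to be positive.",
    ]
    query = "Loved the plot and the acting!"
    prompt = build_prompt(query, retrieve_explanations(query, explanation_store))
    print(prompt)  # This prompt would then be passed to the model at prediction time.
```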