Existing approaches to generating labeled datasets for supervised learning suffer from at least two problems: 1) each classification label conveys only a few bits of information; 2) labeling large datasets with human experts is time-consuming and costly. Observing that humans learn more efficiently from contrastive natural language explanations, Liang et al. propose to address these issues with a new expert-in-the-loop framework called Active Learning with Contrastive Explanations (ALICE). ALICE runs a three-step workflow in each “round” of expert interaction: first, it applies active learning to select the pairs of training examples that are closest in feature space yet belong to different classes; next, it asks an expert to explain the difference between the two examples and applies a semantic parser to extract knowledge from the explanation; finally, it builds a local classifier that incorporates the extracted knowledge to help the global model. The authors find that ALICE outperforms baseline models trained on 40-100% more data.
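The first step of the workflow, selecting the closest cross-class pairs, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `closest_cross_class_pairs`, the brute-force pairwise search, and the use of Euclidean distance are all assumptions made for clarity.

```python
import numpy as np

def closest_cross_class_pairs(features, labels, k):
    """Return the k closest pairs of examples with different labels.

    Hypothetical sketch of ALICE's selection step: pairs that lie
    near each other in feature space but belong to different classes
    are the ones a classifier is most likely to confuse, so they are
    the most informative examples to ask an expert to contrast.
    """
    n = len(labels)
    candidates = []
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] != labels[j]:
                # Euclidean distance in feature space (an assumption;
                # any similarity measure could be substituted).
                dist = np.linalg.norm(features[i] - features[j])
                candidates.append((dist, i, j))
    candidates.sort()
    return [(i, j) for _, i, j in candidates[:k]]

# Toy example: two tight clusters, each containing both classes.
features = np.array([[0.0], [0.1], [5.0], [5.2]])
labels = [0, 1, 0, 1]
pairs = closest_cross_class_pairs(features, labels, k=2)
# → [(0, 1), (2, 3)]: the two within-cluster, cross-class pairs
```

The quadratic scan is fine for illustration; on a real dataset one would use an approximate nearest-neighbor index instead.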