For many machine learning applications, including content moderation and autonomous vehicles, performance on relatively rare concepts (e.g., malicious content, occluded stop signs) is critical. While active learning can be used to identify examples of these rare but high-value concepts, most active learning approaches scale linearly at best with the size of the unlabeled pool, since they must scan all unlabeled data in every selection round. To improve the computational efficiency and reduce the labeling cost of active learning, Coleman et al. restrict each round's candidate pool to the nearest neighbors of the labeled examples, found via similarity search, rather than the full unlabeled pool. With this approach, called Similarity Search for Efficient Active Learning and Search (SEALS), the authors achieve model quality and recall of positive examples comparable to baselines that scan the entire pool, for both active learning and search, on three large-scale datasets.
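
To make the mechanism concrete, here is a minimal sketch of one SEALS-style selection round in Python. It is an illustration under stated assumptions, not the authors' implementation: the `faiss` index, the margin-based uncertainty criterion, and all function and parameter names (`seals_round`, `k`, `budget`) are hypothetical choices made for this example.

```python
import numpy as np
import faiss  # similarity search library used here to find nearest neighbors


def seals_round(embeddings, labeled_idx, model_scores, k=100, budget=10):
    """One SEALS-style selection round (sketch).

    Instead of scoring every unlabeled example, restrict the candidate
    pool to the k nearest neighbors of the labeled set, then pick the
    `budget` most uncertain candidates to send for labeling.
    """
    d = embeddings.shape[1]
    index = faiss.IndexFlatL2(d)  # exact search; an ANN index scales better
    index.add(embeddings.astype(np.float32))

    # Find the k nearest neighbors of each labeled example in the pool.
    queries = embeddings[labeled_idx].astype(np.float32)
    _, neighbors = index.search(queries, k)

    # Candidate pool: union of all neighbors, minus already-labeled points.
    candidates = np.setdiff1d(neighbors.ravel(), labeled_idx)

    # Select the `budget` candidates whose predicted scores are closest
    # to 0.5, i.e., maximum uncertainty for a binary classifier.
    uncertainty = -np.abs(model_scores[candidates] - 0.5)
    return candidates[np.argsort(uncertainty)[-budget:]]
```

Because only the neighbors of labeled examples are scored each round, the per-round cost depends on the (small) labeled set and the index lookup rather than on a full scan of the unlabeled pool, which is the source of the efficiency gain the paper reports.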