Label quality can significantly impact the performance of ML models and the success of ML-driven products. However, identifying poor-quality labels can be tedious and challenging. To help model developers find bad or noisy labels, Vincent Warmerdam, a Research Advocate at Rasa, has released DoubtLab. With DoubtLab, model developers specify reasons to “doubt” a row of data (e.g., when two models disagree on a prediction). While doubt may be assigned randomly, users can also choose from a set of reasons targeting common classification and regression issues.
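To make the idea concrete, here is a minimal pure-Python sketch of the "doubt" pattern described above — flagging rows where two models disagree or where confidence is low. This is an illustrative sketch only, not DoubtLab's actual API; the function name, inputs, and threshold are hypothetical.

```python
# Illustrative sketch of the "doubt" idea (not DoubtLab's API):
# flag a row when two models disagree on its label, or when the
# primary model's confidence falls below a threshold.

def doubt_indices(preds_a, preds_b, confidences, threshold=0.6):
    """Return indices of rows whose labels are worth re-checking."""
    doubted = []
    for i, (a, b, conf) in enumerate(zip(preds_a, preds_b, confidences)):
        if a != b or conf < threshold:
            doubted.append(i)
    return doubted

# Toy example: the models disagree on row 1; row 3 has low confidence.
preds_a = ["cat", "dog", "cat", "dog"]
preds_b = ["cat", "cat", "cat", "dog"]
confs   = [0.95, 0.80, 0.90, 0.40]
print(doubt_indices(preds_a, preds_b, confs))  # → [1, 3]
```

In DoubtLab itself, each such check is expressed as a "reason," and reasons can be combined into an ensemble so that rows triggering the most reasons are surfaced first for review.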