Data practitioners widely use cross-validation (CV) to estimate a model's prediction error before deployment. It is commonly assumed that CV estimates the prediction error of the specific model fit to the training data at hand; Bates et al. show that it instead estimates the average error of models fit to hypothetical datasets drawn from the same population. A related problem is that the usual confidence intervals built from CV errors can substantially undercover, because the held-out errors across folds are correlated rather than independent. To address these shortcomings, the authors propose a nested cross-validation procedure that estimates the variability of the CV point estimate directly, yielding intervals whose coverage is much closer to nominal. They also caution that, when using simple data splitting, refitting the model on the combined data invalidates the resulting confidence intervals.
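To make the issue concrete, here is a minimal sketch of the naive K-fold interval that the paper critiques: it pools the per-point held-out squared errors and treats them as independent when forming a standard error. This is not the authors' nested procedure; the synthetic data, the hand-rolled least-squares fit, and all names are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

# Synthetic data (illustrative, not from the paper): y = 2x + noise.
n = 200
X = [random.gauss(0, 1) for _ in range(n)]
Y = [2 * x + random.gauss(0, 1) for x in X]

def fit_ols(xs, ys):
    """Closed-form least squares for y ~ a + b*x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b

def kfold_cv_errors(X, Y, k=5):
    """Per-point squared errors from K-fold cross-validation."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errors = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_ols([X[i] for i in train], [Y[i] for i in train])
        errors.extend((Y[i] - (a + b * X[i])) ** 2 for i in fold)
    return errors

errs = kfold_cv_errors(X, Y, k=5)
est = statistics.fmean(errs)
# Naive interval: treats the n held-out errors as i.i.d., which the
# paper shows understates the true variability of the CV estimate,
# so intervals like this one undercover.
se = statistics.stdev(errs) / len(errs) ** 0.5
print(f"CV estimate: {est:.3f}")
print(f"naive 95% CI: ({est - 1.96 * se:.3f}, {est + 1.96 * se:.3f})")
```

The paper's nested procedure instead runs an inner CV within each outer training fold to estimate how much the naive standard error must be inflated; the sketch above only reproduces the naive baseline it improves upon.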