Decision-makers who interact with ML-driven systems often need assurance that a machine learning model trained in one setting will perform well in their context (i.e., that the model was not trained on a loss function significantly different from their own). However, distribution calibration (i.e., guaranteeing that the distribution of predictions matches the true label distribution) can be infeasible for multi-class prediction problems, where the number of possible labels is large. Here, Zhao et al. propose a new approach, decision calibration, which focuses on minimizing the mispredictions that would lead to different decisions. They present an algorithm that guarantees decision calibration for the set of decision-makers who choose from a bounded number of actions.
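The core idea can be sketched in code. Below is a minimal, illustrative check of decision calibration: a decision-maker picks the Bayes-optimal action under the model's predicted probabilities, and we compare the loss the model *predicts* those decisions will incur against the loss they actually incur on true labels. The function names, the toy loss matrix, and this particular gap measure are assumptions for illustration only, not the authors' actual algorithm (which iteratively recalibrates the predictor).

```python
import numpy as np

def bayes_action(p, loss):
    """Action minimizing expected loss under predicted distribution p.

    loss has shape (n_actions, n_classes); loss @ p gives each action's
    expected loss under p.
    """
    return int(np.argmin(loss @ p))

def decision_calibration_gap(probs, labels, loss):
    """Gap between the average loss the model predicts its induced
    decisions will incur and the loss they actually incur on the true
    labels. A decision-calibrated predictor makes this gap (near) zero,
    so the decision-maker can trust the model's own loss estimates.
    """
    actions = np.array([bayes_action(p, loss) for p in probs])
    predicted_loss = np.mean([loss[a] @ p for a, p in zip(actions, probs)])
    realized_loss = np.mean(loss[actions, labels])
    return abs(predicted_loss - realized_loss)

# Toy example: 3 classes, 3 actions, 0-1 loss (predicting the label).
loss_01 = 1.0 - np.eye(3)

# Perfectly confident, correct predictions: predicted and realized
# losses agree, so the gap is zero.
perfect_probs = np.eye(3)
perfect_labels = np.array([0, 1, 2])
print(decision_calibration_gap(perfect_probs, perfect_labels, loss_01))  # → 0.0
```

Note that the gap can be large even for a model with good accuracy on average if its probability estimates mislead the decision-maker about the loss of the chosen actions; that mismatch is what the paper's notion of decision calibration rules out.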