New regulations like the GDPR afford users the right to be forgotten – the right to revoke access to their personal data. However, machine learning models may memorize user data even after that data has been deleted from storage. Bourtoule et al. have introduced, and released the code for, SISA training, which outputs the same distribution of models that would result from retraining from scratch without the revoked data points. SISA training accelerates unlearning through sharding (which partitions the training set across isolated constituent models, so an unlearning request affects only the one shard containing the revoked point) and slicing (which presents each shard's data incrementally during training so that intermediate checkpoints can be saved, letting retraining resume from the checkpoint just before the affected slice rather than from scratch).
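The sharding-and-slicing scheme above can be sketched in a few lines. This is an illustrative toy, not the released SISA code: the "model" here is just the mean of its training points (so exact retraining is trivially checkable), and names like `train_shard` and `unlearn` are assumptions for the sketch. The structure, however, mirrors the scheme: each shard trains incrementally over slices with a checkpoint after each slice, and unlearning a point retrains only the affected shard, starting from the last checkpoint taken before the affected slice.

```python
from typing import List, Tuple

# A checkpoint is the model state after a slice: here, (running sum, count).
Checkpoint = Tuple[float, int]

def train_shard(slices: List[List[float]]) -> List[Checkpoint]:
    """Train over slices in order, saving a checkpoint after each slice."""
    ckpts: List[Checkpoint] = []
    total, count = 0.0, 0
    for sl in slices:
        total += sum(sl)
        count += len(sl)
        ckpts.append((total, count))
    return ckpts

def predict(ckpts: List[Checkpoint]) -> float:
    """The shard's 'model' is the mean over all data it has seen."""
    total, count = ckpts[-1]
    return total / count

def unlearn(slices: List[List[float]], ckpts: List[Checkpoint],
            slice_idx: int, point: float) -> List[Checkpoint]:
    """Remove `point` from slice `slice_idx`, then retrain from the last
    checkpoint before that slice, reusing all earlier checkpoints."""
    slices[slice_idx].remove(point)
    total, count = (0.0, 0) if slice_idx == 0 else ckpts[slice_idx - 1]
    new_ckpts = ckpts[:slice_idx]          # checkpoints before the slice survive
    for sl in slices[slice_idx:]:          # retrain only from the affected slice on
        total += sum(sl)
        count += len(sl)
        new_ckpts.append((total, count))
    return new_ckpts

# Two shards, each split into two slices; an unlearning request only
# ever touches the single shard holding the revoked point.
shards = [
    [[1.0, 2.0], [3.0]],     # shard 0
    [[10.0], [20.0, 30.0]],  # shard 1
]
models = [train_shard(s) for s in shards]

# Aggregate the isolated constituent models (here, by averaging predictions).
agg = sum(predict(m) for m in models) / len(models)

# Unlearn 3.0 from shard 0, slice 1: shard 1 is untouched, and shard 0
# resumes from its checkpoint after slice 0 instead of retraining fully.
models[0] = unlearn(shards[0], models[0], 1, 3.0)
```

After unlearning, shard 0's model equals the one obtained by retraining from scratch on `[1.0, 2.0]` alone, which is the exact-unlearning guarantee the checkpoints make cheap to achieve.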