Although many machine learning developers prefer to write the training loops of their PyTorch models themselves, few want to maintain the boilerplate needed to run those scripts on heterogeneous hardware backends. Accelerate lets developers keep full control of the training loop while abstracting away only that boilerplate. The same PyTorch training script can then run in any single-node or distributed setting, with or without mixed precision. The library also provides a CLI that helps developers quickly configure and test the training environment before launching the script.