Continual learning (CL) models attempt to mimic human intelligence by enabling AI systems to retain and apply previously learned experiences when solving new problems. Evaluating CL models is challenging, however: unlike traditional ML models, which are measured by their accuracy on a single task, CL models must also transfer knowledge between tasks, retain previously learned skills, and scale to a large number of tasks. To facilitate such measurement, Facebook and Sorbonne University have released CTrL, a standard benchmark for CL that evaluates how much knowledge is transferred across a sequence of observed tasks by comparing a model's results when a task is learned in isolation against its results when the same task is learned after a sequence of potentially related tasks.
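The isolation-versus-stream comparison can be sketched as a simple accuracy delta. The function and the numbers below are illustrative assumptions for exposition, not CTrL's actual API:

```python
def transfer(acc_after_stream: float, acc_in_isolation: float) -> float:
    """Toy transfer score: accuracy on a task after training on a stream
    of (potentially related) tasks, minus accuracy when the same task is
    learned from scratch in isolation.

    Positive -> the earlier tasks helped (forward transfer);
    negative -> they hurt (interference / forgetting pressure).
    """
    return acc_after_stream - acc_in_isolation


# Hypothetical numbers: 78% accuracy learning the task alone,
# 85% after first observing a stream of related tasks.
score = transfer(acc_after_stream=0.85, acc_in_isolation=0.78)
print(round(score, 2))
```

A benchmark like CTrL aggregates comparisons of this kind over many task sequences, so a single delta is only the building block of the overall evaluation.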