Although language models are achieving SOTA performance on a wide variety of tasks, most research ignores how language changes over time (e.g., the past 2-3 years have seen extensive public discourse on the COVID-19 pandemic, yet neither BERT nor RoBERTa was trained on data that covers this topic). To address this gap, researchers from Cardiff University, the University of Porto, and Snap have open-sourced a series of time-specific language models trained on a large corpus of tweets evenly distributed across time. Alongside the data and models, they provide an integrated Python interface that offers single-line access to language models trained for specific periods and use cases, and facilitates time-aware evaluation.
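
To make the idea concrete, here is a minimal sketch of querying one of the released time-specific checkpoints. It goes through the standard Hugging Face `transformers` pipeline rather than the authors' own Python wrapper, and the checkpoint name and masked example sentence are assumptions based on the models published under the `cardiffnlp` organization on the Hugging Face Hub, so verify them against the project's repo before relying on them.

```python
# Sketch: load a RoBERTa-base model continually trained on tweets through
# 2021 and ask it to fill in a masked token. The checkpoint name
# ("cardiffnlp/twitter-roberta-base-2021-124m") is taken from the
# cardiffnlp Hub page and should be double-checked there.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="cardiffnlp/twitter-roberta-base-2021-124m",
)

# A time-sensitive prompt: a model trained on 2021 tweets should rank
# pandemic-era vocabulary highly, unlike the original BERT/RoBERTa,
# whose training data predates COVID-19.
for prediction in fill_mask("So glad I'm <mask> vaccinated.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```

The same fill-mask call against checkpoints from different time periods is one simple way to surface the temporal drift the authors highlight: the top-ranked completions shift as the training window moves.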