Sequential recommendation systems leverage sequences of user interactions (such as a user session) to understand user preferences and predict user behavior. More recently, ML practitioners have adapted Transformer models (which typically model sequences of words in NLP tasks) to build sequential recommendation systems. In this paper, NVIDIA and Facebook AI researchers present the Transformers4Rec library and conduct an empirical study of different Transformer architectures and training methodologies in the context of sequential recommendation. In addition, they consider approaches for adding side information to these systems.