Imitation learning models learn to mimic expert behavior from demonstrations. In many domains, these demonstrations can vary widely in style; for example, the same task can be accomplished at different speeds and through different movement patterns. Learning from diverse demonstrations may improve model robustness; however, model developers lack tools or strategies to control which of these disparate behavioral styles a model generates. Zhan et al. develop an approach, inspired by data programming, to control and calibrate the generation of different behavioral styles. They apply labeling functions, which assign weak style labels based on state information in unlabeled data, to define a style-consistency learning objective. Based on experiments on basketball player demonstrations and simulated locomotion in MuJoCo, they show that their style-calibrated models can generate diverse behaviors from combinations of styles.
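To make the idea concrete, here is a minimal sketch of a labeling function and a style-consistency check. The function names, the speed-thresholding rule, and the toy 2-D trajectories are illustrative assumptions, not the paper's actual implementation: a labeling function maps a trajectory of states to a weak style label, and style-consistency asks whether a generated trajectory, relabeled by the same function, matches the style the model was asked to produce.

```python
import numpy as np

def speed_label(trajectory, threshold=1.0):
    """Hypothetical labeling function: weak style label from state info.

    Returns 1 ("fast") if the mean per-step displacement exceeds the
    threshold, else 0 ("slow"). Real labeling functions can encode any
    programmatic notion of style (speed, curvature, displacement, ...).
    """
    steps = np.diff(trajectory, axis=0)            # per-step displacement vectors
    avg_speed = np.linalg.norm(steps, axis=1).mean()
    return 1 if avg_speed > threshold else 0

def style_consistency(generated, requested_label, labeling_fn):
    """1.0 if the generated trajectory exhibits the requested style, else 0.0.

    A differentiable surrogate of this indicator would serve as the
    style-consistency term in the training objective.
    """
    return float(labeling_fn(generated) == requested_label)

# Toy 2-D trajectories: constant small vs. large steps
slow_traj = np.cumsum(np.full((10, 2), 0.05), axis=0)  # labeled "slow" (0)
fast_traj = np.cumsum(np.full((10, 2), 1.5), axis=0)   # labeled "fast" (1)

print(speed_label(slow_traj))                         # 0
print(speed_label(fast_traj))                         # 1
print(style_consistency(fast_traj, 1, speed_label))   # 1.0
```

In training, the same labeling functions that weakly annotate the demonstration data are reused to score the model's own rollouts, so calibration requires no manual style annotation.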