Humans and animals learn by observing the behavior of others (i.e., social learning). In most RL applications, however (including real-world use cases such as self-driving and robotics), agents explore their environments individually, which is inefficient and costly. In this paper, Ndousse et al. investigate whether RL agents can learn more effectively by observing the behavior of experts (i.e., without the experts explicitly teaching them). They find that standard model-free RL agents do not take cues from experts, even when individual exploration is costly and the experts are clearly identified. They therefore propose a new technique, Social Learning with Auxiliary Prediction Loss (SociAPL), which enables RL agents to learn from third-person observation, and demonstrate that agents trained with social learning generalize better and outperform agents that rely on individual exploration alone.
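The summary doesn't spell out SociAPL's internals, but the name points at the core pattern: a standard model-free agent augmented with an auxiliary prediction loss computed on its observations (which include the experts' visible behavior). Below is a minimal PyTorch sketch of that pattern under stated assumptions; I assume next-observation prediction as the auxiliary task, and everything here (`SocialLearningAgent`, `aux_coef`, the network sizes) is illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SocialLearningAgent(nn.Module):
    """Actor-critic with an auxiliary next-observation prediction head.

    The auxiliary task pushes the shared encoder to model environment
    dynamics, including the visible behavior of expert agents, so that
    expert cues become salient features for the policy.
    """

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.n_actions = n_actions
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # actor logits
        self.value_head = nn.Linear(hidden, 1)           # critic
        # Auxiliary head: predict the next observation from the current
        # latent state and the agent's (one-hot) action.
        self.predictor = nn.Sequential(
            nn.Linear(hidden + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs):
        z = self.encoder(obs)
        return self.policy_head(z), self.value_head(z).squeeze(-1), z

    def aux_loss(self, z, action, next_obs):
        a = F.one_hot(action, self.n_actions).float()
        pred = self.predictor(torch.cat([z, a], dim=-1))
        return F.mse_loss(pred, next_obs)

# Combined objective on a batch of transitions; `rl_loss` stands in
# for any model-free loss (e.g. PPO's clipped surrogate), and
# `aux_coef` is a hypothetical weighting coefficient.
agent = SocialLearningAgent(obs_dim=16, n_actions=4)
obs, next_obs = torch.randn(32, 16), torch.randn(32, 16)
logits, value, z = agent(obs)
action = torch.distributions.Categorical(logits=logits).sample()
aux_coef = 0.1
rl_loss = torch.tensor(0.0)  # placeholder for the actual RL loss
total_loss = rl_loss + aux_coef * agent.aux_loss(z, action, next_obs)
```

The key design point is that the prediction head shares the encoder with the policy: gradients from the auxiliary task shape the representation the policy sees, so information about expert behavior is preserved even when the pure RL signal alone would ignore it.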