In transfer learning systems, an ML model pre-trained on a large corpus (the source) is then fine-tuned for a second, target task. Although research on transfer learning has exploded in recent years, few studies indicate how this approach might fare in real-world settings. Mensink et al. therefore conduct a large-scale study of transfer learning to understand performance across different image domains and task types. They perform over 1,200 experiments on 20 datasets (including consumer, driving, aerial, underwater, indoor, synthetic, and other images) and four task types (semantic segmentation, object detection, depth estimation, and keypoint detection). Their analysis of the results demonstrates that transfer learning works best when the source dataset includes the image domain of the target dataset (although the source may be much broader than the target). They also find that transfer learning across task types can work, although success depends largely on the specific source and target task types.
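To make the pre-train-then-fine-tune pattern concrete, here is a minimal sketch in PyTorch. It uses torchvision's ImageNet-pretrained ResNet-50 as a stand-in for the source model; the 10-class target task, the head-only freezing strategy, and all hyperparameters are illustrative assumptions, not details from Mensink et al.'s experiments.

```python
# Sketch of the pre-train/fine-tune pattern. The pretrained backbone
# stands in for the "source" model; the target task, class count, and
# hyperparameters are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on a large source corpus (ImageNet here).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Replace the source task head with one sized for the target task
# (assume a 10-class target classification problem).
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# One common strategy: freeze the backbone and train only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a fake target-domain batch.
images = torch.randn(8, 3, 224, 224)   # stand-in for target images
labels = torch.randint(0, num_target_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In practice, one might later unfreeze the backbone and continue training at a lower learning rate; whether that helps depends on how closely the source domain covers the target domain, which is exactly the question the study probes.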