While many are concerned about the threat of “deepfakes” as the quality of synthetic content improves, Chai et al. show that deep neural networks can be trained to detect fake images. Furthermore, patch-based classifiers can identify the image regions that are most useful for distinguishing fake facial images from real ones. In particular, they find that background and hair texture are common giveaways: the crisp borders found in real images are usually hard for generative models to replicate. The authors conclude with a warning that detecting fake images is a cat-and-mouse problem, and encourage readers to use their findings to better anticipate strategies for manipulating content.
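
The patch-based idea can be sketched roughly as follows: a classifier scores each local patch of an image, and the per-patch scores form a heatmap that localizes the most suspicious regions. The `score_fn` below is a hypothetical placeholder (a simple texture-smoothness heuristic standing in for a trained patch CNN), not the authors' actual model; the sketch only illustrates the sliding-window aggregation.

```python
import numpy as np

def patch_scores(image, patch_size=32, stride=16, score_fn=None):
    """Slide a window over a grayscale image and score each patch.

    score_fn is a stand-in for a trained patch classifier; higher
    scores mark patches that look more 'synthetic'.
    """
    if score_fn is None:
        # Placeholder heuristic: overly smooth texture (low variance)
        # gets a higher "fake" score. A real detector would use a
        # trained CNN here instead.
        score_fn = lambda p: 1.0 / (1.0 + p.var())
    h, w = image.shape[:2]
    rows = (h - patch_size) // stride + 1
    cols = (w - patch_size) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            heatmap[i, j] = score_fn(image[y:y + patch_size, x:x + patch_size])
    return heatmap

# Usage: the highest-scoring cells flag the patches most likely to
# betray a generated image (e.g. blurred hair or background borders).
img = np.random.rand(128, 128)
hm = patch_scores(img)
```

The heatmap view is what lets this style of detector point at *which* parts of a face image (hair, background edges) give a fake away, rather than emitting a single whole-image verdict.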