In the past few years, new tools have made machine learning accessible to a much wider audience. This democratization of machine learning has also enabled a spate of abuses, including unauthorized third parties building facial recognition models from photos scraped off the web. To address this problem, researchers at the University of Chicago SAND Lab released Fawkes, a tool for personal privacy protection against facial recognition systems. Before a user posts images online, Fawkes adds pixel-level changes, referred to as “cloaks”, that are imperceptible to the human eye but disrupt models trained on the altered photos. The authors’ experiments show that Fawkes provides at least 95% protection against facial recognition models, and that even when some uncloaked images leak into the training data, cloaking still protects the user in over 80% of cases. Fawkes is currently intended only for personal privacy protection or academic research.
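To make the notion of an imperceptible, pixel-level “cloak” concrete, here is a minimal sketch in Python with NumPy. It only illustrates the budgeting idea (keeping every pixel change within a small L-infinity bound so the edit stays invisible); the real Fawkes system computes its perturbation by optimizing against a face feature extractor, which is not shown here. The function name `apply_cloak` and the random perturbation are purely illustrative assumptions, not the Fawkes API.

```python
import numpy as np

def apply_cloak(image: np.ndarray, perturbation: np.ndarray,
                epsilon: float = 8 / 255) -> np.ndarray:
    """Add a perturbation to an image in [0, 1], clipped so no pixel
    channel changes by more than epsilon (an L-infinity budget).

    NOTE: illustrative only -- Fawkes derives its perturbation from a
    feature-space optimization, not random noise.
    """
    delta = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(image + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((112, 112, 3))                  # stand-in for a face photo
noise = rng.normal(scale=0.05, size=img.shape)   # stand-in perturbation
cloaked = apply_cloak(img, noise)

# The visible change per pixel stays within the small budget.
assert np.max(np.abs(cloaked - img)) <= 8 / 255 + 1e-9
```

The small epsilon is what keeps the cloak invisible to humans while still shifting the image enough, in a model's feature space, to degrade training.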