In this blog post, Charrington considers the topic of bias in AI systems in light of the debate surrounding PULSE, a recent machine learning model that generates high-resolution images from low-resolution inputs. Although PULSE produces more realistic, higher-resolution images than previous super-resolution methods, it performs far worse on images of non-white subjects. Charrington discusses how bias manifests both in datasets and in algorithm design, pointing out that model developers may choose not to prioritize robustness to dataset bias when building AI systems. Critically, he asks the AI community to reconsider the practice of trading off accuracy on minority groups in favor of higher overall accuracy.
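The tradeoff Charrington highlights can be made concrete with a toy calculation (the numbers and group labels below are fabricated for illustration and do not come from the post): when one group dominates the evaluation set, a model can report high overall accuracy while performing poorly on a minority group.

```python
# Hypothetical illustration: overall accuracy can mask a large
# per-group disparity when groups are imbalanced in the eval set.
from collections import defaultdict

# Toy (group, correct?) outcomes: group "A" has 95 examples, group "B" only 5.
results = ([("A", True)] * 90 + [("A", False)] * 5 +
           [("B", True)] * 2 + [("B", False)] * 3)

# Overall accuracy across all 100 examples.
overall = sum(ok for _, ok in results) / len(results)

# Accuracy broken down by group.
by_group = defaultdict(list)
for group, ok in results:
    by_group[group].append(ok)
per_group = {g: sum(oks) / len(oks) for g, oks in by_group.items()}

print(f"overall accuracy: {overall:.2f}")   # 0.92 — looks strong
for g in sorted(per_group):
    print(f"group {g} accuracy: {per_group[g]:.2f}")
# group A accuracy: 0.95
# group B accuracy: 0.40 — hidden by the overall number
```

A 92% overall score here conceals 40% accuracy on the minority group, which is the kind of aggregate-versus-subgroup gap the post asks the community to weigh more thoughtfully.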