Recently, Michael Littman, a professor at Brown University, presented evidence that groups of colluding authors were violating academic conference policies to increase their odds of paper acceptance. Jacob Buckman, however, suggests that beyond such explicit academic fraud, researchers also cheat the system in subtler ways, such as massaging their datasets or configurations to exaggerate their models' performance. He argues that blatant instances of fraud may compel researchers to read published work more critically and skeptically, thereby raising the bar for research. He thus concludes that collusion rings and other overt, aggressive forms of fraud may, in fact, help the ML and CS research communities develop stronger scientific norms.