Many ML practitioners have achieved state-of-the-art performance by increasing the number of parameters in deep neural networks, but this approach is out of reach in compute-constrained environments. Model compression can mitigate the problem when data is abundant, yet it may not work as well on low-resource datasets. Here, Ahia et al. examine the impact of pruning on neural machine translation models that translate between English and low-resource languages (Yoruba, Hausa, Igbo). They show that magnitude pruning leaves performance on frequent sentences largely intact, but it can disproportionately degrade performance on less frequent input patterns. Nonetheless, sparsity may also improve generalization to unseen data.
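As a concrete illustration of the technique under study, the sketch below shows one-shot global magnitude pruning in PyTorch: weights whose absolute value falls below a threshold implied by the target sparsity are set to zero. This is a minimal sketch of the general idea, not the authors' training-time pruning setup; the function name `magnitude_prune` and the 90% sparsity level are chosen only for the example.

```python
import torch

def magnitude_prune(weights: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(sparsity * weights.numel())
    if k == 0:
        return weights.clone()
    # The k-th smallest absolute value serves as the pruning threshold.
    threshold = torch.kthvalue(weights.abs().flatten(), k).values
    # Keep only weights strictly above the threshold; the rest become zero.
    return weights * (weights.abs() > threshold)

# Example: prune 90% of a random 512x512 weight matrix.
w = torch.randn(512, 512)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity achieved: {(w_pruned == 0).float().mean().item():.2%}")
```

In practice, pruning is usually applied gradually over the course of training rather than in a single shot, but the magnitude-thresholding step is the same.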