Several papers and articles have discussed the threat of adversarial examples crafted to fool machine learning models into making incorrect predictions, including contexts where such errors could cause significant harm. Far less attention has been paid to neural networks as a delivery vehicle for malware. In this paper, Wang et al. demonstrate how attackers might embed large malware payloads in the redundant neurons of a neural network. They show that malware can be embedded in an AlexNet model with minimal accuracy loss and without being detected by antivirus engines.
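To make the core trick concrete, here is a minimal sketch of byte-substitution steganography in float32 weights, in the spirit of the paper's approach: keep each weight's high byte (sign and top exponent bits) and overwrite its three low-order bytes with payload data, so the perturbed values stay in roughly the same range. The function names and the toy payload are hypothetical illustrations, not the authors' code.

```python
import struct

def embed_bytes(weights, payload):
    """Hide payload bytes in the 3 low-order bytes of each float32 weight.

    The high byte (sign + top exponent bits) is preserved, so each
    modified weight keeps roughly its original magnitude; the paper
    reports that such perturbations cost little model accuracy.
    """
    chunks = [payload[i:i + 3] for i in range(0, len(payload), 3)]
    assert len(chunks) <= len(weights), "not enough weights to hold payload"
    out = []
    for w, chunk in zip(weights, chunks):
        raw = bytearray(struct.pack('<f', w))   # little-endian float32 bytes
        raw[0:3] = chunk.ljust(3, b'\x00')      # overwrite low 3 bytes
        out.append(struct.unpack('<f', bytes(raw))[0])
    return out + list(weights[len(chunks):])    # untouched weights pass through

def extract_bytes(stego_weights, n):
    """Recover n payload bytes from the low 3 bytes of each weight."""
    data = bytearray()
    for w in stego_weights:
        data += struct.pack('<f', w)[0:3]
        if len(data) >= n:
            break
    return bytes(data[:n])

weights = [0.123, -0.456, 0.789, 0.321]
payload = b"MALWARE"  # stand-in bytes, not an actual binary
stego = embed_bytes(weights, payload)
assert extract_bytes(stego, len(payload)) == payload
```

In the real attack the payload would be a malware binary spread across millions of parameters, extracted and reassembled at runtime; the sketch above only shows why detection is hard, since the hidden bytes look like ordinary weight noise.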