The authors investigate and report insights on the training dynamics of deep neural networks, building on a recent paper, “Deep Ensembles: A Loss Landscape Perspective,” which found that ensembles of models trained from different random initializations are more functionally dissimilar than ensembles formed from snapshots of a single model at different points in training. They compute and visualize the cosine similarity between the weight vectors of models within each ensemble, and likewise visualize the disagreement between the models’ predictions. Finally, they produce convergence-trajectory plots and supply reproducible code and analysis to enable continued exploration.
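The two diversity measures discussed above can be sketched concretely. Below is a minimal, hypothetical implementation (not the authors' code): cosine similarity between two models' flattened weight vectors, and prediction disagreement as the fraction of inputs on which two models output different classes. The function names and the assumption that weights are available as lists of NumPy arrays are illustrative choices.

```python
import numpy as np

def weight_cosine_similarity(weights_a, weights_b):
    """Cosine similarity between two models' weights, flattened into single vectors.

    weights_a / weights_b: lists of NumPy arrays (e.g., one array per layer).
    """
    va = np.concatenate([w.ravel() for w in weights_a])
    vb = np.concatenate([w.ravel() for w in weights_b])
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def prediction_disagreement(preds_a, preds_b):
    """Fraction of test inputs on which two models' predicted labels differ."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Toy example: orthogonal weight vectors, one label mismatch out of three.
sim = weight_cosine_similarity([np.array([1.0, 0.0])], [np.array([0.0, 1.0])])
dis = prediction_disagreement([0, 1, 1], [0, 1, 0])
print(sim)  # 0.0 — orthogonal weights
print(dis)  # one of three predictions differs
```

Independently initialized models tend to show low weight-space cosine similarity and high prediction disagreement, while snapshots of the same run score high similarity and low disagreement, which is the contrast these visualizations are designed to expose.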