In industry, data practitioners often evaluate ML/AI systems by comparing model performance to human performance. To support such comparisons for computer vision models, researchers from the Bethge Lab at the University of Tübingen have released modelvshuman, a Python toolbox for evaluating both PyTorch and TensorFlow models on 17 out-of-distribution datasets, complete with human comparison data collected under highly controlled laboratory conditions. The repository also includes a model zoo with implementations of Vision Transformer variants, BagNet, and other popular models.
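To illustrate the kind of comparison such a toolbox enables, here is a minimal, self-contained sketch (not the library's API) that computes accuracy for a model and a human observer on the same stimuli, plus "error consistency" (Cohen's kappa over trial-level correct/incorrect outcomes), a metric used in the associated research to ask whether a model makes errors on the same images as humans. The labels and predictions below are hypothetical toy data.

```python
# Hypothetical toy data: ground-truth labels and the responses of a model
# and a human observer on the same eight stimuli.
ground_truth = ["cat", "dog", "car", "bird", "cat", "dog", "car", "bird"]
model_preds  = ["cat", "dog", "cat", "bird", "dog", "dog", "car", "bird"]
human_preds  = ["cat", "dog", "car", "bird", "cat", "cat", "car", "dog"]

def accuracy(preds, truth):
    """Fraction of trials answered correctly."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def error_consistency(preds_a, preds_b, truth):
    """Cohen's kappa over binary correct/incorrect trial outcomes.

    Measures whether two observers err on the same trials more (kappa > 0)
    or less (kappa < 0) often than expected from their accuracies alone.
    """
    a = [p == t for p, t in zip(preds_a, truth)]
    b = [p == t for p, t in zip(preds_b, truth)]
    n = len(truth)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a, p_b = sum(a) / n, sum(b) / n
    # Agreement expected by chance given each observer's accuracy.
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

print(f"model accuracy: {accuracy(model_preds, ground_truth):.2f}")   # 0.75
print(f"human accuracy: {accuracy(human_preds, ground_truth):.2f}")   # 0.75
print(f"error consistency: {error_consistency(model_preds, human_preds, ground_truth):.2f}")
```

Note that equal accuracies do not imply similar behavior: the kappa value reveals whether the model fails on the same images a human would, which is exactly the kind of fine-grained comparison the human lab data makes possible.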