Although raw data is abundant, well-prepared, labeled datasets are not. As such, ML researchers have been advancing self-supervised learning models that can learn from very large uncurated datasets. Recently, Meta released SEER, a family of self-supervised computer vision models pretrained on a billion random public Instagram images. The biggest of these models (which range from 156M to 10B parameters) outperforms existing supervised and self-supervised models trained on ImageNet on 70% of benchmarks. The 10B-parameter model also exceeds the performance of existing models on fairness benchmarks and generalizes better to out-of-domain data.
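To make the idea of learning from unlabeled images concrete, here is a minimal NumPy sketch of one common self-supervised objective: the NT-Xent contrastive loss (popularized by SimCLR), which pulls together embeddings of two augmented views of the same image and pushes apart embeddings of different images. This is an illustrative example only, not SEER's actual training objective (SEER is reported to use a SwAV-style clustering approach); all names and shapes here are hypothetical.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss over a batch of paired embeddings.

    z1, z2: (n, d) arrays; row i of z1 and row i of z2 are embeddings of
    two augmented views of the same (unlabeled) image. Illustrative sketch,
    not SEER's objective.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity via unit vectors
    sim = z @ z.T / temperature                        # (2n, 2n) scaled similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # For row i, the positive is the other view of the same image.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    pos = sim[np.arange(2 * n), targets]
    return float(np.mean(logsumexp - pos))             # cross-entropy toward the positive
```

No labels appear anywhere: the supervision signal comes entirely from knowing which two rows are views of the same image, which is what lets methods like this scale to uncurated data.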