Although the ML community remains captivated by large language models, few researchers and practitioners outside highly resourced labs have full access to them. Without access, they may struggle to understand how these models work, what their limitations are, or how they ought to be governed. To address this problem, Meta AI has released OPT-175B, a 175-billion-parameter language model trained on publicly available datasets. The Meta AI team has open-sourced a suite of smaller baseline models along with code to train and deploy them on NVIDIA GPUs. The largest pretrained models are available under a noncommercial license upon request.
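As a minimal sketch of what the open release enables, the smaller baselines can be loaded and run locally; the example below assumes the Hugging Face `transformers` library and the `facebook/opt-125m` checkpoint (one of the released baselines), neither of which is named in the announcement itself.

```python
# Sketch: loading one of the smaller open-sourced OPT baselines.
# Assumes the Hugging Face `transformers` library and the `facebook/opt-125m`
# checkpoint; the largest models require a separate request under the
# noncommercial license.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same loading pattern scales to the larger baselines by swapping in a bigger checkpoint name, subject to GPU memory.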