As machine learning practitioners work with ever-larger models and datasets, new frameworks for parallelizing existing methods continue to emerge. Most recently, the Multi-Agent Research group at Shanghai Jiao Tong University has released MALib, a framework that parallelizes population-based multi-agent reinforcement learning (PB-MARL) approaches, including Policy Space Response Oracles (PSRO), Self-Play, and Neural Fictitious Self-Play, in distributed computing environments. In addition, MALib is designed to enable code reuse and to promote RL research.
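To make the shared structure of these PB-MARL methods concrete, the following is a minimal, framework-agnostic sketch of the population-based training loop they have in common. All function and variable names here are hypothetical illustrations, not MALib's actual API; the expensive best-response step is exactly what a framework like MALib would distribute across workers.

```python
# A minimal sketch of the population-based training loop shared by
# PB-MARL methods such as PSRO and Self-Play. The helper below is a
# hypothetical placeholder, not part of MALib's API.
import random

def train_best_response(opponent_pool):
    """Placeholder: train a new policy against opponents sampled from
    the current population. This is the expensive step that distributed
    frameworks parallelize across many rollout/training workers."""
    opponent = random.choice(opponent_pool)
    return f"best_response_to[{opponent}]"

# Seed the population with an initial (e.g., random) policy.
population = ["random_policy"]

for generation in range(5):
    # Each generation adds a new policy trained against the existing
    # population, gradually expanding the strategy space explored.
    new_policy = train_best_response(population)
    population.append(new_policy)

print(population)
```

The key design point is that each generation's best-response training is independent given the current population snapshot, which is why these methods map naturally onto distributed compute.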