Although researchers have proposed several definitions and conventions for algorithmic fairness, Sambasivan et al. find that these approaches reflect Western values and experiences that may not generalize to other countries and local contexts. To develop fairness frameworks that are not grounded primarily in the study of Western populations, they conduct interviews with researchers and activists, collect observations of current AI deployments in India, and apply feminist, decolonial, and anti-caste lenses to their analysis. They find that in India, models are overfitted to digitally-rich profiles, excluding many sub-groups such as those without Internet access. In addition, Indian users have few opportunities to exercise agency when interacting with AI-driven systems. Finally, India lacks the tools, policies, and stakeholders needed to make careful decisions about applying AI to high-stakes domains. The authors recommend operationalizing algorithmic fairness in India in a manner that accounts for these local structures and phenomena.