Many approaches to explaining AI systems examine correlations between features and model predictions. In contrast, Galhotra et al. propose a causality-based approach that uses probabilistic contrastive counterfactuals to quantify, in human-understandable terms, the direct and indirect impact of a feature on an AI system's decisions at the global, local, and sub-population levels. Their system, LEWIS, is a model-agnostic, post-hoc explanation method that computes novel probabilistic measures, necessity and sufficiency scores, which capture the degree to which a feature is necessary or sufficient for an algorithm's decision.
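These scores build on Pearl-style probabilities of causation. As a rough illustration of the idea (not the authors' LEWIS implementation, which must also contend with confounding and identifiability), the sketch below estimates both scores for a binary feature by querying a trained black-box model under simulated interventions, assuming the feature is unconfounded so that the counterfactual probabilities reduce to simple interventional frequencies. The model, data, and helper names here are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data: one binary feature of interest plus two numeric covariates.
n = 5000
feature = rng.integers(0, 2, size=n)        # X: the feature being explained
covariates = rng.normal(size=(n, 2))        # Z: other model inputs
logits = 2.0 * feature + covariates[:, 0] - 0.5 * covariates[:, 1]
outcome = (logits + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([feature, covariates])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, outcome)

def intervene(X, feature_idx, value):
    """Return a copy of X with one feature forced to `value` (a do-style intervention)."""
    X_do = X.copy()
    X_do[:, feature_idx] = value
    return X_do

pred = model.predict(X)

# Necessity-style score: among individuals with X=1 who received a positive
# decision, how often does setting X:=0 flip the decision to negative?
pos = (X[:, 0] == 1) & (pred == 1)
necessity = (model.predict(intervene(X[pos], 0, 0)) == 0).mean()

# Sufficiency-style score: among individuals with X=0 who received a negative
# decision, how often does setting X:=1 flip the decision to positive?
neg = (X[:, 0] == 0) & (pred == 0)
sufficiency = (model.predict(intervene(X[neg], 0, 1)) == 1).mean()

print(f"necessity score   = {necessity:.2f}")
print(f"sufficiency score = {sufficiency:.2f}")
```

Because the procedure only queries the model's predictions, it is post-hoc and model-agnostic in the same spirit as LEWIS; the paper's contribution lies in computing these counterfactual probabilities rigorously when such simplifying assumptions do not hold.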