Data preparation and cleaning (collectively, data validation) form an iterative process that is integral to data analysis and modeling. While dataframe libraries can facilitate data validation, most don’t scale well to larger datasets, so the data validation workflow is often slow and tedious. Xin et al. propose opportunistic evaluation: an approach that accelerates data validation by decoupling the specification and execution of dataframe queries, such that some queries can be computed during “think time” (i.e., while users consider their next step or write code). Opportunistic evaluation uses program slicing to prioritize the computation of interactions and the code that influences them, while computing non-critical portions during think time.
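To make the decoupling concrete, here is a minimal sketch of the idea, not the authors' implementation: the `LazyFrame` and `OpportunisticScheduler` classes and their methods are hypothetical names. Query specification merely records operators; an interaction forces execution of the query it depends on, while deferred queries are evaluated opportunistically during simulated think time.

```python
from collections import deque

class LazyFrame:
    """Records dataframe-like operations without executing them (a sketch;
    rows are plain dicts standing in for a real dataframe)."""
    def __init__(self, data, ops=None):
        self._data = data        # underlying rows
        self._ops = ops or []    # recorded, deferred operations
        self._result = None      # cached result once computed

    def filter(self, pred):
        # Specification only: append the operator, defer execution.
        return LazyFrame(self._data, self._ops + [("filter", pred)])

    def select(self, *cols):
        return LazyFrame(self._data, self._ops + [("select", cols)])

    def compute(self):
        """Force execution of the recorded operator chain."""
        if self._result is None:
            rows = self._data
            for kind, arg in self._ops:
                if kind == "filter":
                    rows = [r for r in rows if arg(r)]
                elif kind == "select":
                    rows = [{c: r[c] for c in arg} for r in rows]
            self._result = rows
        return self._result

class OpportunisticScheduler:
    """Computes queries needed for an interaction eagerly; other queries
    wait in a background queue and run during think time."""
    def __init__(self):
        self._background = deque()

    def submit(self, frame):
        self._background.append(frame)   # non-critical: defer

    def interact(self, frame):
        return frame.compute()           # critical path: compute now

    def think_time(self, budget=1):
        """Opportunistically evaluate up to `budget` deferred queries
        while the user is idle; returns how many were computed."""
        done = 0
        while self._background and done < budget:
            self._background.popleft().compute()
            done += 1
        return done

data = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
sched = OpportunisticScheduler()
critical = LazyFrame(data).filter(lambda r: r["a"] > 1).select("b")
deferred = LazyFrame(data).select("a")
sched.submit(deferred)            # specified now, executed later
print(sched.interact(critical))   # → [{'b': 4}]
sched.think_time()                # deferred query runs during idle time
```

A real system would also need the program-slicing step to decide which code influences an interaction; here that prioritization is reduced to the caller's choice between `interact` and `submit`.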