Data quality issues can degrade the performance of ML systems – including those deployed in high-stakes domains where failures directly affect safety (e.g., cancer detection, suicide prevention, landslide detection). In this paper, Sambasivan et al. study "data cascades" (compounding downstream failures triggered by data quality issues, which accumulate as technical debt over time) through interviews with 53 AI practitioners around the world. They find that AI practitioners are not properly incentivized to address data quality problems; instead, they are rewarded for model development and for shortening development cycles. The authors suggest that HCI researchers can build tools and interfaces that facilitate data excellence.