Serverless query processing platforms such as Athena and BigQuery dynamically allocate resources per query. However, because complex resource-consumption behavior is hard to predict, users often misallocate resources, significantly increasing cloud costs. To address this, Pimpley et al. present an approach that uses machine learning to predict the optimal resource allocation for each query at compile time in SCOPE, the system that runs internal big data workloads at Microsoft. They frame the task as predicting the performance characteristic curve, an exponentially decaying curve that captures the relationship between allocated resources and query performance. The predicted curve also lets users explore performance/resource trade-offs and adjust their allocation to match their requirements.
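To make the trade-off exploration concrete, here is a minimal sketch of how a fitted exponentially decaying performance characteristic curve could be queried to pick an allocation. The functional form runtime(r) = a·exp(-b·r) + c matches the decay shape described above, but the parameter values, function names, and the 5% slack policy are illustrative assumptions, not the authors' actual model.

```python
import math

# Hypothetical fitted parameters for one query's performance
# characteristic curve (PCC); illustrative values only.
a, b, c = 120.0, 0.05, 30.0  # decay amplitude (s), per-token decay rate, runtime floor (s)

def predicted_runtime(tokens: int) -> float:
    """Predicted runtime in seconds when `tokens` resource units are allocated."""
    return a * math.exp(-b * tokens) + c

def min_tokens_within(slack: float, max_tokens: int = 500) -> int:
    """Smallest allocation whose predicted runtime is within `slack`
    (e.g. 0.05 = 5%) of the asymptotic best runtime c -- one way a user
    could trade a little performance for a much cheaper allocation."""
    target = c * (1.0 + slack)
    for r in range(1, max_tokens + 1):
        if predicted_runtime(r) <= target:
            return r
    return max_tokens

print(min_tokens_within(0.05))  # cheapest allocation within 5% of best runtime
```

Because the curve flattens exponentially, the search above typically returns far fewer resource units than the point of diminishing returns, which is exactly the misallocation the paper targets.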