Stateful cloud and application services store their state on secondary storage such as cloud storage or server-local SSDs. To decrease response times, stateful applications may keep a subset of their state in locally available memory or in remote memory caches, although the latter incur slightly higher access times due to network latency. However, using remote memory as a cache (e.g., when insufficient local memory is available for a stateful service) requires careful, automated RDMA configuration tuning and dynamic reconfiguration whenever remote memory availability changes. To address these challenges, Zhang et al. propose Redy, an SLO-based remote memory cache that utilizes stranded, otherwise unused server memory to meet specific performance targets while minimizing resource cost. Redy handles failures and reclamations by adjusting its cache regions when client requirements or memory availability change. The authors integrate Redy with FASTER, Microsoft’s production-grade key-value store, and show that it is significantly faster than spilling to SSD when the working set exceeds local memory.
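
To make the tiered read path concrete, the following Go sketch illustrates the general pattern described above: check local DRAM first, fall back to a remote memory cache, and only then go to secondary storage. It is a minimal illustration under assumed interfaces; the names (`TieredReader`, `RemoteCache`, `SecondaryStore`), the promotion policy, and the error handling are hypothetical and do not reflect Redy’s or FASTER’s actual APIs.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrMiss signals that a tier does not hold the requested key.
var ErrMiss = errors.New("cache miss")

// RemoteCache models a remote memory cache reached over the network,
// e.g., via one-sided RDMA reads into registered buffers (assumed interface).
type RemoteCache interface {
	Read(key string) ([]byte, error)
}

// SecondaryStore models the durable tier, e.g., a server-local SSD or cloud storage.
type SecondaryStore interface {
	Read(key string) ([]byte, error)
}

// TieredReader sketches the read path of a stateful service: local DRAM first,
// then the remote memory cache, and finally secondary storage on a miss.
type TieredReader struct {
	local  map[string][]byte
	remote RemoteCache
	store  SecondaryStore
}

func (t *TieredReader) Get(key string) ([]byte, error) {
	// 1. Local DRAM: fastest, but capacity-limited.
	if v, ok := t.local[key]; ok {
		return v, nil
	}
	// 2. Remote memory cache: one network round trip, still far faster than SSD.
	if v, err := t.remote.Read(key); err == nil {
		t.local[key] = v // promote the hot item back into local memory
		return v, nil
	} else if !errors.Is(err, ErrMiss) {
		// A remote region failed or was reclaimed; a real system would
		// trigger reconfiguration here rather than just propagating the error.
		return nil, err
	}
	// 3. Secondary storage: the durable source of truth.
	v, err := t.store.Read(key)
	if err != nil {
		return nil, err
	}
	t.local[key] = v
	return v, nil
}

// Minimal in-memory stand-ins so the sketch runs end to end.
type mapRemote map[string][]byte

func (m mapRemote) Read(key string) ([]byte, error) {
	if v, ok := m[key]; ok {
		return v, nil
	}
	return nil, ErrMiss
}

type mapStore map[string][]byte

func (m mapStore) Read(key string) ([]byte, error) {
	if v, ok := m[key]; ok {
		return v, nil
	}
	return nil, fmt.Errorf("key %q not found in durable store", key)
}

func main() {
	r := &TieredReader{
		local:  map[string][]byte{},
		remote: mapRemote{"warm": []byte("served from remote memory")},
		store:  mapStore{"warm": []byte("durable copy"), "cold": []byte("served from SSD")},
	}
	for _, k := range []string{"warm", "cold", "warm"} {
		v, _ := r.Get(k)
		fmt.Printf("%s -> %s\n", k, v)
	}
}
```

The point of the sketch is the ordering of tiers and the failure path: when the remote tier disappears (failure or reclamation), the read either falls through or triggers reconfiguration, which is the case Redy automates against an SLO rather than leaving it to manual RDMA tuning.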