Since late 2015, LinkedIn has used Venice – a key-value store optimized for serving read-heavy workloads such as derived data – for use cases like recommendation engines. In late 2018, however, LinkedIn had to evolve Venice to support large batch-gets with thousands of keys per request (e.g. for applications like “People You May Know”). In this post, Gaojie Liu describes how his team extended Venice to support this large fanout while keeping network bandwidth usage and latency low. He reviews techniques such as pushing computation down to the Venice server layer, switching the storage engine to RocksDB, using a queue-based mechanism to select which replica serves each incoming read, and implementing connection warming strategies to reduce HTTPS connection setup time.
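
The post only names the "queue-based mechanism" for replica selection without giving details, but the general idea can be sketched as least-loaded routing: send each request to the replica with the fewest in-flight requests. The class and replica names below are hypothetical, not Venice's actual implementation:

```python
class LeastLoadedReplicaSelector:
    """Hedged sketch of queue-based replica selection: route each
    read to the replica with the fewest outstanding (queued)
    requests, so a slow replica naturally receives less traffic."""

    def __init__(self, replicas):
        # Outstanding-request count per replica (the "queue depth").
        self.pending = {replica: 0 for replica in replicas}

    def acquire(self):
        # Choose the replica with the smallest queue depth;
        # ties resolve to the first replica registered.
        replica = min(self.pending, key=self.pending.get)
        self.pending[replica] += 1
        return replica

    def release(self, replica):
        # Call once the replica's response arrives (or times out).
        self.pending[replica] -= 1


selector = LeastLoadedReplicaSelector(["replica-a", "replica-b", "replica-c"])
r1 = selector.acquire()   # all queues empty, first replica chosen
r2 = selector.acquire()   # r1 is now busier, so a different replica wins
selector.release(r1)      # r1's response arrived; it is idle again
```

A real router would also account for health checks and retries, but the core property is the same: queue depth acts as a live signal of each replica's current load.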