To meet low-latency requirements, compute functions, such as those associated with emerging AI workloads like parallel tree search and reinforcement learning, may need direct access to their in-memory object stores. However, most existing object stores require clients to interface with the server over network sockets or inter-process communication (IPC). In contrast, Zhuo et al. present Lightning, which performs metadata and data operations through shared memory, eliminating IPC overhead entirely. In addition, they implement and formally verify metadata integrity guarantees, which ensure that clients cannot modify the object store’s key data structures (using Intel MPK hardware), and metadata consistency guarantees, which ensure that crashed clients do not corrupt metadata (using undo logging).
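As a rough illustration of the MPK mechanism mentioned above (not Lightning's actual code), the sketch below uses Linux's pkey_* system call wrappers to tag a metadata region with a protection key, leaving the pages readable but not writable during normal client execution and enabling writes only inside a trusted update path. It assumes a CPU and kernel with protection-key support; the update step shown is a hypothetical stand-in for real metadata maintenance.

```c
/* Minimal sketch of MPK-based metadata protection, assuming pkey support.
 * Names such as the "metadata update" step are illustrative only. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

#define META_SIZE 4096

int main(void) {
    /* Stand-in for the shared-memory metadata region (anonymous mapping
     * here; a real object store would use a shared segment). */
    void *meta = mmap(NULL, META_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (meta == MAP_FAILED) { perror("mmap"); return 1; }

    /* Allocate a protection key and tag the metadata pages with it. */
    int pkey = pkey_alloc(0, 0);
    if (pkey < 0) { perror("pkey_alloc (no MPK support?)"); return 1; }
    if (pkey_mprotect(meta, META_SIZE, PROT_READ | PROT_WRITE, pkey) != 0) {
        perror("pkey_mprotect"); return 1;
    }

    /* Default client state: metadata is readable but not writable,
     * so a stray store into this region faults instead of corrupting it. */
    pkey_set(pkey, PKEY_DISABLE_WRITE);

    /* Trusted library path: briefly enable writes, perform the update,
     * then drop write access again. */
    pkey_set(pkey, 0);
    memset(meta, 0, META_SIZE);   /* hypothetical metadata update */
    pkey_set(pkey, PKEY_DISABLE_WRITE);

    printf("metadata updated under MPK protection\n");
    munmap(meta, META_SIZE);
    return 0;
}
```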