Real-life user data is needed to train models for several natural language processing tasks. To respect the privacy of users while preserving performance, sensitive data is often replaced with surrogates: either a randomly selected word from the same category or a generic category marker. In this paper, Adelani et al. apply differential privacy to derive formal privacy guarantees for these de-identification strategies. In addition, they evaluate five different text transformation approaches on three common NLP tasks using six different corpora. Through these experiments, they find that only word-by-word replacement is robust against performance drops.
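The two surrogate strategies described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the category lexicons and the `deidentify` helper are hypothetical, and a real pipeline would identify sensitive spans with a named-entity recognizer rather than a fixed word list.

```python
import random

# Hypothetical category lexicons; a real system would obtain
# sensitive spans and their categories from an NER model.
CATEGORIES = {
    "NAME": ["Alice", "Bob", "Carol", "Dave"],
    "CITY": ["Berlin", "Paris", "Oslo"],
}

# Inverted index: surface form -> category label.
WORD_TO_CATEGORY = {w: c for c, ws in CATEGORIES.items() for w in ws}


def deidentify(tokens, strategy="marker", rng=random):
    """Replace each sensitive token with a surrogate.

    strategy="marker": substitute the generic category marker, e.g. <NAME>.
    strategy="random": substitute a random word from the same category.
    """
    out = []
    for tok in tokens:
        cat = WORD_TO_CATEGORY.get(tok)
        if cat is None:
            out.append(tok)  # non-sensitive token, kept as-is
        elif strategy == "marker":
            out.append(f"<{cat}>")
        else:
            out.append(rng.choice(CATEGORIES[cat]))
    return out


print(deidentify(["Alice", "visited", "Paris"], strategy="marker"))
# → ['<NAME>', 'visited', '<CITY>']
```

The marker strategy is deterministic, while the random-word strategy draws a fresh surrogate per occurrence; the paper's differential-privacy analysis concerns how much such substitutions actually hide about the original token.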