Although pre-trained language models achieve state-of-the-art results on several benchmarks and show strong fluency and grammatical correctness, it is unclear whether these models can produce non-repetitive utterances that reflect factual world knowledge. Before adopting these models for real-world use cases, researchers will need fact-checking mechanisms. In this paper, Massarelli et al. propose an experimental methodology for appraising the repetitiveness and verifiability of generated text, which they apply to evaluate various decoding algorithms. Based on their findings, they propose Delayed Beam Search (DelayedBS), a new decoding strategy that alternates between sampling and searching for the most likely continuation, to generate text that is both less repetitive and verifiable.
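To make the idea concrete, below is a minimal sketch of a DelayedBS-style decoding loop using the HuggingFace `transformers` API. It is an illustration under simplifying assumptions, not the authors' implementation: the delay length, beam size, prompt, and model choice are all hypothetical, and where the paper restarts the sample-then-search cycle at each sentence boundary, this sketch collapses it into a single pass for brevity.

```python
# Sketch of Delayed Beam Search: sample the first few tokens to inject
# diversity (reducing repetition), then switch to beam search, which
# tends to stay closer to high-likelihood, verifiable statements.
# Hyperparameters below are illustrative, not the paper's tuned values.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Eiffel Tower is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

delay = 5        # number of initially sampled tokens (hypothetical)
beam_size = 5    # beams used after the delay (hypothetical)
total_new = 50   # total number of tokens to generate

with torch.no_grad():
    # Phase 1: sample the first `delay` tokens.
    sampled = model.generate(
        input_ids,
        do_sample=True,
        top_k=50,
        max_new_tokens=delay,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Phase 2: continue the sampled prefix with beam search.
    output = model.generate(
        sampled,
        do_sample=False,
        num_beams=beam_size,
        max_new_tokens=total_new - delay,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The design intuition is that the initial sampled tokens steer each sentence onto a fresh trajectory, while the subsequent beam search keeps the rest of the sentence anchored to high-likelihood text.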