Although several datasets exist for evaluating models that must locate specific information within large bodies of text, each has its own format, preprocessing assumptions, knowledge sources, and other features. To help advance R&D on task-agnostic and task-specific model architectures for knowledge-intensive tasks, Facebook has released the Knowledge Intensive Language Tasks (KILT) benchmark. KILT aligns 11 widely used public datasets for fact checking, open-domain question answering, slot filling, entity linking, and dialogue generation with a single corpus – a recent snapshot of Wikipedia.
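To make the "single format" concrete, the sketch below builds and parses one record in KILT's unified JSON-lines shape, where every task maps its examples to an `input` string and a list of `output` objects whose `provenance` entries point into the shared Wikipedia snapshot. The top-level field names (`id`, `input`, `output`, `answer`, `provenance`, `wikipedia_id`) follow the KILT release; the example values and helper functions are hypothetical illustrations.

```python
import json

# A sketch of KILT's unified record format. Field names follow the
# KILT release; the concrete values below are hypothetical examples.
record = {
    "id": "example-0",
    "input": "Where is the Eiffel Tower located?",
    "output": [
        {
            "answer": "Paris",
            "provenance": [
                # wikipedia_id and title here are illustrative only
                {"wikipedia_id": "9202", "title": "Eiffel Tower"}
            ],
        }
    ],
}

def gold_answers(rec):
    """Collect all answer strings from a KILT-style record."""
    return [o["answer"] for o in rec["output"] if "answer" in o]

def provenance_pages(rec):
    """Collect the Wikipedia page ids cited as evidence."""
    return {
        p["wikipedia_id"]
        for o in rec["output"]
        for p in o.get("provenance", [])
    }

# Records are stored one JSON object per line, so a round trip
# through json.dumps/json.loads mirrors reading a KILT data file.
line = json.dumps(record)
parsed = json.loads(line)
print(gold_answers(parsed))              # ['Paris']
print(sorted(provenance_pages(parsed)))  # ['9202']
```

Because fact checking, QA, slot filling, entity linking, and dialogue all share this record shape, a single data loader and a single evidence-retrieval index over the Wikipedia snapshot can serve all 11 datasets.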