Although large language models like GPT-3 achieve state-of-the-art results, they are difficult to adapt or control, for example to prevent the generation of biased or offensive content. GeDi (Generative Discriminator) is a method for controlling language model generation without fine-tuning the model directly. GeDi uses a small class-conditional language model to guide generation from other pre-trained, potentially much larger language models. This approach is significantly less expensive than fine-tuning the large model and does not risk degrading the diverse generation capabilities of the original model. GeDi can be used for tasks such as sentiment control, detoxification, topic-conditioned generation, and more.
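The guidance step can be sketched in miniature: at each decoding step, the class-conditional model scores every candidate next token under the desired class and an opposing class; Bayes' rule turns those scores into per-token class probabilities, which then reweight the base model's next-token distribution. The following NumPy sketch is illustrative only — the logit values, the uniform class prior, and the guidance exponent `omega` are assumptions for the example, not values from GeDi itself:

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def gedi_reweight(base_logits, cc_pos_logits, cc_neg_logits, omega=1.0):
    # Base LM's next-token distribution over the vocabulary.
    p_base = softmax(base_logits)
    # CC-LM likelihoods of each candidate token under the desired
    # class (pos) and the opposing class (neg).
    p_pos = softmax(cc_pos_logits)
    p_neg = softmax(cc_neg_logits)
    # Bayes' rule with a uniform class prior: per-token probability
    # that the token continues text of the desired class.
    p_class = p_pos / (p_pos + p_neg)
    # Reweight the base distribution; omega sharpens the guidance.
    guided = p_base * p_class ** omega
    return guided / guided.sum()

# Toy 3-token vocabulary: the CC-LM strongly associates token 1
# with the desired class and token 0 with the opposing class.
base = np.array([2.0, 1.0, 0.5])
pos = np.array([0.0, 3.0, 0.0])
neg = np.array([3.0, 0.0, 0.0])
p = gedi_reweight(base, pos, neg, omega=1.0)
```

After reweighting, token 1 gains probability mass relative to the base distribution, while token 0 (which the discriminator attributes to the opposing class) is suppressed — this is how guidance can steer generation without touching the base model's weights.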