Named entity disambiguation (NED), the task of mapping textual mentions to entities in a knowledge base, is critical for many real-world NLP applications. Existing NED systems that memorize co-occurrences of textual context and entities often fail on the long tail of entities that appear rarely in training data. In contrast, Bootleg, an open-source, self-supervised system, uses both textual and structural information (e.g., type and relation signals) to predict an entity for each detected mention in an input sentence. Bootleg represents types, relations, and entities as embeddings in a simple Transformer-based architecture and achieves state-of-the-art results on three NED benchmarks.
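The core idea — combining entity, type, and relation embeddings to score candidates for a mention — can be illustrated with a minimal sketch. This is not Bootleg's actual implementation (which uses a Transformer over these signals); the tiny embedding tables, the additive combination, and the dot-product scoring below are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (illustrative)

# Hypothetical tiny knowledge base: separate embedding tables for
# entities, types, and relations (all values are random placeholders).
entity_emb = rng.normal(size=(3, D))  # 3 candidate entities
type_emb = rng.normal(size=(2, D))    # 2 entity types
rel_emb = rng.normal(size=(2, D))     # 2 relation signals

def candidate_repr(ent_id, type_id, rel_id):
    # Combine the three structural signals into one candidate vector.
    # Bootleg feeds such signals through a Transformer; a plain sum is
    # used here only to keep the sketch self-contained.
    return entity_emb[ent_id] + type_emb[type_id] + rel_emb[rel_id]

def disambiguate(mention_vec, candidates):
    # Score each (entity, type, relation) candidate against the
    # mention's context vector and return the index of the best one.
    # Because types and relations contribute to the score, a rare
    # entity with the right type can still win over a frequent one.
    scores = [mention_vec @ candidate_repr(*c) for c in candidates]
    return int(np.argmax(scores))
```

In a real pipeline the mention vector would come from a textual context encoder, and the candidate list from an alias table mapping mention strings to plausible entities.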