OpenAI recently released DALL·E 2, a tool designed to augment the creative process by generating and editing images from natural language descriptions. DALL·E 2 relies on a two-stage model: a prior that generates an image embedding from a text caption, followed by a diffusion-based decoder that generates an image conditioned on that embedding (a conceptual sketch of this pipeline follows below). In this post, Sam Altman reflects on why DALL·E 2 is noteworthy. He predicts that humans will increasingly use natural language to interface with computers to complete more numerous and more complex tasks, and he postulates that AI will replace human labor for some jobs in the future. He also shares some insight into why OpenAI is adopting an incremental deployment strategy.
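
To make the two-stage structure concrete, here is a minimal Python sketch of the pipeline described above. Every component name (`encode_text`, `prior`, `diffusion_decoder`) is a placeholder standing in for the CLIP text encoder, the prior, and the diffusion decoder; none of this is OpenAI's actual API or code, and the bodies are dummy implementations only meant to show how the stages compose.

```python
# Conceptual sketch of DALL·E 2's two-stage generation pipeline.
# All components below are placeholders, not real OpenAI APIs.
import numpy as np

def encode_text(caption: str) -> np.ndarray:
    """Placeholder for a CLIP-style text encoder: caption -> text embedding."""
    rng = np.random.default_rng(abs(hash(caption)) % (2**32))
    return rng.standard_normal(512)

def prior(text_embedding: np.ndarray) -> np.ndarray:
    """Placeholder for stage 1: map a text embedding to an image embedding."""
    noise = np.random.default_rng(0).standard_normal(text_embedding.shape)
    return text_embedding + 0.1 * noise

def diffusion_decoder(image_embedding: np.ndarray, steps: int = 50) -> np.ndarray:
    """Placeholder for stage 2: iteratively denoise an image,
    conditioned on the image embedding."""
    image = np.random.default_rng(1).standard_normal((64, 64, 3))  # start from noise
    for _ in range(steps):
        # A real decoder would run a learned denoising step here.
        image = 0.9 * image + 0.1 * image_embedding.mean()
    return image

def generate(caption: str) -> np.ndarray:
    """Full pipeline: caption -> image embedding (prior) -> image (decoder)."""
    text_emb = encode_text(caption)
    image_emb = prior(text_emb)
    return diffusion_decoder(image_emb)

image = generate("an astronaut riding a horse in a photorealistic style")
print(image.shape)  # (64, 64, 3)
```

The point of the sketch is the composition, not the internals: the prior and the decoder are trained separately, and generation simply chains them, which is why the same decoder can also support editing an existing image by conditioning on its embedding.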