Machine learning models such as LSTMs and hidden Markov models are often used to predict future events from time-series data. However, the accuracy of these models can be improved by integrating domain-specific knowledge about how past events affect the probability of future events. Here, Mei et al. propose representing this knowledge as a set of possible event types and Boolean facts in a temporal deductive database. The facts change in response to update rules and deduction rules. Each fact is associated with an embedding that represents the fact's provenance, including its experience of past events. For facts that determine the set of possible events, the architecture also computes the probabilities of the corresponding event types. Mei et al. show that this approach, Neural Datalog Through Time, improves prediction in synthetic and real-world domains.
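To make the architecture concrete, the following is a minimal sketch of the idea: Boolean facts live in a database, each carrying an embedding that is updated as events arrive, and an event type licensed by a fact gets a positive intensity (event rate) computed from that fact's embedding. All names here (`Database`, `assert_fact`, `update`, `intensity`) are illustrative assumptions, not the paper's actual API; the paper uses learned LSTM-style cells where this sketch uses a fixed `tanh` update.

```python
import math
import random

D = 4  # embedding dimension (illustrative choice)

def random_vec(rng):
    return [rng.uniform(-1, 1) for _ in range(D)]

class Database:
    """Toy temporal deductive database with embedded facts."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.facts = {}    # fact name -> embedding (the fact's "provenance")
        self.weights = {}  # event type -> weight vector for its intensity

    def assert_fact(self, name):
        # A newly derived Boolean fact starts with a fresh embedding.
        self.facts.setdefault(name, random_vec(self.rng))

    def update(self, name, event_vec):
        # Update rule: fold an observed event into the fact's embedding.
        # (A stand-in for the paper's learned update; here just tanh of a sum.)
        old = self.facts[name]
        self.facts[name] = [math.tanh(o + e) for o, e in zip(old, event_vec)]

    def intensity(self, event_type, fact_name):
        # An event type licensed by a fact gets a rate computed from that
        # fact's embedding; softplus keeps the rate strictly positive.
        w = self.weights.setdefault(event_type, random_vec(self.rng))
        score = sum(wi * xi for wi, xi in zip(w, self.facts[fact_name]))
        return math.log1p(math.exp(score))  # softplus(score)

db = Database()
db.assert_fact("holding(alice, ball)")
db.update("holding(alice, ball)", [0.5] * D)
rate = db.intensity("throw(alice)", "holding(alice, ball)")
print(rate > 0)  # → True: intensities are always positive
```

In the actual model, the embedding updates and intensity functions are parameterized neural networks trained end-to-end, and the set of facts (hence the set of possible events) is itself derived by Datalog-style rules rather than asserted by hand.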