In previous decades, AI research was split into largely independent subfields such as vision, speech, natural language processing, and reinforcement learning. In recent years, however, researchers across these subfields have increasingly converged on neural networks built around the Transformer architecture, a single design that can process many different data types across many different tasks. In this thread, Andrej Karpathy discusses how this convergence is enabling more rapid innovation, and he predicts that the consolidation of architectures will further reshape software, hardware, and infrastructure.
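The sketch below (not from the thread itself) illustrates the consolidation point in PyTorch: the same Transformer encoder can consume either text tokens or image patches, with only the input embedding layer changing between modalities. All dimensions and layer sizes here are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# One shared Transformer encoder; only the embedding step differs per modality.
d_model = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=4,
)

# Text: map token ids into d_model-dimensional embeddings.
token_embed = nn.Embedding(num_embeddings=30_000, embedding_dim=d_model)
text_ids = torch.randint(0, 30_000, (2, 128))      # (batch, seq_len)
text_features = encoder(token_embed(text_ids))     # (2, 128, 256)

# Vision: flatten 16x16 RGB patches and project them to the same width.
patch_embed = nn.Linear(16 * 16 * 3, d_model)
patches = torch.randn(2, 196, 16 * 16 * 3)         # 14x14 patches from a 224x224 image
image_features = encoder(patch_embed(patches))     # (2, 196, 256)
```

Because both modalities end up as sequences of d_model-wide vectors, the downstream stack of attention layers is identical, which is exactly the kind of architectural reuse the thread describes.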