Although ML practitioners continue to find new use cases for graph neural networks (GNNs), as mentioned earlier, hardware vendors have struggled to develop efficient FPGA accelerators that keep pace with novel GNN architectures. To address this problem, researchers from the Georgia Institute of Technology and the Beijing University of Posts and Telecommunications have open-sourced GenGNN, a generic GNN acceleration framework for ultra-fast GNN inference without any graph preprocessing or partitioning. Built with High-Level Synthesis (HLS), GenGNN centers on a highly optimized message-passing architecture that supports a wide range of GNN models and can adapt to new ones.
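To give a sense of the message-passing computation that such accelerators target, the following is a minimal sketch of one GNN layer in plain Python/NumPy, assuming sum aggregation over in-neighbors followed by a linear transform and ReLU; the function name and signature are illustrative only and do not reflect GenGNN's actual HLS implementation.

```python
import numpy as np

def message_passing_layer(node_feats, edges, weight):
    """Illustrative GNN layer: sum-aggregate neighbor features, then transform.

    node_feats: (N, F) array of node features
    edges: list of (src, dst) directed edges
    weight: (F, F_out) learned weight matrix
    """
    aggregated = np.zeros_like(node_feats)
    # Message passing: each node accumulates features from its in-neighbors.
    for src, dst in edges:
        aggregated[dst] += node_feats[src]
    # Update step: linear transform followed by ReLU nonlinearity.
    return np.maximum(aggregated @ weight, 0.0)

# Toy example: a 3-node directed cycle.
feats = np.random.rand(3, 4).astype(np.float32)
edges = [(0, 1), (1, 2), (2, 0)]
w = np.random.rand(4, 4).astype(np.float32)
out = message_passing_layer(feats, edges, w)
print(out.shape)  # (3, 4)
```

Because the aggregation step is the same irregular gather-and-accumulate pattern across many GNN variants, a framework that optimizes this message-passing core once can plausibly support multiple model families, which is the design point GenGNN aims for.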