Medical imaging, VR, and other applications may require models that can reconstruct the 3D geometry of a human body from 2D images. However, existing approaches to 3D human digitization either require expensive multi-view capture systems or fail to recover fine details such as fingers and facial features. To address this, Facebook Reality Labs released an open-source implementation of PIFuHD, an end-to-end trainable architecture that produces a detailed 3D reconstruction of a clothed human from a single high-resolution image. The approach also yields more accurate textures by aligning each pixel in the 2D image with its corresponding point in the 3D field, and it infers the unobserved back of the person to complete the reconstruction.
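The pixel-aligned idea can be sketched as a small implicit function: project a 3D query point into the image, sample a per-pixel feature there, and feed that feature plus the point's depth to an MLP that predicts occupancy. The sketch below is a minimal illustration under assumed simplifications (an orthographic projection, a random feature map, and toy MLP weights); it is not the released PIFuHD model or its API.

```python
import numpy as np

def bilinear_sample(feat, u, v):
    """Sample an (H, W, C) feature map at continuous pixel coords (u, v)."""
    h, w, _ = feat.shape
    u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    du, dv = u - u0, v - v0
    return (feat[v0, u0] * (1 - du) * (1 - dv)
            + feat[v0, u1] * du * (1 - dv)
            + feat[v1, u0] * (1 - du) * dv
            + feat[v1, u1] * du * dv)

def occupancy(feat, weights, biases, point):
    """Pixel-aligned implicit function: f(F(pi(X)), z) -> occupancy in (0, 1).

    `point` is (x, y, z); a toy orthographic projection pi drops z, so the
    pixel feature is sampled at (x, y) and concatenated with the depth z.
    The weights here are placeholders, not trained parameters.
    """
    x, y, z = point
    phi = bilinear_sample(feat, x, y)          # pixel-aligned feature
    h = np.concatenate([phi, [z]])             # feature + depth
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)         # ReLU hidden layers
    logit = weights[-1] @ h + biases[-1]
    return 1.0 / (1.0 + np.exp(-logit[0]))     # sigmoid -> occupancy
```

Evaluating this function on a dense grid of 3D points and extracting the 0.5 level set (e.g. with marching cubes) would produce the reconstructed surface; PIFuHD additionally runs a second, higher-resolution pixel-aligned module to sharpen fine detail.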