High Fidelity 3D Hand Shape Reconstruction via Scalable Graph Frequency Decomposition

Abstract

Despite the impressive performance obtained by recent single-image hand modeling techniques, they lack the capability to capture sufficient details of the 3D hand mesh. This deficiency greatly limits their applications when high-fidelity hand modeling is required, e.g., personalized hand modeling. To address this problem, we design a frequency split network to generate 3D hand mesh using different frequency bands in a coarse-to-fine manner. To capture high-frequency personalized details, we transform the 3D mesh into the frequency domain, and propose a novel frequency decomposition loss to supervise each frequency component. By leveraging such a coarse-to-fine scheme, hand details that correspond to the higher frequency domain can be preserved. In addition, the proposed network is scalable, and can stop the inference at any resolution level to accommodate different hardware with varying computational powers. To quantitatively evaluate the performance of our method in terms of recovering personalized shape details, we introduce a new evaluation metric named Mean Signal-to-Noise Ratio (MSNR) to measure the signal-to-noise ratio of each mesh frequency component. Extensive experiments demonstrate that our approach generates fine-grained details for high-fidelity 3D hand reconstruction, and our evaluation metric is more effective for measuring mesh details compared with traditional metrics. The code is available at https://github.com/tyluann/FreqHand.
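The "frequency domain" of a mesh referred to in the abstract is the graph Fourier domain: the eigenvectors of the mesh's graph Laplacian form a basis ordered from low to high frequency, and projecting vertex coordinates onto this basis decomposes the shape into frequency bands. The following is a minimal NumPy sketch of this standard construction (using a dense combinatorial Laplacian and a toy tetrahedron; the paper's actual decomposition and band boundaries may differ):

```python
import numpy as np

def mesh_graph_laplacian(num_verts, edges):
    """Combinatorial graph Laplacian L = D - A of a mesh's vertex graph."""
    A = np.zeros((num_verts, num_verts))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def frequency_split(vertices, laplacian, cutoff):
    """Split vertex coordinates into low- and high-frequency components.

    The eigenvectors of L (columns of U, sorted by ascending eigenvalue)
    form the graph Fourier basis; `cutoff` is the band boundary index.
    """
    _, U = np.linalg.eigh(laplacian)
    coeffs = U.T @ vertices            # graph Fourier transform, shape (N, 3)
    low = U[:, :cutoff] @ coeffs[:cutoff]
    high = U[:, cutoff:] @ coeffs[cutoff:]
    return low, high                   # low + high reconstructs the input

# Toy example: a tetrahedron (4 vertices, 6 edges).
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
L = mesh_graph_laplacian(4, edges)
low, high = frequency_split(verts, L, cutoff=2)
```

Because the eigenbasis is orthonormal, the low- and high-frequency parts sum exactly back to the original vertices, which is what lets each band be supervised with its own loss term.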
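One plausible reading of the proposed MSNR metric, given the description above, is a per-frequency-component signal-to-noise ratio averaged over all components; the exact formula is defined in the paper, so the sketch below is only an illustration of the idea, with `basis` standing for the graph-Laplacian eigenvector matrix from the decomposition:

```python
import numpy as np

def mean_snr(pred_verts, gt_verts, basis, eps=1e-8):
    """Illustrative mean signal-to-noise ratio over mesh frequency components.

    pred_verts, gt_verts: (N, 3) vertex arrays of aligned meshes.
    basis: (N, N) orthonormal graph Fourier basis (Laplacian eigenvectors).
    Per component: signal = ground-truth coefficient magnitude,
    noise = prediction error magnitude; the dB values are then averaged.
    """
    c_pred = basis.T @ pred_verts
    c_gt = basis.T @ gt_verts
    signal = np.linalg.norm(c_gt, axis=1)
    noise = np.linalg.norm(c_pred - c_gt, axis=1)
    snr_db = 20.0 * np.log10((signal + eps) / (noise + eps))
    return snr_db.mean()

# A perturbed prediction should score lower than an exact one.
gt = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
noisy = gt + 0.01
score_exact = mean_snr(gt, gt, np.eye(4))
score_noisy = mean_snr(noisy, gt, np.eye(4))
```

Unlike a mean per-vertex distance, a per-component SNR weights high-frequency bands explicitly, which is why the abstract argues it is more sensitive to fine surface detail than traditional metrics.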

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
Yuanhao Zhai