Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps as well as a lack of sufficient data for cross-modality model training. To overcome this problem, we propose a novel method for paired NIR-VIS facial image generation. Specifically, we reconstruct 3D face shape and reflectance from a large 2D facial dataset and introduce a novel method of transforming the VIS reflectance to NIR reflectance. We then use a physically-based renderer to generate a vast, high-resolution and photorealistic dataset consisting of various poses and identities in the NIR and VIS spectra. Moreover, to facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss, which not only reduces the modality gap between NIR and VIS images at the domain level but also encourages the network to focus on the identity features instead of facial details, such as poses and accessories. Extensive experiments conducted on four challenging NIR-VIS face recognition benchmarks demonstrate that the proposed method can achieve comparable performance with the state-of-the-art (SOTA) methods without requiring any existing NIR-VIS face recognition datasets. With slight fine-tuning on the target NIR-VIS face recognition datasets, our method can significantly surpass the SOTA performance.
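To illustrate the idea behind an identity-based discrepancy loss, the sketch below compares per-identity mean embeddings across the two modalities instead of individual images, so the penalty acts on identity-level statistics rather than image-level details. This is a minimal, hypothetical sketch: the function name, the use of a simple squared distance between identity centroids (rather than the paper's exact kernel-based MMD formulation), and the NumPy setting are all assumptions for illustration only.

```python
import numpy as np

def id_mmd_loss(nir_feats, vis_feats, labels):
    """Hypothetical identity-level discrepancy loss (illustrative only).

    nir_feats, vis_feats: (N, D) arrays of embeddings from each modality.
    labels: (N,) integer identity labels, aligned across both arrays.

    For each identity, compute the mean embedding in each modality and
    penalize the squared distance between the two centroids, then average
    over identities. Matching centroids pulls the modalities together at
    the identity level while leaving per-image variation (pose,
    accessories) unconstrained.
    """
    ids = np.unique(labels)
    total = 0.0
    for pid in ids:
        mask = labels == pid
        mu_nir = nir_feats[mask].mean(axis=0)  # identity centroid in NIR
        mu_vis = vis_feats[mask].mean(axis=0)  # identity centroid in VIS
        total += np.sum((mu_nir - mu_vis) ** 2)
    return total / len(ids)
```

Because the loss depends only on per-identity means, two embeddings of the same person with different poses contribute identically once averaged, which is the intuition behind steering the network toward identity features.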