Laser & Optoelectronics Progress, Vol. 61, Issue 12, 1215006 (2024)
Visible-Infrared Person Re-Identification Via Feature Constrained Learning
Owing to the large modality gap between visible and infrared images, visible-infrared person re-identification (VI-ReID) is a challenging task. A central problem in VI-ReID is how to effectively extract discriminative information from features shared across modalities. To address this, we propose a dual-stream cross-modal person re-identification network based on the vision Transformer, which employs a modal token embedding module and a multi-resolution feature extraction module to supervise the model in extracting discriminative modality-shared information. In addition, to enhance the discriminative power of the model, a modal invariance constraint loss and a feature center constraint loss are designed. The modal invariance constraint loss guides the model to learn modality-invariant features, while the feature center constraint loss supervises the model to minimize intra-class feature differences and maximize inter-class feature differences. Extensive experiments on the SYSU-MM01 and RegDB datasets show that the proposed method outperforms most existing methods. On the large-scale SYSU-MM01 dataset, our model achieves 67.69% Rank-1 accuracy and 66.82% mean average precision.
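The feature center constraint loss described above can be sketched in a center-loss style: pull each feature toward its identity's center and push distinct identity centers apart. This is a minimal illustrative sketch, not the paper's exact formulation; the function name, the `margin` parameter, and the hinge on inter-class center distances are assumptions for demonstration.

```python
import numpy as np

def feature_center_loss(features, labels, margin=1.0):
    """Illustrative center-style constraint (assumed form, not the paper's):
    an intra-class term pulls features toward their identity center, and an
    inter-class hinge term pushes distinct centers at least `margin` apart."""
    classes = np.unique(labels)
    centers = {c: features[labels == c].mean(axis=0) for c in classes}

    # Intra-class: mean distance of each feature to its own class center
    # (minimizing this shrinks intra-class feature differences).
    intra = np.mean([np.linalg.norm(f - centers[l])
                     for f, l in zip(features, labels)])

    # Inter-class: penalize pairs of class centers closer than the margin
    # (minimizing this enlarges inter-class feature differences).
    pairs = [(a, b) for i, a in enumerate(classes) for b in classes[i + 1:]]
    inter = np.mean([max(0.0, margin - np.linalg.norm(centers[a] - centers[b]))
                     for a, b in pairs]) if pairs else 0.0

    return intra + inter
```

Tightly clustered, well-separated identities yield a small loss; overlapping or scattered identities yield a larger one, which is the behavior the abstract's constraint is meant to enforce.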
Jing Zhang, Guangfeng Chen. Visible-Infrared Person Re-Identification Via Feature Constrained Learning[J]. Laser & Optoelectronics Progress, 2024, 61(12): 1215006
Category: Machine Vision
Received: Aug. 7, 2023
Accepted: Sep. 18, 2023
Published Online: May 20, 2024
The Author Email: Guangfeng Chen (chengf@dhu.edu.cn)
CSTR:32186.14.LOP231858