Opto-Electronic Engineering, Volume 51, Issue 9, 240119-1 (2024)
Quadruple-stream input-guided feature complementary visible-infrared person re-identification
Current visible-infrared person re-identification research focuses on extracting modality-shared salient features through attention mechanisms to minimize modality differences. However, these methods attend only to the most salient pedestrian features and cannot make full use of modality information. To solve this problem, a quadruple-stream input-guided feature complementary network (QFCNet) is proposed in this paper. First, a quadruple-stream feature extraction and fusion module is designed for the modality-specific feature extraction stage: by adding two data-augmentation inputs, it alleviates the color differences between modalities, enriches the semantic information of each modality, and further promotes multi-dimensional feature fusion. Second, a sub-salient feature complementation module is designed that, through an inversion operation, supplements the global feature with the pedestrian detail information ignored by the attention mechanism, thereby strengthening the discriminative pedestrian features. Experimental results on two public datasets, SYSU-MM01 and RegDB, demonstrate the superiority of the proposed method. In the all-search mode of SYSU-MM01, the rank-1 accuracy and mAP reach 76.12% and 71.51%, respectively.
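The two mechanisms named in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' implementation: the augmentation stream is assumed to be a grayscale copy of the visible image (one common way to suppress color cues that infrared lacks), and the inversion operation is assumed to be `1 - attention`, which re-weights the regions the attention map suppressed; the function names and the blending weight `alpha` are illustrative.

```python
import numpy as np

def grayscale_stream(rgb):
    # Hypothetical augmentation input: a 3-channel grayscale copy of the
    # visible image, removing the color information that infrared images lack.
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # (H, W) luminance
    return np.repeat(gray[..., None], 3, axis=-1)  # back to (H, W, 3)

def sub_salient_complement(feat, attn, alpha=0.5):
    # attn in [0, 1] marks regions the attention mechanism deems salient.
    # Inverting it (1 - attn) recovers the sub-salient detail it ignored;
    # alpha is an illustrative weight for blending that detail back in.
    salient = feat * attn
    sub_salient = feat * (1.0 - attn)
    return salient + alpha * sub_salient
```

With `attn = 0` everywhere, the output is `alpha * feat`: even fully suppressed regions still contribute detail, which is the point of the complementation step.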
Bin Ge, Nuo Xu, Chenxing Xia, Haijun Zheng. Quadruple-stream input-guided feature complementary visible-infrared person re-identification[J]. Opto-Electronic Engineering, 2024, 51(9): 240119-1
Category: Article
Received: May 23, 2024
Accepted: Aug. 18, 2024
Published Online: Dec. 12, 2024