Laser & Optoelectronics Progress, Vol. 59, Issue 8, 0810010 (2022)
Multi-Loss Joint Cross-Modality Person Re-Identification Method Integrating Attention Mechanism
The key difficulty of the cross-modality person re-identification task lies in extracting effective modality-shared features. To address this problem, this paper proposes a multi-loss joint cross-modality person re-identification method based on an attention mechanism. First, an attention module is embedded in the ResNet50 backbone to preserve detailed information. Second, the extracted feature map is divided into six local features, making the network focus on local deep information and enhancing its representation ability. Finally, the local feature column vectors are batch-normalized, and cross-entropy loss together with an improved hetero-center loss is used for jointly supervised learning, which accelerates model convergence and improves accuracy. The proposed method achieves mean average precision of 56.82% and 75.44% on the SYSU-MM01 and RegDB datasets, respectively. The experimental results show that the proposed method effectively improves the accuracy of cross-modality person re-identification.
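The two loss-related steps in the abstract can be sketched as follows. This is a minimal, hypothetical illustration in plain Python, not the authors' implementation: the function names, shapes, and modality encoding (visible = 0, infrared = 1) are assumptions, and the hetero-center term is written in its common form, i.e. the squared distance between the per-identity feature centers of the two modalities.

```python
# Hypothetical sketch of two steps described in the abstract:
# (1) splitting a feature map into six horizontal part features,
# (2) a hetero-center-style loss pulling the per-identity centers
#     of the two modalities together.
# All names and shapes are illustrative assumptions.

def split_into_parts(feature_map, num_parts=6):
    """Average-pool a (height x dim) feature map into `num_parts`
    horizontal stripes, one local feature vector per stripe."""
    height, dim = len(feature_map), len(feature_map[0])
    stripe = height // num_parts
    parts = []
    for p in range(num_parts):
        rows = feature_map[p * stripe:(p + 1) * stripe]
        parts.append([sum(r[d] for r in rows) / len(rows)
                      for d in range(dim)])
    return parts

def hetero_center_loss(features, labels, modalities):
    """Sum over identities of the squared Euclidean distance between
    the visible-modality center and the infrared-modality center
    (modalities encoded as 0 = visible, 1 = infrared)."""
    def center(vecs):
        n, d = len(vecs), len(vecs[0])
        return [sum(v[k] for v in vecs) / n for k in range(d)]

    loss = 0.0
    for pid in set(labels):
        vis = [f for f, l, m in zip(features, labels, modalities)
               if l == pid and m == 0]
        inf = [f for f, l, m in zip(features, labels, modalities)
               if l == pid and m == 1]
        if vis and inf:  # identity seen in both modalities
            cv, ci = center(vis), center(inf)
            loss += sum((a - b) ** 2 for a, b in zip(cv, ci))
    return loss
```

In training, this center-alignment term would be added to the per-part cross-entropy classification loss, so the network is supervised both to discriminate identities and to shrink the gap between modality-specific feature centers of the same person.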
Fengsui Wang, Furong Liu, Jingang Chen, Qisheng Wang. Multi-Loss Joint Cross-Modality Person Re-Identification Method Integrating Attention Mechanism[J]. Laser & Optoelectronics Progress, 2022, 59(8): 0810010
Category: Image Processing
Received: Mar. 30, 2021
Accepted: Apr. 29, 2021
Published Online: Apr. 11, 2022
The Author Email: Wang Fengsui (fswang@ahpu.edu.cn)