Journal of Optoelectronics · Laser, Vol. 35, Issue 7, 745 (2024)

Cross-modality person re-identification based on dual enhancement network

CHEN Mengdie, LU Jian*, and ZHANG Qi
Author Affiliations
  • School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi 710600, China

    This paper proposes a dual enhancement network (DEN) based on channel- and feature-level learning to address the poor accuracy of cross-modality person re-identification (ReID) caused by heterogeneous sample differences, person occlusion, and background interference. At the channel level, visible channels are randomly swapped to explore the relationship between the visible and infrared channels, enhancing the model's robustness to multimodal sample changes. At the feature level, a normalization-based attention module (NAM) is introduced before the modality-shared network to prevent noise from interfering with the learning of modality-invariant information by penalizing weights with smaller contribution factors, and a feature separation module (FSM) separates identity-related features from identity-irrelevant features, improving the model's ability to recognize heterogeneous samples. Finally, the network is trained under the supervision of a hard-sample triplet loss and a weighted regularization loss to constrain pedestrian feature learning. On the RegDB dataset, DEN achieves a Rank-1 accuracy of 94.86% and an mAP of 90.10%.
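
    The following is a minimal PyTorch-style sketch of the two enhancement ideas summarized above: randomly exchanging visible (RGB) channels at the input, and a normalization-based attention module that suppresses channels with small batch-norm scaling factors. Function and class names are illustrative assumptions based on the abstract, not the authors' released implementation; the feature separation module and the loss terms are omitted.

```python
import random
import torch
import torch.nn as nn


def random_channel_exchange(img: torch.Tensor) -> torch.Tensor:
    """Channel-level enhancement (sketch): randomly permute or replicate the
    R/G/B channels of a visible image so the model sees colour-agnostic
    variants closer to the infrared modality. `img` has shape (3, H, W)."""
    if random.random() < 0.5:
        # Replicate one randomly chosen channel into all three positions.
        c = random.randint(0, 2)
        return img[c:c + 1].repeat(3, 1, 1)
    # Otherwise randomly swap the channel order.
    perm = torch.randperm(3)
    return img[perm]


class NAM(nn.Module):
    """Normalization-based attention (sketch): channels whose batch-norm
    scaling factor (gamma) is small contribute little and are down-weighted,
    reducing noise before the modality-shared network."""

    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.bn(x)
        # Normalize the absolute gamma values into per-channel attention weights.
        gamma = self.bn.weight.abs()
        x = x * (gamma / gamma.sum()).view(1, -1, 1, 1)
        return residual * torch.sigmoid(x)
```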

    CHEN Mengdie, LU Jian, ZHANG Qi. Cross-modality person re-identification based on dual enhancement network[J]. Journal of Optoelectronics · Laser, 2024, 35(7): 745

    Paper Information

    Received: Nov. 17, 2022

    Accepted: Dec. 13, 2024

    Published Online: Dec. 13, 2024

    The Author Email: LU Jian (chen_2372699@163.com)

    DOI:10.16136/j.joel.2024.07.0783
