Opto-Electronic Engineering, Vol. 47, Issue 12, 190669 (2020)
Feature pyramid random fusion network for visible-infrared modality person re-identification
Existing works on person re-identification consider only the extraction of invariant feature representations across visible cameras, ignoring imaging characteristics in the infrared domain, so there are few studies on the visible-infrared cross-modality setting. Moreover, most works distinguish the two views by computing similarity on feature maps from a single convolutional layer, which limits the quality of the learned features. To address these problems, we design a feature pyramid random fusion network (FPRnet) that learns discriminative multi-level semantic features by computing similarities between multiple convolutional layers when matching persons. FPRnet not only reduces the negative effect of intra-modality bias, but also narrows the heterogeneity gap between modalities, which matters for infrared images with very different visual properties. Meanwhile, our work integrates the advantages of learning both local and global features, effectively addressing visible-infrared person re-identification. Extensive experiments on the public SYSU-MM01 dataset, evaluated in terms of mAP and convergence speed, demonstrate the superiority of our approach over state-of-the-art methods. FPRnet achieves a competitive 32.12% mAP recognition rate with much faster convergence.
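The idea of randomly fusing multi-level pyramid features before computing a cross-modality similarity can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, the convex random-weighting scheme, and the cosine similarity are all assumptions made for the example.

```python
import numpy as np

def random_fuse(features, rng):
    """Fuse feature vectors from several conv levels with random convex
    weights (one plausible reading of 'random fusion'; the paper's exact
    scheme may differ)."""
    w = rng.random(len(features))
    w /= w.sum()  # normalize so the fusion is a convex combination
    return sum(wi * f for wi, f in zip(w, features))

def cosine_similarity(a, b):
    """Similarity between two fused descriptors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Toy pooled descriptors from three pyramid levels of a visible image...
vis = [rng.standard_normal(128) for _ in range(3)]
# ...and from the corresponding levels of an infrared image
ir = [rng.standard_normal(128) for _ in range(3)]

score = cosine_similarity(random_fuse(vis, rng), random_fuse(ir, rng))
```

In training, such a score would feed a matching loss so that pairs of the same identity score higher than pairs of different identities, regardless of which pyramid levels dominate a given random fusion.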
Wang Ronggui, Wang Jing, Yang Juan, Xue Lixia. Feature pyramid random fusion network for visible-infrared modality person re-identification[J]. Opto-Electronic Engineering, 2020, 47(12): 190669
Category: Article
Received: Nov. 2, 2019
Accepted: --
Published Online: Jan. 14, 2021
The Author Email: Juan Yang (yangjuan6985@163.com)