Infrared Technology, Vol. 47, Issue 6, 722 (2025)
Infrared-Visible Person Re-Identification Based on Context Information
[4] WU A, ZHENG W S, YU H X, et al. RGB-infrared cross-modality person re-identification[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 5380-5389.
[5] WANG Z, WANG Z, ZHENG Y, et al. Learning to reduce dual-level discrepancy for infrared-visible person re-identification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 618-626.
[6] DAI P, JI R, WANG H, et al. Cross-modality person re-identification with generative adversarial training[C]//IJCAI, 2018, 1(3): 6.
[7] LI D, WEI X, HONG X, et al. Infrared-visible cross-modal person re-identification with an X modality[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(4): 4610-4617.
[8] ZHONG X, LU T, HUANG W, et al. Grayscale enhancement colorization network for visible-infrared person re-identification[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(3): 1418-1430.
[9] YE M, SHEN J, CRANDALL D, et al. Dynamic dual-attentive aggregation learning for visible-infrared person re-identification[C]//Computer Vision–ECCV, 2020: 229-247.
[10] WU Q, DAI P, CHEN J, et al. Discover cross-modality nuances for visible-infrared person re-identification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 4330-4339.
[11] ZHANG D, ZHANG Z, JU Y, et al. Dual mutual learning for cross-modality person re-identification[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(8): 5361-5373.
[13] ZHANG Y, KANG Y, ZHAO S, et al. Dual-semantic consistency learning for visible-infrared person re-identification[J]. IEEE Transactions on Information Forensics and Security, 2022, 18: 1554-1565.
[14] NGUYEN D T, HONG H G, KIM K W, et al. Person recognition system based on a combination of body images from visible light and thermal cameras[J]. Sensors, 2017, 17(3): 605.
[15] YE M, SHEN J, LIN G, et al. Deep learning for person re-identification: a survey and outlook[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44(6): 2872-2893.
[16] YE M, LAN X, WANG Z, et al. Bi-directional center-constrained top-ranking for visible thermal person re-identification[J]. IEEE Transactions on Information Forensics and Security, 2019, 15: 407-419.
[17] YE M, LAN X, LI J, et al. Hierarchical discriminative learning for visible thermal person re-identification[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2018: 7501-7508.
[18] WANG G, ZHANG T, CHENG J, et al. RGB-infrared cross-modality person re-identification via joint pixel and feature alignment[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 3623-3632.
[19] FU C, HU Y, WU X, et al. CM-NAS: cross-modality neural architecture search for visible-infrared person re-identification[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 11823-11832.
[20] YE M, RUAN W, DU B, et al. Channel augmented joint learning for visible-infrared recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 13567-13576.
[21] ZHENG X, CHEN X, LU X. Visible-infrared person re-identification via partially interactive collaboration[J]. IEEE Transactions on Image Processing, 2022, 31: 6951-6963.
[22] YANG M, HUANG Z, HU P, et al. Learning with twin noisy labels for visible-infrared person re-identification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 14308-14317.
[23] CHEN C, YE M, QI M, et al. Structure-aware positional transformer for visible-infrared person re-identification[J]. IEEE Transactions on Image Processing, 2022, 31: 2352-2364.
GE Bin, ZHENG Haijun, SHI Huaizhong, XIA Chenxing, WU Cheng. Infrared-Visible Person Re-Identification Based on Context Information[J]. Infrared Technology, 2025, 47(6): 722