Journal of Optoelectronics · Laser, Volume 34, Issue 8, 833 (2023)

Pedestrian re-identification based on style normalization and global attention in corrupted images

XIONG Wei1,2,3,*, LIU Yue1, XU Tingting1, SUN Peng1, ZHAO Di1, and LI Lirong1,2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
  • 3[in Chinese]
    References(17)

    [3] HENDRYCKS D,DIETTERICH T.Benchmarking neural network robustness to common corruptions and perturbations[EB/OL].(2019-03-28)[2022-07-28].https://arxiv.org/abs/1903.12261.

    [4] HENDRYCKS D,MU N,CUBUK E D,et al.Augmix:a simple data processing method to improve robustness and uncertainty[EB/OL].(2020-02-17)[2022-07-28].https://arxiv.org/abs/1912.02781v2.

    [5] CHEN M,WANG Z,ZHENG F.Benchmarks for corruption invariant person re-identification[EB/OL].(2021-11-01)[2022-07-28].https://arxiv.org/abs/2111.00880v1.

    [6] BISWAS K,KUMAR S,BANERJEE S,et al.SMU:smooth activation function for deep networks using smoothing maximum technique[EB/OL].(2022-04-11)[2022-07-28].https://arxiv.org/abs/2111.04682v2.

    [7] RAMACHANDRAN P,ZOPH B,LE Q V.Searching for activation functions[EB/OL].(2017-10-16)[2022-07-28].https://arxiv.org/abs/1710.05941v2.

    [8] WANG F,JIANG M,QIAN C,et al.Residual attention network for image classification[C]//IEEE Conference on Computer Vision and Pattern Recognition,July 21-26,2017,Honolulu,HI,USA.New York:IEEE,2017:3156-3164.

    [9] WOO S,PARK J,LEE J Y,et al.CBAM:convolutional block attention module[C]//European Conference on Computer Vision,September 8-14,2018,Munich,Germany.Berlin:Springer,2018:3-19.

    [10] JIN X,LAN C,ZENG W,et al.Style normalization and restitution for generalizable person re-identification[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition,June 13-19,2020,Seattle,WA,USA.New York:IEEE,2020:3143-3152.

    [11] LIU Y,SHAO Z,HOFFMANN N.Global attention mechanism:retain information to enhance channel-spatial interactions[EB/OL].(2021-12-10)[2022-07-28].https://arxiv.org/abs/2112.05561.

    [12] ZHOU K,YANG Y,CAVALLARO A,et al.Omni-scale feature learning for person re-identification[C]//IEEE/CVF International Conference on Computer Vision,October 27-November 2,2019,Seoul,Korea (South).New York:IEEE,2019:3702-3712.

    [13] HUANG X,BELONGIE S.Arbitrary style transfer in real-time with adaptive instance normalization[C]//IEEE International Conference on Computer Vision,October 22-29,2017,Venice,Italy.New York:IEEE,2017:1501-1510.

    [14] PAN X,LUO P,SHI J,et al.Two at once:enhancing learning and generalization capacities via IBN-Net[C]//European Conference on Computer Vision,September 8-14,2018,Munich,Germany.Berlin:Springer,2018:464-479.

    [15] HU J,SHEN L,SUN G.Squeeze-and-excitation networks[C]//IEEE Conference on Computer Vision and Pattern Recognition,June 18-23,2018,Salt Lake City,UT,USA.New York:IEEE,2018:7132-7141.

    [16] ZHENG Z,ZHENG L,YANG Y.A discriminatively learned CNN embedding for person re-identification[EB/OL].(2017-02-03)[2022-07-28].https://arxiv.org/abs/1611.05666.

    [17] HERMANS A,BEYER L,LEIBE B.In defense of the triplet loss for person re-identification[EB/OL].(2017-03-22)[2022-07-28].https://arxiv.org/abs/1703.07737.

    [18] LUO H,GU Y,LIAO X,et al.Bag of tricks and a strong baseline for deep person re-identification[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,June 15-20,2019,Long Beach,CA,USA.New York:IEEE,2019:8514-8522.

    [19] YE M,SHEN J,LIN G,et al.Deep learning for person re-identification:a survey and outlook[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2021,44(6):2872-2893.

    Paper Information

    Received: Jul. 28, 2022

    Accepted: --

    Published Online: Sep. 25, 2024

    The Author Email: XIONG Wei (xw@mail.hbut.edu.cn)

    DOI:10.16136/j.joel.2023.08.0548
