Optics and Precision Engineering, Vol. 31, Issue 22, 3345 (2023)

Adaptive feature matching network for object occlusion

Lin MAO, Hongyang SU*, and Dawei YANG
Author Affiliations
  • School of Electromechanical Engineering, Dalian Minzu University, Dalian 116600, China

    Paper Information

    Received: Apr. 26, 2023

    Accepted: --

    Published Online: Dec. 29, 2023

Corresponding Author Email: Hongyang SU (wxhxhwdn0725@163.com)

DOI: 10.37188/OPE.20233122.3345
