Optics and Precision Engineering, Volume 32, Issue 3, 435(2024)

Attention interaction based RGB-T tracking method

Wei WANG, Feiya FU, Hao LEI, Zili TANG*

Author Affiliations
  • The 63870 Unit of PLA, Weinan 714299, China

References (32)

    [1] C L LI, X Y LIANG, Y J LU et al. RGB-T object tracking: Benchmark and baseline. Pattern Recognition, 96, 106977(2019).

    [2] X C ZHANG, P YE, H LEUNG et al. Object fusion tracking based on visible and infrared images: a comprehensive review. Information Fusion, 63, 166-187(2020).

    [3] C L LI, W L XUE, Y Q JIA et al. LasHeR: a large-scale high-diversity benchmark for RGBT tracking. IEEE Transactions on Image Processing, 31, 392-404(2022).

    [4] C L LI, H CHENG, S Y HU et al. Learning collaborative sparse representation for grayscale-thermal tracking. IEEE Transactions on Image Processing, 25, 5743-5756(2016).

    [5] P Y ZHANG, J ZHAO, C J BO et al. Jointly modeling motion and appearance cues for robust RGB-T tracking. IEEE Transactions on Image Processing, 30, 3335-3347(2021).

    [6] C L LI, X H WU, N ZHAO et al. Fusing two-stream convolutional neural networks for RGB-T object tracking. Neurocomputing, 281, 78-85(2018).

    [7] J F HENRIQUES, R CASEIRO, P MARTINS et al. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 583-596(2015).

    [8] L C ZHANG, M DANELLJAN, A GONZALEZ-GARCIA et al. Multi-modal fusion for end-to-end RGB-T tracking. Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2252-2261(2019).

    [9] G BHAT, M DANELLJAN, L VAN GOOL et al. Learning discriminative model prediction for tracking. Proceedings of the IEEE/CVF International Conference on Computer Vision, 6181-6190(2019).

    [10] B YAN, H W PENG, J L FU et al. Learning spatio-temporal transformer for visual tracking. Proceedings of the IEEE/CVF International Conference on Computer Vision, 10428-10437(2021).

    [11] B LI, W WU, Q WANG et al. SiamRPN++: evolution of Siamese visual tracking with very deep networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4277-4286(2019).

    [12] T L ZHANG, X R LIU, Q ZHANG et al. SiamCDA: complementarity- and distractor-aware RGB-T tracking based on Siamese network. IEEE Transactions on Circuits and Systems for Video Technology, 32, 1403-1417(2022).

    [13] H NAM, B HAN. Learning multi-domain convolutional neural networks for visual tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4293-4302(2016).

    [14] C L LI, A D LU, A H ZHENG et al. Multi-adapter RGBT tracking. Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2262-2270(2019).

    [15] P Y ZHANG, D WANG, H C LU et al. Learning adaptive attribute-driven representation for real-time RGB-T tracking. International Journal of Computer Vision, 129, 2714-2729(2021).

    [16] C L LI, L LIU, A D LU et al. Challenge-aware RGBT Tracking. Computer Vision-ECCV 2020, 222-237(2020).

    [17] Y XIAO, M M YANG, C L LI et al. Attribute-based progressive fusion network for RGBT tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 36, 2831-2838(2022).

    [18] C Q WANG, C Y XU, Z CUI et al. Cross-modal pattern-propagation for RGB-T tracking. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7062-7071(2020).

    [19] C Y XU, Z CUI, C Q WANG et al. Learning cross-modal interaction for RGB-T tracking. Science China Information Sciences, 66, 119103(2022).

    [20] A VASWANI, N M SHAZEER, N PARMAR et al. Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008(2017).

    [21] A DOSOVITSKIY, L BEYER, A KOLESNIKOV et al. An image is worth 16×16 words: transformers for image recognition at scale. International Conference on Learning Representations(2021).

    [22] Z LIU, Y T LIN, Y CAO et al. Swin Transformer: hierarchical vision transformer using shifted windows. Proceedings of the IEEE/CVF International Conference on Computer Vision, 9992-10002(2021).

    [23] X Z ZHU, W J SU, L W LU et al. Deformable DETR: deformable transformers for end-to-end object detection. International Conference on Learning Representations(2021).

    [24] B Y CHEN, P X LI, L BAI et al. Backbone is all your need: a simplified architecture for visual object tracking. Computer Vision-ECCV 2022, 375-392(2022).

    [25] Y B ZHU, C L LI, J TANG et al. Quality-aware feature aggregation network for robust RGBT tracking. IEEE Transactions on Intelligent Vehicles, 6, 121-130(2021).

    [26] J T MEI, D M ZHOU, J D CAO et al. HDINet: hierarchical dual-sensor interaction network for RGBT tracking. IEEE Sensors Journal, 21, 16915-16926(2021).

    [27] Z Z TU, C LIN, W ZHAO et al. M5L: multi-modal multi-margin metric learning for RGBT tracking. IEEE Transactions on Image Processing, 31, 85-98(2022).

    [28] Y GAO, C L LI, Y B ZHU et al. Deep adaptive fusion network for high performance RGBT tracking. Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 91-99(2019).

    [29] Y B ZHU, C L LI, B LUO et al. Dense feature aggregation and pruning for RGBT tracking. Proceedings of the 27th ACM International Conference on Multimedia, 465-472(2019).

    [30] H ZHANG, L ZHANG, L ZHUO et al. Object tracking in RGB-T videos using modal-aware attention network and competitive learning. Sensors, 20, 393(2020).

    [31] A D LU, C QIAN, C L LI et al. Duality-gated mutual condition network for RGBT tracking. IEEE Transactions on Neural Networks and Learning Systems(2022).

    [32] C L LI, C L ZHU, Y HUANG et al. Cross-modal ranking with soft consistency and noisy labels for robust RGB-T tracking. Computer Vision-ECCV 2018, 831-847(2018).

    Citation
    Wei WANG, Feiya FU, Hao LEI, Zili TANG. Attention interaction based RGB-T tracking method[J]. Optics and Precision Engineering, 2024, 32(3): 435

    Paper Information

    Received: Jul. 19, 2023

    Accepted: --

    Published Online: Apr. 2, 2024

    The Author Email: TANG Zili (tang_zili@qq.com)

    DOI:10.37188/OPE.20243203.0435
