Acta Optica Sinica, Volume. 36, Issue 1, 115001(2016)

Long-Term Visual Tracking Based on Spatio-Temporal Context

Liu Wei*, Zhao Wenjie, and Li Cheng

    To address the tracking drift caused by erroneous object-model updates in online-learning tracking algorithms, a simple but efficient solution is proposed. Point trackers are uniformly sampled within the target area and assessed with a texture description over two consecutive frames, which yields the initial location of the target. A spatio-temporal context model built on multi-dimensional features then outputs the precise object position through a confidence map; the model update rate is determined from the confidence map, and a multi-scale update mechanism is proposed. Experimental results show that the proposed algorithm tracks robustly under background interference, fast motion, occlusion, illumination change and scale change. On 320 pixel × 240 pixel video sequences, the average tracking speed remains at 55.1 frame/s, which meets real-time application requirements.
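    The first stage described above (uniformly sampled point trackers assessed by a texture measure across two consecutive frames) can be illustrated with a short sketch. The following Python/OpenCV snippet is a minimal illustration under assumed parameters (grid size, patch size, NCC threshold), not the authors' implementation: it samples a grid of points in the target box, tracks them with pyramidal Lucas-Kanade, keeps points whose local patches agree under normalized cross-correlation, and shifts the box by the median displacement to obtain the initial location that the spatio-temporal context model would then refine.

```python
# Minimal sketch of the initial-localization stage (illustrative, not the paper's exact code).
# Assumed parameters: 10x10 point grid, 11x11 texture patch, NCC threshold 0.6.
import numpy as np
import cv2

def initial_localization(prev_gray, curr_gray, box, grid=10, patch=11, ncc_thr=0.6):
    x, y, w, h = box
    # Uniformly sample point trackers inside the target area.
    xs = np.linspace(x, x + w, grid)
    ys = np.linspace(y, y + h, grid)
    pts = np.array([[px, py] for py in ys for px in xs], dtype=np.float32).reshape(-1, 1, 2)

    # Track the points into the current frame with pyramidal Lucas-Kanade.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)

    shifts = []
    for p0, p1, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.reshape(-1)):
        if not ok:
            continue
        # Texture-based assessment: compare the local patch around the point in both
        # frames with normalized cross-correlation and keep only reliable trackers.
        a = cv2.getRectSubPix(prev_gray, (patch, patch), (float(p0[0]), float(p0[1])))
        b = cv2.getRectSubPix(curr_gray, (patch, patch), (float(p1[0]), float(p1[1])))
        ncc = cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0]
        if ncc >= ncc_thr:
            shifts.append(p1 - p0)

    if not shifts:
        return box  # no reliable trackers; keep the previous location
    dx, dy = np.median(np.array(shifts), axis=0)
    # Initial target location; the spatio-temporal context model refines it afterwards.
    return (x + dx, y + dy, w, h)
```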

    Liu Wei, Zhao Wenjie, Li Cheng. Long-Term Visual Tracking Based on Spatio-Temporal Context[J]. Acta Optica Sinica, 2016, 36(1): 115001

    Paper Information

    Category: Machine Vision

    Received: Jun. 30, 2015

    Accepted: --

    Published Online: Dec. 31, 2015

    Author Email: Wei Liu (1224337250@qq.com)

    DOI: 10.3788/aos201636.0115001
