Optoelectronics Letters, Volume 20, Issue 6, 379 (2024)

Evaluating quality of motion for unsupervised video object segmentation

[in Chinese] and [in Chinese]
Author Affiliations
  • Jiangsu Key Laboratory of Big Data Analysis Technology, Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Nanjing University of Information Science & Technology, Nanjing 210044, China

    Current mainstream unsupervised video object segmentation (UVOS) approaches typically incorporate optical flow as motion information to locate the primary objects in coherent video frames. However, they fuse appearance and motion information without evaluating the quality of the optical flow. When poor-quality optical flow interacts with the appearance information, it introduces significant noise and degrades overall performance. To alleviate this issue, we first employ a quality evaluation module (QEM) to evaluate the optical flow. Then, we select high-quality optical flow as motion cues to fuse with the appearance information, which prevents poor-quality optical flow from diverting the network's attention. Moreover, we design an appearance-guided fusion module (AGFM) to better integrate appearance and motion information. Extensive experiments on several widely used datasets, including DAVIS-16, FBMS-59, and YouTube-Objects, demonstrate that the proposed method outperforms existing methods.
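    To illustrate the idea of quality-gated motion fusion described in the abstract, the sketch below shows one possible way such a pipeline could be wired up in PyTorch. It is a conceptual illustration only, not the authors' implementation: the class names (QualityGate, AppearanceGuidedFusion), channel sizes, and the specific scoring and attention formulations are assumptions made for the example.

    ```python
    import torch
    import torch.nn as nn

    class QualityGate(nn.Module):
        """Hypothetical sketch of a quality evaluation module (QEM):
        predicts a scalar quality score for the optical-flow features,
        which later suppresses low-quality motion cues."""
        def __init__(self, channels):
            super().__init__()
            self.scorer = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),              # global context of the flow features
                nn.Conv2d(channels, channels // 4, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 4, 1, 1),
                nn.Sigmoid(),                         # quality score in [0, 1]
            )

        def forward(self, motion_feat):
            return self.scorer(motion_feat)           # shape: (B, 1, 1, 1)


    class AppearanceGuidedFusion(nn.Module):
        """Hypothetical sketch of an appearance-guided fusion module (AGFM):
        appearance features produce a spatial attention map that re-weights
        the quality-gated motion features before the two streams are fused."""
        def __init__(self, channels):
            super().__init__()
            self.attn = nn.Sequential(
                nn.Conv2d(channels, 1, 3, padding=1),
                nn.Sigmoid(),
            )
            self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

        def forward(self, app_feat, motion_feat, quality):
            # Motion is attenuated both by appearance-derived attention
            # and by the predicted flow quality score.
            guided_motion = motion_feat * self.attn(app_feat) * quality
            return self.fuse(torch.cat([app_feat, guided_motion], dim=1))


    if __name__ == "__main__":
        app = torch.randn(2, 64, 56, 56)      # appearance (RGB) features
        motion = torch.randn(2, 64, 56, 56)   # optical-flow features
        quality = QualityGate(64)(motion)
        fused = AppearanceGuidedFusion(64)(app, motion, quality)
        print(fused.shape)                    # torch.Size([2, 64, 56, 56])
    ```

    In this toy formulation, a near-zero quality score effectively falls back to appearance-only features, which mirrors the stated goal of preventing poor-quality optical flow from diverting the network's attention.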

    Paper Information

    Received: Sep. 24, 2023

    Accepted: Nov. 25, 2023

    Published Online: Aug. 23, 2024

    DOI: 10.1007/s11801-024-3207-1