Laser & Optoelectronics Progress, Vol. 57, Issue 18, 181509 (2020)

No Reference Video Quality Assessment Based on Spatio-Temporal Features and Attention Mechanism

Ze Zhu1, Qingbing Sang1,2,*, and Hao Zhang1
Author Affiliations
  • 1School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
  • 2Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Wuxi, Jiangsu 214122, China

    With the rapid development of video technology, an increasing number of video applications are entering people's lives; therefore, research on video quality is highly meaningful. Herein, a no-reference video quality assessment algorithm is proposed that combines the powerful feature-extraction capabilities of convolutional and recurrent neural networks with an attention mechanism. The algorithm first extracts the spatial-domain features of the distorted video using the Visual Geometry Group (VGG) network, and then extracts the temporal-domain features of the distortion using a recurrent neural network. An attention mechanism is then introduced to weight the spatio-temporal features according to their importance to the overall characteristics of the video. Finally, regression through a fully connected layer yields the video quality score. Experimental results on three public video databases show that the predicted results agree well with human subjective quality scores and that the proposed method outperforms state-of-the-art video quality assessment algorithms.
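    The abstract describes the overall architecture but not its implementation details. Below is a minimal PyTorch sketch of one plausible realization, assuming frame-wise VGG-16 spatial features, a GRU as the recurrent network, a simple soft-attention weighting over frames, and a fully connected regression head; the layer sizes, the choice of GRU, and the exact attention form are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class VideoQualityNet(nn.Module):
    """Sketch of a no-reference VQA model: per-frame VGG spatial features,
    a GRU over time, soft-attention pooling, and an FC regression head."""

    def __init__(self, hidden_dim=256):
        super().__init__()
        vgg = models.vgg16()                       # pretrained weights optional
        self.spatial = vgg.features                # frame-wise spatial features
        self.pool = nn.AdaptiveAvgPool2d(1)        # collapse spatial dimensions
        self.temporal = nn.GRU(512, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)       # per-frame importance score
        self.regress = nn.Linear(hidden_dim, 1)    # quality score

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = self.spatial(frames.flatten(0, 1))     # (B*T, 512, h, w)
        x = self.pool(x).flatten(1).view(b, t, -1) # (B, T, 512)
        h, _ = self.temporal(x)                    # (B, T, hidden_dim)
        w = torch.softmax(self.attn(h), dim=1)     # attention over frames
        v = (w * h).sum(dim=1)                     # weighted spatio-temporal feature
        return self.regress(v).squeeze(-1)         # predicted quality score


if __name__ == "__main__":
    model = VideoQualityNet()
    clip = torch.randn(2, 8, 3, 224, 224)          # 2 clips, 8 frames each
    print(model(clip).shape)                       # torch.Size([2])
```

    In this sketch the attention weights are normalized over the temporal axis, so frames judged more informative contribute more to the pooled spatio-temporal feature before regression.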


    Citation: Ze Zhu, Qingbing Sang, Hao Zhang. No Reference Video Quality Assessment Based on Spatio-Temporal Features and Attention Mechanism[J]. Laser & Optoelectronics Progress, 2020, 57(18): 181509

    Paper Information

    Category: Machine Vision

    Received: Jan. 7, 2020

    Accepted: Feb. 24, 2020

    Published Online: Sep. 2, 2020

    Author Email: Qingbing Sang (sangqb@163.com)

    DOI: 10.3788/LOP57.181509
