Laser & Optoelectronics Progress, Volume 56, Issue 15, 151503 (2019)

Human Action Recognition Algorithm Based on Bi-LSTM-Attention Model

Mingkang Zhu1 and Xianling Lu2,*
Author Affiliations
  • 1 Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, Jiangsu 214122, China
  • 2 School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China

    This study proposed a human action recognition algorithm based on the Bi-LSTM-Attention model to address the low recognition rates caused by the inability of long short-term memory (LSTM) networks to effectively extract the correlated information before and after an action. The proposed algorithm first extracted 20 frames from each video and used the Inception-v3 model to extract deep features from these frames. A bidirectional LSTM (Bi-LSTM) network was then constructed to learn the temporal information in the feature vectors in both the forward and backward directions. An attention mechanism adaptively perceived the influence of each network output on the recognition result, allowing the model to exploit the relationship between the information acquired before and after a given action for more accurate recognition. Finally, the features were passed through a fully connected layer to a Softmax classifier for classification. Comparisons with existing methods on the Action YouTube and KTH human action datasets showed that the proposed algorithm effectively improved the action recognition rate.
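
    The following is a minimal sketch of the pipeline described in the abstract, not the authors' released code. It assumes TensorFlow/Keras as the framework, treats the Inception-v3 features as precomputed 2048-dimensional vectors per frame, and uses a simple soft-attention pooling over the Bi-LSTM outputs; the hidden size (256 units) and class count (11, matching the Action YouTube dataset) are illustrative assumptions that may differ from the paper's settings:

        import tensorflow as tf
        from tensorflow.keras import layers, Model

        NUM_FRAMES = 20      # frames sampled from each video (as in the paper)
        FEATURE_DIM = 2048   # Inception-v3 global-average-pool feature size
        NUM_CLASSES = 11     # assumption: Action YouTube has 11 action classes
        LSTM_UNITS = 256     # hypothetical hidden size; the paper's may differ

        # Input: per-frame deep features already extracted with Inception-v3,
        # e.g. tf.keras.applications.InceptionV3(include_top=False, pooling='avg').
        frame_features = layers.Input(shape=(NUM_FRAMES, FEATURE_DIM))

        # Bi-LSTM reads the frame sequence in the forward and backward directions.
        h = layers.Bidirectional(
            layers.LSTM(LSTM_UNITS, return_sequences=True))(frame_features)

        # Soft attention: score every time step, normalize the scores over time,
        # and pool the Bi-LSTM outputs into a single weighted context vector.
        scores = layers.Dense(1)(h)                       # (batch, 20, 1)
        weights = layers.Softmax(axis=1)(scores)          # attention weights
        context = layers.Lambda(
            lambda x: tf.reduce_sum(x[0] * x[1], axis=1))([h, weights])

        # Fully connected layer feeding the Softmax classifier.
        outputs = layers.Dense(NUM_CLASSES, activation='softmax')(context)

        model = Model(frame_features, outputs)
        model.compile(optimizer='adam',
                      loss='categorical_crossentropy',
                      metrics=['accuracy'])

    Under these assumptions, a single 20-frame clip is classified by calling model.predict on a tensor of shape (1, 20, 2048), and the per-frame attention weights can be inspected to see which parts of the action the model relied on.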

    Citation: Mingkang Zhu, Xianling Lu. Human Action Recognition Algorithm Based on Bi-LSTM-Attention Model[J]. Laser & Optoelectronics Progress, 2019, 56(15): 151503

    Paper Information

    Category: Machine Vision

    Received: Jan. 23, 2019

    Accepted: Mar. 11, 2019

    Published Online: Aug. 5, 2019

    Author Email: Xianling Lu (jnluxl@jiangnan.edu.cn)

    DOI: 10.3788/LOP56.151503
