Optical Instruments, Volume 44, Issue 4, 16 (2022)
A dual-branch network for action recognition
[1] CHEN Z, LI S, YANG B, et al. Multi-scale spatial temporal graph convolutional network for skeleton-based action recognition[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 35, 1113-1122(2021).
[2] DU Y, WANG W, WANG L. Hierarchical recurrent neural network for skeleton based action recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1110-1118.
[3] POPPE R. A survey on vision-based human action recognition[J]. Image and Vision Computing, 28, 976-990(2010).
[4] SHAHROUDY A, LIU J, NG T T, et al. NTU RGB+D: a large scale dataset for 3D human activity analysis[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 1010-1019.
[5] FEICHTENHOFER C, FAN H Q, MALIK J, et al. SlowFast networks for video recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 6201-6210.
[6] LIU Z, ZHANG H W, CHEN Z H, et al. Disentangling and unifying graph convolutions for skeleton-based action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 140-149.
[7] SHI L, ZHANG Y F, CHENG J, et al. Skeleton-based action recognition with multi-stream adaptive graph convolutional networks[J]. IEEE Transactions on Image Processing, 29, 9532-9545(2020).
[8] SHI L, ZHANG Y F, CHENG J, et al. Two-stream adaptive graph convolutional networks for skeleton-based action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019: 12028-12037.
[9] YE F F, PU S L, ZHONG Q Y, et al. Dynamic GCN: context-enriched topology learning for skeleton-based action recognition[C]//Proceedings of the 28th ACM International Conference on Multimedia. New York, NY, USA: Association for Computing Machinery, 2020: 55-63.
[10] CHENG K, ZHANG Y F, HE X Y, et al. Skeleton-based action recognition with shift graph convolutional network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 180-189.
[11] GAO X, HU W, TANG J X, et al. Optimized skeleton-based action recognition via sparsified graph regression[C]//Proceedings of the 27th ACM International Conference on Multimedia. New York, NY, USA: Association for Computing Machinery, 2019: 601-610.
[12] LI M S, CHEN S H, CHEN X, et al. Actional-structural graph convolutional networks for skeleton-based action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019: 3590-3598.
[13] YAN S J, XIONG Y J, LIN D H. Spatial temporal graph convolutional networks for skeleton-based action recognition[C]//The Thirty-Second AAAI Conference on Artificial Intelligence. New Orleans: AAAI, 2018.
[14] SI C Y, CHEN W T, WANG W, et al. An attention enhanced graph convolutional LSTM network for skeleton-based action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019: 1227-1236.
[15] ZHANG P F, LAN C L, ZENG W J, et al. Semantics-guided neural networks for efficient skeleton-based human action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 1109-1118.
[16] ZHAO R, WANG K, SU H, et al. Bayesian graph convolution LSTM for skeleton based action recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 6881-6891.
[17] WU C, WU X J, KITTLER J. Spatial residual layer and dense connection block enhanced spatial temporal graph convolutional network for skeleton-based action recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. Seoul: IEEE, 2019: 1740-1748.
[18] LEE C Y, XIE S, GALLAGHER P, et al. Deeply-supervised nets[C]//Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics. Lille, France: PMLR, 2015: 562-570.
[19] WANG X L, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 7794-7803.
[20] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2017: 6000-6010.
[21] LIU J, SHAHROUDY A, PEREZ M, et al. NTU RGB + D 120: a large-scale benchmark for 3D human activity understanding[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42, 2684-2701(2020).
[22] KAY W, CARREIRA J, SIMONYAN K, et al. The kinetics human action video dataset[J]. arXiv: 1705.06950, 2017.
[23] CAO Z, SIMON T, WEI S E, et al. Realtime multi-person 2D pose estimation using part affinity fields[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 1302-1310.
[24] SHI L, ZHANG Y F, CHENG J, et al. Skeleton-based action recognition with directed graph neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019: 7904-7913.
[25] LIU J, SHAHROUDY A, XU D, et al. Spatio-temporal LSTM with trust gates for 3D human action recognition[C]//14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 816-833.
[26] LIU J, WANG G, HU P, et al. Global context-aware attention LSTM networks for 3D action recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 3671-3680.
[27] LIU M Y, YUAN J S. Recognizing human actions as the evolution of pose estimation maps[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 1159-1168.
Xiaofei QIN, Rui CAI, Meng CHEN, Wenqi ZHANG, Changxiang HE, Xuedian ZHANG. A dual-branch network for action recognition[J]. Optical Instruments, 2022, 44(4): 16
Category: APPLICATION TECHNOLOGY
Received: Dec. 21, 2021
Accepted: --
Published Online: Oct. 19, 2022