Computer Applications and Software, Vol. 42, Issue 4, 150 (2025)
GAIT RECOGNITION USING DISENTANGLED REPRESENTATION LEARNING BASED ON INFORMATION ENTROPY
[1] Nixon M, Tan T, Chellappa R. Subjects allied to gait[M]//Human Identification Based on Gait. Springer, 2006: 5-15.
[2] Han J, Bhanu B. Individual recognition using gait energy image[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(2): 316-322.
[3] Bashir K, Xiang T, Gong S. Gait recognition using gait entropy image[C]//3rd International Conference on Imaging for Crime Detection and Prevention, 2009: 1-6.
[4] Wu Z, Huang Y, Wang L, et al. A comprehensive study on cross-view gait based human identification with deep CNNs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(2): 209-226.
[5] Li X, Makihara Y, Xu C, et al. Gait recognition via semi-supervised disentangled representation learning to identity and covariate features[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 13306-13316.
[6] Xu C, Makihara Y, Li X, et al. Gait recognition from a single image using a phase-aware gait cycle reconstruction network[C]//European Conference on Computer Vision, 2020: 386-403.
[7] Feng Y, Li Y, Luo J. Learning effective gait features using LSTM[C]//2016 23rd International Conference on Pattern Recognition, 2016: 325-330.
[8] Ariyanto G, Nixon M. Marionette mass-spring model for 3D gait biometrics[C]//2012 5th IAPR International Conference on Biometrics, 2012: 354-359.
[9] Zhang Z, Tran L, Yin X, et al. Gait recognition via disentangled representation learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 4710-4719.
[10] Denton E, Birodkar V. Unsupervised learning of disentangled representations from video[EB]. arXiv: 1705.10915, 2017.
[11] Balakrishnan G, Zhao A, Dalca A, et al. Synthesizing images of humans in unseen poses[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 8340-8348.
[12] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015: 234-241.
[13] Esser P, Sutter E, Ommer B. A variational U-Net for conditional appearance and shape generation[C]//2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018: 8857-8866.
[14] Tran L, Yin X, Liu X. Disentangled representation learning GAN for pose-invariant face recognition[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1283-1292.
[15] Tran L, Yin X, Liu X. Representation learning by rotating your faces[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(12): 3007-3021.
[16] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//28th International Conference on Neural Information Processing Systems, 2014: 2672-2680.
[17] Kraskov A, Stögbauer H, Grassberger P. Estimating mutual information[J]. Physical Review E, 2004, 69(6): 066138.
[18] Esser P, Haux J, Ommer B. Unsupervised robust disentangling of latent characteristics for image synthesis[C]//2019 IEEE/CVF International Conference on Computer Vision, 2019: 2699-2709.
[19] Hsieh J, Liu B, Huang D A, et al. Learning to decompose and disentangle representations for video prediction[EB]. arXiv: 1806.04166, 2018.
[20] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[C]//2017 IEEE International Conference on Computer Vision, 2017: 2980-2988.
[21] Brazil G, Yin X, Liu X. Illuminating pedestrians via simultaneous detection & segmentation[C]//2017 IEEE International Conference on Computer Vision, 2017: 4960-4969.
[22] Brazil G, Liu X. Pedestrian detection with autoregressive network phases[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 7224-7233.
[23] Shutler J, Grant M, Nixon M, et al. On a large sequence-based human gait database[M]//Applications and Science in Soft Computing. Springer, 2004: 339-346.
[24] Sarkar S, Phillips P, Liu Z, et al. The humanID gait challenge problem: Data sets, performance, and analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(2): 162-177.
[25] Hofmann M, Geiger J, Bachmann S, et al. The TUM gait from audio, image and depth (GAID) database: Multimodal recognition of subjects and traits[J]. Journal of Visual Communication and Image Representation, 2014, 25(1): 195-206.
[26] Chen X, Weng J, Lu W, et al. Multi-gait recognition based on attribute discovery[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(7): 1697-1710.
[27] Kusakunniran W, Wu Q, Zhang J, et al. Support vector regression for multi-view gait recognition based on local motion feature selection[C]//2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010: 974-981.
[28] Kusakunniran W. Recognizing gaits on spatio-temporal feature domain[J]. IEEE Transactions on Information Forensics and Security, 2014, 9(9): 1416-1423.
[29] Hu M, Wang Y, Zhang Z, et al. View-invariant discriminative projection for multi-view gait-based human identification[J]. IEEE Transactions on Information Forensics and Security, 2013, 8(12): 2034-2045.
[30] Kusakunniran W, Wu Q, Zhang J, et al. Recognizing gaits across views through correlated motion co-clustering[J]. IEEE Transactions on Image Processing, 2013, 23(2): 696-709.
[31] Shiraga K, Makihara Y, Muramatsu D, et al. GEINet: View-invariant gait recognition using a convolutional neural network[C]//2016 International Conference on Biometrics, 2016: 1-8.
[32] Alotaibi M, Mahmood A. Improved gait recognition based on specialized deep convolutional neural network[C]//2015 IEEE Applied Imagery Pattern Recognition Workshop, 2015: 1-7.
Cao Zhenjun, Zhu Ziqi. GAIT RECOGNITION USING DISENTANGLED REPRESENTATION LEARNING BASED ON INFORMATION ENTROPY[J]. Computer Applications and Software, 2025, 42(4): 150
Received: Dec. 31, 2021
Accepted: Aug. 25, 2025
Published Online: Aug. 25, 2025