Optics and Precision Engineering, Vol. 26, Issue 1, 238 (2018)
Accurate and rapid contour extraction of visual measurement for rail wear
[1] HERATH S, HARANDI M, PORIKLI F. Going deeper into action recognition: A survey [J]. Image and Vision Computing, 2017, 60: 4-21.
[3] WANG H, ULLAH M M, KLÄSER A, et al. Evaluation of local spatio-temporal features for action recognition[C]. Proceedings of British Machine Vision Conference, BMVC, 2009: 7-10.
[4] WANG H, KLÄSER A, SCHMID C, et al. Action recognition by dense trajectories[C]. Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2011: 3169-3176.
[5] WANG H, SCHMID C. Action recognition with improved trajectories[C]. Proceedings of 2013 IEEE International Conference on Computer Vision, IEEE, 2013: 3551-3558.
[8] LIU J G, LUO J B, SHAH M. Recognizing realistic actions from videos "in the wild"[C]. Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009: 1996-2003.
[9] RODRIGUEZ M, ORRITE C, MEDRANO C, et al. One-shot learning of human activity with an MAP adapted GMM and simplex-HMM [J]. IEEE Transactions on Cybernetics, 2017, 47(7): 1769-1780.
[10] PENG X J, WANG L M, WANG X X, et al. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice [J]. Computer Vision and Image Understanding, 2016, 150: 109-125.
[11] BHATTACHARYA S, SUKTHANKAR R, JIN R, et al. A probabilistic representation for efficient large scale visual recognition tasks[C]. Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2011: 2593-2600.
[12] YANG X D, TIAN Y L. Action recognition using super sparse coding vector with spatio-temporal awareness[C]. Proceedings of 13th European Conference on Computer Vision, Springer, 2014: 727-741.
[13] JI SH W, XU W, YANG M, et al. 3D convolutional neural networks for human action recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 221-231.
[14] VAROL G, LAPTEV I, SCHMID C. Long-term temporal convolutions for action recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, doi: 10.1109/TPAMI.2017.2712608.
[15] LE Q V, ZOU W L, YEUNG S Y, et al. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis[C]. Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2011: 3361-3368.
[16] WANG H R, YUAN CH F, HU W M, et al. Action recognition using nonnegative action component representation and sparse basis selection [J]. IEEE Transactions on Image Processing, 2014, 23(2): 570-581.
[17] LIU L, SHAO L, LI X L, et al. Learning spatio-temporal representations for action recognition: A genetic programming approach [J]. IEEE Transactions on Cybernetics, 2016, 46(1): 158-170.
[18] PARK E, HAN X F, BERG T L, et al. Combining multiple sources of knowledge in deep CNNs for action recognition[C]. Proceedings of 2016 IEEE Winter Conference on Applications of Computer Vision, IEEE, 2016: 1-8.
[19] SIMONYAN K, ZISSERMAN A. Two-stream convolutional networks for action recognition in videos[C]. Advances in Neural Information Processing Systems 27, NIPS, 2014: 568-576.
[20] ZHU W J, HU J, SUN G, et al. A key volume mining deep framework for action recognition[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2016: 1991-1999.
[21] ZHAN D, YU L, XIAO J, et al. Study on high-accuracy vision measurement approach for dynamic inspection of full cross-sectional rail profile [J]. Journal of the China Railway Society, 2015, 37(9): 96-106. (in Chinese)
[22] KANG G Q, LI CH M, QIN L J, et al. Research on a method of calibrating dynamic rail profile data [J]. Chinese Journal of Sensors and Actuators, 2015, 28(2): 221-226. (in Chinese)
[23] HUA CH Q, KOU D H, FU SH L, et al. Approach comparison of several rail wear instrumentation and measurements [J]. Chinese Railways, 2013(4): 67-70. (in Chinese)
[24] WANG X H, WAN Y, LI R, et al. A multi-object image segmentation C-V model based on region division and gradient guide [J]. Journal of Visual Communication and Image Representation, 2016, 39: 100-106.
[26] HARUKI T, KIKUCHI K. Video camera system using fuzzy logic [J]. IEEE Transactions on Consumer Electronics, 1992, 38(3): 624-634.
[29] STEGER C. An unbiased detector of curvilinear structures [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(2): 113-125.
[32] People's Republic of China Ministry of Railways. Maintenance Rules for Ballasted Track High-speed Railway Track (Trial) [M]. Beijing: China Railway Publishing House, 2013. (in Chinese)
Received: May 8, 2017
Accepted: --
Published Online: Mar. 14, 2018
Author Email: jxsjtao@my.swjtu.edu.cn