Chinese Journal of Liquid Crystals and Displays, Volume 37, Issue 3, 386 (2022)
Micro-expression recognition based on video magnification and dual-branch network
[1] EKMAN P, FRIESEN W V. Nonverbal leakage and clues to deception [J]. Psychiatry, 1969, 32(1): 88-106.
[2] YAN W J, WU Q, LIANG J, et al. How fast are the leaked facial expressions: the duration of micro-expressions [J]. Journal of Nonverbal Behavior, 2013, 37(4): 217-230.
[3] WANG S J, YAN W J, LI X B, et al. Micro-expression recognition using dynamic textures on tensor independent color space [C]//Proceedings of the 22nd International Conference on Pattern Recognition. Stockholm: IEEE, 2014: 4678-4683.
[4] ENDRES J, LAIDLAW A. Micro-expression recognition training in medical students: a pilot study [J]. BMC Medical Education, 2009, 9(1): 47.
[5] FRANK M G, KIM D J, KANG S, et al. Improving the ability to detect micro expressions in law enforcement officers. Manuscript in preparation, 2014.
[6] DAVISON A K, MERGHANI W, YAP M H. Objective classes for micro-facial expression recognition [J]. Journal of Imaging, 2018, 4(10): 119.
[7] EKMAN P. Micro expression training tool [CD-ROM]. Oakland, 2003.
[8] LI X B, PFISTER T, HUANG X H, et al. A spontaneous micro-expression database: inducement, collection and baseline [C]//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Shanghai, China: IEEE, 2013: 1-6.
[9] YAN W J, WU Q, LIU Y J, et al. CASME database: a dataset of spontaneous micro-expressions collected from neutralized faces [C]//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Shanghai, China: IEEE, 2013: 1-7.
[10] YAN W J, LI X B, WANG S J, et al. CASME II: an improved spontaneous micro-expression database and the baseline evaluation [J]. PLoS One, 2014, 9(1): e86041.
[11] LIU Y J, ZHANG J K, YAN W J, et al. A main directional mean optical flow feature for spontaneous micro-expression recognition [J]. IEEE Transactions on Affective Computing, 2016, 7(4): 299-310.
[12] CRISTINACCE D, COOTES T F. Feature detection and tracking with constrained local models [C]//Proceedings of the British Machine Vision Conference 2006. Edinburgh, 2006: 929-938.
[13] LIONG S T, SEE J, WONG K, et al. Automatic micro-expression recognition from long video using a single spotted apex [C]//Proceedings of the Asian Conference on Computer Vision. Taipei, China: Springer, 2016: 345-360.
[14] LIONG S T, SEE J, WONG K, et al. Automatic apex frame spotting in micro-expression database [C]//Proceedings of the 3rd IAPR Asian Conference on Pattern Recognition (ACPR). Kuala Lumpur: IEEE, 2015: 665-669.
[15] KIM D H, BADDAR W J, RO Y M. Micro-expression recognition with expression-state constrained spatio-temporal feature representations [C]//Proceedings of the 24th ACM International Conference on Multimedia. Amsterdam, Netherlands: ACM, 2016: 382-386.
[16] LI J, WANG Y D, SEE J, et al. Micro-expression recognition based on 3D flow convolutional neural network [J]. Pattern Analysis and Applications, 2019, 22(4): 1331-1339.
[17] KHOR H Q, SEE J, PHAN R C W, et al. Enriched long-term recurrent convolutional network for facial micro-expression recognition [C]//Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). Xi'an, China: IEEE, 2018.
[18] LEI L, LI J F, CHEN T, et al. A novel graph-TCN with a graph structured representation for micro-expression recognition [C]//Proceedings of the 28th ACM International Conference on Multimedia. Seattle: ACM, 2020.
[19] LIU J M, LI K, SONG B L, et al. A multi-stream convolutional neural network for micro-expression recognition using optical flow and EVM [J]. arXiv: 2011.03756, 2020.
[20] BULAT A, TZIMIROPOULOS G. How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks) [C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 1021-1030.
[21] NEWELL A, YANG K Y, DENG J. Stacked hourglass networks for human pose estimation [C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam: Springer, 2016.
[22] ZHU X X, RAMANAN D. Face detection, pose estimation, and landmark localization in the wild [C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence: IEEE, 2012.
[23] WU Y H, RUBINSTEIN M, SHIH E, et al. Eulerian video magnification for revealing subtle changes in the world [J]. ACM Transactions on Graphics, 2012, 31(4): 65.
[24] LIONG S T, SEE J, WONG K, et al. Less is more: micro-expression recognition from video using apex frame [J]. Signal Processing: Image Communication, 2018, 62: 82-92.
[25] ILG E, MAYER N, SAIKIA T, et al. FlowNet 2.0: evolution of optical flow estimation with deep networks [C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017.
[26] GODAVARTHY S. Microexpression spotting in video using optical strain [D]. Florida: University of South Florida, 2010.
[27] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition [C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[28] HE K M, ZHANG X Y, REN S Q, et al. Identity mappings in deep residual networks [C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam: Springer, 2016.
[29] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module [C]//Proceedings of the 15th European Conference on Computer Vision. Munich: Springer, 2018: 3-19.
[30] ZAGORUYKO S, KOMODAKIS N. Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer [C]//Proceedings of the 5th International Conference on Learning Representations. Toulon: OpenReview.net, 2017.
[31] PENG M, WU Z, ZHANG Z H, et al. From macro to micro expression recognition: deep learning on small datasets using transfer learning [C]//Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition. Xi'an, China: IEEE, 2018: 657-661.
[32] WANG S J, LI B J, LIU Y J, et al. Micro-expression recognition with small sample size by transferring long-term convolutional neural network [J]. Neurocomputing, 2018, 312: 251-262.
[33] KHOR H Q, SEE J, LIONG S T, et al. Dual-stream shallow networks for facial micro-expression recognition [C]//Proceedings of 2019 IEEE International Conference on Image Processing (ICIP). Taipei, China: IEEE, 2019: 36-40.
[34] ZHAO G, PIETIKÄINEN M. Dynamic texture recognition using local binary patterns with an application to facial expressions [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 915-928.
[35] XU F, ZHANG J P, WANG J Z. Micro expression identification and categorization using a facial dynamics map [J]. IEEE Transactions on Affective Computing, 2017, 8(2): 254-267.
[36] WANG Y, SEE J, PHAN R C W, et al. LBP with six intersection points: reducing redundant information in LBP-TOP for micro-expression recognition [C]//Proceedings of the Asian Conference on Computer Vision. Singapore: Springer, 2014: 525-537.
[37] TAKALKAR M A, XU M. Image based facial micro-expression recognition using deep learning on small datasets [C]//Proceedings of 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA). Sydney: IEEE, 2017: 1-7.
[38] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization [C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 618-626.
[39] EKMAN P, FRIESEN W V. Facial Action Coding System: Investigator's Guide [M]. Palo Alto: Consulting Psychologists Press, 1978.
LI Zhao-feng, ZHU Ming. Micro-expression recognition based on video magnification and dual-branch network[J]. Chinese Journal of Liquid Crystals and Displays, 2022, 37(3): 386
Received: Dec. 19, 2021
Published Online: Jul. 21, 2022
The Author Email: LI Zhao-feng (lzf0215@163.com)