Laser & Optoelectronics Progress, Volume 57, Issue 14, 141026 (2020)

Facial Expression Recognition Based on Improved AlexNet

Xu Yang and Zhenhong Shang*
Author Affiliations
  • Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan 650500, China

    Facial expressions are affected by factors such as pose, occlusion, lighting changes, race, gender, and age, so convolutional neural networks must learn features more effectively and accurately. AlexNet suffers from low accuracy in expression recognition and a strict constraint on the input image size. To address these problems, this paper proposes a facial expression recognition algorithm based on an improved AlexNet. Multi-scale convolution is introduced into AlexNet, making the network better suited to small expression images and allowing it to extract feature information at different scales. Multiple levels of low-level feature information are passed downward and fused with higher-level feature information through cross-connections, so that the image content is represented more completely and accurately and a more accurate classifier can be constructed. Because the cross-connections inflate the number of parameters, which makes network training difficult and degrades recognition, global average pooling is applied to reduce the dimensionality of the low-level feature information, cutting the parameters introduced by the cross-connections and reducing overfitting. The proposed algorithm achieves accuracies of 94.25% and 93.02% on the CK+ and JAFFE databases, respectively.
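    The abstract describes the three architectural ideas (multi-scale convolution, cross-connected feature fusion, and global average pooling before fusion). The PyTorch sketch below only illustrates how these pieces fit together; the channel widths, kernel sizes, 48×48 grayscale input, and 7-class output are assumptions made for illustration and are not taken from the paper.

    ```python
    # Minimal sketch of an "improved AlexNet"-style expression network:
    # multi-scale convolution blocks, with low- and mid-level features
    # cross-connected to the classifier after global average pooling (GAP).
    import torch
    import torch.nn as nn


    class MultiScaleBlock(nn.Module):
        """Convolve the input at several kernel sizes and concatenate the results."""

        def __init__(self, in_ch, out_ch_per_branch):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=k, padding=k // 2),
                    nn.ReLU(inplace=True),
                )
                for k in (1, 3, 5)  # assumed scales
            ])

        def forward(self, x):
            return torch.cat([b(x) for b in self.branches], dim=1)


    class ImprovedAlexNetSketch(nn.Module):
        def __init__(self, num_classes=7):  # 7 basic expressions (assumption)
            super().__init__()
            self.stage1 = MultiScaleBlock(1, 32)    # low-level features, 96 channels
            self.pool1 = nn.MaxPool2d(2)
            self.stage2 = MultiScaleBlock(96, 64)   # mid-level features, 192 channels
            self.pool2 = nn.MaxPool2d(2)
            self.stage3 = nn.Sequential(            # high-level features
                nn.Conv2d(192, 256, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            self.gap = nn.AdaptiveAvgPool2d(1)      # global average pooling
            # Classifier sees high-level GAP features plus the cross-connected
            # low- and mid-level GAP features (256 + 96 + 192 channels).
            self.classifier = nn.Linear(256 + 96 + 192, num_classes)

        def forward(self, x):
            low = self.stage1(x)
            mid = self.stage2(self.pool1(low))
            high = self.stage3(self.pool2(mid))
            # GAP reduces each feature map to one value per channel, so the
            # cross-connections add very few extra parameters to the classifier.
            feats = torch.cat([
                self.gap(high).flatten(1),
                self.gap(low).flatten(1),
                self.gap(mid).flatten(1),
            ], dim=1)
            return self.classifier(feats)


    if __name__ == "__main__":
        net = ImprovedAlexNetSketch()
        out = net(torch.randn(1, 1, 48, 48))  # one 48x48 grayscale face (assumed size)
        print(out.shape)  # torch.Size([1, 7])
    ```

    Because global average pooling collapses every feature map to a single channel descriptor before fusion, the cross-connections enlarge the classifier input by only a few hundred values rather than by whole flattened feature maps, which is how the parameter expansion and overfitting mentioned in the abstract are kept in check.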

    Xu Yang, Zhenhong Shang. Facial Expression Recognition Based on Improved AlexNet[J]. Laser & Optoelectronics Progress, 2020, 57(14): 141026

    Paper Information

    Category: Image Processing

    Received: Oct. 9, 2019

    Accepted: Dec. 31, 2019

    Published Online: Jul. 28, 2020

    Author Email: Zhenhong Shang (shangzhenhong@126.com)

    DOI: 10.3788/LOP57.141026