Laser & Optoelectronics Progress, Volume 56, Issue 23, 231502 (2019)

Object Classification Based on Multitask Convolutional Neural Network

Miaohui Zhang1,2, Bo Zhang1,*, and Chengcheng Gao1
Author Affiliations
  • 1Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, Henan 475004, China
  • 2Institute of Data and Knowledge Engineering, Henan University, Kaifeng, Henan 475004, China

    This paper proposes a multitask convolutional neural network (MTCNN) based on fine-grained images and multi-attribute fusion. The network comprises three key components. First, a label input layer is added to the network: the multiple input labels are copied and separated, each label is matched to its task through a fully connected layer, and one Softmax loss function per label is added so that all tasks are backpropagated jointly. Second, a fine-grained image is extracted from the original image by combining saliency detection with corner detection and is used as the input to the MTCNN, which makes the target features extracted by the network more distinctive and discriminative. Finally, the MTCNN uses the nonlinear activation function PReLU to further improve classification accuracy. The MTCNN is trained on multiple tasks in parallel on the Car Dataset and achieves a 10% improvement in classification accuracy over the traditional single-task network. The results show that the MTCNN has strong generalization performance and markedly improves image classification accuracy.
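
    The following is a minimal sketch of the multi-head training scheme described above, assuming a PyTorch implementation; the backbone depth, layer sizes, and the two example label sets are illustrative placeholders rather than the authors' configuration.

        # Sketch of a multitask CNN: shared backbone with PReLU activations,
        # one fully connected head per label, one Softmax (cross-entropy)
        # loss per task, summed and backpropagated jointly.
        # Assumption: PyTorch; sizes and task names are illustrative only.
        import torch
        import torch.nn as nn

        class MTCNN(nn.Module):
            def __init__(self, num_classes_per_task=(196, 5)):  # e.g. car model, car type
                super().__init__()
                # Shared convolutional backbone (illustrative depth).
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.PReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.PReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                # One fully connected head per task/label.
                self.heads = nn.ModuleList(
                    [nn.Linear(64, n) for n in num_classes_per_task]
                )

            def forward(self, x):
                feat = self.backbone(x)
                return [head(feat) for head in self.heads]

        model = MTCNN()
        criterion = nn.CrossEntropyLoss()
        images = torch.randn(8, 3, 224, 224)  # stand-in for fine-grained image crops
        labels = [torch.randint(0, 196, (8,)), torch.randint(0, 5, (8,))]
        logits = model(images)
        loss = sum(criterion(out, lab) for out, lab in zip(logits, labels))
        loss.backward()  # gradients from all tasks flow into the shared backbone

    Summing the per-label losses lets every task update the shared convolutional layers in a single backward pass, which is the joint backpropagation the abstract describes.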

    Citation

    Miaohui Zhang, Bo Zhang, Chengcheng Gao. Object Classification Based on Multitask Convolutional Neural Network[J]. Laser & Optoelectronics Progress, 2019, 56(23): 231502

    Paper Information

    Category: Machine Vision

    Received: Apr. 12, 2019

    Accepted: May 27, 2019

    Published Online: Nov. 27, 2019

    Author Email: Bo Zhang (zhangbo208@163.com)

    DOI: 10.3788/LOP56.231502
