Laser & Optoelectronics Progress, Volume 56, Issue 6, 061003 (2019)

Discrimination of Handwritten and Printed Texts Based on Frame Features and Viterbi Decoder

Qin Lin1,*, Junfeng Xia2, Zhengzheng Tu2, and Yutang Guo1
Author Affiliations
  • 1 School of Computer Science Technology, Hefei Normal University, Hefei, Anhui 230601, China
  • 2 College of Computer Science and Technology, Anhui University, Hefei, Anhui 230039, China
    Figures & Tables (11)
    • Figure 1. Flow chart of the proposed algorithm
    • Figure 2. Schematic of the hidden Markov model
    • Figure 3. Schematic of the state transitions of the hidden Markov model
    • Figure 4. All possible Viterbi decoding paths (a decoding sketch follows this list)
    • Figure 5. Discrimination results for handwritten and printed texts. (a) Frame-feature decoding results mapped onto text-line images; (b) longitudinal image segmentation; (c) re-determination results in each region
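The figures above outline the core of the method: each vertical frame of a text line is assigned a handwritten or printed state by decoding frame-wise scores through a two-state hidden Markov model with the Viterbi algorithm. Below is a minimal sketch of that idea, assuming GMM emission models (scikit-learn's GaussianMixture) and hand-picked sticky transition probabilities; the actual frame features, model sizes, and parameters used in the paper are not reproduced here.

# Minimal sketch: two-state (handwritten / printed) Viterbi decoding over
# per-frame features with GMM emission models. Model sizes, features, and
# transition probabilities are illustrative assumptions, not the paper's values.
import numpy as np
from sklearn.mixture import GaussianMixture

STATES = ("handwritten", "printed")

def train_emission_models(feats_hw, feats_pr, n_components=8):
    """Fit one GMM per class on frame feature vectors of shape (n_frames, n_dims)."""
    gmm_hw = GaussianMixture(n_components=n_components, covariance_type="diag").fit(feats_hw)
    gmm_pr = GaussianMixture(n_components=n_components, covariance_type="diag").fit(feats_pr)
    return gmm_hw, gmm_pr

def viterbi_decode(frames, gmm_hw, gmm_pr, p_stay=0.95):
    """Return the most likely handwritten/printed label for each frame of a text line."""
    # Emission log-likelihoods, shape (n_frames, 2)
    log_b = np.stack([gmm_hw.score_samples(frames),
                      gmm_pr.score_samples(frames)], axis=1)
    # Sticky transitions favour long homogeneous runs within a text line
    log_a = np.log(np.array([[p_stay, 1 - p_stay],
                             [1 - p_stay, p_stay]]))
    n, k = log_b.shape
    delta = np.zeros((n, k))           # best path score ending in each state
    psi = np.zeros((n, k), dtype=int)  # back-pointers
    delta[0] = np.log(0.5) + log_b[0]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_a   # rows: previous state, cols: current state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_b[t]
    # Backtrack the optimal state sequence
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return [STATES[s] for s in path]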
    • Table 1. Convolutional neural network structure of the text-line-based OCR

      Layer name                                | Output size    | Convolution kernel
      conv1-1, conv1-2, conv1-3, pool1          | 24×[(W-3)/2+1] | 32@3×3 (pad=1); 64@1×1; 64@3×3 (pad=1); 3×3 max pool (stride=2)
      conv2-1, conv2-2, conv2-3, conv2-4, pool2 | 6×[(W-3)/2-3]  | 64@1×1; 128@3×3 (stride_h=2); 64@1×1; 128@3×3 (pad=1); 3×3 max pool (stride_h=2)
      conv3-1, conv3-2, fc                      | 1×[(W-3)/2-3]  | 256@3×1 (pad=1); 128@3×1; S@1×1
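As a reading aid for Table 1, the following is a minimal PyTorch sketch of a layer stack matching the listed kernels. The input size, activation functions, the interpretation of the height-only strides (stride_h), and the class count S are assumptions; they are not fully specified by the table.

# Minimal sketch of the Table 1 layer stack. Input size, padding/stride
# conventions, activations, and S are illustrative assumptions.
import torch
import torch.nn as nn

S = 5000  # assumed number of output classes (e.g. character set size)

text_line_cnn = nn.Sequential(
    # conv1 block: 32@3x3 (pad=1), 64@1x1, 64@3x3 (pad=1), 3x3 max pool, stride 2
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, kernel_size=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    # conv2 block: 64@1x1, 128@3x3 (stride on height only), 64@1x1, 128@3x3 (pad=1),
    # 3x3 max pool with stride on height only
    nn.Conv2d(64, 64, kernel_size=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 128, kernel_size=3, stride=(2, 1)), nn.ReLU(inplace=True),
    nn.Conv2d(128, 64, kernel_size=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=(2, 1)),
    # conv3 block: 3x1 convolutions that collapse the height, then a 1x1 "fc"
    nn.Conv2d(128, 256, kernel_size=(3, 1), padding=(1, 0)), nn.ReLU(inplace=True),
    nn.Conv2d(256, 128, kernel_size=(3, 1)), nn.ReLU(inplace=True),
    nn.Conv2d(128, S, kernel_size=1),
)

# Example: a grayscale text-line image of assumed height 48 and width 320
x = torch.randn(1, 1, 48, 320)
logits = text_line_cnn(x)  # shape (1, S, H', W'): per-column class scores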
    • Table 2. Experimental test results based on frame features and the Viterbi decoder (unit: %)

      Method      | Handwritten accuracy | Printed accuracy
      HOG+SVM     | 67.24                | 61.55
      GMM+Viterbi | 72.90                | 88.65
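For the HOG+SVM baseline in Table 2, one plausible pipeline is HOG descriptors of each vertical frame fed to a binary SVM, classified frame by frame with no sequence smoothing. The sketch below uses scikit-image and scikit-learn; the window size, HOG parameters, and SVM kernel are illustrative assumptions, not the paper's exact baseline settings.

# Minimal sketch of a HOG + SVM per-frame baseline. The window size, HOG
# parameters, and SVM kernel are illustrative assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def frame_hog(frame_img):
    """HOG descriptor for one vertical frame (e.g. an assumed 48x16 grayscale window)."""
    return hog(frame_img,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_frame_svm(frames, labels):
    """frames: list of 2-D patches; labels: 0 = handwritten, 1 = printed (assumed coding)."""
    X = np.array([frame_hog(f) for f in frames])
    clf = SVC(kernel="rbf", C=1.0)
    return clf.fit(X, np.asarray(labels))

def classify_line(frames, clf):
    """Independently label each frame of a text line (no Viterbi smoothing)."""
    X = np.array([frame_hog(f) for f in frames])
    return clf.predict(X)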
    • Table 3. Experimental results based on frame features and Viterbi decoding followed by post-processing

      Method                      | Handwritten accuracy /% | Printed accuracy /% | Speed /(frame/s)
      GMM+Viterbi                 | 72.90                   | 88.65               | 502
      GMM+Viterbi+post-processing | 78.04                   | 89.12               | 496
      BiLSTM                      | 79.28                   | 89.91               | 139
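Table 3 also lists a BiLSTM baseline, which labels the frame sequence with a recurrent network instead of GMM-HMM decoding. A minimal PyTorch sketch of such a per-frame bidirectional LSTM classifier is given below; the feature dimension, hidden size, and layer count are assumptions, not the paper's configuration.

# Minimal sketch of a per-frame BiLSTM classifier (handwritten vs printed).
# Feature dimension, hidden size, and layer count are illustrative assumptions.
import torch
import torch.nn as nn

class FrameBiLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, num_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=1,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames):      # frames: (batch, n_frames, feat_dim)
        out, _ = self.rnn(frames)   # (batch, n_frames, 2*hidden)
        return self.head(out)       # per-frame class logits

# Example: one text line with 200 frames of 64-dimensional features (assumed)
model = FrameBiLSTM()
logits = model(torch.randn(1, 200, 64))
labels = logits.argmax(dim=-1)      # 0 = handwritten, 1 = printed (assumed coding)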
    • Table 4. Character recognition accuracy of different discrimination methods for handwritten and printed texts (unit: %)

      Method                      | Handwritten (Sentence) | Handwritten (Word) | Printed (Sentence) | Printed (Word)
      Manual segmentation         | 64.92                  | 73.01              | 84.67              | 92.10
      GMM+Viterbi+post-processing | 61.02                  | 69.18              | 82.31              | 90.56
      HOG+SVM                     | 57.85                  | 66.43              | 79.62              | 87.95
    • Table 5. Classification accuracy of handwritten and printed texts after post-processing in each scene (unit: %)

      Scene           | HOG+SVM (Handwritten) | HOG+SVM (Printed) | GMM+Viterbi+post-processing (Handwritten) | GMM+Viterbi+post-processing (Printed)
      Signed document | 67.24                 | 61.55             | 78.04                                     | 89.12
      Natural scene   | 63.81                 | 57.49             | 76.32                                     | 86.71
      Table           | 65.29                 | 57.43             | 72.66                                     | 86.36
      Noisy document  | 60.31                 | 55.23             | 71.48                                     | 82.23
    • Table 6. Character recognition accuracy of handwritten and printed texts in different scenes (unit: %)

      HOG+SVM:
      Scene           | Handwritten (Sentence) | Handwritten (Word) | Printed (Sentence) | Printed (Word)
      Signed document | 57.85                  | 66.43              | 79.62              | 87.95
      Natural scene   | 53.05                  | 60.92              | 72.29              | 78.72
      Table           | 54.61                  | 61.98              | 73.89              | 78.73
      Noisy document  | 45.35                  | 54.87              | 66.40              | 72.56

      GMM+Viterbi+post-processing:
      Scene           | Handwritten (Sentence) | Handwritten (Word) | Printed (Sentence) | Printed (Word)
      Signed document | 61.02                  | 69.18              | 82.31              | 90.56
      Natural scene   | 55.59                  | 64.96              | 78.44              | 82.86
      Table           | 55.16                  | 65.01              | 78.60              | 85.21
      Noisy document  | 48.21                  | 56.52              | 68.73              | 76.67

    Citation
    Qin Lin, Junfeng Xia, Zhengzheng Tu, Yutang Guo. Discrimination of Handwritten and Printed Texts Based on Frame Features and Viterbi Decoder[J]. Laser & Optoelectronics Progress, 2019, 56(6): 061003

    Paper Information

    Category: Image Processing

    Received: Aug. 21, 2018

    Accepted: Oct. 10, 2018

    Published Online: Jul. 30, 2019

    Author Email: Qin Lin (linqin@hfnu.edu.cn)

    DOI: 10.3788/LOP56.061003