Advanced Photonics, Volume 1, Issue 2, 025001 (2019)

Fringe pattern analysis using deep learning

Shijie Feng1,2,3, Qian Chen1,2,*, Guohua Gu1,2, Tianyang Tao1,2, Liang Zhang1,2,3, Yan Hu1,2,3, Wei Yin1,2,3, and Chao Zuo1,2,3,*
Author Affiliations
  • 1Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Nanjing, China
  • 2Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, China
  • 3Nanjing University of Science and Technology, Smart Computational Imaging Laboratory (SCILab), Nanjing, China
    Figures & Tables (8)
    Fig. 1. Flowchart of the proposed method, where two convolutional networks (CNN1 and CNN2) and the arctangent function are used together to determine the phase distribution. For CNN1 (in red), the input is the fringe image I(x,y), and the output is the estimated background image A(x,y). For CNN2 (in green), the inputs are the fringe image I(x,y) and the background image A(x,y) predicted by CNN1, and the outputs are the numerator M(x,y) and the denominator D(x,y). The numerator and denominator are then fed into the arctangent function to calculate the phase ϕ(x,y).
    Fig. 2. Schematic of CNN1, which is composed of convolutional layers and several residual blocks.
    Fig. 3. Schematic of CNN2, which is more sophisticated than CNN1 and further includes two pooling layers, an upsampling layer, a concatenation block, and a linearly activated convolutional layer.
    Fig. 4. Testing using the trained networks on a scene that is not present in the training phase. (a) Input fringe image I(x,y), (b) background image A(x,y) predicted by CNN1, (c) and (d) numerator M(x,y) and denominator D(x,y) estimated by CNN2, (e) phase ϕ(x,y) calculated with (c) and (d).
    Fig. 5. Comparison of the phase error of different methods: (a) FT, (b) WFT, (c) our method, and (d) magnified views of the phase error for two selected complex regions.
    Fig. 6. Comparison of the 3-D reconstruction results for different methods: (a) FT, (b) WFT, (c) our method, and (d) ground truth obtained by the 12-step PS profilometry.
    Fig. 7. Quantitative analysis of the reconstruction accuracy of the proposed method. (a) Measured objects: a pair of standard spheres and (b) 3-D reconstruction result showing the measurement accuracy.
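The final step of the pipeline in Fig. 1 can be sketched in a few lines: once CNN2 has produced the numerator M(x,y) and denominator D(x,y), the wrapped phase is obtained with the arctangent. The sketch below uses NumPy's quadrant-aware `arctan2`; the arrays are synthetic stand-ins for the network outputs, not the paper's data or models.

```python
import numpy as np

def phase_from_numerator_denominator(M, D):
    """Wrapped phase phi(x,y) = arctan(M/D), computed with arctan2
    so the result falls in (-pi, pi] and quadrants are resolved."""
    return np.arctan2(M, D)

# Synthetic example: build M and D from a known phase and recover it.
x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.tile(np.sin(x), (256, 1))   # smooth test phase in [-1, 1]
M = 0.5 * np.sin(phi_true)                # plays the role of M(x,y)
D = 0.5 * np.cos(phi_true)                # plays the role of D(x,y)
phi = phase_from_numerator_denominator(M, D)
assert np.allclose(phi, phi_true, atol=1e-6)
```

Using `arctan2` rather than `arctan(M / D)` avoids division-by-zero where D(x,y) vanishes and recovers the correct quadrant of the phase.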
    • Table 1. Phase error of FT, WFT, and our method.

      Method       FT      WFT     Our
      MAE (rad)    0.20    0.19    0.087

    Shijie Feng, Qian Chen, Guohua Gu, Tianyang Tao, Liang Zhang, Yan Hu, Wei Yin, Chao Zuo, "Fringe pattern analysis using deep learning," Adv. Photon. 1, 025001 (2019)

    Paper Information

    Category: Letters

    Received: Aug. 22, 2018

    Accepted: Jan. 8, 2019

    Posted: Mar. 25, 2019

    Published Online: Mar. 14, 2019

    Author emails: Qian Chen (chenqian@njust.edu.cn), Chao Zuo (zuochao@njust.edu.cn)

    DOI: 10.1117/1.AP.1.2.025001
