Advanced Photonics Nexus, Volume 3, Issue 5, 056015 (2024)

Redefinable neural network for structured light array

Hengyang Li1,†, Jiaming Xu1, Huaizhi Zhang1, Cong Hu1, Zining Wan2, Yu Xiao1, Xiahui Tang1, Chenhao Wan1, Gang Xu1,*, and Yingxiong Qin1,*
Author Affiliations
  • 1Huazhong University of Science and Technology, School of Optical and Electronic Information and Wuhan National Laboratory for Optoelectronics, National Engineering Research Center for Laser Processing, Optics Valley Laboratory, Wuhan, China
  • 2Communication University of China, Neuroscience and Intelligent Media Institute, Beijing, China
    Figures & Tables (6)
    The concept of RediNet. (a) Three kinds of target structured light arrays: a 3D focus array, an Airy beam array, and a perfect vortex array. (b) Schematic of a conventional neural network with pixel-wise input and output. Three independent neural networks serve the three kinds of target distributions; their input data structures and training data all differ. (c) Schematic of RediNet. Through parameter unifying, multiple structured lights can be defined in a 3D parameter space, which carries the abstract configuration of the target distribution. The output of the network is the 3D primitive function. With CPF mapping, the 3D primitive function can be transformed into 2D CGHs for different purposes. (d) Corresponding CGHs for the target distributions in (b).
    The architecture of RediNet and the CGH-generating workflow. (a) Pre-processing. The table contains examples of CPFs α(x,y) for four different structured light species. The two rows of the table correspond to the two parameters of the structured light properties, and also to the coordinates in parameter space. The CPF species can be further extended, and the value ranges of the parameters can be expanded. (b) The neural network in RediNet. A parameter space P in the left box is the input, carrying the configuration of the target structured light array. The network possesses an architecture with an encoder, a decoder, and skip connections. The trained network’s output is the primitive function S, displayed in the right box. (c) Post-processing by mapping the 3D primitive function to a 2D phase CGH. To determine the phase value of one pixel at (x0,y0), the first step is positioning: putting (x0,y0) into every CPF and obtaining αi(x0,y0). The second step is evaluation: finding the value of the primitive function at the coordinates αi(x0,y0). All pixels on a CGH share the same mapping procedure.
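    The positioning-then-evaluation mapping described in panel (c) can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' code: the CPF forms, the grid conventions, and the nearest-neighbour lookup into the primitive function S are all hypothetical choices standing in for details the caption does not specify.

```python
import numpy as np

def cpf_map(S, cpfs, shape):
    """Map a 3D primitive function S onto a 2D phase CGH.

    S     : 3D array sampled on a [0, 1)^3 grid (the network output).
    cpfs  : three callables alpha_i(x, y) returning values mapped into [0, 1).
    shape : (H, W) resolution of the CGH; independent of S's own grid.
    """
    H, W = shape
    y, x = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                       indexing="ij")
    # Step 1: positioning -- evaluate every CPF at each pixel (x, y).
    coords = [np.mod(a(x, y), 1.0) for a in cpfs]
    # Step 2: evaluation -- nearest-neighbour lookup into S at those coordinates.
    idx = [np.minimum((c * n).astype(int), n - 1)
           for c, n in zip(coords, S.shape)]
    phase = S[idx[0], idx[1], idx[2]]
    return np.mod(phase, 2 * np.pi)

# Hypothetical CPFs: two linear phases (focus-array-like) and an azimuthal term.
cpfs = [lambda x, y: 0.5 * (x + 1),
        lambda x, y: 0.5 * (y + 1),
        lambda x, y: np.arctan2(y, x) / (2 * np.pi)]
S = np.random.default_rng(0).uniform(0, 2 * np.pi, (32, 32, 32))
cgh = cpf_map(S, cpfs, (480, 480))
print(cgh.shape)  # (480, 480)
```

    Because every pixel is mapped independently and S is sampled wherever the CPFs land, the same primitive function can be rendered at any CGH resolution, consistent with the resolution flexibility reported in the numerical evaluation.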
    Customizing 2D and 3D focus arrays, LG beam arrays, and Bessel and Airy beam arrays with RediNet. (a) Four foci in a square pattern are shown, with the phase CGH generated by RediNet; a four-layer focus array is also generated and captured. The target parameter space is shown, whose distribution resembles the intensity images. The defocusing distances of the pictures are labeled. (b) Two LG beam arrays are generated. The top one includes the LG00, LG01, LG10, and LG11 modes. Detailed intensity distributions on the left are individually enlarged and normalized. Separate phase CGHs for each mode and the final CGH are illustrated. Likewise, the bottom one shows the results and CGHs for the LG22, LG23, LG32, and LG33 modes. (c) Two kinds of nondiffracting beam arrays are generated: Bessel beam arrays and Airy beam arrays. The intensity distributions of a transverse plane and of the 3D volume are given, respectively.
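    The caption above mentions separate per-mode phase CGHs combined into a final CGH. One common way to multiplex phase holograms (an assumption here, not necessarily the authors' method) is complex-amplitude superposition, with a linear grating carrier steering each mode to its position in the array:

```python
import numpy as np

def combine_cghs(phases, carriers):
    """Multiplex per-mode phase CGHs into one phase-only hologram.

    phases, carriers : lists of 2D phase maps (radians). Each mode's phase
    is added to its grating carrier, the complex fields are summed, and
    only the argument of the sum is kept (phase-only encoding).
    """
    field = sum(np.exp(1j * (p + c)) for p, c in zip(phases, carriers))
    return np.angle(field) % (2 * np.pi)

N = 256
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
theta = np.arctan2(y, x)
# Two vortex-like phases with topological charges 1 and 2, steered to
# opposite sides of the focal plane by oppositely tilted gratings.
phases = [1 * theta, 2 * theta]
carriers = [2 * np.pi * 10 * x, -2 * np.pi * 10 * x]
final_cgh = combine_cghs(phases, carriers)
print(final_cgh.shape)  # (256, 256)
```

    Discarding the amplitude of the summed field costs some diffraction efficiency, which is one reason per-mode CGHs and the combined CGH are usually compared side by side, as in the figure.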
    Customizing ring-focus arrays, vortex and perfect vortex beam arrays, helico-conical beam arrays, and snowflake arrays with RediNet. (a)–(e) The distributions of intensity and of the product of intensity and phase. The TCs l and the normalized ring radii r (minimum 1) are labeled in the figures. (f) The generation and result of the snowflake intensity pattern. Arbitrarily built CPFs and the final CGH are illustrated and are used to generate four snowflakes at different positions on the focal plane.
    Conceptual diagram and results of multichannel compound vortex beam array generation with RediNet. (a) Conceptual diagram of sculpting a fundamental-mode beam into multichannel compound vortex beams, each carrying multiple OAMs. (b) The captured intensity distribution of multichannel compound vortex arrays on the focal plane, which resembles an interference result but is generated from a single modulated beam rather than physically separate signal and reference beams. (c) An example of a compound vortex beam with TCs of +3 and −4. The detailed distributions of intensity and of the product of intensity and phase in simulation and experiment are shown for comparison.
    Numerical evaluation of RediNet performance. (a) Flexibility of dimension designation in parameter space. Compared with the first column, the values in parameter space are permuted, and the simulation result differs in the second column. In the third column, the values in the parameter space and the dimension designations are both permuted, yet the simulation result is identical to that in the first column. (b) Flexibility of CGH resolution in the mapping procedure. CPFs of x, y, and θ at three different resolutions are involved in the mapping, producing three CGHs with resolutions of 480², 960², and 3840². The time consumed in each step is labeled. Partially enlarged patterns are shown. (c) Computation-time comparison of five algorithms: CGH by non-convex optimization (NOVOCGH), 3DIFTA, SACAD, DeepCGH, and RediNet. Tasks of generating CGHs at resolutions of 512² and 1024² are included. (d) Diffraction efficiency and RMSE of RediNet for different numbers of beams in an array. In the blue region, the numbers of beams are outside the training data set. (e) Correlation analysis of RediNet based on the target parameter space and the Fourier series coefficients expanded from the RediNet output primitive function. Data points for 10, 20, and 30 beams in an array are shown in different colors.
    Hengyang Li, Jiaming Xu, Huaizhi Zhang, Cong Hu, Zining Wan, Yu Xiao, Xiahui Tang, Chenhao Wan, Gang Xu, Yingxiong Qin, "Redefinable neural network for structured light array," Adv. Photon. Nexus 3, 056015 (2024)

    Paper Information

    Category: Research Articles

    Received: May 27, 2024

    Accepted: Aug. 1, 2024

    Published Online: Sep. 18, 2024

    The Author Email: Gang Xu (gang_xu@hust.edu.cn), Yingxiong Qin (qyx@hust.edu.cn)

    DOI:10.1117/1.APN.3.5.056015

    CSTR:32397.14.1.APN.3.5.056015
