Chinese Optics Letters, Volume 23, Issue 9, 093501 (2025)

Machine-learning-assisted precision measurement of a tiny rotational angle based on interference vortex modes

Jingwen Zhou1, Yaling Yin1,*, Jihong Tang1, Qi Chu1, Lin Li1, Yong Xia1,2,3,**, Quanli Gu4, and Jianping Yin1
Author Affiliations
  • 1State Key Laboratory of Precision Spectroscopy, School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
  • 2Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan 030006, China
  • 3NYU-ECNU Institute of Physics at NYU Shanghai, Shanghai 200062, China
  • 4Petroleum Analyzer Company, LP, Houston 77064, USA

    In contrast to traditional physical measurement methods, machine-learning-based precision measurement is a “data-driven” approach that constitutes a new field of research. We report a machine-learning-based precision measurement of a rotational angle from a vortex-mode shear interferometer, in which the two-dimensional optical images recorded at different angles contain interference patterns inherently encoded by the light’s orbital angular momentum states. Through our evaluation of different convolutional neural networks, we have determined that the ResNeXt50 model excels in detecting minute angle changes across resolutions of 0.05°, 0.1°, 0.5°, 4°, and 10°. This model for the vortex beams achieves over 99.9% accuracy for resolutions of 0.1°, 0.5°, 4°, and 10°, and over 97.0% accuracy for the highest 0.05° resolution. The new results in experiments and modeling demonstrate a robust, accurate, and scalable approach to high-precision rotational angle measurement.


    1. Introduction

    Since the invention of the Michelson interferometer, instruments based on the diffraction and interference of light waves have achieved remarkable measurement accuracy and sensitivity[1]. Typical optical interferometers include the Young’s double-slit interferometer to measure the spatial coherence of the light field[2,3], the Michelson interferometer to measure the temporal coherence of the light field[4], the Mach-Zehnder interferometer to measure the refractive index and density distribution of gaseous media[5], the Fabry-Perot interferometer for high-precision measurements in optics and spectroscopy[6], and the laser interferometric gyroscope to measure the Sagnac phase shift and small rotations, and even for navigation[7,8]. A more recent development is the shear interferometer, characterized by a simple construction using a wedged glass plate and requiring no reference beam. This interferometer is effective in determining the sign and magnitude of the topological charge[9].

    On the other hand, several well-established methods have been used for precision angle measurement. For instance, in nonlinear optics, the angle information of a sample surface can be determined by analyzing the frequency components of the second harmonic signal using an optical frequency comb[10]. In quantum physics, weak signal measurement techniques enable angle determination[11,12]. However, their wider applications are limited by the complexity of their experimental technologies and analysis methods. Furthermore, optical encoders are widely used in industry to accurately track rotation angles in dynamic systems by converting rotary motion into digital signals[13]. Optical interferometry, in particular, is useful owing to its exceptional accuracy in angle measurement through the analysis of light wave interference[14–16].

    Separately, and equally important, machine learning (ML) has become increasingly significant in almost all fields thanks to its exceptional learning and predicting capabilities. With the rapid advancement of ML and growing interdisciplinary applications, a new field — ML-assisted precision measurement — has emerged[17]. This field benefits from its ability to automatically extract features, reduce noise, and enhance sensitivity. Moreover, even though the nature of the detected object varies across areas, the ML models are flexible and consistently meet the demand for precise measurements. Common digital data analysis can effectively use artificial neural networks (ANNs)[18], support vector machines (SVMs)[19], decision trees[20], etc. Meanwhile, image data with specific features, such as stripe or speckle patterns, are well-suited for study by the convolutional neural networks (CNNs)[21,22]. ML applications range from microscopic quantum metrology to macroscopic materials science. In quantum metrology, ML optimizes the generation and detection of entangled states[23], while ML-based methods excel in predicting thermal conductivity in complex semiconductor materials[24]. Also, the advanced image processing and feature extraction capabilities of ML surpass traditional image processing algorithms in tasks such as denoising and enhancement[25], phase retrieval[26], and phase unwrapping[27]. In optical metrology, ML models can be trained to detect specific features such as edges, contours, or shapes. Previous studies have demonstrated the successful identification of super-high-resolution orbital angular momentum using ML, where an almost invisible difference in neighboring fractional orbital angular momentum (OAM) intensity distributions is accurately identified by detecting subtle variations in the light field[28–30]. These examples clearly highlight the exceptional potential of ML in advancing precision measurement techniques.

    To further explore the potential of ML-assisted precision measurement, herein, we report a novel approach utilizing ML for the precise measurement of rotational angles with a shear interferometer on optical vortex modes. Vortex-beam laser interferometry offers exceptional sensitivity to rotation by detecting nearly invisible changes in the light intensity distribution, making it ideal for precision tasks. Furthermore, as the rotational angle varies, the interference patterns generated by the shearing interferometer exhibit distinct and discernible characteristics that are amenable to analysis by an ML model. Our model demonstrates superior accuracy, exceeding 99.9% at angular resolutions of 0.1°, 0.5°, 4°, and 10°. Notably, at the highest resolution of 0.05°, the accuracy reaches 97.08% at a topological charge l=1. The synergistic integration of artificial intelligence with simple measuring devices and shearing interferometry introduces new possibilities for precision measurement. Our study underscores the potential to surpass the limitations of traditional measurement techniques, heralding a new era in precision measurement technology.

    2. Method

    2.1. Experimental setup

    The optical interferometry is based on a shear interferometer formed by a wedged optical flat with a wedge angle δ. When an incident optical beam strikes at an angle of incidence α, two reflections are produced. The shear interferometer is mounted on a mechanical turntable that enables computer-controlled adjustment of the rotational angle β in the direction shown in Fig. 1(a). The first reflection occurs at the front surface of the glass, as shown in Fig. 1(c), while the second occurs after refraction and reflection from the internal wedged surface, forming a tilt angle θ with the horizontal, as shown in Fig. 1(b). The experimental setup is shown in Fig. 1(d). In brief, a stabilized He-Ne laser at 632.991 nm serves as the light source producing the Gaussian beam. Two confocal lenses, L1 and L2, collimate the laser beam and set the wavefront curvature to infinity. A spatial light modulator (SLM), a phase-only reflective liquid crystal device (1920 pixel × 1080 pixel, 8.0 µm pixel pitch, 8-bit phase level), is pre-loaded with an OAM hologram of l = 1 or l = 1.5. After modulation by the SLM, the beam hits the wedged optical flat, which is set at various β values. The resulting interference light distributions are acquired by the CCD and transferred to a computer, where they are processed by a deep learning CNN model (GPU: NVIDIA RTX-3070; CPU: Intel i7-10700).
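    The phase pattern pre-loaded on such an SLM is typically a helical phase of charge l combined with a blazed carrier grating. The sketch below is a minimal illustration assuming the SLM geometry above (1920 × 1080 pixels, 8.0 µm pitch); the grating period and the helper name `oam_hologram` are our own assumptions, not taken from this work.

```python
import numpy as np

def oam_hologram(l, n_x=1920, n_y=1080, pitch_um=8.0, grating_period_um=80.0):
    """Phase pattern in [0, 2*pi) combining a helical phase of charge l
    with a linear blazed grating, as commonly loaded on a phase-only SLM.
    The grating period is an illustrative assumption."""
    x = (np.arange(n_x) - n_x / 2) * pitch_um
    y = (np.arange(n_y) - n_y / 2) * pitch_um
    X, Y = np.meshgrid(x, y)
    phi = np.arctan2(Y, X)                       # transverse azimuthal angle
    grating = 2 * np.pi * X / grating_period_um  # blazed carrier grating
    return np.mod(l * phi + grating, 2 * np.pi)

holo = oam_hologram(l=1)
print(holo.shape)
```

Wrapping with `np.mod` keeps the pattern within one 2π phase range, matching an 8-bit phase-level device after quantization.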

    Figure 1.Schematic diagrams: (a) illustration of the rotation angle β; the rotation axis is parallel to the surface of the optical platform; (b) side view showing the wedge angle δ to the internal reflection with a tilt angle θ; (c) top view showing the angle of incidence α. (d) Experimental setup for rotational angle measurement. ISO, optical isolator; ND, neutral density filter; M, mirror; L1, L2, lenses; HWP, half-wave plate; BS, beam splitter; SLM, spatial light modulator; SI, rotatable shear interferometer (a wedged optical flat on an electric rotating machinery); CCD, charge-coupled device.

    2.2. Theory of shear interferometer

    Considering the interference of an optical beam with a topological charge l, we use a modulated Gaussian beam carrying a helical wavefront to generate the optical vortex. The light field distribution of the OAM beam can be expressed as E(x, y, z=0) = A·exp[−(x² + y²)/ω₀²]·exp(ilφ), where A is the complex amplitude, ω₀ is the waist of the incident Gaussian beam, and φ = arctan(y/x) is the transverse azimuthal angle of the helical wavefront.
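    As a minimal numerical check of this expression, the sketch below samples E(x, y, 0) around a circle and verifies that the phase winds by 2πl per turn; the amplitude A and waist ω₀ are illustrative values, not taken from the experiment.

```python
import numpy as np

# Sample E(x, y, 0) = A exp(-(x^2 + y^2)/w0^2) exp(i l phi) on a circle
# of radius 0.5 mm and measure the accumulated phase over one turn.
A, w0, l = 1.0, 1.0e-3, 1                      # illustrative values
t = np.linspace(0.0, 2.0 * np.pi, 401)         # closed loop around the axis
x, y = 0.5e-3 * np.cos(t), 0.5e-3 * np.sin(t)
E = A * np.exp(-(x**2 + y**2) / w0**2) * np.exp(1j * l * np.arctan2(y, x))

# Unwrapped phase difference over the loop, in units of 2*pi:
winding = (np.unwrap(np.angle(E))[-1] - np.angle(E)[0]) / (2.0 * np.pi)
print(winding)
```

The winding number equals the topological charge l, which is the defining property of the helical wavefront.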

    After the vortex light arrives at the shear interferometer, and accounting for the topological inversion due to reflection, the reflected light on the front and back surfaces at β = 0 can be written as E₁(x₁, y₁, z₁) = A·exp[−(x₁² + y₁²)/ω₁²]·exp(ikz₁ − ilφ₁ − iΦ₁) and E₂(x₂, y₂, z₂) = A·exp[−(x₂² + y₂²)/ω₂²]·exp(ikz₂ − ilφ₂ − iΦ₂), where ω₁ and ω₂ are the waists of the light reflected from the front and back surfaces, Φᵢ = k(xᵢ² + yᵢ²)/[2R(zᵢ)], and R(zᵢ) is the curvature of the wavefront. The horizontal displacement s between the two beams is related to the average plate thickness t, the angle of incidence α, and the index of refraction n by[9] s = t·sin(2α)/√(n² − sin²α), and the tilt angle θ is given by[9] θ = 2δ·√(n² − sin²α).

    The flat used in the experiment is made of UV fused silica, with n = 1.457 at 632.8 nm, δ = 75, and t = 1.4 mm; the incident angle α is about 40°. The relative phase of the two beams can be approximated as Δϕ = kθy − lφ₂ + lφ₁ + ϕ₀, with x₁ = x₂ + s, y₁ = y₂, and z₂ = z₁ − θy + D, where D is the optical path difference and is absorbed into ϕ₀. When the shearing interferometer rotates to β, the light reflected from the front surface [Fig. 1(c)] does not change, but the light reflected from the back surface [Fig. 1(b)] is affected. As the rotation occurs, the optical path difference changes, leading to variations in the interference pattern; there is therefore a one-to-one correspondence between the angle and the interference fringe pattern, with Δϕ = kθ(y cos β + x sin β) − lφ₂ + lφ₁ + ϕ₀. The interference field distribution is then obtained as E_in = E₁ + E₂.
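    A quick numeric sketch of the two shear-plate relations, using the n, t, and α values above. Since the unit of δ is not stated here, the value of 75 µrad below is purely an assumption for illustration.

```python
import numpy as np

# Evaluate s = t sin(2a) / sqrt(n^2 - sin^2 a) and
# theta = 2 * delta * sqrt(n^2 - sin^2 a) for the quoted parameters.
n = 1.457                  # UV fused silica at 632.8 nm
t = 1.4e-3                 # average plate thickness, m
a = np.deg2rad(40.0)       # angle of incidence
delta = 75e-6              # wedge angle in rad (unit is an assumption)

root = np.sqrt(n**2 - np.sin(a)**2)
s = t * np.sin(2 * a) / root   # lateral shear between the two reflections
theta = 2 * delta * root       # tilt between the two reflected beams
print(f"s = {s*1e3:.3f} mm, theta = {theta*1e6:.1f} urad")
```

With these numbers the shear is on the order of a millimeter, consistent with the two reflections overlapping on the CCD, while the tilt θ sets the fringe spacing.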

    For a more accurate simulation of the experimental results, the diffraction of the superposed light field is also considered. The specific results for integer topological charge (l=1) and non-integer topological charge (l=1.5) are presented in Figs. 2(a)–2(d), respectively. The non-integer OAM beams can be described as a Fourier-series superposition of integer OAM modes[30,31]. Figures 2(a) and 2(c) show the experimental interference light field distributions. Figures 2(b) and 2(d) illustrate the theoretical interference light distributions. In short, the experimental diagrams agree well with the theoretical diagrams.

    Figure 2.Interferometric distributions of OAM with different rotation angles (β) of the wedge optical flat for integer topological charge l = 1 are shown in light field distribution from experiment (a) and theory (b), and for fractional topological charge l = 1.5 are shown in light field distribution from experiment (c) and theory (d).

    As depicted in Fig. 2, the interferometric light field map exhibits a distinctive forked pattern at the center, accompanied by parallel bright stripes at the periphery. With changes in the rotation angle β, both the forked pattern and the parallel bright stripes undergo rotation. Concurrently, in Figs. 2(a) and 2(b), the inner forked pattern continuously transforms as it rotates, thereby acquiring a unique characteristic. For the sake of clarity, the bifurcation section of the forked pattern is designated as the “tip”, while the opposite end is termed the “tail”. When β=0°, the fork-shaped patterns are distributed horizontally with opposite opening directions and connected tails, forming a double C-like pattern. For β=45°, 90°, and 135°, the tip directions of the two fork-shaped patterns are opposite, and the patterns are distributed in parallel with no connection. Specifically, at 45°, one side of the left fork pattern is broken; at 90°, both fork patterns are broken; and at 135°, the right fork pattern is broken. Finally, at β=180°, the fork-shaped patterns are horizontally distributed with opposite opening directions, and the tips are connected to form an oval pattern. Interestingly, in terms of the patterns, the theoretical diagram in Fig. 2(b) has good agreement with the experimental diagram in Fig. 2(a), reflecting the essential physics of the experimental images.

    In Figs. 2(c) and 2(d), at a non-integer value of l (1.5), a horizontal gap appears in the light field, causing a partial destruction of the fork pattern. As β changes, the central forked pattern and the parallel bright stripes still rotate, but the transverse gap remains unchanged. At β=0°, it is still apparent that the two fork patterns are connected at the tail, while at β=180° they are connected at the tip. However, in the light intensity diagrams at β=45°, 90°, and 135°, the transverse gap significantly disrupts the original fork structure as well as some of the horizontal bright stripes outside it. Consequently, the distribution of the interference light field of fractional OAM is more complicated, potentially enabling more possibilities for identification. Again, theory agrees with experiment in general.

    By precisely controlling the rotational angle β with the step motor, we can observe the characteristics of the interference light field at a fine angle resolution, as shown in Fig. 3. The resolution of the rotational angle is 0.05° in the first and second rows and 0.5° in the third and fourth rows. With Δβ=0.05°, the rotation is hardly visible in the interference light field, and the change in the fork pattern for l=1 is essentially invisible. For l=1.5, the change appears mainly in the handle of the left fork pattern. Of course, as Δβ increases, the change between two adjacent light fields becomes greater. For l=1, as in Fig. 3, at β=90°, the left handles of the two forks are disconnected. Between β=90° and β=91.5°, the connection gradually restores, until at β=92° the handles of the two-fork pattern are completely connected. From β=92° to β=93°, the right handles of the two forks gradually disconnect. At l=1.5, as β increases from 90° to 90.5°, the tail of the left damaged fork gradually extends to the right side. Then, as β continues to increase from 90° to 92.5°, the broken fork handle on the left side of the damaged fork pattern gradually reconnects, until it is completely connected at β=92.5°.

    Figure 3.Experimental interference distributions with a fine resolution of rotational angle (β) at 0.05° in the first and second rows, and a coarse resolution at 0.5° in the third and fourth rows.

    3. Network Architecture

    CNNs have proven to be a powerful solution for precision feature extraction, harnessing convolutional layers to discern and extract pertinent features from complex input data[32]. These convolutional layers employ filters adept at capturing intricate features, complemented by pooling layers that reduce spatial dimensions and computational complexity. By stacking multiple convolutional and pooling layers, CNNs progressively acquire more intricate and abstract features, enabling state-of-the-art performance in image classification tasks. This hierarchical process allows CNNs to automatically build layered representations of visual data, making them remarkably effective in image recognition. In essence, the combination of convolutional and pooling layers within CNNs provides a robust framework for precision feature extraction from optical field distributions, underscoring their value in achieving superior performance in image analysis tasks.

    In our study, we evaluate the performance of several classical CNN models for the recognition of a high rotational angle resolution (Δβ=0.05°). The models considered are AlexNet, VGG19, ResNet50, and its variant ResNeXt50[33–35]. The recognition accuracies of these models for a rotation angle of 0.05° are presented in Fig. 4. The results show that both ResNet50 and ResNeXt50 have significantly higher accuracies than AlexNet and VGG19, exceeding 92% for both integer and fractional values of l. As a variant of ResNet, ResNeXt further improves efficiency through its multi-branch architecture, which increases model capacity. Considering these factors together, we select the ResNeXt50 network for rotation angle detection. Derived from the original residual network (ResNet), ResNeXt significantly increases the depth and width of the network by implementing a concurrent connection and group convolution mechanism between the input and residual signals—within each convolution block and unit block—to improve classification accuracy.
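    The efficiency of grouped convolution can be seen by counting weights. The sketch below compares the 3 × 3 layer of a ResNeXt Conv2 block (128 → 128 channels, 32 groups) against the same layer as a dense convolution; the helper `conv_params` is our own illustrative function, not part of any framework.

```python
def conv_params(k, c_in, c_out, groups=1):
    """Weight count of a k x k convolution with the given grouping
    (biases ignored). Each group connects c_in/groups inputs to
    c_out/groups outputs."""
    assert c_in % groups == 0 and c_out % groups == 0
    return k * k * (c_in // groups) * (c_out // groups) * groups

# 3x3 stage of a ResNeXt Conv2 block vs. the dense equivalent:
grouped = conv_params(3, 128, 128, groups=32)
dense = conv_params(3, 128, 128)
print(grouped, dense, dense // grouped)
```

Splitting into 32 groups cuts the weight count of this layer by a factor of 32, which is what allows ResNeXt to widen the block (higher capacity) at a parameter budget similar to ResNet50's bottleneck.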

    Figure 4.Accuracies of different neural networks for the rotational angle resolution Δβ = 0.05°.

    As shown in Fig. 5, the input image is randomly cropped and resized to 224 × 224 with varying scale and aspect ratio. The ResNeXt50 architecture comprises, in sequence, Convolution 1 (Conv1), a max-pooling layer, Conv2, Conv3, Conv4, Conv5, an average pooling layer, a fully connected layer, and a softmax function. Except for Conv1, which is a basic convolutional layer, Conv2, Conv3, Conv4, and Conv5 are series of residual blocks with uniform topology, individually outlined in Figs. 5(a)–5(d). For example, in Conv2, “×3” denotes three blocks in a stack, with each block’s topology depicted in Fig. 5(a). The first black box represents the first layer, characterized by a filter size of 1 × 1 and 128 output channels. The subsequent black box represents the second layer, featuring a filter size of 3 × 3 and 128 output channels. “Grouped = 32” indicates grouped convolutions with 32 groups. The final black box indicates a filter size of 1 × 1 and 256 output channels. These blocks, incorporating grouped convolutional layers, yield a broader yet more sparsely connected module compared to the original bottleneck residual block in ResNet50. Throughout the training phase, the Adam optimization method is used to attain optimal performance[36]. As an important evaluation criterion of the training result, we choose the categorical cross-entropy as the loss function, loss = −Σᵢ₌₁ᵐ yᵢ·log ŷᵢ, where m is the output size, ŷᵢ is the predicted output, and yᵢ is the ideal output.
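    The categorical cross-entropy above can be sketched directly in NumPy; the class probabilities below are toy values for illustration, not experimental outputs.

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """loss = -sum_i y_i * log(y_hat_i), for a one-hot target y_true
    and softmax-normalized predictions y_pred. eps guards log(0)."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred))

# Toy example with m = 4 angle categories; the true class is the second:
y_true = np.array([0.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.05, 0.85, 0.05, 0.05])
print(categorical_cross_entropy(y_true, y_pred))
```

For a one-hot target the sum collapses to −log ŷ of the true class, so the loss falls toward zero only as the model assigns that class a probability near 1.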

    Figure 5.Image recognition of rotation angle based on the improved residual convolutional network (ResNeXt50). (a) Building blocks in Conv2; (b) building blocks in Conv3; (c) building blocks in Conv4; (d) building blocks in Conv5. The content in each black box is denoted as the filter size and output channels.

    To visualize the image recognition process of the deep learning network, we plotted a gradient-weighted class activation (Grad-CAM) heat map to show which parts of the image contribute most to the recognition. The results show that the fork pattern caused by OAM plays a more important role than the parallel stripes, which is exactly what we expected. This proves that the model has captured the features of the interference light field distribution well and applied them to the subsequent recognition.
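    The Grad-CAM computation behind such a heat map can be sketched as follows: each feature channel is weighted by the spatial mean of its gradient, the weighted maps are summed, and a ReLU keeps only positively contributing regions. The toy activations and gradients below are illustrative, not values from the trained model.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Gradient-weighted class activation map. activations and
    gradients have shape (K, H, W); each channel's map is weighted
    by the spatial mean of its gradient, summed, and ReLU-ed."""
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    return np.maximum(cam, 0.0)                       # ReLU

# Toy check with two 4x4 channels: a positively weighted uniform map
# and a negatively weighted diagonal map.
acts = np.stack([np.ones((4, 4)), np.eye(4)])
grads = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -1.0)])
cam = grad_cam(acts, grads)
print(cam)
```

Regions where the negatively weighted channel dominates are clipped to zero, so the surviving hot spots mark the image features (here, hypothetically, the fork pattern) that push the prediction toward the chosen class.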

    4. Results and Discussion

    In our investigation, we consider a range of resolutions for rotational angle measurements, spanning from 0.05° to 10° (0.05°, 0.1°, 0.5°, 4°, and 10°). To enhance robustness and generalization capability, the collected images from each task are systematically divided into training, validation, and test sets in a ratio of 6:2:2. Furthermore, to augment the diversity of the sample set, the distance of the receiving CCD is intentionally varied during data acquisition to capture distinct light field distribution maps; the maximum detection distance is 0.2 m. Throughout model training, several pre-processing operations are performed on the images, including rotation, horizontal and vertical cropping, and horizontal and vertical shifting.
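    The 6:2:2 split can be sketched as a shuffled index partition; the image count of 20,000 matches the fine-resolution tasks (20 categories × 1000 images), and the seed is an arbitrary choice for reproducibility.

```python
import random

def split_indices(n, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle image indices and partition them into training,
    validation, and test subsets by the given ratios."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(20000)
print(len(train), len(val), len(test))
```

Shuffling before splitting keeps each subset balanced across angle categories and CCD distances, which is what the robustness argument above relies on.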

    The recognition of lower-resolution rotational angles requires less image data because of the obvious characteristics of the light field intensity distribution. In fact, for recognition at rotation-angle resolutions of 4° and 10° over a wide range of angles from 0° to 180°, model training shows a pronounced reduction in the required data, needing only 20 images per step to achieve desirable results.

    The more challenging aspect is the recognition of rotations with resolutions of 0.5°, 0.1°, and 0.05°. During the experiment, we observed that the interference pattern changed as the rotation angle was adjusted. In these tasks, the angle detection ranges from β=90° to β=99.5° (Δβ=0.5°), β=90° to β=91.9° (Δβ=0.1°), and β=90° to β=90.95° (Δβ=0.05°). Each task consists of 20 categories, each containing 1000 images, for a total of 20,000 images per task model.

    3D plots of the training performance versus angle resolution and epochs are shown in Fig. 6(a) for the training accuracy and in Fig. 6(b) for the training loss. The red sphere denotes l=1, while the blue sphere denotes l=1.5. As shown in Fig. 6(a), the epoch values and training accuracies across a spectrum of rotation angle resolutions are captured, pinpointing the juncture at which the accuracy curve reaches stability. The curves projected onto the yz plane indicate the training accuracy as the model is trained to stability at different angular resolutions. Similarly, the curves projected onto the xy plane indicate the number of epochs required to achieve stability across these resolutions. In Fig. 6(b), the plot focuses on epoch values and training loss as the loss function achieves convergence. The curves projected onto the yz plane correspond to the training loss as the model is trained for stability at different angular resolutions. Also, the curves projected onto the xy plane indicate the epoch number required for convergence at these angular resolutions. Furthermore, it is clear from the xy plane lines in Figs. 6(a) and 6(b) that as the angle resolution increases, the number of epochs required to train the ML model increases, affecting both the accuracy and loss metrics. This suggests that the training at higher resolutions is associated with a slower convergence rate, thus increasing the training time. This phenomenon is likely due to the finer granularity of the angle resolution, which makes the changes in the interference pattern more nuanced, thus making it more difficult to extract features from the images and requiring a longer training period.

    Figure 6.3D plots of training performance on angle resolution and epochs. (a) Accuracy curve; (b) loss curve.

    For detection at a 10° angular resolution, l=1 requires only 15 epochs and l=1.5 only 12 epochs, both yielding an accuracy of over 99.9%. The convergence of the loss function to nearly zero within 15 epochs further underlines the successful training of the model. Similarly, at a 4° resolution, the l=1 and l=1.5 beams require only 14 epochs to achieve a training accuracy close to 100%. The satisfactory convergence of the loss function confirms the absence of overfitting during model training. As can be seen from Fig. 7, the test set also achieves an accuracy of over 99.9%, demonstrating the feasibility of using ML for angle recognition.

    Figure 7.Test accuracies of different angle resolutions Δβ.

    For high-precision angle measurements (Δβ=0.1° and Δβ=0.5°), at both l=1 and l=1.5, the accuracy of the test set is also high, close to that of the training set. Figure 8 shows the confusion matrices for l=1 and l=1.5 at Δβ=0.1°. The horizontal coordinate represents the actual β value, while the vertical coordinate represents the predicted β value for the test set. In Fig. 8(a) (l=1), one image with an angle of 90° is incorrectly identified as 90.1°, resulting in an accuracy of 99.98%. In contrast, all angles are accurately predicted at l=1.5, resulting in a test accuracy of 100%. When the resolution is increased to Δβ=0.05°, obvious discrepancies appear between the measurements for the l=1 and l=1.5 vortex beams. Nevertheless, the test set at l=1 still achieves an accuracy of 97.08%, while the test set at l=1.5 achieves 93.68%. As depicted in Figs. 8(c) and 8(d), the recognition errors are concentrated within the range of 90° to 90.2°, and the misidentified angles are usually adjacent ones, with an error of 0.05°. The results of the test set are consistent with those of the training set, indicating that the models are reliable and can successfully recognize even the higher-resolution angles. From the results at Δβ=0.05° with different l values, however, the incorporation of a fractional vortex beam introduces additional complexity to the interference patterns without substantially enhancing the precision of angle measurement. In fact, the use of integer OAM is more conducive to achieving the desired level of accuracy in such measurements. This preference for integer OAM may stem from its ability to generate more discernible and less complex interference patterns, which are more amenable to accurate analysis and interpretation.
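    The accuracies quoted from the confusion matrices follow the trace-over-total rule, sketched below with a toy 3-class matrix in which one sample of the first class is misread as its neighbor, mirroring the 90° → 90.1° confusion; the counts are illustrative, not the experimental ones.

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Overall accuracy = trace / total for a confusion matrix whose
    rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Toy 3-class matrix: 1000 test images per class, one misclassified.
cm = np.array([[999,    1,    0],
               [  0, 1000,    0],
               [  0,    0, 1000]])
print(accuracy_from_confusion(cm))
```

Because misidentifications land almost exclusively in adjacent angle bins, the off-diagonal mass stays next to the diagonal, and even the 0.05°-resolution models retain high overall accuracy.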

    Figure 8.Confusion matrices of (a) l = 1, Δβ = 0.1°, (b) l = 1.5, Δβ = 0.1°, (c) l = 1, Δβ = 0.05°, and (d) l = 1.5, Δβ = 0.05°.

    5. Conclusion

    This study proposes an approach to ML-assisted precision measurement of rotation angles using optical vortex interference modes. The experimental design, which includes integer and fractional vortex beams and a shear interferometer, has proven to be a reliable means of generating interference patterns that are sensitive to intricate changes upon rotation. The theory also agrees with the experiments. Furthermore, the ResNeXt50 model shows superior performance in dynamic angle detection compared to several classical CNN models, including AlexNet, VGG19, and ResNet50. The study shows that training at higher angular resolutions requires more epochs but can still achieve high accuracy, with test accuracies over 99.9% for resolutions of 0.1° and coarser. Even at 0.05° resolution, the accuracy reaches 97.08% at l=1. In short, this approach provides a new possibility of combining ML with precision measurement, toward advanced measurement tools and sensors for a range of practical applications.

    [21] K. He, X. Zhang, S. Ren et al. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770 (2016).

    [33] C. Szegedy, W. Liu, Y. Q. Jia et al. Going deeper with convolutions. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1 (2015).

    [34] S. Xie, R. B. Girshick, P. Dollár et al. Aggregated residual transformations for deep neural networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5987 (2017).

    [36] D. P. Kingma, J. Ba. Adam: a method for stochastic optimization (2014).


    Jingwen Zhou, Yaling Yin, Jihong Tang, Qi Chu, Lin Li, Yong Xia, Quanli Gu, Jianping Yin, "Machine-learning-assisted precision measurement of a tiny rotational angle based on interference vortex modes," Chin. Opt. Lett. 23, 093501 (2025)


    Category: Optics in Interdisciplinary Research

    Received: Feb. 28, 2025

    Accepted: Jun. 5, 2025

    Published Online: Sep. 2, 2025

    The Author Email: Yaling Yin (ylyin@phy.ecnu.edu.cn), Yong Xia (yxia@phy.ecnu.edu.cn)

    DOI:10.3788/COL202523.093501

    CSTR:32184.14.COL202523.093501
