Zhao Lijie, Zhang Guishuo, Zou Shida, Wang Guogang, Fan Wenyu, Zhang Yuhong, and Huang Mingzhong

Addressing the problem that a single activated sludge microscopic image captured at high magnification covers only a small field of view and provides limited information for characterizing sludge samples, a multi-image stitching approach for activated sludge microscopic images based on the Floyd algorithm is proposed. First, the scale-invariant feature transform (SIFT) algorithm is used to extract feature points from the activated sludge microscopic images, and the distance matrix of multi-image feature matching points is computed using the cosine distance. Next, the Floyd algorithm is used to determine the reference image for multi-image stitching and the optimized stitching path. Finally, the images are stitched along the stitching path onto the reference image by an affine transformation. The experimental findings demonstrate the efficiency of the proposed approach: even with a limited microscopic field of view and multiple unordered images, it can stitch the images successfully.
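The role of the Floyd algorithm above can be illustrated with a minimal sketch: given a pairwise matching-distance matrix between images, the Floyd (Floyd-Warshall) algorithm yields all-pairs shortest paths, from which a reference image and a stitching path for every other image can be read off. The selection rule used here (reference = image with minimum total shortest-path cost) and the toy distance matrix are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def floyd_warshall(dist):
    """All-pairs shortest paths with path reconstruction (next-hop matrix)."""
    n = dist.shape[0]
    d = dist.astype(float).copy()
    nxt = np.tile(np.arange(n), (n, 1))  # nxt[i, j]: next node after i on path to j
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i, k] + d[k, j] < d[i, j]:
                    d[i, j] = d[i, k] + d[k, j]
                    nxt[i, j] = nxt[i, k]
    return d, nxt

def stitching_plan(dist):
    """Pick a reference image (minimum total path cost, an illustrative rule)
    and return, for each image, the chain of pairwise stitches to the reference."""
    d, nxt = floyd_warshall(dist)
    ref = int(np.argmin(d.sum(axis=1)))
    paths = {}
    for i in range(dist.shape[0]):
        path, node = [i], i
        while node != ref:
            node = int(nxt[node, ref])
            path.append(node)
        paths[i] = path
    return ref, paths

# Toy 4-image matching-distance matrix (symmetric, smaller = better overlap).
dist = np.array([[0, 1, 5, 9],
                 [1, 0, 1, 5],
                 [5, 1, 0, 1],
                 [9, 5, 1, 0]], float)
ref, paths = stitching_plan(dist)
```

Image 3 is then stitched to the reference through image 2 rather than directly, because the indirect path has lower accumulated matching distance.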

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2210005 (2022)
DOI:10.3788/LOP202259.2210005
Long Tao, Su Chang, and Wang Jian

Detecting salient key points in images and extracting feature descriptors are important components of computer vision tasks such as visual odometry and simultaneous localization and mapping systems. The main goal of a feature point extraction algorithm is to detect accurate key point positions and extract reliable feature descriptors. Reliable feature descriptors should remain stable under rotation, scaling, illumination changes, viewing angle changes, noise, etc. Because recent deep learning-based feature point extraction algorithms lose image information during downsampling, the reliability of the descriptors and the accuracy of feature matching are reduced. To solve this problem, this study proposes a network structure that detects detail-preserving oriented feature descriptors. The proposed network fuses shallow detail features and deep semantic features to sample the descriptors to a higher resolution. Combined with an attention mechanism, local (corners, lines, textures, etc.), semantic, and global features are used to improve the detection of feature points and the reliability of feature descriptors. Experiments on the Hpatches dataset show that the matching accuracy of the proposed method is 55.5%. Additionally, when the input image resolution is 480×640, the homography estimation accuracy of the proposed method is 5.9 percentage points higher than that of the existing method. These results demonstrate the effectiveness of the proposed method.
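Descriptor reliability is typically measured by how unambiguously descriptors match between image pairs. A minimal sketch of such a matcher, using nearest-neighbor search with Lowe's ratio test on cosine distance, is shown below; it is a generic evaluation tool, not the paper's network or its exact matching protocol, and the ratio threshold is an illustrative choice.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching on cosine similarity with Lowe's ratio test:
    a match is kept only if the best candidate is clearly better than the
    second best, which filters out ambiguous (unreliable) descriptors."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                      # cosine similarity matrix
    order = np.argsort(-sim, axis=1)   # best candidates first
    matches = []
    for i in range(a.shape[0]):
        best, second = order[i, 0], order[i, 1]
        # distance = 1 - similarity; keep only unambiguous matches
        if (1 - sim[i, best]) < ratio * (1 - sim[i, second]):
            matches.append((i, int(best)))
    return matches

# Two tiny descriptor sets whose rows correspond one-to-one.
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[0.9, 0.1], [0.1, 0.9]])
matches = match_descriptors(desc_a, desc_b)
```

Matching accuracy on a benchmark such as Hpatches is then the fraction of such matches whose key point pairs agree with the ground-truth homography.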

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2215002 (2022)
DOI:10.3788/LOP202259.2215002
Fu Nana, Liu Daming, Zhang Hengbo, and Li Xuandong

To achieve real-time human behavior recognition on an embedded platform, a human behavior recognition technique based on a lightweight OpenPose model is proposed. The approach starts from 18 human skeletal key points and infers the behavior type from their spatial positions. First, the lightweight OpenPose model is used to extract the coordinate information of the 18 skeletal key points of the human body. Then, key point coding is used to describe the human body behavior. Finally, a classifier is applied to the acquired key point coordinates to detect the human behavior status; the system is deployed on a Jetson Xavier NX device with a monocular camera for testing. Experimental results show that this method can quickly and accurately identify 11 types of human behaviors, such as walking, waving, and squatting, on the embedded development board Jetson Xavier NX, with an average recognition accuracy of 96.08% and a detection speed of >11 frames/s. The frame rate is increased by 177% compared with the original model.
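The key point coding step can be sketched as follows: the 18 (x, y) coordinates are normalized into a translation- and scale-invariant feature vector, which a lightweight classifier then maps to a behavior label. The neck-rooted normalization, the nearest-centroid classifier, and the label names below are illustrative stand-ins for the paper's actual coding scheme and classifier.

```python
import numpy as np

def encode_pose(keypoints):
    """Encode 18 (x, y) skeletal key points into a translation/scale-invariant
    36-dim vector. Index 1 is taken as the neck (OpenPose COCO ordering) --
    an illustrative choice of root joint."""
    pts = np.asarray(keypoints, float)            # shape (18, 2)
    centered = pts - pts[1]                       # translate: neck at the origin
    scale = np.linalg.norm(centered, axis=1).max()
    return (centered / (scale + 1e-8)).ravel()

def classify(feature, centroids):
    """Toy nearest-centroid classifier over encoded poses (stand-in for the
    paper's classifier). centroids: dict mapping label -> reference encoding."""
    labels = list(centroids)
    dists = [np.linalg.norm(feature - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# The same pose, shifted and scaled, encodes identically.
pose = np.arange(36, dtype=float).reshape(18, 2)
f1 = encode_pose(pose)
f2 = encode_pose(pose * 2.0 + np.array([5.0, -3.0]))
centroids = {"walking": f1, "waving": -f1}
label = classify(f2, centroids)
```

The invariant encoding is what lets a small classifier run in real time on the embedded board: the expensive part of the pipeline is only the key point extraction.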

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2215001 (2022)
DOI:10.3788/LOP202259.2215001
Guo Yiming, Wu Xiaoqing, Su Changdong, Zhang Shitai, Bi Cuicui, and Tao Zhiwei

This study proposes a generative adversarial network (GAN) based on bidirectional multi-scale feature fusion to reconstruct target celestial images captured by various ground-based telescopes, which are affected by atmospheric turbulence. The approach first constructs a dataset for network training by convolving a long-exposure atmospheric turbulence degradation model with clear images and then validates the network's performance on a simulated turbulence image dataset. Furthermore, images of the International Space Station collected by the Munin ground-based telescope (a Cassegrain-type telescope) and affected by atmospheric turbulence are included in this study; these images were fed to the proposed neural network model for testing. Various image restoration assessments show that the proposed network has good real-time performance and can produce restoration results within 0.5 s, more than 10 times faster than standard non-neural-network restoration approaches; the peak signal-to-noise ratio (PSNR) is improved by 2 dB to 3 dB, and the structural similarity (SSIM) is enhanced by 9.3%. The proposed network also achieves a good restoration effect on degraded images affected by real turbulence.
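The training-data construction step (convolving clear images with a long-exposure turbulence degradation model) and the PSNR metric can be sketched as follows. An isotropic Gaussian kernel stands in for the long-exposure turbulence PSF here purely for illustration; the paper's degradation model is physics-based, not Gaussian.

```python
import numpy as np

def gaussian_psf(size=9, sigma=2.0):
    """Isotropic Gaussian kernel as an illustrative stand-in for the
    long-exposure atmospheric turbulence PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(img, psf):
    """Simulate turbulence blur by (circularly) convolving the clear image
    with the PSF via the FFT."""
    pad = np.zeros_like(img, dtype=float)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
clear = rng.random((32, 32))          # toy "clear image"
blurred = degrade(clear, gaussian_psf())
quality = psnr(clear, blurred)
```

Training pairs (blurred input, clear target) generated this way let the network learn the inverse mapping; PSNR and SSIM then quantify how much of the clear image the restoration recovers.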

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2201001 (2022)
DOI:10.3788/LOP202259.2201001
Gao Wei, He Boyang, Zhang Ting, Guo Meiqing, Liu Jun, Wang Huimin, and Zhang Xingzhong

The perception of the spatial distance between operators and dangerous equipment is a basic safety management and control task in substation scenes. With the advancement of lidar and three-dimensional (3D) vision theory, 3D point cloud target detection can provide the necessary technical support for the downstream spatial distance measurement task. Aiming at the problem of inaccurate target detection caused by factors such as complex backgrounds and equipment occlusion in substation scenes, an improved attention module is introduced in the local feature extraction stage of the PointNet++ model, and a 3D object detection network, PowerNet, suited to substation operation scenes is proposed. The network first performs two-level local feature extraction to obtain fine-grained features in each local area, then encodes all local features into feature vectors using a mini-PointNet to obtain global features, and finally predicts the results through fully connected layers. Considering the large gap between the numbers of foreground and background points in substation point cloud data, this study computes the classification loss using focal loss so that the network pays more attention to the feature information of foreground points. Experiments on a self-built dataset show that PowerNet achieves a mean average precision (mAP) of 0.735, higher than previous models, and can be directly applied to downstream security management and control tasks.
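The focal loss mentioned above addresses foreground/background imbalance by down-weighting easy, abundant background points. A minimal numpy sketch of the binary form (Lin et al.) is shown below; the default alpha and gamma values are the commonly used ones, not necessarily those chosen in this paper.

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma so that
    well-classified (easy) examples contribute little, focusing training on
    the rare, hard foreground points. probs are predicted foreground
    probabilities; targets are 0/1 labels."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    p_t = np.where(targets == 1, probs, 1 - probs)       # prob of the true class
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)   # class balancing weight
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

easy = focal_loss(np.array([0.9]), np.array([1]))  # confident, correct
hard = focal_loss(np.array([0.1]), np.array([1]))  # confident, wrong
```

With gamma = 0 and alpha = 0.5 the expression reduces to a scaled cross-entropy, which makes the modulating factor's effect easy to verify.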

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2210010 (2022)
DOI:10.3788/LOP202259.2210010
Sun Zheng, and Wang Shuyan

Intravascular optical coherence tomography (IVOCT) is the minimally invasive imaging modality with the highest resolution currently available. It provides information on the vascular lumen morphology and near-microscopic structures of the vessel wall. For each pullback of the target vessel, hundreds or thousands of B-scan images are obtained in routine clinical applications. Manual image analysis is time-consuming and laborious, and the findings depend to some extent on the operator's expertise. Recently, as deep learning technology has continuously achieved significant breakthroughs in the medical imaging field, it has also been applied to the computer-aided automated analysis of IVOCT images. This study outlines the applications of deep learning in IVOCT, primarily involving image segmentation, tissue characterization, plaque classification, and object detection. The benefits and limitations of existing approaches are discussed, and possible future developments are described.

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2200002 (2022)
DOI:10.3788/LOP202259.2200002
Li Xiaomiao, Yang Yanchun, Dang Jianwu, and Wang Yangping

This paper proposes a multi-focus image fusion method based on two-scale decomposition and random walk, which smooths edge regions and avoids artifacts at edge junctions. The source images are first decomposed into large-scale and small-scale focus images using a Gaussian filter, and the edges of the decomposed large-scale and small-scale focus images are smoothed using different guided filters. Then, the large-scale and small-scale focus maps are used as the marker nodes of the random walk algorithm, an initial decision map is obtained using the fusion algorithm, and the guided filter is applied again to optimize the decision map. Finally, the source images are reconstructed using the decision maps to produce the final fused image. The experimental results show that our method effectively captures the focus information in the source images while retaining the edge texture and detailed information of the focus areas, and it outperforms competing methods in both subjective and objective evaluation indicators.
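The two-scale decomposition step can be sketched as follows: a Gaussian filter extracts the large-scale (base) layer, and subtracting it from the source leaves the small-scale (detail) layer, so the two layers reconstruct the source exactly. The kernel size and sigma below are illustrative; the paper's guided-filter edge smoothing and random walk stages are not reproduced here.

```python
import numpy as np

def gaussian_kernel1d(sigma=2.0, radius=4):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma=2.0):
    """Separable Gaussian blur with edge-replicate padding."""
    k = gaussian_kernel1d(sigma)
    r = len(k) // 2
    p = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)

def two_scale_decompose(img):
    """Split an image into a large-scale (base) layer and a small-scale
    (detail) layer such that base + detail == img."""
    img = np.asarray(img, float)
    base = blur(img)
    return base, img - base

rng = np.random.default_rng(1)
src = rng.random((16, 16))
base, detail = two_scale_decompose(src)
```

Because the decomposition is exact, any fusion decision made per layer can be recombined without losing source content.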

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2210011 (2022)
DOI:10.3788/LOP202259.2210011
Lin Wei, Cui Haihua, Zheng Wei, Zhou Xinfang, Xu Zhenlong, and Tian Wei

As a noncontact, high-precision optical full-field measurement method, shearography can be used for the nondestructive detection of internal defects in composite materials. However, the obtained phase fringe pattern contains a large amount of speckle noise that seriously affects the detection results and accuracy. Therefore, we propose a phase fringe-filtering method based on an unsupervised image style conversion model (CycleGAN). The original noisy phase fringe image obtained by shearography is converted into an ideal noiseless fringe image via network training, achieving noise filtering of the phase fringe pattern. The experimental results show that the proposed method filters noise efficiently in areas where the fringe distribution is relatively sparse, producing filtered images with clear boundaries and strong contrast. Additionally, the running time of the proposed method is approximately 30 ms shorter than that of the other methods while achieving high-quality filtering; it meets the demands of dynamic nondestructive testing and provides a new idea for the noise filtering of phase fringe patterns.

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2210009 (2022)
DOI:10.3788/LOP202259.2210009
He Yuquan, Zhang Yongjie, Xie Guangqi, Duan Lingfei, and Zhang Hongqiao

Recently, charge-coupled devices (CCDs) have been widely researched and applied. A linear CCD images along a straight line, which allows it to scan a spatial plane efficiently. This paper proposes a method for target plane positioning that uses the accumulation characteristics of points on a spatial line at the same pixel position together with the projection information of the origin of the spatial plane. For the obtained linear image, histogram equalization is used to enhance the contrast, which allows the pixel position of the measured target to be extracted stably. After obtaining the line-solving matrix and collecting the target's pixel position on each linear-array CCD, line equations passing through the target object are obtained, and its plane position is finally calculated using the least-squares method. The results show that for the linear CCD module TSL1401, the average measurement error of the measurement system is about 0.19 cm and the standard deviation is about 0.09 cm over the 20 cm×20 cm measured area, proving the effectiveness of the proposed method.
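The final least-squares step can be sketched directly: each linear-array CCD sighting yields one line a·x + b·y = c passing through the target, and stacking these lines gives an overdetermined system whose least-squares solution is the target's plane position. The line coefficients below are illustrative; deriving them from pixel positions via the line-solving matrix is the calibration step the paper describes.

```python
import numpy as np

def locate_target(lines):
    """Least-squares intersection of the sight lines a*x + b*y = c produced
    by the linear-array CCDs. lines: iterable of (a, b, c) tuples. With noisy
    coefficients, lstsq returns the point minimizing the squared residuals."""
    A = np.array([[a, b] for a, b, _ in lines], float)
    c = np.array([c for *_, c in lines], float)
    (x, y), *_ = np.linalg.lstsq(A, c, rcond=None)
    return float(x), float(y)

# Three sight lines that all pass through the point (3, 4):
# x = 3, y = 4, and x + y = 7.
x, y = locate_target([(1, 0, 3), (0, 1, 4), (1, 1, 7)])
```

Using more than two lines is what makes the system overdetermined, so measurement noise in any single CCD is averaged out rather than propagated directly.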

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2212001 (2022)
DOI:10.3788/LOP202259.2212001
Zhao Yingran, Yan Keding, and Yang Shuwei

A small Wi-Fi intelligent microscope system is proposed to address the traditional microscope's complicated operation, large size, and limited light source. A common network-camera CMOS circuit is used as the microscope's imaging circuit, making reasonable use of the network camera's advantages. The communication module uses the AP function of the ESP8266 module for network communication between the computer and the microscope. Images can be transmitted to the host computer software, which sets the camera parameters and saves the images. The cost is low because the mechanical structure of the microscope is designed with NX12.0 software and fabricated by three-dimensional printing. The modules work together as a complete system, and cell images are collected through the microscope.

Nov. 25, 2022
Laser & Optoelectronics Progress
Vol. 59 Issue 22 2211002 (2022)
DOI:10.3788/LOP202259.2211002