Opto-Electronic Engineering
Co-Editors-in-Chief
Xiangang Luo
2022
Volume: 49 Issue 4
7 Article(s)
Zhuzhang Jin, Xuyuan Fang, Yanhui Huang, Caoqian Yin, and Wei Jin

Overview: Meteorological satellites can monitor weather phenomena of different scales from the air, and the satellite cloud images they acquire play an important role in weather analysis and forecasting. In recent years, with the development of meteorological satellite technology, the spatial and spectral resolution of satellite cloud images and the acquisition frequency of imaging spectrometers have continuously improved. How to manage massive satellite cloud images and design an efficient cloud image retrieval system has become a difficult problem for meteorologists, and traditional cloud image retrieval methods struggle to achieve ideal retrieval accuracy and efficiency. Motivated by the impressive success of modern deep neural networks (DNNs) in learning task-specific optimized features in an end-to-end fashion, a cloud image retrieval method based on deep metric learning is proposed in this paper. Firstly, a residual 3D-2D convolutional neural network is designed to extract spatial and spectral features of cloud images. Since the features extracted by a traditional classification-based deep network may show greater intra-class than inter-class differences, the triplet strategy is used to train the network, and the cloud images are mapped into a metric space according to their similarity, so that similar cloud images lie closer together in the embedded space than dissimilar ones. In model training, the convergence of the traditional triplet loss is improved and the retrieval precision is increased by adding a constraint on the distance between positive sample pairs to the lossless triplet loss function. Finally, through hash learning, the cloud features in the metric space are transformed into hash codes, so as to preserve retrieval accuracy while improving retrieval efficiency.
Experimental results show that the mean average precision (mAP) of the proposed algorithm reaches 75.14% on the southeast coastal cloud image dataset and 80.14% on the northern hemisphere cloud image dataset, outperforming the other comparison methods.
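The positive-pair-constrained triplet objective described above could be sketched as follows; the function name, margin values, and the exact form of the positive-pair constraint are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def triplet_loss_pos_constrained(anchor, positive, negative,
                                 margin=0.2, pos_margin=0.1):
    """Triplet loss plus an explicit cap on the anchor-positive distance.
    Inputs are (batch, dim) embedding arrays."""
    # Squared Euclidean distances in the embedding (metric) space.
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    # Standard triplet term: push negatives beyond positives by `margin`.
    triplet = np.maximum(d_ap - d_an + margin, 0.0)
    # Extra constraint: penalize positive pairs farther apart than `pos_margin`,
    # pulling same-class cloud images tightly together.
    pos_term = np.maximum(d_ap - pos_margin, 0.0)
    return float(np.mean(triplet + pos_term))
```

Without the `pos_term`, a triplet is satisfied as soon as negatives are `margin` farther than positives, even if positive pairs remain far apart; the extra term keeps intra-class distances small, which matches the stated goal of improving retrieval precision.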

Apr. 25, 2022
  • Vol. 49 Issue 4 210307 (2022)
  • Rui Sun, Xiaoquan Shan, Qijing Sun, Chunjun Han, and Xudong Zhang

    Overview: Near-infrared image sensors are widely used because they can overcome the effects of natural light and work under various lighting conditions. In the field of criminal security, NIR face images are usually not directly used for face retrieval and recognition, because the single-channel images acquired by NIR sensors lack the natural colors of the original scene. Therefore, converting NIR face images into VIS face images and restoring their color information can improve both the subjective visual quality and the cross-modal recognition performance of face images, providing technical support for building a 24/7 video surveillance system. However, NIR face images differ from other NIR images: if the details of facial contours and skin tones are distorted during colorization, the visual quality of the generated face images is greatly degraded. It is therefore necessary to design algorithms that preserve detailed information during the colorization of NIR face images. We propose a NIR-VIS face image translation method under a dual contrastive learning framework. The method is based on a dual contrastive learning network and uses contrastive learning at the image-patch level to enhance the quality of the generated images. Meanwhile, since the StyleGAN2 network can extract deeper features of face images than ResNets, we construct a generator based on the StyleGAN2 structure and embed it into the dual contrastive learning network to replace the original ResNet generator, further improving the quality of the generated face images. In addition, to address the blurred external contours and missing edge details of faces in NIR-domain images, a facial edge enhancement loss is designed in this paper to further enhance the facial details of the generated images using the facial edge information extracted from the source-domain images.
Experiments show that the generation results of our method on two public datasets are significantly better than those of recent mainstream methods: the VIS face images generated by our method are closer to the real images and possess more facial edge details and skin tone information.

With the wide application of visible-infrared dual-mode cameras in video surveillance, cross-modal face recognition has become a research hotspot in computer vision. Translating NIR-domain face images into VIS-domain face images is a key problem in cross-modal face recognition, with important research value in criminal investigation and security. Aiming at the problems that facial contours are easily distorted and skin color restoration is unrealistic during the colorization of NIR face images, this paper proposes a NIR-VIS face image translation method under a dual contrastive learning framework. The method constructs a generator based on the StyleGAN2 structure and embeds it into the dual contrastive learning framework to exploit the fine-grained characteristics of face images through bidirectional contrastive learning. Meanwhile, a facial edge enhancement loss is designed to further enhance the facial details in the generated images and improve their visual quality using the facial edge information extracted from the source-domain images. Finally, experiments on the NIR-VIS Sx1 and NIR-VIS Sx2 datasets show that, compared with recent mainstream methods, the VIS face images generated by this method are closer to the real images and possess more facial edge details and skin color information.
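A facial edge enhancement loss of the kind described above might be sketched as follows; Sobel edge extraction and an L1 comparison between the source and generated edge maps are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude over the valid interior of a 2D image."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = float((patch * kx).sum())
            gy[i, j] = float((patch * ky).sum())
    return np.hypot(gx, gy)

def edge_enhancement_loss(nir_source, generated_vis):
    """Mean absolute difference between the edge map of the NIR source
    image and that of the generated VIS image (2D grayscale arrays)."""
    return float(np.mean(np.abs(sobel_edges(nir_source)
                                - sobel_edges(generated_vis))))
```

Penalizing the distance between edge maps, rather than raw pixels, lets the loss target exactly the facial contours the overview says are easily lost during colorization.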

    Apr. 25, 2022
  • Vol. 49 Issue 4 210317 (2022)
  • Jianxin Wang, Shuang Chen, Li Chen, Wenbin Yang, Rong Qiu, Fuzhong Bai, and Tonghong Li

    Overview: Femtosecond laser electronic excitation tagging (FLEET), proposed in 2011, is a molecular tagging velocimetry technique. FLEET uses one femtosecond laser, one optical detector (usually an ICCD), and one signal generator, with nitrogen fluorescence serving as the tagging tracer. Its experimental system is simple and avoids the tracer-particle seeding problem, so it has broad application prospects. Fluorescent filament features such as size and intensity distribution are vital to the velocity measurement accuracy and are determined by the optical system parameters, so it is particularly important to study the influence of those parameters.

When FLEET is used to measure flow-field velocity, the shape and features of the fluorescent filament affect the accuracy and range of the measurement and are determined by the parameters of the FLEET optical system. It is therefore necessary to study the influence of these parameters on the fluorescent filament. In this paper, the influences of the main optical system parameters, namely the pulse energy of the femtosecond laser and the focal length of the focusing lens, on the filament length, peak intensity, filament energy density, and signal-to-noise ratio are investigated experimentally. The lifetimes of air fluorescent filaments under different pressures are measured with the optimal experimental parameters. Experiments show that there is a power density threshold for exciting femtosecond fluorescent filaments, about 2×10^13 W/cm^2 in this experiment. The optimization of optical system parameters should aim at a high signal-to-noise ratio and a uniform intensity distribution of the filament. The lifetime of femtosecond fluorescent filaments is about several microseconds, so the time interval between two velocity measurement samples should be less than a few microseconds. The results are useful for determining the main parameters of a FLEET optical system.
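The ~2×10^13 W/cm^2 filamentation threshold quoted above can be checked against nominal focusing parameters with the usual peak-power-density estimate; the example pulse energy, pulse width, and spot radius below are illustrative values, not the paper's experimental parameters:

```python
import math

def peak_power_density(pulse_energy_j, pulse_width_s, spot_radius_cm):
    """Peak power density at the focus in W/cm^2:
    (pulse energy / pulse width) / focal-spot area."""
    spot_area_cm2 = math.pi * spot_radius_cm ** 2
    peak_power_w = pulse_energy_j / pulse_width_s
    return peak_power_w / spot_area_cm2
```

For instance, a 1 mJ, 100 fs pulse focused to a 50 µm spot radius gives roughly 1.3×10^14 W/cm^2, comfortably above the reported threshold.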

    Apr. 25, 2022
  • Vol. 49 Issue 4 210318 (2022)
  • Liang Ma, Yutao Gou, Tao Lei, Lei Jin, and Yixuan Song

    Overview: In recent years, with the continuous development of remote sensing optical technology, the acquisition of large numbers of high-resolution remote sensing images has advanced environmental monitoring, wildlife protection, and national defense. Among the many remote sensing vision tasks, aircraft detection is of great significance for both civil and defense applications, so research on small object detection in remote sensing imagery is important. Currently, deep-learning-based object detection achieves excellent results on large and medium objects, but performs poorly on small objects in remote sensing images. The main reasons are as follows: 1) models are huge and real-time performance is poor; 2) remote sensing images are complicated and object scales vary widely; 3) remote sensing small-object detection datasets are extremely scarce.

This paper proposes a robust small object detection method based on multi-scale feature fusion for remote sensing images. When a pre-trained model based on natural images is applied directly to remote sensing images, the large number of parameters and the excessive downsampling during feature extraction may cause small objects to disappear from the feature maps. Therefore, based on the distribution of object sizes in the dataset (i.e., prior knowledge), a lightweight feature extraction module is first integrated via a dynamic selection mechanism that allows each neuron to adaptively adjust its receptive field size for detection. Meanwhile, features at different scales carry different amounts and kinds of information. To increase the accuracy of the feature representation, an FPN (feature pyramid network) module based on adaptive weighted feature fusion is applied, using grouped convolution so that feature channels are processed in groups without interfering with each other.
In addition, deep learning requires large amounts of data. Given the lack of remote sensing small-object datasets, we built a remote sensing small aircraft-object dataset and processed the plane and small-vehicle objects in the DOTA dataset so that their size distribution meets the requirements of small object detection. Experimental results show that, compared with most mainstream detection methods, the proposed method achieves better results on DOTA and the self-built dataset.
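The adaptive weighted fusion step in the FPN module could be sketched as a softmax-weighted sum of same-resolution feature maps; the function name and the use of per-scale softmax weights are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def adaptive_weighted_fusion(features, logits):
    """Fuse same-shape feature maps with softmax weights derived from
    learnable per-scale scores (`logits`). In a real FPN the maps would
    first be resized to a common resolution."""
    w = np.exp(logits - np.max(logits))  # numerically stable softmax
    w = w / w.sum()
    fused = np.zeros_like(features[0])
    for wi, f in zip(w, features):
        fused += wi * f
    return fused
```

Learning the weights (rather than simply summing the pyramid levels) lets the network emphasize the scale whose features best represent small objects.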

    Apr. 25, 2022
  • Vol. 49 Issue 4 210363 (2022)
  • Weihang Cao, Zhuang Li, Chengkun Shi, Jiazhen Lin, Xiuji Lin, Guozhen Xu, Huiying Xu, and Zhiping Cai

    Overview: Visible lasers are used extensively in laser color display, laser medicine, quantum information, optical communication, and other applications. The trivalent Pr ion (Pr3+) has attracted much attention due to its rich transitions in the visible band. As early as the 1960s, praseodymium-doped (Pr3+) crystals were investigated as gain media for lasers. Compared with obtaining visible lasers through optical nonlinear processes such as frequency doubling and mixing, directly down-converting the laser energy levels in Pr3+-doped crystals avoids the use of nonlinear optical crystals, giving the laser high conversion efficiency, a compact structure, good beam quality, and no need for strict temperature control. Especially in recent years, the emergence of commercial pump sources such as laser diodes (LD) and optically pumped semiconductor lasers (OPSL) has driven great progress in research on Pr3+-doped solid-state lasers. In this paper, Pr3+ solid-state lasers are divided into three types: continuous-wave, pulsed, and single-longitudinal-mode output. Among them, continuous-wave operation is typical in the green, orange, and red bands, with output powers exceeding the watt level: the maximum output power is more than 4 W at the 522 nm green line, 4.88 W at the 607 nm orange line, and up to 8.14 W at the 639 nm red line. For pulsed lasers, Q-switching yields pulse widths of tens to hundreds of nanoseconds, and mode-locking yields narrower pulses: mode-locked pulse widths of 8 ps, 400 fs, and 1.1 ps have been obtained at the red 639 nm and the orange 613 nm and 604 nm lines, respectively. Mode-locked pulse widths in other visible bands have been reported to range from over ten picoseconds to hundreds of picoseconds.
For single-longitudinal-mode operation, research on single-frequency output from Pr3+-doped crystals focuses mainly on the 360 nm UV line, the 604 nm and 607 nm orange lines, and the 639 nm and 640 nm red lines. Taking time as the main line, this paper also summarizes the research history and current status of Pr3+-doped solid-state lasers and looks forward to their future development.

    Apr. 25, 2022
  • Vol. 49 Issue 4 210364 (2022)
  • Ziyi Zhang, Meng Chen, Chunlei Wang, Hepeng Xiang, and Ruiqing Tao

    Overview: The method of using an aspheric lens to shape a Gaussian beam is very mature: a specific aspheric shaping mirror can be designed according to the incident light parameters to shape the Gaussian beam into a flat-top beam, so that the laser can be better applied in laser medicine, laser processing, and other fields. Aspheric mirror shaping has the advantages of a simple structure, a high damage threshold, and high shaping efficiency. At present, research on aspherical shaping systems focuses on the design of their structure, and there has been no detailed study of their shaping characteristics. This paper therefore focuses on the shaping performance of an aspherical shaping system under different incident parameters.

When an aspheric shaping mirror designed for a 3 mm incident beam waist is tested, it is found that the mirror is not only applicable to the design incident parameters. For different incident beam sizes and divergence angles, there is an optimal shaping position with a flat-top distribution behind the shaping mirror; this position moves farther from the mirror as the incident beam diameter increases, and closer as the divergence angle increases. To explore how the shaping results differ at the optimal shaping position, controlled-variable experiments were carried out. It is found that the incident beam diameter and divergence angle have no obvious effect on the flatness factor of the flat-top distribution at this position, but the beam uniformity and edge steepness reach optimal values, i.e., there is an optimal set of incident parameters.
In order to obtain the relationship between the optimal shaping position and the incident beam diameter and divergence angle, a mathematical model is successfully established by using the response surface method. When the beam diameter and divergence angle at an incident position are known, the optimal shaping position can be quickly obtained.
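The response-surface model relating the optimal shaping position to beam diameter and divergence angle is typically a full second-order polynomial fitted by least squares; the sketch below shows that model form with illustrative coefficients and data, not the paper's fitted values:

```python
import numpy as np

def fit_response_surface(d, t, z):
    """Least-squares fit of the second-order response-surface model
    z ~ b0 + b1*d + b2*t + b3*d^2 + b4*t^2 + b5*d*t,
    where d is beam diameter, t is divergence angle, z is the response."""
    A = np.column_stack([np.ones_like(d), d, t, d**2, t**2, d*t])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def predict(coef, d, t):
    """Evaluate the fitted surface at a new (diameter, angle) point."""
    return (coef[0] + coef[1]*d + coef[2]*t
            + coef[3]*d**2 + coef[4]*t**2 + coef[5]*d*t)
```

Once fitted on measured optimal-position data, `predict` gives the quick lookup the passage describes: known beam diameter and divergence angle in, optimal shaping position out.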

    Apr. 25, 2022
  • Vol. 49 Issue 4 210367 (2022)
  • Jin Wu, Fei Qin, and Xiangping Li

    Overview: Although photon-sieve and metasurface-type diffractive lenses have been intensively investigated in recent years, the zone-plate type, constructed from a series of concentric phase and amplitude belts, is still the most commonly used configuration and is frequently employed in applications including space telescopes, high-performance microscope objectives, and projection illumination systems. Nevertheless, integrating such components into opto-electronic circuits remains a challenge, because they are constructed from opaque metals or high-refractive-index dielectrics that are incompatible with these circuits. Two-dimensional transition-metal dichalcogenides (2D TMDs) have attracted massive attention recently. As their typical representative, molybdenum disulfide (MoS2) has been intensively investigated and has shown extremely high quantum efficiency in photocurrent generation and photoluminescence owing to its unique optoelectronic characteristics. However, its capability for wavefront engineering has so far been less appreciated, because the phase modulation capability becomes insufficient when the thickness of the MoS2 sheet is reduced to atomic layers. In this work, we propose and experimentally demonstrate an atomically thin Fresnel zone plate device. Based on a loss-assisted phase modulation mechanism, an extraordinary phase modulation of a π phase shift at the optimized wavelength of 535 nm has been achieved by a monolayer MoS2 sheet with a thickness of 0.67 nm. Unlike the phase shift produced by a dielectric or plasmonic resonator, which relies heavily on the spatial dimensions of the resonator itself, the loss-assisted phase is determined only by the basic configuration scheme and has no obvious dependence on the geometric size of the scribed pattern.
Therefore, such an original phase shift mechanism can be applied more conveniently to the creation of diffractive optical devices. Using the femtosecond laser scribing technique, a binary-phase Fresnel zone plate (FZP) was fabricated on monolayer and bilayer MoS2 sheets. The FZP is composed of 8 scribed concentric belts on the MoS2 sheet, forming alternating π and 0 phase zones between the scribed and un-scribed regions. The radii of the zone belts are given by the standard FZP equation so that constructive interference occurs at the desired focal position. Experimentally measured results show that a diffraction-limited focal spot with a focusing efficiency of around 5% was obtained by the monolayer FZP device, which notably outperforms the reported monolayer TMD lens with a focusing efficiency of 0.08%. Benefitting from the unique k dispersion property of the MoS2 sheet, this significant phase modulation can be extended to broadband operation by increasing the MoS2 thickness from monolayer to bilayer. Simulation results show that a phase shift of 0.2π and above can be achieved over the wavelength region from blue to red light, and broadband focusing has been demonstrated in simulation and experiment from 405 nm to 635 nm. Combined with the direct bandgap of the monolayer MoS2 material, this phenomenon may pave the road for integrated opto-electronic systems.
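The standard FZP zone radii mentioned above are given by r_n = sqrt(n·λ·f + (n·λ/2)²), which places each zone boundary so that successive zones interfere constructively at the focus. A minimal sketch, using the 535 nm design wavelength from the overview but an illustrative focal length (the paper's focal length is not stated here):

```python
import math

def fzp_zone_radii(wavelength_m, focal_length_m, n_zones):
    """Zone-boundary radii (meters) from the standard FZP equation
    r_n = sqrt(n*lambda*f + (n*lambda/2)**2), n = 1..n_zones."""
    return [math.sqrt(n * wavelength_m * focal_length_m
                      + (n * wavelength_m / 2) ** 2)
            for n in range(1, n_zones + 1)]

# Illustrative 8-belt design at the 535 nm optimized wavelength.
radii = fzp_zone_radii(535e-9, 100e-6, 8)
```

For λ·f much larger than λ², the radii reduce to the familiar r_n ≈ sqrt(n·λ·f) scaling, so the first radius here is close to sqrt(λ·f) ≈ 7.3 µm.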

    Apr. 25, 2022
  • Vol. 49 Issue 4 220011 (2022)