Acta Photonica Sinica
Co-Editors-in-Chief
Yue Hao
Liangliang CHENG, Chenbo XIE, Hao YANG, Zhiyuan FANG, Min ZHAO, Xu DENG, Bangxin WANG, and Kunming XING

Taking the continuous haze pollution episode that occurred on January 11-17, 2015 in Beijing as an example, the vertical distribution characteristics of aerosols were retrieved from joint ground-based and space-borne lidar observations. The pollution sources and transport paths were derived from MODIS satellite remote sensing data and HYSPLIT backward trajectory analysis, after which the causes of this pollution event were revealed by combining ground-based air quality and meteorological observation data. The results show that the near-surface aerosol extinction coefficients inferred from lidar data are generally consistent with the variation of PM2.5 concentrations on the ground, while the planetary boundary layer height shows an opposite trend to PM2.5 concentrations, with the lowest boundary layer height being 500 m. The pollution period was characterized by light winds and high humidity, with an average wind speed of 1.35 m/s and an average relative humidity of 66%. The presence of an inversion layer for several days, with an inversion intensity as high as 5°C, inhibited the diffusion and transport of pollutants in the vertical direction. These two factors led to the continuous accumulation of pollutants, and the PM2.5 concentration finally reached 448 μg/m3 in the early morning of January 16th. The pollution was eventually dispersed by a southerly wind on January 16th, with the PM2.5 concentration decreasing at a rate of 82 μg/(m3·h). During the observation period, the correlation coefficients of PM2.5 with NO2 and CO were 0.766 and 0.901, respectively, showing a significant positive correlation, which indicates that secondary aerosols formed from gaseous precursor pollutants such as NO2 are an important source of the haze. Comprehensive analysis shows that this pollution event was dominated by haze caused by the superposition and accumulation of aerosols from regional transport and local emissions. Pollutants from southern Hebei, Henan and Shanxi were transported to Beijing by high-altitude air masses and mixed with locally emitted aerosols, aggravating the pollution.

Mar. 25, 2022
  • Vol. 51 Issue 3 0301001 (2022)
  • Pengfei WU, Huiliang WANG, Sichen LEI, and Shuai DANG

    The fast capture, alignment and stabilization tracking system of a free-space beam is the key module for establishing an atmospheric laser communication link and maintaining stable communication, and locating the center of the distorted spot in a turbulent atmospheric channel is the key step that provides accurate and stable feedback. To improve the stability of the communication link, a sliding weighted centroid location algorithm is proposed to address the poor location stability of traditional algorithms such as the barycenter method and the centroid method in atmospheric laser communication. The adaptive threshold segmentation method is improved to increase the accuracy of spot center location: background noise removal and center noise compensation are carried out to preserve the effective gray range of the spot to the maximum extent. A sliding compensation mechanism is then added to the nonlinear weighted centroid location algorithm, which reduces the location error caused by spot scintillation and drift during communication and improves spot location stability and accuracy. A Gaussian beam passing through atmospheric turbulence phase screens is simulated to generate distorted spots of different intensities, and the sliding weighted location algorithm is verified both theoretically and experimentally on continuous spot images acquired in experiments. The results show that, compared with the traditional barycenter method, centroid method and nonlinear weighted centroid method, the proposed sliding weighted centroid location algorithm reduces the step distance of continuous spot center localization to different degrees: by 21.5% in the theoretical results and by 11.7% in the experimental results. In addition, the total intensity of pixels within a spot center mask of the same area is higher, and the overall positioning performance is better.
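    As an illustration of the idea described above, the following Python sketch combines adaptive-threshold background removal, nonlinear gray-level weighting and a sliding compensation step over recent frames; the threshold rule, weighting exponent and window length are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from collections import deque

def weighted_centroid(img, power=2.0, k=1.5):
    """Nonlinear weighted centroid of one spot image.
    The adaptive threshold (mean + k*std) and the gray-level exponent
    are illustrative choices, not necessarily the paper's settings."""
    img = img.astype(float)
    thr = img.mean() + k * img.std()            # adaptive threshold segmentation
    w = np.where(img > thr, img - thr, 0.0)     # background noise removal
    w = w ** power                              # nonlinear gray-level weighting
    total = w.sum()
    if total == 0:
        return None
    ys, xs = np.indices(img.shape)
    return np.array([(xs * w).sum() / total, (ys * w).sum() / total])

class SlidingCentroidTracker:
    """Sliding compensation: the reported center is the mean of the last n
    frame centroids, which damps jumps caused by scintillation and drift."""
    def __init__(self, n=5):
        self.buffer = deque(maxlen=n)

    def update(self, img):
        c = weighted_centroid(img)
        if c is not None:
            self.buffer.append(c)
        return np.mean(self.buffer, axis=0) if self.buffer else None
```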

    Mar. 25, 2022
  • Vol. 51 Issue 3 0301002 (2022)
  • Cunxia LI, Yanghe LIU, Zijian LI, Ningju XI, and Yuanhe TANG

    The Ground-Based Airglow Imaging Interferometer (GBAII) is a ground-based wind field detection prototype developed by our research group that integrates the principles of "rotational spectral line temperature measurement" and the "four-intensity method". The imaging interferogram of the airglow 90~100 km above the Earth can be obtained with a long exposure of 200 s, and the atmospheric wind speed, temperature and volume emissivity over the Earth can then be retrieved from the imaging interference fringes. The Modulation Transfer Function (MTF) is a very important index in the design, development and testing of GBAII, as it characterizes the imaging quality of the instrument. In this paper, the modulation transfer function of GBAII is studied. Using the 632.8 nm He-Ne laser line, the O2(0-1) 867.7 nm line and the O(1S) 557.7 nm line as light sources, the MTF values of the optimized design, the theoretical calculation and the actual images are given. Firstly, the optical imaging structure of GBAII consists of five parts: a tapered-mirror light receiving system, a wide-angle Michelson (MI) phase modulation system, a narrowband interference filter, a CCD and three lenses. The optical structure of GBAII was optimized with Code V and its spot diagram was obtained. With 4×4 binning, the diameter of the dispersion spot is about 60 μm at the 0° field of view and 80~90 μm at the 2°~9.5° fields of view, and the limiting resolution is 6.25 lp/mm. For the three airglow lines at 557.7 nm, 630.0 nm and 867.7 nm, all MTF values are above 0.3, and some are higher than 0.6. However, the meridional and sagittal branches of the MTF differ considerably over the full field of view, which is mainly caused by astigmatism. Secondly, according to Fourier optics theory, the MTF expression of the GBAII optical system is obtained by calculating the MTF of the wide-angle Michelson interferometer, the interference filter and the CCD. The MTF curve of GBAII is obtained by substituting into this expression the structure and dimensions of GBAII optimized under ultra-wide-angle, thermal compensation and achromatic conditions. For the airglow wavelengths of 557.7 nm and 867.7 nm, the MTF values are 0.508 and 0.510 at the corresponding Nyquist frequencies of 20 lp/mm and 16 lp/mm, respectively. For the GBAII developed by our research group, the MTF value is greater than 0.51 at low frequency, which exceeds the 0.35 MTF of the well-known wind imaging interferometer WINDII. Thirdly, to obtain the experimental MTF value of GBAII, an imaging interferogram is first taken through GBAII, and software is used to read the gray value of the image point by point to calculate the contrast of the interference fringes. The MTF of the GBAII imaging system equals the contrast of the image divided by the contrast of the object, where the contrast of the object is less than 1 rather than equal to 1. The 632.8 nm He-Ne laser line, the O2(0-1) 867.7 nm line and the O(1S) 557.7 nm line were used as light sources to obtain indoor and outdoor imaging interferograms of GBAII, and the experimental MTF at each wavelength was obtained from the contrast of the imaging interferograms. The MTF values of GBAII are greater than 0.8, 0.58 and 0.24 at the 632.8 nm laser line, the O2(0-1) 867.7 nm airglow line and the O(1S) 557.7 nm airglow line, respectively.
Based on the MTF values studied in this paper, the experimental results show that GBAII images the vibrational spectra of the diatomic O2 molecule better than the single-atom O airglow. Since the intensities of the O(1S) 557.7 nm and O2(0-1) 867.7 nm airglow are very low, the airglow imaging interference fringes of GBAII must be obtained with long exposures on the ground. The MTF results of the two airglow lines obtained in the outdoor experiment fall somewhat short of that of the 632.8 nm He-Ne laser measured in the laboratory, which can be improved by further optimization of GBAII. The above results show that the MTF values of GBAII from the optimized design, the theoretical calculation and the actual images deviate from one another to some extent: the experimental MTF is largest when the laser is used as the light source, the experimental MTF of the 557.7 nm airglow line is the smallest, and the MTF from the software optimization lies in between. Overall, the theoretical and experimental MTF results of GBAII are better than the 0.35 MTF of WINDII. The study of the MTF of GBAII provides an accurate technical basis for GBAII to successfully detect physical quantities such as wind speed, temperature and volume emissivity in the upper atmosphere, and also lays a theoretical and experimental foundation for the development of similar instruments in China.
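    A minimal Python sketch of the experimental MTF step described above: the fringe visibility is computed from a gray-level profile and divided by the (less-than-unity) object contrast. The synthetic sinusoidal profile merely stands in for gray values read point by point from a real interferogram.

```python
import numpy as np

def fringe_contrast(profile):
    """Michelson contrast (visibility) of a fringe gray-level profile."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

def experimental_mtf(image_profile, object_contrast):
    """MTF = image contrast / object contrast (object contrast < 1)."""
    return fringe_contrast(image_profile) / object_contrast

# synthetic example: a sinusoidal fringe profile with visibility 0.6
x = np.linspace(0, 4 * np.pi, 500)
profile = 100.0 * (1.0 + 0.6 * np.cos(x))
print(experimental_mtf(profile, object_contrast=0.95))   # ~0.63
```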

    Mar. 25, 2022
  • Vol. 51 Issue 3 0301003 (2022)
  • Xiaofeng LI, Mulin DU, Chuanping XU, Lishu HUANG, Junyu CHEN, and Le CHANG

    The gain of the super second generation image intensifier (hereinafter referred to as the image intensifier) is determined by the photocathode response, the gain of the microchannel plate, the screen efficiency and the screen voltage, and is directly proportional to their product. The gain of the microchannel plate has an exponential relationship with its operating voltage: generally, when the operating voltage of the microchannel plate is increased by 50 V, its gain doubles, and the gain of the image intensifier doubles correspondingly. Because the photocathode response, screen efficiency and screen voltage are relatively fixed, the gain of the image intensifier is mainly determined by the gain of the microchannel plate, that is, by its operating voltage: the higher the operating voltage of the microchannel plate, the higher the gain of the image intensifier. However, when the operating voltage of the microchannel plate is increased, under normal circumstances only some scintillation points appear on the fluorescent screen, and the higher the voltage, the brighter these scintillation points. When the operating voltage reaches a certain value, a stable output brightness appears on the fluorescent screen, that is, the image intensifier produces self-excited luminescence. Self-excited luminescence means that the image intensifier outputs a certain stable brightness even with no input illumination. When the image intensifier produces self-excited luminescence, the contrast and resolution of the image disappear, and any further enhancement of image brightness becomes meaningless. Therefore, the maximum brightness gain of the image intensifier is limited by self-excited luminescence. The reason the image intensifier produces self-excited luminescence is that part of the screen luminescence penetrates the aluminum film of the fluorescent screen and the microchannel plate, reaches the photocathode, and excites the photocathode to emit photoelectrons. These photoelectrons are multiplied by the microchannel plate and finally excite the fluorescent screen to emit light, forming light feedback. Light feedback is therefore the direct cause of self-excited luminescence. The main factors affecting the light feedback are the transmittance of the aluminum film of the fluorescent screen and the photocathode response to the luminescence of the fluorescent screen. Light feedback is directly proportional to the photocathode response: the higher the sensitivity of the photocathode, the easier it is to produce light feedback. For example, since the response of the Cs2Te photocathode to the fluorescent screen luminescence is much lower than that of the Na2KSb(Cs) photocathode, an image intensifier with a Cs2Te photocathode is less prone to light feedback than one with a Na2KSb(Cs) photocathode, so a higher gain can be obtained. The light feedback is also directly proportional to the transmittance of the aluminum film of the fluorescent screen: the higher this transmittance, the easier it is to produce light feedback. Therefore, to suppress the light feedback, the transmittance of the aluminum film of the fluorescent screen must be reduced. In addition to light feedback, the signal-to-noise ratio also limits the maximum gain of the image intensifier.
When the inherent gain of the microchannel plate is fixed, increasing its operating voltage does not affect the resolution or the equivalent background illumination of the image intensifier, but it reduces the signal-to-noise ratio, which is the most important performance parameter of the image intensifier. Therefore, to further improve the maximum brightness gain of the image intensifier under given photocathode response, screen efficiency and screen voltage, it is necessary to further improve the inherent gain of the microchannel plate and reduce the transmittance of the aluminum film of the fluorescent screen.
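    A toy Python model of the gain relationships stated above (intensifier gain proportional to the product of the four factors, and MCP gain doubling for every 50 V increase in operating voltage); the reference voltage and gain values are placeholders, not measured data.

```python
def mcp_gain(voltage_v, v_ref=800.0, gain_ref=1.0e3):
    """MCP gain doubling for every 50 V above a reference operating point.
    v_ref and gain_ref are placeholder numbers, not measured values."""
    return gain_ref * 2.0 ** ((voltage_v - v_ref) / 50.0)

def intensifier_gain(cathode_response, mcp_voltage_v, screen_efficiency, screen_voltage_v):
    """Brightness gain proportional to the product of the four factors
    (an arbitrary proportionality constant of 1 is assumed)."""
    return cathode_response * mcp_gain(mcp_voltage_v) * screen_efficiency * screen_voltage_v

# raising the MCP voltage by 50 V doubles the overall gain
print(intensifier_gain(1.0, 850.0, 1.0, 1.0) / intensifier_gain(1.0, 800.0, 1.0, 1.0))  # 2.0
```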

    Mar. 25, 2022
  • Vol. 51 Issue 3 0304001 (2022)
  • Di PAN, Qi ZHOU, Xuan LIU, Bing LIU, and Hui FANG

    Under high dynamic conditions, the star spot exhibits a tailing phenomenon, which makes it impossible to accurately extract the star centroid. During the exposure time, the total number of photoelectrons received by the detector does not change, but as tailing occurs, the star energy is dispersed over more pixels, resulting in blurred and broken star spot images. The tailing, blurring and fracture of the star spot prevent traditional methods from accurately extracting the star centroid, which adversely affects spacecraft attitude determination. Shortening the exposure time can reduce the tail length, but it also reduces the energy received by the detector, which again makes it difficult to extract the star centroid. Aiming at these two problems, this paper proposes a method for modeling and detecting star spots under high dynamic conditions. The method consists of four steps. Firstly, a static star spot template is established based on blackbody radiation. Secondly, a dynamic star spot template is generated based on star blur and the Bresenham algorithm. Then, the coarse positioning area of the star target is found by correlation template matching. Finally, the centroid method is used to extract the star centroid. Simulation experiments based on field stargazing data show that the simple template generated by the Bresenham algorithm and the complex template generated by integration have the same accuracy in extracting the star centroid, but the former is computed 500 times faster than the latter. Therefore, when the correlation template matching method is used for coarse star positioning, the simple template generated by the proposed algorithm approximates the real star spot energy distribution without sacrificing accuracy. The experimental results show that the proposed method can adapt to various exposure times, enabling the star sensor to achieve stable tracking at an angular velocity of 3°/s. When the exposure time is 50 ms and the angular velocity is 3°/s, the angular distance error is 10 arcseconds and the average extraction rate is 97%, an increase of 78.4% and 36.6%, respectively, compared with the traditional methods.
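    A small Python sketch of how a dynamic (tailed) star-spot template could be generated from a static template by smearing it along a Bresenham line, as described above; the template size, trail length and normalization are illustrative assumptions.

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer pixel coordinates on the line segment from (x0, y0) to (x1, y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

def dynamic_template(static_psf, trail_dx, trail_dy, size=32):
    """Smear a static star-spot template along a Bresenham trail to obtain a
    simple dynamic (tailed) template. Assumes the trail plus the PSF footprint
    stays inside the size x size window."""
    tpl = np.zeros((size, size))
    cy = cx = size // 2
    h, w = static_psf.shape
    for x, y in bresenham(cx, cy, cx + trail_dx, cy + trail_dy):
        tpl[y - h // 2: y - h // 2 + h, x - w // 2: x - w // 2 + w] += static_psf
    return tpl / tpl.sum()
```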

    Mar. 25, 2022
  • Vol. 51 Issue 3 0304002 (2022)
  • Luzi WANG, Yunsheng QIAN, Mohan SUN, and Xiangyu KONG

    Resolution is one of the important parameters of image intensifiers; it reflects the target detection performance (including the night working distance and image definition) of combined night vision instruments in dark environments of 0.001 lx to 0.1 lx. At present, the mainstream measurement method for this index in industry is subjective evaluation. In addition, several feasible and reasonable objective evaluation techniques have been presented in recent years, such as the FFT-based objective evaluation method for the resolution of low-light-level image intensifiers. However, traditional subjective evaluation methods suffer from strong human uncertainty and low accuracy, while traditional objective evaluation techniques face the challenges of improving time efficiency and reducing manual intervention. To solve these problems, an objective evaluation method for the resolution of image intensifiers based on the gray variation features of the stitched stripe-unit image is proposed. The main idea of the method is to scan the stitched unit image with a fixed-size template and calculate the unit definition from the obtained gray change sequence. The general procedure is to rotate the target image to the appropriate orientation, and then extract the resolution image from the target image using the spatial relationship between the image block and the central box of the diaphragm. The cropped resolution image is then rotated to the correct orientation using the image information, and the independent unit images are extracted from the resolution image using the spatial correspondence of each stripe unit. The adjacent sequential images belonging to the same unit are stitched in order along the stripe direction, and a fixed-size kernel is used to scan the stitched stripe-unit image along the stripe variation direction to generate the definition of the stripe unit. Finally, the resolution of the image intensifier is estimated by combining the constructed definition-resolution correspondence of the stripe units with a linear fitting algorithm. To verify the performance of this method, it is compared with traditional evaluation methods, including the traditional subjective method and the traditional objective method, and six image tubes with different resolutions are adopted in the experiments. The comparative experiments on timeliness and accuracy show that the proposed method is superior to the traditional objective method in timeliness and evaluation accuracy, and better than the traditional subjective method in test accuracy. In addition, the repeatability of the method is investigated, with the traditional subjective evaluation method used for comparison. The results show that for image tubes with a resolution higher than 40.3 lp/mm the subjective evaluation results vary greatly, and for image tubes with a resolution lower than 35.9 lp/mm the objective evaluation results fluctuate greatly, but the repeatability of the proposed method is stronger on the whole. In summary, the proposed objective evaluation method overcomes the disadvantages of traditional methods and provides an effective and feasible scheme for the standardized measurement of this parameter.
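    A minimal Python sketch of the scanning step described above: a fixed-size window slides along the stitched stripe-unit image, a definition value is derived from the resulting gray-change sequence, and a linear definition-resolution fit maps it to a resolution estimate; the window size and the specific definition measure are assumptions, not the paper's exact choices.

```python
import numpy as np

def stripe_definition(stitched, kernel=8):
    """Scan the stitched stripe-unit image along the stripe-variation axis
    with a fixed-size window and derive a definition value from the gray
    change sequence (here: mean absolute difference of window means)."""
    profile = stitched.mean(axis=0)                                   # average along the stripe direction
    means = np.convolve(profile, np.ones(kernel) / kernel, mode="valid")
    return np.mean(np.abs(np.diff(means)))

def estimate_resolution(definitions, resolutions, measured_definition):
    """Linear fit of the definition-resolution correspondence, then apply it."""
    a, b = np.polyfit(definitions, resolutions, 1)
    return a * measured_definition + b
```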

    Mar. 25, 2022
  • Vol. 51 Issue 3 0304003 (2022)
  • Mohan SUN, Yunsheng QIAN, Yingnan REN, Qiang ZHI, Xiangyu KONG, and Yizheng LANG

    The low light level image intensifier is an important piece of military hardware for the night battlefield, and it usually consists of a photocathode, a Microchannel Plate (MCP), a fluorescent screen and a matched dedicated power supply. The gated power supply applied between the photocathode and the MCP is an important part of the strong-light protection of the image intensifier. Its main feature is the use of Pulse Width Modulation (PWM) high voltage instead of the traditional direct-current high voltage, so that photoelectrons pass through only during the pulse, realizing switching of the electron flow. However, domestic low light level image intensifiers with gated power supplies currently lack theoretical model support and test indicators. Against this background, theoretical models and test indicators are studied in this paper. In terms of theoretical models, the factors that affect the brightness of the fluorescent screen, including incident illuminance, cathode voltage, cathode pulse duty cycle, MCP voltage and fluorescent screen voltage, are incorporated into the model for analysis, and an automatic brightness control model is finally established. In terms of test indicators, two parameters, the intensifier response time and the stabilization time, are proposed. The response time represents the time it takes for the screen brightness to fall from its peak to 90% of the maximum brightness in a high-light environment, and the stabilization time represents the time between the moment the screen brightness is at its highest point and the moment the brightness becomes stable. To study the effect of each parameter on the response time and stabilization time of the low light level image intensifier, two variables that are actually controlled when the gated power supply is used are selected, the cathode pulse duty cycle and the MCP voltage, and a simulation analysis is carried out. By changing the regulation step size of these variables, the trend of the fluorescent screen brightness in the model is calculated and analyzed, and simulation curves are drawn for comparison with the subsequent experimental results. To verify the simulation results, a fluorescent screen brightness test system is designed and built, which includes a light source system, a signal acquisition system and host computer software. The signal acquisition system uses photocells and high-speed signal acquisition circuits to collect the brightness of the fluorescent screen in real time to ensure the accuracy of the test results. Using this test system, three gated image intensifier samples based on the automatic brightness control model were tested. The experimental results show that the larger the regulation step size of the cathode pulse duty cycle and the MCP voltage, the shorter the response time, but the longer the stabilization time; the larger the regulation step size of the MCP voltage, the shorter the stabilization time, although if this step size is too large the brightness of the fluorescent screen fluctuates. These trends are basically consistent with the simulation results.
Besides, a low cathode pulse frequency can also cause unstable brightness oscillation of the fluorescent screen. Based on the above work, this paper verifies the feasibility of the automatic brightness control model and analyzes the effect of different control schemes on the response time and stabilization time, which provides guidance for subsequent research on gated power supplies for low light level image intensifiers.
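    The following Python sketch computes the two proposed indicators from a recorded screen-brightness trace according to the definitions given above; the settling tolerance and the way the final stable level is estimated are assumptions.

```python
import numpy as np

def response_and_stabilization_time(t, brightness, settle_tol=0.02):
    """Response time: from the brightness peak to the moment the brightness
    drops to 90% of its maximum. Stabilization time: from the peak to the
    moment the brightness stays within settle_tol of its final stable value."""
    t = np.asarray(t, dtype=float)
    b = np.asarray(brightness, dtype=float)
    i_peak = int(np.argmax(b))
    # response time
    below = np.where(b[i_peak:] <= 0.9 * b[i_peak])[0]
    t_resp = t[i_peak + below[0]] - t[i_peak] if below.size else np.nan
    # stabilization time: find the start of the final settled run
    b_final = b[-max(1, len(b) // 10):].mean()        # stable level from the trace tail
    settled = np.abs(b[i_peak:] - b_final) <= settle_tol * b_final
    outside = np.where(~settled)[0]
    idx = outside[-1] + 1 if outside.size else 0
    t_stab = t[i_peak + idx] - t[i_peak] if idx < len(settled) else np.nan
    return t_resp, t_stab
```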

    Mar. 25, 2022
  • Vol. 51 Issue 3 0304004 (2022)
  • Qin ZHANG, Xiaofeng BAI, Hongchang CHENG, Gangcheng JIAO, Zhoukui LI, Kun HAN, and Qi LI

    As an important performance index of low-light-level image intensifiers, the signal-to-noise ratio determines the detection limit under low-light conditions. Therefore, an accurate evaluation of the signal-to-noise ratio helps to grasp the actual working status of the low-light-level image intensifier. In the photoelectric image transmission process, the quantum noise of the photocathode and the particle noise of the microchannel plate and the phosphor screen add noise to the image transmission chain, thereby reducing the signal-to-noise ratio of the image. Laboratory testing methods for the signal-to-noise ratio of low-light-level image intensifiers are relatively mature, and a photomultiplier tube is usually used as the core detection device to measure the signal and noise of the output light spot. The test light source specified in the laboratory test of the signal-to-noise ratio is the 2 856 K light source A. Because of the difference in spectral distribution between the night sky light and light source A, the laboratory measurement of the signal-to-noise ratio cannot accurately describe the signal-to-noise ratio of the low-light-level image intensifier under actual night sky light conditions, and the actual night sky light spectrum is relatively complicated and difficult to simulate with the laboratory test light source. In response to this problem, a theoretical calculation model of the output signal-to-noise ratio of the low-light-level image intensifier is deduced based on the spectral matching relationship between the photocathode and the night sky light and on the theory of light quantum noise fluctuation. After the model is verified against the parameters of typical low-light image intensifiers, the output signal-to-noise ratios of the super second and third generation image intensifiers under the radiation of three kinds of night sky light and of the test light source with the same illuminance are calculated with the model. The comparative analysis shows that: 1) Under the radiation of the three kinds of night sky light, the actual signal-to-noise ratio of the super second and third generation image intensifiers is far greater than the laboratory test results under the test light source with the same illuminance. The reason is that, compared with light source A, the low-light image intensifier has a larger sensitivity conversion coefficient for the reflection spectrum of the actual grass scene. This further proves that the laboratory test results of the signal-to-noise ratio of the image intensifier cannot describe its actual working status under these three kinds of night sky light conditions; 2) Under the radiation of the three kinds of night sky light, the difference in signal-to-noise ratio between the third generation and the super second generation is greater than the laboratory test difference, and this difference increases with increasing illuminance. Therefore, the third-generation low-light image intensifier has a greater signal-to-noise ratio advantage in the actual working environment of night sky light radiation. In addition, the theoretical calculation of the signal-to-noise ratio at extremely low illuminance levels helps to broaden the illuminance range for the evaluation of the signal-to-noise ratio. Under clear starlight conditions, the illuminance of the reflected light from the actual scene reaching the cathode surface of the low-light night vision system is on the order of 10-5 lx.
Under even darker night-sky conditions, the actual illuminance of the photocathode surface can even reach the level of 10-6 lx. It is difficult for current laboratory test conditions to characterize the detection-limit performance under these two conditions. Therefore, the theoretical calculation at the extremely low illuminance level of 10-6 lx helps to broaden the illuminance range for signal-to-noise ratio evaluation. The research in this paper lays a theoretical foundation for evaluating the signal-to-noise ratio of low-light-level image intensifiers under night-sky radiation conditions with more complex spectral distributions, and it also theoretically discusses the signal-to-noise ratio at extremely low illumination levels.
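    For illustration only, a simplified shot-noise-limited estimate of the output signal-to-noise ratio in Python; this is a much cruder stand-in for the spectrum-matched model in the paper, and the default noise factor and unit conventions are assumptions.

```python
import numpy as np

E_CHARGE = 1.602e-19  # C

def shot_noise_snr(illuminance_lx, cathode_sensitivity_uA_per_lm,
                   cathode_area_cm2, integration_time_s, mcp_noise_factor=2.0):
    """Shot-noise-limited SNR estimate: the photocathode current follows from
    luminous flux times sensitivity; the photoelectron count over the
    integration time sets the sqrt(N) quantum noise, degraded by the MCP
    noise factor. All numeric defaults are placeholder assumptions."""
    flux_lm = illuminance_lx * cathode_area_cm2 * 1e-4          # lx * m^2 = lm
    current_a = cathode_sensitivity_uA_per_lm * 1e-6 * flux_lm  # A
    n_pe = current_a * integration_time_s / E_CHARGE            # photoelectrons
    return np.sqrt(n_pe / mcp_noise_factor)

# e.g. 1e-4 lx scene illuminance, 600 uA/lm cathode, 0.1 cm^2 spot, 10 ms integration
print(shot_noise_snr(1e-4, 600.0, 0.1, 0.01))
```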

    Mar. 25, 2022
  • Vol. 51 Issue 3 0304005 (2022)
  • Jiping SUN, and Weiqiang FAN

    Mine video surveillance technology has significant advantages in promoting safe, efficient and unmanned coal mining. However, the poor working conditions of cameras in coal mines lead to serious degradation of image quality, so multi-source fusion processing of mine surveillance video can address the current problems and help promote the intelligent development of coal mines. Existing image fusion algorithms have low computational efficiency and poor timeliness, and the fused images they produce suffer from problems such as false targets, blurred targets and halo-occluded targets, which cannot meet the needs of mine video surveillance. This paper proposes a mine dual-band image fusion algorithm using a multi-scale and adaptive difference-of-Gaussians transform combined with a rectified non-linear unit and VGG-16. A source image decomposition model based on the multi-scale and adaptive difference of Gaussians is designed. This model decomposes the infrared and visible images into basic images and detail images. The basic image represents the approximate components of the source image, reflecting the general features of the field of view; the detail image represents the detailed components of the source image, including edges and textures, which are also the parts most sensitive to human vision and machine vision. To eliminate the interference of light sources in the visible basic image and improve the overall contrast and information richness of the fused image in the underground mine, a rectified non-linear unit function is constructed. This function makes the weight of the infrared basic image adjust automatically with the gray level of the visible basic image, and a “weighted average” basic-image fusion strategy is adopted to obtain a fused basic image that eliminates light source interference while retaining the general features of the visible and infrared images. Meanwhile, the pre-trained VGG-16 network is used to extract four layers of depth features from the detail images, and the l1-norm and a Gaussian operator are used to obtain the saliency maps corresponding to the four layers of depth features. After four pairs of fused images with different depth features are obtained through an inverse pooling operation and weighted fusion, the fused detail image is obtained by the “maximum value selection” method. The fused basic image and detail image are combined to reconstruct the final fused image. To verify the effectiveness of the proposed algorithm, experiments were carried out on coal mine source images from four different scenes; five typical image fusion algorithms were included for subjective comparison, and five fused-image quality evaluation indicators together with the average running time were used for objective evaluation. The experimental results show that the proposed algorithm eliminates the interference of artificial light sources and obtains underground fused images with clear scenes and salient features that better match human visual characteristics. At the same time, it improves the fusion quality and efficiency for heterogeneous images, which is conducive to further image analysis and processing. Compared with the other five typical algorithms, the proposed algorithm is more robust.
It not only overcomes the shortcomings of traditional algorithms that cannot extract image depth features but also makes it easier to completely eliminate light source interference and obtain more comprehensive, reliable, and rich scene information. In addition, the proposed algorithm can be used for the intelligent analysis of multi-source images of mines and remote monitoring on the ground. It can also be used to eliminate the problem of artificial light source interference in underground space, underground engineering or night road video surveillance images.
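    A minimal Python sketch of the base/detail decomposition described above using a multi-scale difference of Gaussians; the scales are illustrative, and the paper's adaptive scale selection is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_decompose(img, sigmas=(1.0, 2.0, 4.0)):
    """Multi-scale difference-of-Gaussians decomposition into one basic image
    and several detail images; img == base + sum(details)."""
    img = img.astype(float)
    blurred = [img] + [gaussian_filter(img, s) for s in sigmas]
    details = [blurred[i] - blurred[i + 1] for i in range(len(sigmas))]
    base = blurred[-1]                     # coarsest blur keeps the general features
    return base, details
```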

    Mar. 25, 2022
  • Vol. 51 Issue 3 0310002 (2022)
  • Ying XIA, Junyao LI, and Dongen GUO

    Remote sensing image scene classification is an important and challenging problem in remote sensing image interpretation. With the generation of large numbers of scene-rich high-resolution remote sensing images, scene classification is widely used in many fields such as smart city construction, natural disaster monitoring and land resource utilization. Thanks to advances in deep learning techniques and the establishment of large-scale scene classification datasets, scene classification methods have improved significantly. Although deep learning-based classification methods achieve high accuracy, supervised methods require a large number of training samples, while unsupervised classification methods have low classification accuracy and struggle to meet practical needs. Meanwhile, the annotation of remote sensing images requires strong engineering skills and expert knowledge; in most remote sensing applications only a small number of labeled images are available for supervised training, and a large amount of unlabeled imagery cannot be fully utilized. Therefore, semi-supervised learning, which learns from a small amount of labeled data while extracting effective features from a large amount of unlabeled data, is a promising way to solve such problems. To address the complex background of remote sensing images and the inability of supervised scene classification algorithms to use unlabeled data, a semi-supervised remote sensing image scene classification method based on generative adversarial networks, namely a residual attention generative adversarial network, is proposed. First, to enhance the stability of training, residual blocks with skip connections are introduced into the deep neural network. At the same time, spectral normalization constrains the spectral norm of the weight matrix in each convolutional layer of the residual block so that the mapping from input to output satisfies 1-Lipschitz continuity, which keeps the adversarial training smooth, improves training stability and avoids network degradation. Secondly, the shallow features extracted by the bottom convolutional layers contain mostly local information with low-level semantics, while the deep features extracted by the top layers contain more global information but lose part of the detail. Therefore, the shallow features are fused with the deep features extracted from the multi-layer spectrally normalized residual blocks to reduce feature loss and allow the model to learn the complementary relationships between different features, thus improving the model's representational ability. Finally, to guide the model to focus more purposefully on important features and suppress unnecessary ones, an attention module that mimics the signal processing of the human brain is used; in order to obtain stronger feature representation ability and capture the dependencies between features, a gating mechanism is introduced to form an attention module combined with gating. To verify the superiority of the method, experiments were conducted on two high-resolution remote sensing image datasets, EuroSAT and UC Merced. On the EuroSAT dataset, the highest classification accuracy reached 93.3% and 97.4% when the number of labeled samples was 2 000 and 21 600, respectively.
On the UC Merced dataset, the classification accuracies reached 85.7% and 91.0% when the number of labeled samples was 400 and 1 680, respectively. To further examine the contribution of each module, ablation experiments were also conducted on the EuroSAT and UCM public datasets. They show that the spectral normalization residual module contributes the most, with improvements for different numbers of labeled samples; the reason is that spectral normalization keeps the network gradients within a certain range during backpropagation, improving the stability of the generative adversarial network without destroying the network structure. Next is the attention module combined with gating: especially when the proportion of labeled samples exceeds 10%, the classification performance improves more, because the sample size is then sufficient to learn more comprehensive features. The smallest contribution comes from the feature fusion module, because when the sample size is very small the network is not sufficiently trained and some redundant or invalid features are extracted, resulting in lower classification accuracy. The above experimental results show that the proposed residual attention generative adversarial network can effectively extract more discriminative features and improve semi-supervised classification performance when only a small number of labeled high-resolution remote sensing images are available, a situation in which discriminative features are otherwise difficult to extract.
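    As a sketch of the spectrally normalized residual block described above, the following PyTorch snippet wraps each convolution with spectral normalization so that the block's layers are constrained toward 1-Lipschitz behavior; channel counts, kernel sizes and the activation are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNResidualBlock(nn.Module):
    """Residual block whose convolutions are spectrally normalized; the skip
    connection helps stabilize adversarial training."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))
        self.conv2 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        h = self.act(self.conv1(x))
        h = self.conv2(h)
        return self.act(x + h)     # residual (skip) connection
```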

    Mar. 25, 2022
  • Vol. 51 Issue 3 0310003 (2022)
  • Cheng YANG, Zhao WANG, Junhui HUANG, and Chao XING

    Image acquisition is a fundamental procedure of machine vision inspection, and whether high-quality images can be acquired directly affects the difficulty of subsequent detection algorithm design and the object recognition ability. The light source and illumination mode are critical components of image acquisition. Common illumination modes with fixed light sources include backlighting, vertical lighting, coaxial lighting, etc., which are not suitable for surfaces with subtle targets, highlights, complex textures and embossed printing in special inspection scenes. For coarse-grained reflective surfaces, local specular reflection occurs when a general light source is used directly. To solve this problem, this paper proposes a novel image acquisition method based on random speckle projection that avoids fixing the direction of the light field. A series of random speckle patterns is designed and projected onto the coarse-grained reflective surface through active projection illumination, and a set of images with random light field distributions is captured by a camera. The abundant information in the acquired images is used to suppress the reflection, and a fused image with uniform light intensity and a high signal-to-noise ratio is generated by the mean fusion method. To handle the perspective distortion that occurs during speckle projection, a perspective distortion correction method for the random speckle image is proposed to achieve efficient speckle projection with uniform density. Specifically, an oblique projection geometric model along the perspective distortion direction is established by introducing a projection distortion factor and a projection image scale ratio and by analyzing their relationship with the projector inclination angle and its throw ratio. Compared with the traditional perspective distortion correction based on a projection transformation matrix, the proposed method does not rely on feature point calibration and is simpler and more practical. The surface of zinc oxide resistors is taken as an example to verify the effectiveness of the proposed method. We design and build an image acquisition system comprising a programmable structured light projector, a camera with a matched lens, detection supports and a computer, which can realize the rapid cycling of pattern projection and image acquisition. The experimental results show that our method suppresses the local reflection on the workpiece surface to a large extent, and the details of surface texture and scratches are recorded more richly in the fused image. Compared with a general ring light source, the image gray histogram and contrast demonstrate the advantage of our method in image quality evaluation: the image contrast is improved by a factor of 1.33~2.75, laying a good foundation for the subsequent design of image detection algorithms.
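    A minimal Python sketch of the mean-fusion step described above: frames captured under different random speckle patterns are averaged so that the randomly distributed illumination evens out and local specular reflections are suppressed; the final normalization to 8-bit range is an illustrative choice.

```python
import numpy as np

def mean_fuse(images):
    """Mean fusion of images captured under different random speckle patterns."""
    stack = np.stack([im.astype(float) for im in images], axis=0)
    fused = stack.mean(axis=0)
    # normalize back to 8-bit range for display (optional)
    span = fused.max() - fused.min()
    fused = 255.0 * (fused - fused.min()) / (span + 1e-12)
    return fused.astype(np.uint8)
```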

    Mar. 25, 2022
  • Vol. 51 Issue 3 0310004 (2022)
  • Yunzuo ZHANG, Kaina GUO, Zhaoquan CAI, and Wenxuan LI

    With the development of people's security awareness, video surveillance systems have been widely deployed, resulting in an increasing amount of surveillance video data. Surveillance cameras are usually fixed and record continuously for long periods, so the video background is largely unchanged and motion segments and static segments alternate. However, people usually only care about the motion segments, so there is an urgent need to quickly extract motion segments from massive surveillance videos, which has received much attention from researchers. Motion segment extraction is the basis and prerequisite of behavior recognition, surveillance video synopsis and other subsequent processing, and it is also a research hotspot in the field of computer vision. Existing motion segment extraction algorithms are mainly divided into traditional methods and deep learning-based methods. The former must process the full spatial data of the video and detect moving targets frame by frame, which is computationally intensive, time-consuming and cannot meet real-time needs. The latter requires a massive amount of sample data to pre-train the model, with high algorithmic complexity and demanding hardware requirements. To address this problem, this paper proposes a fast motion segment extraction method for surveillance video via nested elliptical spatio-temporal tubes, which greatly reduces the amount of calculation. Firstly, the surveillance video is sampled along elliptical spatio-temporal lines: the elliptical sampling lines are adaptively generated according to the aspect ratio of the video sequence, and the pixels on the sampling lines of each frame are extracted to form an elliptical spatio-temporal tube. Secondly, multiple elliptical spatio-temporal tubes sampled progressively according to the surveillance scene are integrated into nested elliptical spatio-temporal tubes. Then, the nested elliptical spatio-temporal tubes are expanded to generate spatio-temporal plane maps. Finally, the background of the spatio-temporal plane maps is removed and a spatio-temporal flow model is constructed to extract the motion segments. In this model, the instantaneous spatio-temporal flow curve reflects whether moving targets enter or exit the sub-surveillance area in the corresponding frame, and the accumulative spatio-temporal flow curve reflects the number of moving targets in the sub-surveillance area in the corresponding frame. The flowchart of the proposed algorithm is shown in the figure. The proposed algorithm uses elliptical spatio-temporal sampling instead of traditional full spatial data processing and requires no pre-training, which greatly reduces the amount of surveillance video data to be processed. The algorithm forms nested elliptical spatio-temporal tubes by progressive sampling, which effectively reduces the computation and also accounts for targets moving inside the surveillance area. Experiments on the SISOR, KTH and CAVIAR public data sets compare the proposed algorithm with mainstream motion segment extraction algorithms. The results show that the proposed algorithm has an obvious advantage in computing time, greatly reducing the amount of calculation while maintaining detection accuracy, and that it can realize fast motion segment extraction in surveillance videos.
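    A small Python sketch of the elliptical spatio-temporal sampling described above: pixels on an elliptical sampling line whose aspect ratio follows the frame are extracted from every frame and stacked over time into a spatio-temporal plane map; the number of sample points and the nesting scales are assumptions.

```python
import numpy as np

def ellipse_samples(height, width, scale, n_points=720):
    """Pixel coordinates on one elliptical sampling line whose aspect ratio
    follows the frame's aspect ratio; scale in (0, 1] sets the nesting level."""
    cy, cx = height / 2.0, width / 2.0
    a, b = scale * (width / 2.0 - 1), scale * (height / 2.0 - 1)
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    xs = np.clip(np.round(cx + a * np.cos(theta)).astype(int), 0, width - 1)
    ys = np.clip(np.round(cy + b * np.sin(theta)).astype(int), 0, height - 1)
    return ys, xs

def spatio_temporal_plane(frames, scale):
    """Stack the pixels sampled on one elliptical tube over time, giving a
    spatio-temporal plane map of shape (time, perimeter)."""
    ys, xs = ellipse_samples(frames[0].shape[0], frames[0].shape[1], scale)
    return np.stack([f[ys, xs] for f in frames], axis=0)
```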

    Mar. 25, 2022
  • Vol. 51 Issue 3 0310005 (2022)
  • Jinxin XU, Qingwu LI, Zhiqiang GUAN, and Xiaolin WANG

    Aiming at the problem that the linear reconstruction results of high-energy flash X-ray images are affected by system blur, a nonlinear reconstruction algorithm with randomly perturbed optimization and multi-model fusion is proposed. The nonlinear forward model of high-energy flash radiography is constructed and the Jacobian matrix of the residual vector of the objective function is derived. The solution and uncertainty quantification of the inverse problem are considered from the perspective of Bayesian theory, and a nonlinear hierarchical Bayesian model is constructed by introducing hyper-parameters with weakly informative priors. The hyper-parameters avoid manual tuning, are not affected by changes in parameter form, and yield more accurate parameter estimates. The conditional distribution is sampled by accelerating the solution of the randomly perturbed optimization problem, and a constraint based on the Jacobian matrix projection is used to solve the optimization problem. The proposal distribution of the object parameters is designed to reduce the statistical bias of the samples. In addition, a multi-model fusion strategy is proposed to fuse the sample values from the linear and nonlinear Bayesian models under the minimum-variance criterion; a surrogate model with strong correlation and clear physical meaning is selected, and the expectation estimation is carried out on it directly. The proposed algorithm improves the efficiency of sample estimation while ensuring that the reconstructed results show clear edges and high accuracy. A nonlinear reconstruction experiment is carried out on high-energy flash X-ray static images at the 4 MeV energy level and compared with existing reconstruction algorithms based on uncertainty analysis to verify the effectiveness of the proposed algorithm. The irradiated target is an inverted cone made of tin and placed at the center of the device. Compared with the linear reconstruction results, the proposed algorithm effectively suppresses the background noise of the image and obtains better visual effects in the isosceles region of the cone. Experimental results show that the algorithm can effectively suppress the influence of system blur and noise and obtain more accurate reconstruction results than linear reconstruction algorithms.
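    The minimum-variance fusion rule mentioned above can be written compactly; the sketch below fuses two estimates with weights inversely proportional to their variances, which covers only the fusion step and not the full hierarchical Bayesian model.

```python
def min_variance_fuse(x_lin, var_lin, x_nonlin, var_nonlin):
    """Fuse estimates from the linear and nonlinear models under the
    minimum-variance criterion (assumes independent, unbiased estimates)."""
    w_lin = var_nonlin / (var_lin + var_nonlin)
    w_nonlin = var_lin / (var_lin + var_nonlin)
    fused = w_lin * x_lin + w_nonlin * x_nonlin
    fused_var = (var_lin * var_nonlin) / (var_lin + var_nonlin)
    return fused, fused_var
```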

    Mar. 25, 2022
  • Vol. 51 Issue 3 0310006 (2022)
  • Liang SHENG, Yanhong ZHANG, Jianpeng GAO, Yang LI, Baojun DUAN, and Dongwei HEI

    Three-dimensional (3D) neutron imaging is a cutting-edge diagnostic method for pulsed radiation sources driven by lasers or Z-pinches. Neutron images at different energies can accurately diagnose the spatial structure of the burning Deuterium-Tritium (DT) plasma and the surrounding materials under high compression ratio, which is of great significance for improving compression symmetry measures, verifying physical models and evaluating ignition performance. Using the computed tomography method, 3D information about the target's internal structure can be acquired by multi-view neutron imaging. Owing to the limitations of current technology, the high radiation intensities, short durations and wide energy ranges of pulsed radiation sources make pixel-level accurate matching, high-precision time correlation and camera sensitivity difference correction between different imaging axes very difficult, so only a limited number of axes can be implemented. Reconstructing 3D images from severely incomplete two-dimensional data is an ill-posed problem, which suffers from a large solution space, artifacts and an inability to preserve edge features. JIANG Shaoen et al. developed a 3D reconstruction algorithm based on algebraic reconstruction techniques. VOLEGOV P L et al. used an iterative Expectation-Maximization (EM) algorithm to reconstruct 3D neutron or X-ray sources; the algorithm reconstructed the main geometrical features of the source but introduced artifacts affected by the angular distribution of the projection directions. VOLEGOV P L et al. later applied Spherical Harmonic Decomposition (SHD) and Cylindrical Harmonic Decomposition (CHD) methods to the reconstruction of 3D radiation sources. While these analytical methods based on basis-function expansion were fast and could obtain a unique solution under certain conditions, their representation ability was limited by the number of imaging axes and produced artifacts related to the truncation of the expansion order. In this paper, we implement the SHD reconstruction algorithm and the EM iterative algorithm. Considering that inertial confinement fusion radiation sources have a degree of spherical symmetry and that the EM algorithm finds a local optimum, we use the SHD reconstruction results as the initial value of the EM iterative algorithm. This method effectively exploits a physical prior based on the target's physical characteristics, overcoming to some extent the large solution space of limited-view 3D reconstruction. To evaluate the performance of the proposed method, an ellipsoidal radiation source with a Gaussian intensity distribution is designed as the target source to be reconstructed. Compared with the results of the SHD, OSEM and MF-OSEM algorithms, the reconstruction results of this method are closest to the ideal ellipsoidal source. Our method also achieves the best results in terms of three quantitative evaluation indices, namely 36.545 4 (PSNR), 0.014 9 (RMSE) and 0.031 5 (DKL). The results indicate that the EM iterative algorithm with initial values constrained by the SHD algorithm exhibits better performance and better adaptability to noise than the SHD or EM algorithm alone. Our method can be used as a benchmark comparison algorithm for the reconstruction of similar 3D radiation sources from few-view projections.
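    A minimal Python sketch of the EM (MLEM) iteration used above, where the initial value x0 would be supplied by the SHD reconstruction; the system matrix A and the iteration count are placeholders.

```python
import numpy as np

def em_reconstruct(A, y, x0, n_iter=50, eps=1e-12):
    """Multiplicative EM (MLEM) update  x <- x * A^T(y / Ax) / A^T 1.
    A maps the flattened 3D source to the few-view projections y; x0 would be
    the SHD reconstruction used as the initial value."""
    x = np.clip(np.asarray(x0, dtype=float), eps, None)
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity (back-projected ones)
    for _ in range(n_iter):
        ratio = y / np.clip(A @ x, eps, None)
        x *= (A.T @ ratio) / np.clip(sens, eps, None)
    return x
```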

    Mar. 25, 2022
  • Vol. 51 Issue 3 0311001 (2022)
  • Liang SHAN, Feiyang SHI, Bo HONG, Daodang WANG, Junzhe XIONG, and Ming KONG

    Fluids are very common in nature and engineering. Research in engineering fields such as fluid mechanics and aerodynamics is closely related to fluid measurement, and measuring and analyzing the motion state of fluids is a key issue in the field. The particle image velocimetry method developed in the 1980s is a transient, multipoint, non-contact hydrodynamic velocimetry method; it can accurately measure the transient flow field in a plane without disturbing the fluid. This paper analyzes the imaging principle of tracer particles in a particle image velocimetry algorithm based on color illumination. According to the application environment of the actual flow field and the experimental conditions of the color particle image velocimetry algorithm, a lighting system consisting of a white light source and a filter with linearly varying wavelength is adopted to provide the particle field with color volume illumination that varies with depth at the same light intensity. Particles at different depths in the fluid, which reflect light of the corresponding wavelength, are modeled as narrowband point light sources with different wavelengths. To obtain the accurate position of each particle imaged on the CCD, namely the spectral distribution of the particle field, a pinhole camera imaging model is established to analyze and simulate the optical path of particle imaging. When the light reflected by a particle passes through the pinhole model of the camera, the particle is imaged on the CCD sensor. First, the imaging diameter of a particle is calculated from the Airy spot diameter and the real particle diameter. Second, the normalized intensity matrix of the discrete image is obtained from the weight function of the particle intensity distribution, since the intensity distribution of a particle should follow a two-dimensional Gaussian distribution. Finally, the particle's imaging diameter and intensity are combined to obtain the particle's point spread function. The three-dimensional imaging model of tracer particles is established according to the pinhole camera model and the corresponding point spread function. The simulated images of particles in the flow field illuminated by color volume light on the color camera are combined with the three-dimensional velocity field generated by the Rankine vortex model to obtain simulated particle positions at different times. Rainbow particle image velocimetry is used to reconstruct the particle distribution field and particle velocity field for particles of different densities under different velocities of the Rankine vortex field, and the AAE and AEPE of the reconstruction results are discussed. With the same particle density, AEPE increases with particle speed, while AAE shows the opposite trend. AEPE decreases as the particle density grows, because with more particles the information available for 3D reconstruction is more sufficient, which improves the reconstruction accuracy; the particle density has little effect on AAE. The comparison between the reconstructed results and the true values shows that the simulated particle images generated by this method can support research on particle image velocimetry algorithms based on color illumination.
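    A small Python sketch of rendering one tracer particle as a two-dimensional Gaussian spot, as described above; the relation between the imaging diameter and the Gaussian sigma is an assumed convention.

```python
import numpy as np

def render_particle(image, x0, y0, diameter_px, intensity):
    """Add one tracer particle to `image` as a 2D Gaussian spot whose e^-2
    diameter equals the Airy-based imaging diameter (assumed convention:
    e^-2 radius = 2*sigma, i.e. sigma = diameter/4)."""
    sigma = diameter_px / 4.0
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    image += intensity * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return image
```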

    Mar. 25, 2022
  • Vol. 51 Issue 3 0311002 (2022)
  • Zhirun WANG, Wenjing ZHAO, Aiping ZHAI, and Dong WANG

    Benefitting from the low cost and broad detection wavelength range of the single-pixel detector, single-pixel imaging is a promising choice for applications such as multi-wavelength and low-light imaging. However, imaging a scene requires multiple measurements, which hinders the improvement of imaging speed and limits further development. One way to accelerate single-pixel imaging is to find a better sampling strategy, so that the number of measurements can be greatly reduced by discarding relatively unimportant measurements while degrading the imaging quality as little as possible. To address this problem, deep Q network based single-pixel imaging, which treats the measurement scheme as a decision-making process of a deep Q network, has been proposed for orthogonal-transform-based single-pixel imaging. It has proved to be an efficient way to find the optimal sampling strategy for deep Q network based Fourier single-pixel imaging and Hadamard single-pixel imaging; nonetheless, a more detailed analysis for different transforms is still needed. To further develop deep Q network based single-pixel imaging, this paper comparatively analyzes its performance with different orthogonal transforms. Building on the previously proposed deep Q network based Fourier and Hadamard single-pixel imaging, deep Q network based discrete cosine transform single-pixel imaging and Krawtchouk moment transform single-pixel imaging are proposed. Using structural similarity and peak signal-to-noise ratio as quantitative image quality criteria, and the artificial planning method as the baseline for comparison, the reconstructed results of deep Q network based single-pixel imaging with the four orthogonal transforms are quantitatively analyzed, and comparisons among the four transform-based schemes are also made. The simulation and experimental results show that deep Q network based single-pixel imaging outperforms artificial planning in imaging quality because the deep Q network finds the optimal sampling strategy in a more efficient manner. The deep Q network brings the most significant imaging quality improvement for discrete cosine transform single-pixel imaging, while deep Q network based Krawtchouk moment transform single-pixel imaging overcomes local effects in natural images, resulting in a large improvement in imaging quality. The concentration of the spectrum is not a perfect criterion of imaging quality but an approximate one. These results provide guidance for the application and improvement of deep Q network based single-pixel imaging.
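    A minimal Python sketch of orthogonal-transform single-pixel imaging under a given sampling order, using the Hadamard transform as an example: each measurement corresponds to one transform coefficient, and reconstruction is the inverse transform with unmeasured coefficients set to zero. In the paper the measurement order would come from the trained deep Q network; here it is simply an input argument.

```python
import numpy as np
from scipy.linalg import hadamard

def spi_reconstruct(scene, order):
    """Hadamard single-pixel imaging with a given measurement order.
    scene.size must be a power of 2 (e.g. a 64x64 image)."""
    n = scene.size
    H = hadamard(n) / np.sqrt(n)                # orthonormal Hadamard basis
    coeffs = H @ scene.ravel()                  # full set of possible measurements
    kept = np.zeros(n)
    kept[list(order)] = coeffs[list(order)]     # only the sampled coefficients
    return (H.T @ kept).reshape(scene.shape)    # inverse transform
```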

    Mar. 25, 2022
  • Vol. 51 Issue 3 0311003 (2022)
  • Zhao YANG, Pengcheng YANG, Mengmeng ZHANG, Yuan XIAO, and Hui LIU

    In the phantom imaging system, the traditional imaging light source device uses four display screens to display the front, rear, left and right sides of the object, respectively, and the light perpendicular to each display screen is mapped onto the corresponding face of a quadrangular pyramid, thereby forming a three-dimensional model inside it. Compared with the traditional imaging device, a projection imaging device has the advantages of low manufacturing cost and convenient operation of the imaging system. However, because the projector projects light divergently from a point source, and the central-axis light is refracted as it passes through the projector lens, an off-axis angle is produced, so the projected image illuminating each face of the quadrangular pyramid suffers geometric distortion, resulting in distortion of the 3D model. To address this problem, a correction method for the local geometric distortion of the projected image is proposed: by moving pixels in the distorted image to the positions of the corresponding pixels in the reference image, the image distortion is corrected. The image projected onto the outer surface of the quadrangular pyramid lies on a parametrizable projection surface, so, following the inverse deformation method, perspective transformation is used for point-by-point mapping in the geometric correction to ensure the accuracy of the image. First, the projected image is extracted, the original image and the corresponding regularly shaped distorted image are selected for analysis, the mapping relationship between the shape of the projected image and the degree of distortion of the 3D model is established, and a correction model for the local geometric distortion of the projected image is constructed. Then, the distorted image is inversely transformed in equal proportion relative to the original image, and the inverse transformation matrix is used to pre-distort the original image as a whole, deforming it into the required pre-distorted image. Finally, the pre-distorted image is cached by the projector as the projected image, so as to offset the off-axis angle and the varying degrees of image distortion caused by the non-perpendicular mapping between the beam and the projection plane, thereby compensating the distortion of the three-dimensional model and achieving distortion correction. A test with a regular cube pattern shows that, after correction, the side-length error of the image is reduced from 34.4% to 3.3% and the angle error from 18.3% to 3.3%, which verifies the effectiveness and feasibility of the method.
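    The pre-distortion step described above can be sketched with a standard perspective transformation, for example using OpenCV; here the corner correspondences are assumed to be measured from a projected test pattern on one pyramid face, which differs from the paper's exact procedure but illustrates the inverse-deformation idea.

```python
import cv2
import numpy as np

def predistort(image, observed_quad, reference_quad):
    """Pre-distort the source image with the inverse of the measured
    perspective distortion. observed_quad / reference_quad are 4x2 corner
    arrays (hypothetical measured correspondences)."""
    # forward distortion: reference square -> observed (distorted) quadrilateral
    M = cv2.getPerspectiveTransform(np.float32(reference_quad),
                                    np.float32(observed_quad))
    M_inv = np.linalg.inv(M)                  # inverse transformation matrix
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, M_inv, (w, h))
```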

    Mar. 25, 2022
  • Vol. 51 Issue 3 0311004 (2022)
  • Ming LI, Qiang GAO, Shuang CHEN, and Bo LI

    Femtosecond laser molecular tagging velocimetry is one of the most commonly used non-invasive velocity measurement methods. Traditional nitrogen-based molecular tagging velocimetry has a weak signal, a poor signal-to-noise ratio and a small dynamic range, which limits its application, especially in low-speed flow fields. This paper studies femtosecond laser-induced chemiluminescence velocimetry: a femtosecond laser interacts with a methane/nitrogen flow field and generates cyano fluorescence with strong signal intensity and long luminous duration, thus achieving velocity measurement with a high signal-to-noise ratio, high precision and wide range. The experimental results show that the intensity and duration of the cyano fluorescence signal can be adjusted by changing the methane concentration: the lower the concentration, the longer the cyano fluorescence lasts. In velocity measurement, the required fluorescence intensity and duration must be considered together, and an appropriate methane concentration selected to achieve the best measurement effect. This paper also explores the velocity measurement range of the method. The method has no intrinsic upper limit; the practical upper limit is mainly set by the time resolution of the delay-triggering hardware, the spatial resolution of the imaging system and the minimum gate width of the camera. There is, however, a lower limit. At a concentration of 500 ppm, a fluorescence signal with a signal-to-noise ratio of 8 can still be obtained 120 μs after the laser radiation. The minimum resolvable displacement of the imaging system in this experiment is 27 μm, so the lower limit under these experimental conditions is 0.23 m/s; an even lower limit can be obtained by further reducing the methane concentration. In addition, the influence of laser energy and delay time on velocity measurement accuracy is evaluated experimentally. This work greatly expands the application range of femtosecond laser molecular tagging velocimetry and has great application potential in the aerospace field.
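
    A back-of-the-envelope check of the quoted lower velocity limit, using only the two numbers reported in the abstract (minimum resolvable displacement and longest usable delay), is sketched below.

```python
# Lower velocity limit = minimum resolvable displacement / longest usable delay.
min_displacement = 27e-6   # m, minimum resolvable displacement of the imaging system
max_delay = 120e-6         # s, longest delay with usable signal (SNR ~ 8) at 500 ppm methane
v_min = min_displacement / max_delay
print(f"lower velocity limit ≈ {v_min:.3f} m/s")   # 0.225 m/s, i.e. the ~0.23 m/s quoted above
```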

    Mar. 25, 2022
  • Vol. 51 Issue 3 0314001 (2022)
  • Lingling DENG, Jiacheng SONG, Jintao GUO, and Jiajin ZHENG

    A conductive silver paste is prepared by mixing Ag Nanowires (AgNW) with different concentrations of Hydroxypropyl Methyl Cellulose (HPMC) in a one-step process, and a composite transparent electrode with Ag nanowires embedded in HPMC (AgNW:HPMC) is then prepared by spin coating and subsequent hot pressing. The mass ratio of Ag nanowires to HPMC is varied among 1∶1, 1∶2 and 1∶3, and its effects on the photoelectric properties, roughness and stability of the transparent electrode are investigated. The results show that the conductive silver paste obtained by the one-step process can be used to prepare composite transparent electrodes with excellent performance. Increasing the proportion of HPMC in the conductive silver paste slightly decreases the conductivity and transparency of the AgNW:HPMC composite electrode because of the insulating nature and compactness of HPMC, while the flatness of the AgNW:HPMC composite electrode is significantly improved as HPMC fills the empty space between the silver nanowires. The surface roughness decreases with increasing HPMC proportion, and the lowest Root-Mean-Square (RMS) roughness is only 4.6 nm when the mass ratio of Ag nanowires to HPMC is 1∶3. In addition, the composite electrodes with Ag nanowires embedded in HPMC exhibit remarkable stability under extreme conditions. When the nanowire electrode is exposed to the atmosphere, HPMC protects the Ag nanowires from corrosion by water and oxygen: after 21 days of exposure, the sheet resistance of the AgNW:HPMC electrode only doubled, whereas the neat Ag nanowire electrode had lost its electrical conductivity. Besides, HPMC helps the composite electrode maintain good photoelectric performance in the strongly oxidative environment produced by ultraviolet irradiation; after 15 minutes of ultraviolet treatment, the sheet resistance increased by only 25%. Moreover, the adhesion of the AgNW:HPMC composite electrode is improved because HPMC strengthens the contact between the silver nanowires and the substrate, and good photoelectric performance and surface morphology are still maintained in a destructive tape test.

    Mar. 25, 2022
  • Vol. 51 Issue 3 0316001 (2022)
  • Dongbao LIANG, Rui ZHANG, Yufang SHEN, and Jian ZHANG

    At present, with increasingly serious environmental and energy problems, the research and development of green, low-energy-consumption technologies has attracted extensive attention. In the field of lighting, phosphor-converted White Light-Emitting Diodes (WLEDs), regarded as the green light source of the 21st century, are expected to become an indispensable generation of comfortable and healthy lighting systems owing to their obvious advantages of high luminous efficiency, low environmental pollution and low energy consumption. The two key materials of commercially available WLEDs are the yellow YAG:Ce phosphor and the blue LED chip. In this scheme, the lack of a red component results only in cold white light (4 500 K) with a low color rendering index, which is not conducive to indoor lighting applications. Generally speaking, adding an efficient red phosphor on this basis can provide a higher color rendering index and a lower color temperature; however, one of the costs of adding such phosphors is a significant decrease in device efficiency. From a more comprehensive and user-oriented perspective, the combination of an ultraviolet LED chip with red, green and blue phosphors has attracted the attention of many researchers. As far as we know, rare-earth ions (such as Eu2+ and Ce3+) are used as activators in most of the phosphors that can be excited by ultraviolet LED chips, as well as in the tricolor phosphors of this lighting scheme. However, mixing three primary-color phosphors can easily cause spectral reabsorption. In addition, the imbalance between supply and demand makes rare earths expensive, which is a major impediment to commercialization. In view of this situation, choosing phosphors with non-rare-earth ions as activators can effectively solve the above problems of rare-earth-doped phosphors. Nowadays, bismuth (Bi), as another type of activator, has been extensively studied and reported because of its potential optical properties related to the strong interaction with the surrounding coordination environment and its abundant valence states. In this paper, a series of BaLa1-xGa3O7:xBi3+ (0.01≤x≤0.13) phosphors were synthesized by the traditional high-temperature solid-state method. The X-ray diffraction patterns and Rietveld refinement results indicate the pure phase of the above samples. Scanning electron microscope images show that the phosphor particles are irregular in shape with sizes of 5~30 μm. Diffuse reflectance spectra of the BaLaGa3O7 matrix indicate an optical band gap suitable for Bi3+ luminescence. When Bi3+ substitutes for La3+, the excitation wavelength red-shifts from 340 to 350 nm. Under 348 nm ultraviolet excitation, the BaLa1-xGa3O7:xBi3+ phosphors exhibit one evident emission peak at 475 nm. With increasing Bi3+ concentration, the emission intensity first increases and then decreases, an optical phenomenon generally attributed to concentration quenching. Among the samples, the emission intensity of the BaLa0.89Ga3O7:0.11Bi3+ phosphor reaches the maximum, with a quantum yield of 19.2%, and its emission intensity at 150 ℃ still maintains 69.2% of that at 25 ℃. These results indicate that BaLa1-xGa3O7:xBi3+ phosphors have potential application value in the field of near-ultraviolet excited white LEDs.

    Mar. 25, 2022
  • Vol. 51 Issue 3 0316002 (2022)
  • Tiantian QI, Wei LIU, J C THOMAS, Hongyan JIA, Qinqin WEI, Yajing WANG, and Jin SHEN

    Dynamic light scattering is a common measurement method for obtaining the particle size distribution of nanoparticles. Inverting the autocorrelation function for the particle size distribution amounts to solving a Fredholm integral equation of the first kind, which is a typical ill-posed problem. The data near the baseline of the autocorrelation function contain a great deal of noise. Therefore, the accuracy and repeatability of the inversion results are affected by the number of autocorrelation function data points, and different numbers of data points may lead to different inversion results. To overcome this shortcoming, the behaviour of the root-mean-square error of the fitted correlation function as a function of the number of autocorrelation function points is investigated. The root-mean-square error curve can be divided into two stages: in the first stage, the root-mean-square error of the fitted correlation function grows slowly and smoothly as the number of autocorrelation function points increases, while in the second stage it grows rapidly. The rapid growth of the root-mean-square error indicates that the noise contained in the autocorrelation function data increases and the reliability of the data decreases as the number of data points increases. The inflection point of the root-mean-square error curve marks where the noise contribution from subsequent data points significantly increases the fitting error and produces poorer inversion results. With this in mind, a root-mean-square error threshold method is proposed as the criterion for truncating the autocorrelation function. This method adaptively selects the optimal number of autocorrelation function data points by setting a root-mean-square error threshold; it truncates the autocorrelation function according to the measured particle size and the noise level of the autocorrelation function and selects the optimal data points. To verify the proposed method, the autocorrelation functions of five unimodal samples and one bimodal sample were analysed using the root-mean-square error threshold method with the Tikhonov regularization algorithm for inversion. Experimental results show that the particle size results obtained using the root-mean-square error threshold method have higher accuracy and better repeatability than those obtained by other methods. The root-mean-square error of the fitted correlation function is affected by the noise in the autocorrelation function data: a larger root-mean-square error indicates a higher noise level in the correlation function data, and vice versa. The data near the baseline of the autocorrelation function contain a great deal of noise, which causes the root-mean-square error of the fitted correlation function to increase rapidly. Therefore, the proposed root-mean-square error threshold method essentially exploits the fact that the root-mean-square error of the fitted correlation function reflects the noise level of the autocorrelation function data, and it can effectively reduce the effect of the noise near the baseline of the autocorrelation function on the inversion results.
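
    The following is an illustrative sketch of the truncation criterion on synthetic data (assuming NumPy and SciPy): a single-exponential correlation function with noise that grows towards the baseline is fitted with increasing numbers of points, and the truncation point is taken as the largest point count whose fit RMSE stays below a chosen threshold. The decay rate, noise model and threshold are invented placeholders, not the paper's measured values.

```python
# Sketch: RMSE-threshold truncation of a synthetic correlation function.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
tau = np.linspace(1e-6, 5e-3, 400)                    # delay times, s
gamma_true = 2.0e3                                    # decay rate, 1/s (assumed)
g1 = np.exp(-gamma_true * tau)
noise = rng.normal(0, 1, tau.size) * (0.002 + 0.02 * tau / tau.max())
data = g1 + noise                                     # noise grows towards the baseline

model = lambda t, a, g: a * np.exp(-g * t)

def fit_rmse(n_points):
    p, _ = curve_fit(model, tau[:n_points], data[:n_points], p0=(1.0, 1.0e3))
    resid = data[:n_points] - model(tau[:n_points], *p)
    return np.sqrt(np.mean(resid ** 2))

counts = np.arange(50, 401, 25)
rmse = np.array([fit_rmse(n) for n in counts])
threshold = 0.008                                     # hypothetical RMSE threshold
best = counts[rmse <= threshold].max()                # largest point count still below threshold
print(f"selected truncation: {best} points")
```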

    Mar. 25, 2022
  • Vol. 51 Issue 3 0329001 (2022)
  • Chunyan LI, Gengpeng LI, Jihong LIU, Dou LUO, and Jiayi LIU

    Spectral confocal technology realizes optical non-contact precision measurement of micro-displacement based on the principles of dispersion and confocality. The measurement accuracy of spectral confocal technology is limited by the extraction accuracy of the spectral peak wavelength. A General Regression Neural Network (GRNN) is proposed for the spectral characterization of the spectral confocal system and precise location of the peak wavelength. The GRNN model is a feedforward network with a simple structure, concise training and fast convergence. The proposed spectral characterization algorithm is verified on an established spectral confocal experimental system. A precision displacement stage moves the mirror to the zero working position of the dispersion probe, and the spectrometer detects the dispersed spectrum focused on and reflected from the mirror surface. Denoising and intensity normalization are performed on the collected raw spectral data, and the spectral data in the range near the peak are intercepted and input into the GRNN model as sample data. The input variable is the signal wavelength λ in the spectral data, the normalized intensity at that wavelength is the output variable, and the joint probability density function of the input and output variables of the sample is the verification condition. Finally, the GRNN model outputs the wavelength corresponding to the maximum probability value of the normalized intensity through Parzen non-parametric kernel regression. The GRNN model accounts for the weight of sample points near the output variable in the spectral characterization; it can eliminate the influence of random noise in the spectral signal, improve the spectral signal-to-noise ratio, reduce the characterization error of the spectral signal, and improve the accuracy of peak wavelength extraction to achieve stable and reliable spectral confocal measurement. The peak wavelength is then extracted at different dispersion positions by the GRNN model, and the corresponding relationship between the dispersion wavelength and the focal position is calibrated. Experimental results show that the GRNN model outperforms traditional algorithms. The signal-to-noise ratio of the spectral fitting curve of the GRNN model is improved, the fit coefficient of the fifth-order dispersion focal shift is 0.999 9, the system resolution is about 2 μm, and the measurement error (RMSE) is about 0.01 μm. The GRNN model suppresses the dispersion model fluctuation caused by wavelength extraction and improves the resolution and stability of the system measurement.
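
    Since a GRNN reduces to Gaussian-kernel (Nadaraya-Watson / Parzen-window) regression, a minimal sketch of the peak-extraction idea can be written directly in NumPy: the noisy spectral samples act as the training set, the kernel-regressed curve is evaluated on a dense wavelength grid, and its maximum gives the peak wavelength. The wavelength range, peak position, noise level and kernel width are assumed values, not the paper's system parameters.

```python
# Sketch: GRNN-style kernel regression for spectral peak wavelength extraction.
import numpy as np

rng = np.random.default_rng(1)
wavelength = np.linspace(500.0, 520.0, 201)                        # nm, sampled spectrum
true_peak = 511.3
intensity = np.exp(-((wavelength - true_peak) / 3.0) ** 2) + rng.normal(0, 0.03, wavelength.size)

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """Gaussian-kernel regression: weighted mean of the training intensities."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

grid = np.linspace(500.0, 520.0, 4001)                             # dense evaluation grid
smoothed = grnn_predict(wavelength, intensity, grid)
peak_grnn = grid[np.argmax(smoothed)]
peak_raw = wavelength[np.argmax(intensity)]
print(f"raw argmax: {peak_raw:.3f} nm, GRNN peak: {peak_grnn:.3f} nm (true {true_peak} nm)")
```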

    Mar. 25, 2022
  • Vol. 51 Issue 3 0330001 (2022)
  • Shujuan YU, Zhuqin LIU, Dongmei CAO, Yanfeng LIU, and Yanpeng LI

    For symmetric molecules, only odd harmonics are emitted; in particular, the high-order harmonic spectrum shows a pronounced minimum corresponding to the minimum in the bound-continuum transition dipole moment. For asymmetric molecules, both odd and even harmonics are emitted; however, in many cases the striking minimum disappears from the odd or even harmonic spectrum. Fortunately, when the minimum cannot be read from the harmonic spectra directly, it can be probed through polarization measurement of the odd-even high-order harmonic generation: the position of the minimum in the odd or even dipole corresponds to the harmonic order at which the ellipticity of the odd or even harmonics is maximal. However, for the symmetric molecule H2+, the minimum in the transition dipole is not completely consistent with the minimum of the harmonic spectrum, and for the asymmetric molecule HeH2+, the prediction of the dipole minimum by polarization measurement does not always agree well with the theoretical evaluation; in some cases a remarkable difference is observed. This difference may arise from mechanisms beyond the description of the simple model, or the inaccurate calculation of the dipole moment may be caused only by rough approximations in the relevant theoretical treatments. In this paper, this question is explored by improving the calculation of the dipole moment, and the intrinsic relationship between harmonic radiation and the structure of symmetric and asymmetric molecules is studied by a combination of numerical and analytical methods. First, the numerical ground-state wave functions of the symmetric and asymmetric molecules are obtained using the imaginary-time evolution method. Starting from these accurate ground-state wave functions, the bound-continuum transition dipole moments are calculated, and the term proportional to the internuclear separation is subtracted from the transition dipole moment. For the symmetric molecule H2+, the calculated odd dipole moment is compared with the harmonic spectrum and with the transition dipole moments obtained by the purely analytical method. For the asymmetric molecule HeH2+, the calculated odd dipole moment is compared with the harmonic spectrum, the ellipticity of the harmonics and the transition dipole moments obtained by the purely analytical method. Simulation results show that, for the symmetric molecule H2+, the minimum in the improved odd dipole moment agrees better with that predicted by the odd harmonics than the transition dipole moments obtained by the purely analytical method. For the asymmetric molecule HeH2+, the calculated dipole moment shows a clear minimum arising from two-center interference, although there is usually no minimum in the high-order harmonic spectrum of asymmetric molecules. A further comparison between the ellipticity of the harmonics and the corresponding dipole moments shows that the harmonic order at which the ellipticity is maximal corresponds to the order at which the dipole has a minimum; the polarization measurement of harmonics can therefore be used as a tool to detect the position of the dipole minimum. The obtained ground-state wave function significantly improves the consistency between the minimum of the odd-even dipole moment and the maximum of the odd-even harmonic polarization for different molecular parameters.
These phenomena reveal that the recombination process plays a key role in the harmonic radiation of symmetric and asymmetric molecules and verify the one-to-one matching between the high-order harmonic spectra and the corresponding dipoles, so that molecular orbitals can be reconstructed from the transition dipole elements. The results provide deep insight into the relation between odd high-order harmonic generation and symmetric molecular orbitals and the relation between odd-even high-order harmonic generation and asymmetric molecular orbitals, and are of significance for the use of odd-even harmonic radiation in the ultrafast detection of asymmetric molecules.
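
    As a rough illustration of the imaginary-time (virtual-time) evolution step used to obtain the ground-state wave function, the sketch below applies split-operator imaginary-time propagation to a reduced one-dimensional soft-core model of H2+. The grid size, soft-core parameter, internuclear distance and step count are assumed values, and this is not the paper's full calculation.

```python
# Sketch: imaginary-time split-operator relaxation to the ground state of a 1D soft-core H2+ model.
import numpy as np

n, L = 1024, 100.0                    # grid points and box size (atomic units), assumed
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

R, a = 2.0, 1.0                       # internuclear distance and soft-core parameter (assumed)
V = -1 / np.sqrt((x - R / 2) ** 2 + a) - 1 / np.sqrt((x + R / 2) ** 2 + a)

dtau = 0.01
expV = np.exp(-0.5 * V * dtau)        # half-step potential propagator
expT = np.exp(-0.5 * k ** 2 * dtau)   # full-step kinetic propagator (in k-space)

psi = np.exp(-x ** 2)                 # arbitrary initial guess
for _ in range(5000):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # renormalise every step

# Ground-state energy estimate <T> + <V>
T_psi = np.fft.ifft(0.5 * k ** 2 * np.fft.fft(psi))
E0 = np.real(np.sum(np.conj(psi) * (T_psi + V * psi)) * dx)
print(f"ground-state energy of the 1D soft-core H2+ model: {E0:.4f} a.u.")
```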

    Mar. 25, 2022
  • Vol. 51 Issue 3 0302001 (2022)
  • Ang LIU, Guanghao SHAO, Jiquan ZHAI, Xingwei YE, and Guoqiang ZHANG

    To realize electrical beam scanning in a phased-array radar, true-time delay compensation must be applied to the transmitted and received signals in each channel according to the beam direction. With microwave photonic technology, the microwave signal can be modulated onto an optical carrier for transmission and processing. Compared with the frequency of the laser carrier, the relative bandwidth of the microwave signal is extremely narrow. True-time delay lines based on optical fibers or on-chip waveguides offer a large microwave bandwidth, small in-band amplitude and phase fluctuations, and low propagation loss, and are immune to electromagnetic interference. The increment of the optical length between delay states can be precisely controlled to be far less than a full microwave wavelength; thus, broadband beamforming can be realized using only subwavelength stepped optical delay lines, which greatly reduces the beam directional dispersion of broadband microwave signals. The discrete delay values of stepped delay lines lead to discrete pointing directions of the antenna beam, which results in a certain deviation between the actual and designed beam directions. The influence of the minimum delay change on the equivalent phase distribution across the microwave wavefront is analyzed, and a theoretical model relating the delay step to the directional deviation of the radar beam is established. The theoretical analysis of beam scanning based on subwavelength stepped optical delay lines shows that the beam directional deviation is proportional to the minimum delay step and inversely proportional to the array element spacing, the square of the number of elements, and the cosine of the beam direction. Numerical simulation for an X-band wideband radar shows that the beam direction at each frequency point is almost the same over the 8~12 GHz range, indicating that the directional dispersion is effectively suppressed. Some singularities appear at specific azimuths and delay steps, where the directional deviation reaches extreme values owing to the discrete increment of delay; the azimuthal deviation at the singularities gradually increases as the azimuth and the delay step increase. When the delay step is not larger than 3 ps, the azimuthal deviation does not exceed ±0.13°, which is less than ±1/35 of the narrowest beam width and can therefore be almost neglected. Peak power loss and sidelobe suppression are also simulated. When the delay step is less than 3 ps, the beam peak power drops by no more than 0.051 dB, and the in-band fluctuation is less than 0.028 dB; the maximum broadband relative sidelobe power is less than -12.5 dB, and the maximum fluctuation is less than 0.24 dB. Based on the scheme of a subwavelength stepped optical delay line without an electrical phase shifter, 9-bit optical delay lines with a minimum delay step of 3 ps are prepared, with a maximum delay exceeding 1.53 ns. The optical delay lines are built from cascaded 2×2 magneto-optical switches and optical fibers. The spatial electric field distributions are measured on a near-field platform and converted to far-field patterns by spherical wave compensation at different designed azimuths and frequencies. When the designed beams point at 0°, 30° and 60°, the measured maximum directional deviations are 0.24°, 0.28° and 0.77°, and the in-band directional dispersions are 0.21°, 0.28° and 0.98°, respectively.
Compared with the beam squint of the scheme based on a wavelength stepped optical delay line and an electrical phase shifter, the beam directional dispersion is effectively suppressed. Furthermore, over the 8~12 GHz microwave frequency range and the ±60° azimuthal scanning range, the experimental results demonstrate that the peak power loss can be kept below 0.89 dB and the sidelobe suppression ratio can exceed 11.06 dB.
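
    A small numerical sketch of the pointing-deviation analysis described above follows: a uniform linear array is steered with true-time delays quantised to a finite step, and the resulting beam peak is compared with the design direction at several frequencies. The array size, element spacing and frequencies are assumed values, not the parameters of the reported X-band system.

```python
# Sketch: beam pointing with quantised true-time delays (assumed array parameters).
import numpy as np

c = 3e8
n_elem, spacing = 16, 0.015            # elements and element spacing (m), assumed
theta0 = np.deg2rad(30.0)              # designed beam direction
delay_step = 3e-12                     # minimum delay increment (s)

ideal = np.arange(n_elem) * spacing * np.sin(theta0) / c
quantised = np.round(ideal / delay_step) * delay_step

angles = np.deg2rad(np.linspace(25, 35, 20001))
for f in (8e9, 10e9, 12e9):
    # array factor with quantised delays at frequency f
    phase = 2 * np.pi * f * (np.arange(n_elem)[:, None] * spacing * np.sin(angles)[None, :] / c
                             - quantised[:, None])
    af = np.abs(np.exp(1j * phase).sum(axis=0))
    peak = np.rad2deg(angles[np.argmax(af)])
    print(f"{f/1e9:.0f} GHz: beam peak at {peak:.3f} deg (design 30.000 deg)")
```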

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306001 (2022)
  • Wenlin ZHANG, Kun LIU, Junfeng JIANG, Tianhua XU, Shuang WANG, Zhao ZHANG, Jianying JING, Jinying MA, and Tiegen LIU

    Fiber optic SPR sensors have wide application prospects in the field of biosensing owing to their label-free operation, fast real-time response and good biocompatibility. Traditional multimode fiber optic SPR sensors are limited by low sensitivity, which makes their performance insufficient for detecting low-concentration analytes. Therefore, improving the sensitivity of fiber optic SPR sensors and applying them to the trace detection of biomolecules has received increasing attention from researchers. There are two main ways to improve sensitivity: one is to increase the strength of the evanescent field by changing the fiber structure (e.g. U-shaped, D-shaped or tapered), and the other is to exploit the excellent photoelectric properties of new materials (e.g. high-refractive-index oxides, ceramic materials and two-dimensional materials). Similar to graphene, Transition Metal Dichalcogenides (TMDCs) have also attracted much attention. Among them, tungsten disulfide (WS2) shows many unique optoelectronic properties, such as a high complex refractive index ratio (the ratio of the real part to the imaginary part of the refractive index), a direct band gap and a large surface-to-volume ratio, which indicate the application potential of WS2 in SPR sensors. However, experimental studies of WS2 in SPR sensors are relatively few and mainly focus on prism-based SPR sensors. Based on the above methods, the sensitivity of the fiber optic SPR sensor is improved here by combining fiber tapering with WS2 nanomaterial coating. In this paper, a tapered fiber optic SPR sensor based on a WS2-Au structure is proposed. The relationship between the dielectric constant of WS2 and wavelength is obtained from first principles using the generalized gradient approximation. The effects of the taper ratio and WS2 thickness on sensor sensitivity are studied through theoretical simulations using the transfer matrix method. Then, four kinds of sensors (a 600 μm fiber SPR sensor, a tapered fiber SPR sensor, a tapered fiber Au-WS2 SPR sensor and a tapered fiber WS2-Au SPR sensor) are manufactured. The WS2 nanosheets are deposited on the fiber surface by electrostatic self-assembly, and the Au film is obtained by vacuum magnetron sputtering. An experimental setup is built to measure their Refractive Index (RI) sensitivity. The experimental RI sensitivity of the proposed tapered fiber WS2-Au SPR sensor reaches 4 158.171 nm/RIU, which is 125.8% higher than that of the multimode fiber SPR sensor and 50.1% higher than that of the tapered fiber Au-WS2 SPR sensor. The experimental results show good agreement with the numerical simulations. It is demonstrated that introducing the WS2 layer can improve the sensitivity of the fiber optic SPR sensor and enhance its reliability. In summary, the developed sensor can provide a high-sensitivity, low-cost, simple and environment-friendly platform for biochemical detection.

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306002 (2022)
  • Tiesheng WU, Zuning YANG, Huixia ZHANG, Zhihui LIU, Dan YANG, Xu ZHONG, Yan LIU, and Rui LIU

    Surface Plasmon Resonance (SPR) is a physical phenomenon in which, when the frequency and wave number of the incident light coincide with those of the free electrons oscillating on a metal surface, the electrons (i.e., the plasma) on the metal surface absorb the light energy and resonate; the resonance wavelength changes with the refractive index of the medium on the noble-metal surface, so SPR has wide application in medical detection, environmental monitoring and other fields. Based on the SPR principle, this work investigates in detail the refractive index sensing characteristics of D-type highly birefringent photonic crystal fibers. Current reports on PCF SPR sensors are mainly based on theoretical models and numerical simulations, because PCF SPR sensors are difficult to fabricate experimentally. Following the high-birefringence photonic crystal fiber used in the experiment, the simulated photonic crystal fiber is composed of five layers of air holes: the air holes in the first cladding layer have a diameter of 2.2 μm and the large air holes have a diameter of 4.5 μm. The polishing depth h is defined as the distance from the fiber core to the polished surface, and the angle between the slow axis of the high-birefringence photonic crystal fiber and the polished surface is defined as the polishing direction θ. A gold film is coated on the flat polished surface of the fiber to facilitate contact with the analyte; according to previous theoretical and experimental research, the gold film thickness t is set to 45 nm. The refractive indices of the fiber background material and of gold used in the simulation are obtained by linear interpolation of experimental data. To obtain the waveguide modes of the side-polished high-birefringence photonic crystal fiber, the commercial finite element software COMSOL Multiphysics is used with perfectly matched layer boundary conditions. The refractive index of the analyte is set in the range of 1.330 to 1.400. Through finite element modeling and simulation, the influence of the polishing angle on the birefringence and the refractive index sensing sensitivity is studied. The simulation results show that when the distance of the polished surface from the fiber core is less than 1.5 times the duty cycle, the closer the polished surface is to the core, the smaller the birefringence. As the polishing angle increases, the birefringence first increases and then decreases, and the refractive index sensing sensitivity decreases accordingly. When the polishing angle is 0 degrees and the refractive index ranges from 1.330 to 1.400, the average refractive index sensitivity of the device is as high as 3 457.14 nm/RIU. In addition, a D-type high-birefringence photonic crystal fiber SPR sensor was fabricated. There is a large difference between the theoretical and experimental sensitivity values, for the following main reasons: 1) the polished surface is uneven (defects caused by the air holes), which makes it difficult to completely remove the debris generated during polishing and degrades the sensing performance; 2) after the D-type fiber sample is prepared, the fiber is not coated in time.
The D-type fiber is exposed to the air for a long time, and dust in the air further affects the device performance; 3) although the sensor is cleaned repeatedly with ethanol before each test, it is difficult to completely remove the refractive-index-matching liquid left in the PCF air holes, which affects the accuracy of subsequent measurements. For example, after the 1.33 refractive index liquid is tested and the sensor is cleaned with alcohol, the 1.34 index-matching liquid is applied; because residue of the previous liquid remains, the true refractive index hardly reaches 1.34, yet the calculation still assumes 1.34; 4) the theoretical maximum sensitivity is obtained under the condition that the polished plane is parallel to the line connecting the two large air holes. The D-type high-birefringence photonic crystal fiber SPR sensor was further used to measure the concentration of glucose solutions. The glucose concentration was increased from 0 g/dL to 10 g/dL in steps of 2 g/dL. As the glucose concentration increases, the peak wavelength of the dip in the transmission spectrum of the sensor red-shifts: in the 0 g/dL glucose solution the SPR resonance wavelength appears at 578.96 nm, and at a concentration of 10 g/dL it shifts to 587.49 nm. According to the relationship between the glucose concentration and the peak wavelength of the transmission dip, the average sensitivity is 1.89 nm/(g/dL). These results show that the D-type photonic crystal fiber SPR sensor can be applied in the fields of biology, chemistry and environmental monitoring.

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306003 (2022)
  • Xiangxin SHAO, Zixiao MA, Tianqi LU, Dong LI, and Hong JIANG

    A cobweb-topology sensor network is designed to meet the high requirements of large-structure health monitoring on the multiplexing capacity and maintenance cost of fiber Bragg grating sensor networks. In this network, Wavelength Division Multiplexing (WDM) is used to increase the multiplexing capacity, and a model based on the Gated Recurrent Unit (GRU) is optimized to demodulate overlapping wavelengths. The newly designed sensor network has high reliability and high multiplexing capacity. Part of the cobweb network is selected for experiments, and four kinds of fault conditions are designed for comparison. Under all four fault conditions the signal can still be transmitted effectively, which proves that the cobweb network has high reliability. The four cases are summarized in a table, which shows that in each case the network can still be used normally. By improving the network structure of the demodulation model, the recognition accuracy of the model is increased, and the well-trained model is used to demodulate spectra with different degrees of overlap. In 89.9% of cases, the root-mean-square error of the model is less than 1 pm, which proves that the improved model can effectively demodulate overlapping spectra and greatly increases the network multiplexing capacity. The demodulation results are presented in tables and figures, which show that the central wavelength of each sensor can be well identified and the physical quantities can be obtained under different degrees of spectral overlap. The new sensor network effectively increases both reliability and multiplexing capacity.

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306004 (2022)
  • Yue FENG, Lifang FENG, Jianli JIN, and Shunyi HUANG

    Recently, with the large-scale popularization of Light-Emitting Diodes (LEDs), Visible Light Communication (VLC) with LEDs as the emission light source has developed rapidly. This technology has the advantages of rich spectrum resources, no electromagnetic radiation, high confidentiality and deep coupling with lighting. As one of the important applications of visible light communication, visible light indoor positioning has attracted extensive attention from researchers. Among the many indoor positioning methods, positioning based on Received Signal Strength (RSS) is the easiest to implement, requires no additional hardware and is widely used in indoor positioning. In the existing literature, research on RSS-based visible light positioning mainly focuses on the multiple-light-source model; however, owing to the influence of channel attenuation on the received signal, the positioning accuracy is low when the target receiver is located in a corner area. To solve this problem, an adaptive visible light positioning algorithm based on region division is proposed. Based on an analysis of single-light-source and multiple-light-source localization algorithms, a two-light-source localization algorithm using a receiver with a symmetrical structure is designed to compensate for the localization error in the edge region; the mirror solution generated in two-light-source localization is eliminated by this specially structured receiver. According to the error distribution characteristics of the above three algorithms in the positioning plane, a fairness function is constructed and, combined with the Lambertian model, the positioning region is divided into multiple sub-regions. In the positioning stage, the receiver's region is first roughly judged from the characteristics of the received signal to achieve coarse positioning, and then the positioning algorithm with better performance in that region is adaptively selected to achieve accurate positioning. The simulation results show that in a 5 m×5 m×3 m indoor environment, the average positioning error of the algorithm is about 2.5 cm, an improvement of 46%, 24% and 55% over the single-light-source, two-light-source and multiple-light-source positioning algorithms, respectively. Furthermore, an actual visible light positioning system is built in a 1.5 m×1.5 m×2 m indoor environment: four synchronous label code signals are generated by a Field-Programmable Gate Array (FPGA), and the LED light sources are driven by an amplification circuit; the receiver uses a Photoelectric Detector (PD) to acquire the signal, a Microcontroller Unit (MCU) decodes it, and positioning is then realized with the corresponding algorithm. The experimental results show that 94% of the test points achieve a positioning accuracy of 5 cm, and the positioning errors at the four corners of the positioning area are 3.2 cm, 3.4 cm, 3.5 cm and 2.8 cm, respectively. Compared with the traditional multiple-light-source localization algorithm, the localization error in the edge and corner regions is greatly reduced. This research provides a new method for visible light positioning systems that can significantly improve positioning accuracy at the cost of low complexity, and it has potential research value in the field of visible light communication and positioning.
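
    For context, the sketch below shows the generic RSS positioning principle with a Lambertian channel model: received powers from several LEDs are inverted for horizontal distances and the receiver position is estimated by linear least squares. The room layout, LED height, transmit power and Lambertian order are assumed values, and this is the conventional multiple-light-source method rather than the paper's adaptive region-division algorithm.

```python
# Sketch: RSS-based visible light positioning with a Lambertian channel model (assumed layout).
import numpy as np

h = 2.0                                  # LED height above the receiver plane (m), assumed
m = 1.0                                  # Lambertian order, assumed
A_pd, P_t = 1e-4, 1.0                    # detector area (m^2) and transmit power (W), assumed
leds = np.array([[0.5, 0.5], [0.5, 4.5], [4.5, 0.5], [4.5, 4.5]])   # LED x,y positions (m)

def received_power(rx):
    d = np.sqrt(np.sum((leds - rx) ** 2, axis=1) + h ** 2)
    cos_phi = h / d                                    # emission angle equals incidence angle here
    return P_t * (m + 1) * A_pd / (2 * np.pi * d ** 2) * cos_phi ** (m + 1)

def estimate_position(p_rx):
    # invert the channel model for distance, then solve the linearised trilateration system
    d = (P_t * (m + 1) * A_pd * h ** (m + 1) / (2 * np.pi * p_rx)) ** (1 / (m + 3))
    r2 = d ** 2 - h ** 2                               # squared horizontal distances
    A = 2 * (leds[1:] - leds[0])
    b = (r2[0] - r2[1:]) + np.sum(leds[1:] ** 2, axis=1) - np.sum(leds[0] ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

true_pos = np.array([1.2, 3.4])
est = estimate_position(received_power(true_pos))
print(f"true {true_pos}, estimated {np.round(est, 3)}")
```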

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306005 (2022)
  • Xi YANG, Huanhuan YIN, Zhihua SHAO, and Xueguang QIAO

    Traditional detection approaches usually employ Piezoelectric Transducers (PZTs) as the ultrasonic source and receiver. However, these current-driven transducers have inherent drawbacks: susceptibility to electromagnetic interference, narrowband frequency response, and poor resistance to high temperature and corrosion. Fiber-optic sensors have attracted significant attention in ultrasonic detection owing to their outstanding advantages, such as small size, easy multiplexing, wideband frequency response and immunity to electromagnetic interference. The majority of fiber-optic ultrasonic sensors are based on fiber Bragg gratings and Fabry-Perot interferometers. However, the frequency response range of FBG-based ultrasonic sensors is relatively narrow, and Fabry-Perot interferometric ultrasonic sensors generally consist of a diaphragm and a fiber end face acting as two reflectors; the complex preparation of the diaphragm materials and their poor chemical stability and heat resistance limit the application of such sensors. In this study, a compact fiber-optic ultrasonic sensor based on a Tapered Seven-core Fiber (TSCF) is proposed and experimentally demonstrated. The proposed sensor has the advantages of easy fabrication, compact structure and high sensitivity. It comprises a TSCF sandwiched between two Single-mode Fibers (SMFs), forming a cascaded SMF-TSCF-SMF structure. The SCF (YOFC, MC1010-A, China) is used to make the ultrasonic sensors, and a commercial fusion splicer (Fujikura, FSM-80C) is used to fabricate the SMF-TSCF-SMF structure. Thereafter, an optical fiber fused biconical taper system (FBT, Zolix) is used to taper the SCF into TSCFs with diameters of 11 μm, 19 μm and 29 μm; a certain prestress is applied to keep the SCF tight and straight during the fused tapering process. High-order modes are easily excited owing to the core mismatch between the SMF and the Seven-core Fiber (SCF). The excited modes propagate along the SCF and then arrive at the tapered region. The transmission spectra exhibit multiple interference peaks because complex optical modes are excited and take part in the mode interference; the spectra are therefore not in a standard sinusoidal pattern but are more irregular. Owing to the sharply reduced taper diameter (as thin as several micrometers), the core spacing is greatly decreased and the evanescent fields are extended simultaneously, which is sufficient to induce diverse inter-modal coupling at the abrupt taper, including mode coupling among the cores and coupling and recoupling of cladding-to-core modes, so that highly sensitive mode interference is obtained. For the TSCF, the ultrasonic wavelength is much longer than the taper diameter and shorter than the fiber length; the fiber taper is axially constrained, so its axial elongation can be neglected. The core and cladding diameters in the tapered region become thinner, and the TSCF exhibits a pronounced evanescent-field effect. When the sensor is immersed in water, the Ultrasonic Wave (UW) signal periodically changes the refractive index of the surrounding liquid and modulates the transmission spectrum through the evanescent-field interaction between the liquid and the guided light. Meanwhile, because of the evanescent field, part of the light energy transmitted in the fiber penetrates into the surrounding medium, reducing the transmitted energy.
Thus, the TSCF sensor with a taper diameter of 19 μm is used to receive the ultrasonic signals. Driven by a function generator, the PZT (SIUI, 1Z20SJ50DJ) emits a 1 MHz continuous wave with a voltage amplitude of 10 V as the ultrasonic source. The edge filtering method is used to demodulate the ultrasonic signal received by the TSCF sensor. A tunable laser (Santec-710) with a 100 kHz linewidth and 0.1 pm tuning resolution is used as the light source, with an output power of 20 mW. A photodetector (New Focus, Model 2117) with a bandwidth of 10 MHz converts the optical signal into a voltage signal, which is finally monitored by an oscilloscope (RIGOL, DS2302A). The bandpass filter built into the photodetector covers 500 kHz to 3 MHz and is used to reject ambient noise. UW detection is carried out in water at room temperature, which provides an almost constant temperature environment around the sensor. The sensor directly faces the emitting end of the PZT at a separation of 2.5 cm. The continuous signals exhibit good uniformity and stability in the time domain, and the peak-to-peak voltage of the TSCF sensor output is about 0.4 V.
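
    A conceptual sketch of the edge (slope) filtering demodulation mentioned above is given below: the probe laser is parked on the quasi-linear slope of the interference transmission spectrum, so a small ultrasound-induced spectral shift is converted into an intensity change proportional to the local slope. The fringe shape, operating wavelength and shift amplitude are assumed values, not the measured TSCF spectrum.

```python
# Sketch: edge-filter demodulation of an ultrasound-induced spectral shift (assumed fringe).
import numpy as np

wl = np.linspace(1549.0, 1551.0, 2001)                                # nm
transmission = 0.5 * (1 + np.cos(2 * np.pi * (wl - 1549.0) / 0.8))    # toy interference fringe

laser_wl = 1549.2                                                     # nm, parked on the slope
slope = np.gradient(transmission, wl)[np.argmin(np.abs(wl - laser_wl))]   # local slope per nm

t = np.linspace(0, 5e-6, 1000)                                        # s
shift = 1e-3 * np.sin(2 * np.pi * 1e6 * t)                            # nm, 1 MHz ultrasound shift
signal = slope * shift                                                # demodulated intensity change
print(f"local slope {slope:.3f} /nm, peak-to-peak signal {np.ptp(signal):.2e} (a.u.)")
```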

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306006 (2022)
  • Luyao YU, Qiang ZHAO, Dawei DU, and Yi QU

    For ocean depth detection, a small diaphragm-type fiber Bragg grating pressure sensor with a temperature compensation structure is fabricated. The length and diameter of the sensor are approximately 40 mm and 20 mm, respectively. The sensor uses an ultrashort fiber Bragg grating string containing two fiber Bragg gratings, each 1 mm long and separated by 20 mm. One grating measures pressure, while the other is affected only by temperature and is used to eliminate the influence of temperature on the pressure-measuring grating. The optical fiber is encapsulated in a metal tube a short distance away from the pressure-measuring grating, and the end of the metal tube is fixed to the elastic metal diaphragm by laser welding. In this way, the optical fiber and the metal diaphragm are not bonded directly with epoxy adhesive, which avoids the influence of aging and creep of the epoxy on the performance of the sensor. Over the measuring range of 0.6 MPa, the theoretical pressure sensitivity of the sensor is -1.214 nm/MPa, and the pressure sensitivity obtained by finite element analysis is -1.364 nm/MPa. After the sensor is fabricated, its pressure and temperature characteristics are tested. With the help of the grating affected only by temperature, the influence of temperature on the pressure-measuring grating is eliminated by calculation. The measured average pressure sensitivity of the sensor is -1.728 nm/MPa. Moreover, the linearity of the pressure-increasing and pressure-decreasing curves exceeds 99.9%, and the two curves coincide well. In addition, the best way to seal the fiber tail, the reason for the different temperature responses of the two gratings, methods for improving the linearity and coincidence of the pressure curves, the causes of and solutions to instability of the sensor, and the reason for the higher-than-theoretical measured pressure sensitivity are discussed. First, when the metal tube encapsulating the optical fiber is relatively thin, comparative experiments show that sealing the fiber tail with epoxy glue is better than laser welding: with epoxy sealing, the wavelength shift of the fiber Bragg grating can reach 2 nm, indicating that epoxy sealing more easily maintains the prestress applied to the fiber. Second, a temperature response test of the sensor is carried out.
The experimental results show that the temperature responses of the two fiber Bragg gratings are slightly different. Combined with simulation analysis, the main reason for the difference in the temperature response trends of the dual gratings is found to be a defect in the structural design. Third, to improve the linearity and coincidence of the pressure-increasing and pressure-decreasing curves, temperature and pressure aging steps are added to the sensor manufacturing process; the experimental results show that these measures are effective. Next, considering the influence of wobble of the optical fiber tail on the stability of the sensor, it is suggested to use apodized linearly chirped fiber gratings or to add a region that isolates external forces. Finally, the fact that the measured pressure sensitivity is higher than the theoretical value is discussed from several angles: the main reasons are that the effective length of the fiber Bragg grating decreases owing to the flow of epoxy glue, and that the pressure sensitivity of the grating increases with increasing prestress.
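
    A small worked example of the temperature-compensation scheme described above is given below: the wavelength shift of the temperature-only reference grating is subtracted from that of the pressure grating before applying the pressure sensitivity. The sensitivity is the average value reported in the abstract (-1.728 nm/MPa); the wavelength readings are made-up illustrative numbers.

```python
# Sketch: dual-FBG temperature compensation (hypothetical wavelength readings).
S_P = -1.728                # nm/MPa, measured average pressure sensitivity from the abstract

d_lambda_pressure = -0.95   # nm, total shift of the pressure-sensing FBG (hypothetical reading)
d_lambda_reference = -0.08  # nm, shift of the temperature-reference FBG (hypothetical reading)

d_lambda_corrected = d_lambda_pressure - d_lambda_reference
pressure = d_lambda_corrected / S_P
print(f"temperature-compensated pressure ≈ {pressure:.3f} MPa")
```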

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306007 (2022)
  • Kexin LIU, and Na GAO

    With the development of modern communication technology, traditional electronic instantaneous frequency measurement systems face bottlenecks in bandwidth and speed and can no longer meet the needs of modern electronic warfare. This paper proposes a scheme for instantaneous frequency measurement of multiple microwave signals. The light source is provided by a single laser and divided into two branches by a splitter. In the upper branch, the microwave signal is modulated onto the optical carrier by a Mach-Zehnder modulator and then used as the pump light, while the lower branch is used to generate an optical frequency comb with high flatness and uniform spacing. The pump is split into multiple channels by a demultiplexer, and each line of the Optical Frequency Comb (OFC) is sent into the corresponding channel through an optical circulator. In each channel, the pump and the comb line are launched into dispersion-shifted fiber simultaneously. Stimulated Brillouin scattering occurs if the pump frequency is one Brillouin frequency shift higher than the comb-line frequency, and Brillouin gain appears in that channel; frequency-to-space mapping is thus achieved. The frequency of the microwave signal can be determined by monitoring the change in the output optical intensity of the corresponding channel. In the simulation, instantaneous frequency measurement of single-frequency and multi-frequency signals in the range of 0~25 GHz is carried out. The simulation results show that the output power of the channel corresponding to the microwave signal under test is significantly higher than that of the other channels. The method can measure single-frequency or multi-frequency microwave signals from 0.1 GHz to 25 GHz with a resolution of 0.1 GHz and a measurement error of ±0.05 GHz. In addition, the influence of the bias point drift of the Mach-Zehnder modulator on the measurement accuracy is analyzed; the simulation results show that the scheme can suppress the influence of bias drift on the measurement results to a certain extent, which proves its feasibility and reliability. Using this photonics-assisted method, microwave frequencies can be measured over a large frequency range in real time, with a wide measurement range and strong immunity to electromagnetic interference, giving it broad application prospects in electronic warfare systems.
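
    The simplified sketch below illustrates only the frequency-to-space mapping principle stated above: a microwave tone modulated onto the carrier produces a sideband, and gain appears in the channel whose comb line lies one Brillouin shift below that sideband. The comb spacing, Brillouin shift and carrier frequency are assumed values chosen to reproduce the 0.1 GHz resolution quoted in the abstract; the actual channel plan of the paper may differ.

```python
# Sketch: mapping a microwave frequency to a demultiplexer channel via Brillouin gain.
import numpy as np

comb_spacing = 0.1e9        # Hz, optical frequency comb line spacing (assumed)
brillouin_shift = 10.8e9    # Hz, Brillouin frequency shift of the fibre (assumed)
f_carrier = 193.1e12        # Hz, optical carrier frequency (assumed)

n_channels = 251
comb = f_carrier - brillouin_shift + comb_spacing * np.arange(n_channels)   # one comb line per channel

def channels_with_gain(microwave_freqs):
    """Return the channel index illuminated (via Brillouin gain) by each microwave tone."""
    hits = []
    for f_mw in microwave_freqs:
        sideband = f_carrier + f_mw                       # upper modulation sideband
        idx = np.argmin(np.abs(comb + brillouin_shift - sideband))
        hits.append(idx)
    return hits

print(channels_with_gain([0.1e9, 7.3e9, 25.0e9]))         # -> channel indices 1, 73, 250
```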

    Mar. 25, 2022
  • Vol. 51 Issue 3 0306008 (2022)
  • Hong HUANG, Tao WANG, Yuan LI, Fanlin ZHOU, and Yu LI

    At present, image segmentation technology is increasingly widely used in the auxiliary diagnosis and treatment of cancer. However, mature and advanced image segmentation methods mainly target natural images. Compared with natural images, pathological images have more complex content and large differences between images; at the same time, cancer cells and normal cells are mixed together in pathological images and are highly similar to each other. These characteristics prevent many excellent natural image segmentation algorithms from being directly applied to pathological image segmentation with good performance, and hinder the rapid adoption of artificial intelligence algorithms in medical auxiliary diagnosis and treatment. Therefore, more accurate segmentation of tumor pathological images is of great significance for clinical cancer diagnosis. Aiming at the problems of varied slice staining, large resolution differences and complex image content in medical pathological images, an improved hierarchical feature fusion segmentation method is proposed. The method comprises four parts: encoder, decoder, channel attention module and loss function. First, the U-Net network is selected as the basic network structure, and the EfficientNet-B4 network replaces the original U-Net encoder for feature extraction, with features output from different layers as the encoder outputs. The EfficientNet-B4 network is transferred from natural images to pathological images, which effectively improves the ability of the network to extract effective features. In the decoder, the feature fusion method is improved by adding the hierarchical features of all preceding layers to the fusion, so that even the shallowest layer still contains the deepest global features. This gradually increases the role of global features in segmentation prediction from the deepest to the shallowest layer and weakens the role of detail features in the U-Net network, thereby enhancing the localization ability of the network on the main lesion area and its adaptability to images of different resolutions. At the same time, an improved channel attention module, better suited to pathological images, is used in each decoding layer. By adding global max pooling to extract channel features, more feature information is retained, which enhances the learning ability of the attention module and allows the attention mechanism to filter the fused features more effectively, highlighting effective features while suppressing redundant ones. To make the deep semantic information more distinguishable, the fused features of each decoding layer are used for prediction output to construct a multi-loss function: during training, the model produces a prediction at each decoding layer and the corresponding loss is calculated for back-propagation. In this way, more effective deep semantic features are obtained for lesion localization, which enhances the ability of the model to distinguish normal from cancerous tissue and improves its ability to obtain and use global semantic features. Experiments are carried out on the BOT dataset and the seed dataset.
The Dice coefficient scores of this method are 77.99% and 82.94%, and the accuracy scores are 88.52% and 87.42%, respectively. Compared with U-Net and DeepLabv3+, this method effectively improves the segmentation accuracy of tumor lesion tissue, achieves more accurate tumor localization and segmentation in pathological images, provides more effective auxiliary support for doctors' clinical diagnosis, and improves the efficiency and accuracy of diagnosis and treatment. At the same time, ablation experiments on the main improvements are carried out on the two datasets; the results demonstrate the effectiveness of the improvements and show that they jointly promote the segmentation performance of HU-Net on pathological images from different aspects.
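
    The following is a hedged PyTorch sketch of the channel-attention idea described above: channel descriptors are taken from both global average pooling and global max pooling, passed through a shared bottleneck, and combined into per-channel weights. The layer sizes and reduction ratio are placeholders, and this is not the authors' exact module.

```python
# Sketch: channel attention using both average- and max-pooled descriptors (assumed sizes).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(                      # shared bottleneck for both descriptors
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * w                                   # reweight the channels of the fused features

# Example: reweighting a fused decoder feature map with 64 channels.
feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)                # torch.Size([2, 64, 32, 32])
```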

    Mar. 25, 2022
  • Vol. 51 Issue 3 0310001 (2022)
  • Shuang GONG, Baoxi YANG, and Huijie HUANG

    The lithographic apparatus is widely recognized as an efficient tool for manufacturing Integrated Circuits (ICs) and other micro-nano structures. During IC manufacturing, the exposure field is scanned by a narrower illumination field. Illumination uniformity is a key factor determining resolution and Critical Dimension Uniformity (CDU), which are important performance parameters of advanced lithography systems. To obtain higher lithographic resolution and better CDU, the exposure dose must be kept as uniform as possible in the cross-scanning direction, and improving the consistency of the numerical aperture over each field of the relay lens is the premise for ensuring illumination uniformity. In conventional imaging optical design, wavefront error, spot diagrams or optical transfer functions are generally employed as evaluation functions during optimization; therefore, the traditional aberration evaluation methods cannot fully satisfy the performance evaluation requirements of the relay lens group. The general illuminance calculation of an optical system is based on the Monte Carlo method, whose accuracy depends on the number of traced rays. Mainstream optical design software instead uses a calculation method based on the fourth power of the cosine of the field angle; in a large amount of engineering practice, the difference between the illumination distribution predicted by this algorithm and the actual distribution is large, and for telecentric optical systems in particular the result of this calculation method is questionable. The relay lens group images the illumination field at the scanning slit plane onto the mask plane and features double telecentricity and an adjustable pupil, so the above algorithm is also unsuitable for calculating the relative illuminance at the image plane of the relay lens group. In this paper, a fast algorithm for evaluating illumination uniformity is proposed, in which the numerical aperture of the system is calculated approximately and the illuminance is characterized by the exit-pupil numerical aperture. The non-uniformity of the light field calculated by this algorithm is used as the evaluation function in the automatic optimization of the relay lens of the lithography illumination system. The designed relay lens is then simulated with the software LightTools. The simulation results show that the illumination non-uniformity on the mask surface is less than 0.5% under different coherence factors, that the algorithm results correlate strongly with the actual performance and reflect the actual illumination uniformity, and that the uniformity of the system can be improved by controlling the value of the evaluation function. The algorithm is shown to be conservative, that is, the actual illumination uniformity exceeds the design value, which ensures that the obtained results meet the performance requirements; moreover, the evaluation algorithm has a large speed advantage, which meets the needs of optimized design. Finally, the integral uniformity of the designed relay lens is tested experimentally, and the results show that the illumination non-uniformity is less than 1.21%, which meets the requirement for illumination non-uniformity on the mask surface (<1.5%).
It is proved that the fast evaluation algorithm is effective in the optimal design of relay lens.
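The abstract does not give the exact form of the NA-based merit function, so the following is only a minimal sketch of the general idea: characterize the illuminance at each field point by its image-side (exit-pupil) numerical aperture and turn the spread into a single non-uniformity value suitable for use in automatic optimization. The assumed relation E ∝ NA² and the (max − min)/(max + min) definition of non-uniformity are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of an NA-based illumination non-uniformity merit function.
# Assumptions (not from the paper): relative illuminance at each field point
# scales with the square of the local image-side NA, and non-uniformity is
# defined as (Emax - Emin) / (Emax + Emin).
from typing import Sequence
import numpy as np

def relative_illuminance(na_per_field: Sequence[float]) -> np.ndarray:
    """Characterize illuminance by the exit-pupil NA of each field point."""
    na = np.asarray(na_per_field, dtype=float)
    e = na ** 2                 # assumed E proportional to NA^2 in telecentric image space
    return e / e.max()          # normalize to the brightest field point

def non_uniformity(na_per_field: Sequence[float]) -> float:
    """Merit value for automatic optimization (smaller is more uniform)."""
    e = relative_illuminance(na_per_field)
    return float((e.max() - e.min()) / (e.max() + e.min()))

# Example: NA values sampled across the slit in the cross-scanning direction
# (illustrative numbers only).
fields_na = [0.1250, 0.1248, 0.1246, 0.1249, 0.1251]
print(f"non-uniformity = {non_uniformity(fields_na):.4%}")
```

Because such a merit value requires only a handful of rays per field point rather than a full Monte Carlo illuminance map, it is cheap enough to be re-evaluated at every step of an automatic optimization loop, which is the speed advantage the abstract emphasizes.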

    Mar. 25, 2022
  • Vol. 51 Issue 3 0322001 (2022)
  • Xinqiqige, Yi CHEN, Hangxin JI, Lei WANG, Yongtian ZHU, Kai ZHANG, and Huatao ZHANG

As the accuracy requirements of optical instruments continue to increase, optomechanical design engineers tighten the dimensional tolerances of key components when designing the structure, which makes machining more difficult and raises the cost accordingly. Meeting the accuracy requirements of optical instruments while reducing machining difficulty and cost is therefore a problem that precision instrument engineering needs to solve. The pointing accuracy during derotation is determined by the coupled errors of multiple components; imperfect mechanical rotation, internal imperfections, and structural deformation during rotation all introduce pointing errors. Taking the K-mirror derotation system installed on optical astronomical telescopes as an example, and in view of the coupling of multiple error sources in the optomechanical structure, the error allocation requirements of the optomechanical components are strict. A Monte Carlo algorithm is used to decouple the multiple error sources, and a particle swarm optimization algorithm is proposed to allocate errors intelligently among them, guiding the tolerance allocation, optimized design, and structural parameters of machined parts in optomechanical structure engineering. First, according to the working principle of the K-mirror derotation system and the pointing error contributed by the optomechanical structure, the error sources are analyzed and a mathematical model of the derotation pointing accuracy is built, in which the rotation figure traced on the focal plane after derotation determines the derotation pointing accuracy. The error sources affecting the derotation pointing accuracy are identified, including the fitting dimensions of key components and the selection of standard parts. According to the working principle of the derotation system and the optomechanical design scheme, the main factors affecting the pointing of the K-mirror derotation system are the relative poses of KM1, KM2, and KM3 and the deviation angle between the mechanical rotation axis and the main optical axis of the K-mirror assembly. Error decoupling is then carried out with the Monte Carlo algorithm; the pointing accuracy is the optimization goal, the sensitivity of each error source is the primary criterion for choosing the optimization path, and the machining difficulty and cost are secondary criteria, from which the optimization model is established. The particle swarm optimization method is used to allocate the error sources intelligently. The particle swarm results admit multiple permutations and combinations, which are weighed against factors such as small offset, low machining cost, and little required adjustment. Through optimization iterations, an allocation plan that meets the instrument accuracy while reducing cost and machining difficulty is obtained, which guides the optimized design, component selection, and machining tolerance allocation of key components in the K-mirror structure. Finally, an optomechanical coupling simulation analysis and an experimental setup are used to analyze the derotation pointing accuracy of the designed K-mirror derotation system.
In the simulation analysis, MATLAB is used to build a unified simulation model that links the finite element software for the optomechanical structure with the ray tracing software to perform the optomechanical coupling analysis; the derotation pointing accuracy obtained is 6.95''. In the experimental setup, with the alignment at its optimum, the pattern displayed on the detector is captured and its minimum circumscribed circle is found, giving a best derotation pointing accuracy of 14.24''. These results verify the feasibility of the optomechanical design scheme of the derotation system and of the intelligent error allocation scheme for the optomechanical system.
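The abstract describes the allocation scheme only at a high level, so the following is a minimal sketch of the general Monte Carlo plus particle swarm idea under stated assumptions: the linear sensitivity coefficients, the cost model (cost inversely proportional to tolerance width), and the 12'' pointing budget are all illustrative placeholders, not values or models from the paper.

```python
# Hypothetical sketch: Monte Carlo evaluation of coupled pointing error plus a
# particle swarm search for a tolerance allocation that meets the budget at
# minimum manufacturing cost. All numbers and models here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

SENS = np.array([3.0, 2.5, 2.0, 4.0])  # arcsec of pointing error per unit of each error source (assumed)
SPEC = 12.0                             # allowed derotation pointing error, arcsec (assumed budget)

def pointing_error(tol, n_mc=2000):
    """Monte Carlo estimate of the coupled pointing error for a tolerance vector."""
    samples = rng.uniform(-tol, tol, size=(n_mc, tol.size))  # each source varies within its tolerance
    return np.percentile(np.abs(samples @ SENS), 99)         # 99th-percentile coupled error

def cost(tol):
    """Cost proxy: tighter tolerances are more expensive; penalize budget violations."""
    manufacturing = np.sum(1.0 / tol)
    penalty = 1e3 * max(0.0, pointing_error(tol) - SPEC)
    return manufacturing + penalty

def pso(dim=4, n_particles=20, n_iter=60, lo=0.05, hi=2.0):
    """Minimal particle swarm optimizer over the tolerance vector."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

tolerances = pso()
print("allocated tolerances:", np.round(tolerances, 3))
print("99th-percentile pointing error (arcsec):", round(float(pointing_error(tolerances)), 2))
```

The design intent this sketch illustrates is that loosening the tolerances of low-sensitivity error sources and tightening only the high-sensitivity ones lets the system meet the pointing budget at lower machining cost, which is the trade-off the particle swarm search explores.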

    Mar. 25, 2022
  • Vol. 51 Issue 3 0322002 (2022)