Miniaturization and athermalization of uncooled long-wave infrared (LWIR) continuous zoom optical systems are difficult to design given the large relative aperture. A lightweight, miniaturized LWIR continuous zoom optical system for a 640×512-pixel uncooled focal plane array detector was realized by using a variable diaphragm to restrain the objective lens size and the total length of the system. Through a reasonable material configuration and an active compensation method, the athermalized design of a continuous zoom optical system with a zoom ratio of 8.5 and five lenses was achieved. The system F-number is constant at 1.2, the spectral range is 8 μm to 12 μm, the field of view (FOV) ranges from 30°×24° to 3.5°×2.8°, and the total length of the system is 187.5 mm. The system is lightweight and short in total length and has high transmittance. Within the temperature range of -40°C to +60°C, the optical system affords good image quality over the full field of view during zooming.
Infrared imaging systems often produce stripe noise in their imaging results owing to the non-uniformity of the detection units. To obtain better correction results, most deep learning-based infrared image non-uniformity correction algorithms adopt complex network structures, which increase the computational cost. This study proposes a lightweight network-based infrared image non-uniformity correction algorithm and designs a lightweight multi-scale downsampling module (LMDM) for the encoding stage of the U-Net network. The LMDM uses pixel splitting and channel reconstruction to downsample the feature maps and realizes multi-scale feature extraction using multiple cascaded depth-wise separable convolutions (DSC). In addition, the algorithm introduces a lightweight channel attention mechanism that adjusts the feature weights to achieve better contextual information fusion. The experimental results show that, compared with the comparison algorithms, the proposed algorithm reduces memory use by more than 70% and improves the infrared image processing speed by more than 24% while ensuring that the corrected images have clear texture, rich details, and sharp edges.
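The pixel-splitting step of the LMDM can be read as a space-to-depth rearrangement: spatial resolution is halved without discarding any pixel values. The sketch below is a minimal pure-Python illustration of that idea, assuming a single-channel map and a ratio of 2; the paper's module additionally reconstructs the channels and applies the cascaded depth-wise separable convolutions, which are omitted here.

```python
def pixel_split_downsample(img, r=2):
    """Rearrange an H x W single-channel map into r*r channels of size
    (H//r) x (W//r): a space-to-depth step that downsamples without
    losing information (no pooling, no strided convolution)."""
    h, w = len(img), len(img[0])
    assert h % r == 0 and w % r == 0
    channels = []
    for dy in range(r):
        for dx in range(r):
            channels.append([[img[y * r + dy][x * r + dx]
                              for x in range(w // r)]
                             for y in range(h // r)])
    return channels

img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
chans = pixel_split_downsample(img)  # four 2x2 channels
```

Because every input pixel survives in some channel, a later convolution can still recover fine detail, which matches the module's goal of cheap downsampling without feature loss.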
The existing deep learning image fusion methods rely on convolution to extract features and do not consider the global features of the source images; moreover, their fusion results are prone to texture blurring and low contrast. Therefore, this study proposes an infrared and visible image fusion method with adversarial learning and compensated attention. First, the generator network uses dense blocks and the compensated attention mechanism to construct three local-global branches to extract feature information. The compensated attention mechanism is then constructed using channel features and spatial feature variations to extract the global information, infrared targets, and visible light detail representations. Subsequently, a focusing dual-adversarial discriminator is designed to determine the similarity distribution between the fusion result and the source images. Finally, the public TNO and RoadScene datasets are selected for experiments, and the method is compared with nine representative image fusion methods. The proposed method not only obtains fusion results with clearer texture details and better contrast, but also outperforms other advanced methods in terms of objective metrics.
To reduce the difficulty of detecting tiny leakages at multiple leakage points in liquid pipelines, the detection accuracy and speed for the leakage points must be improved. Bilateral filtering based on nonlinear stationary wavelets is proposed to achieve image noise reduction by building a water circulation pipeline leakage experiment system, changing the sizes and number of the leakage points, changing the temperature of the conveying medium, and applying an infrared thermal imager to monitor small leakages at single and complex leakage points. Combining infrared nondestructive testing technology with a YOLO v4 network model, this study realizes the automatic intelligent detection of single and multiple leakage points in liquid pipelines. The results show that, compared with traditional filtering algorithms, the noise reduction method improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) evaluation indexes. The model can quickly and accurately detect and locate single and multiple pipeline leakage points. The mean average precision (mAP) values for single and multiple leakage points in complex environments reach 0.9822 and 0.98, respectively. Further, the accuracy rates reach 98.3% and 98.36%, and the single-frame detection times reach 0.3021 s and 0.3096 s, respectively. This helps realize the identification of leakage points under complex background interference. In comparison with YOLO v3, Faster R-CNN, and SSD 300, the YOLO v4 algorithm has better accuracy, mAP, and detection time for single and multiple leakage points, and therefore higher detection accuracy and detection efficiency.
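The edge-preserving behaviour of bilateral filtering, the component combined here with the stationary wavelet decomposition, can be illustrated in one dimension: weights fall off both with spatial distance and with intensity difference, so a sharp thermal edge at a leak is not blurred. This is a minimal sketch with illustrative sigma values, not the paper's wavelet-domain implementation.

```python
import math

def bilateral_filter_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Replace each sample by a weighted average of its neighbours.
    Spatial closeness is weighted by sigma_s, intensity closeness by
    sigma_r, so samples across a large intensity jump get ~zero weight."""
    out = []
    n = len(signal)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

On a step signal such as `[10, 10, 10, 100, 100, 100]`, the flat regions are smoothed while the jump between index 2 and index 3 is preserved almost exactly.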
To correct the color of underwater images more effectively and enhance their contrast and clarity, an underwater image enhancement method based on improved histogram matching and adaptive equalization is proposed. Each channel image is subjected to histogram matching using the histogram of the channel with the largest pixel mean as the benchmark, which corrects the color deviation of the underwater image. Taking full advantage of the independence of the color and lightness components in the HSI color space, the method then performs adaptive local histogram equalization on the lightness component, further improving the contrast and clarity of the image. Subjective and objective experimental data show that, compared with some existing methods, the proposed method achieves better visual effects on enhanced underwater images, with higher information entropy, average gradient, UIQM, and SSIM. Therefore, the proposed method has a better enhancement effect on underwater images.
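The histogram matching step can be sketched as a CDF-to-CDF lookup: each grey level of a channel is mapped to the benchmark-channel level with the closest cumulative frequency. A minimal pure-Python sketch for flat integer-valued channels (the benchmark being the channel with the largest pixel mean, as described above):

```python
def match_histogram(src, ref, levels=256):
    """Remap the grey levels of `src` so its cumulative histogram
    follows that of the reference channel `ref`."""
    def cdf(img):
        hist = [0] * levels
        for v in img:
            hist[v] += 1
        total = float(len(img))
        c, run = [], 0
        for h in hist:
            run += h
            c.append(run / total)
        return c

    cs, cr = cdf(src), cdf(ref)
    # For each source level, pick the reference level with the closest CDF.
    lut = [min(range(levels), key=lambda k: abs(cr[k] - cs[level]))
           for level in range(levels)]
    return [lut[v] for v in src]
```

Applying the lookup table per channel shifts the color distribution toward the benchmark channel, which is what corrects the underwater color cast.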
To solve the low recognition rate problem of existing isolation switch state identification methods, an image fusion method based on NSST-PCNN-IFVSS is proposed. Image registration is performed in the preprocessing stage of the infrared and visible light images, and the two images are then fused at the pixel level. In the fusion stage, the non-subsampled shearlet transform (NSST) is used to decompose the infrared and visible light images into high- and low-frequency sub-band images. The high-frequency sub-band images are fused using a pulse-coupled neural network (PCNN), whereas the low-frequency sub-band images are fused using an image fusion method based on visual saliency segmentation. The two fused sub-bands are combined by the inverse NSST to obtain the fused image. A fusion quality index evaluation scheme is established to compare the effect of this method with common image fusion methods. The fused image is processed by a pixel integration projection algorithm to determine the state of the high-voltage isolation switch. Experimental simulations verify that the image fusion effect of the proposed method is better than that of six common fusion methods, and that the recognition result after image fusion is better than that obtained from a single visible light image or infrared image.
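The two per-band fusion rules can be illustrated with simple stand-ins: a max-absolute rule in place of the PCNN for the high-frequency sub-bands, and a saliency-weighted average in place of the visual-saliency-segmentation rule for the low-frequency sub-bands. Both are common simplifications for illustration only, not the paper's actual implementations.

```python
def fuse_high(h_ir, h_vis):
    """Max-absolute stand-in for the PCNN high-frequency rule:
    keep whichever coefficient has the larger magnitude (stronger detail)."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(ra, rb)]
            for ra, rb in zip(h_ir, h_vis)]

def fuse_low(l_ir, l_vis, sal):
    """Saliency-weighted stand-in for the low-frequency rule:
    sal[y][x] in [0, 1] is the visual-saliency weight of the IR image."""
    return [[s * a + (1 - s) * b for a, b, s in zip(ra, rb, rs)]
            for ra, rb, rs in zip(l_ir, l_vis, sal)]
```

After both bands are fused, the inverse transform recombines them into the final image; here the inverse NSST itself is outside the scope of the sketch.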
The existing deep learning-based object detection algorithms encounter various issues during object detection in images, such as object viewpoint diversity, object deformation, occlusion, illumination variations, and the detection of small objects. To address these issues, this paper introduces the concept of contrastive learning into the SSD object detection network and improves the original SSD algorithm. First, object image blocks and background image blocks are randomly cropped from the sample images and input into the contrastive learning network for feature extraction and contrastive loss calculation. The SSD network is then trained with supervised learning, and the contrastive loss is weighted and summed with the SSD loss value for feedback to optimize the network parameters. Because the contrastive learning concept is introduced into the object detection network, the distinction between the background and objects in the feature space is improved. Therefore, the proposed algorithm significantly improves the object detection accuracy of the SSD network and obtains satisfactory detection results in both visible and thermal infrared images. In the experiments on the PASCAL VOC2012 dataset, the proposed algorithm increases the AP50 value by 0.3%, whereas on the LLVIP dataset, the corresponding increase in AP50 is 0.2%.
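The combination of the two losses can be sketched as follows, assuming an InfoNCE-style contrastive loss over cosine similarities and a hypothetical weighting factor; the abstract does not specify the exact contrastive loss form or the weight.

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.1):
    """InfoNCE-style contrastive loss on cosine similarities: pull the
    object/object pair (sim_pos) together and push the object/background
    pairs (sim_negs) apart. Computed with a log-sum-exp for stability."""
    logits = [sim_pos / tau] + [s / tau for s in sim_negs]
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(sim_pos / tau - log_sum)

def total_loss(ssd_loss, contrastive_loss, weight=0.1):
    """Weighted sum fed back to optimize the network parameters
    (the weight here is illustrative, not the paper's value)."""
    return ssd_loss + weight * contrastive_loss
```

The loss shrinks as object/background similarities drop, which is the mechanism that sharpens the background-object separation in feature space.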
A multi-resolution feature extraction convolutional neural network is proposed to address the inaccurate edge segmentation that occurs when existing image semantic segmentation algorithms process low-resolution infrared images. DeepLabv3+ is used as the baseline network, and a multi-resolution block, which contains both high- and low-resolution branches, is added to further aggregate the features in infrared images. In the low-resolution branch, a GPU-friendly attention module is used to capture high-level global context information, and a multi-axis-gated multilayer perceptron module is added to extract the local and global information of infrared images in parallel. In the high-resolution branch, a cross-attention module propagates the global features learned in the low-resolution branch to the high-resolution branch, so the high-resolution branch can obtain stronger semantic information. The experimental results indicate that the segmentation accuracy of the algorithm on the DNDS dataset is better than that of existing semantic segmentation algorithms, demonstrating the superiority of the proposed method.
In response to the challenges posed by low signal-to-noise ratios and complex task scenarios, an improved detection method called DCS-YOLOv8 (DCN_C2f-CA-SIoU-YOLOv8) is proposed to address the insufficient infrared occluded-object and weak-target detection capabilities of the YOLOv8 model. Building on the YOLOv8 framework, the backbone network incorporates a lightweight deformable convolution network (DCN_C2f) module that adaptively adjusts the network's visual receptive field to enhance the multi-scale feature representation of objects. The feature fusion network introduces a coordinate attention (CA) module to capture the spatial dependencies among multiple objects, thereby improving the object localization accuracy. Additionally, the position regression loss function is enhanced using Scylla IoU (SIoU) to ensure that the relative displacement directions of the predicted and ground-truth boxes match. This improvement accelerates the model convergence and enhances the detection and localization accuracy. The experimental results demonstrate that DCS-YOLOv8 achieves significant improvements in average precision on the FLIR, OTCBVS, and VEDAI test sets compared with the YOLOv8-n/s/m/l/x series models. Specifically, the average mAP@0.5 values are enhanced by 6.8%, 0.6%, and 4.0%, respectively, reaching 86.5%, 99.0%, and 75.6%. Furthermore, the model's inference speed satisfies the real-time requirements of infrared object detection tasks.
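SIoU extends the plain IoU overlap term with angle, distance, and shape costs. The sketch below shows only the base IoU computation on (x1, y1, x2, y2) boxes, as a reference point for what the full SIoU loss builds on; the added cost terms are omitted.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2)
    boxes. SIoU starts from this overlap term and adds angle, distance,
    and shape penalties between the box centres and extents."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A regression loss of the form 1 - IoU is zero only when the boxes coincide; SIoU's extra terms keep the gradient informative even when the boxes do not overlap.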
To optimize the detection performance of infrared imaging spectrometers, a priority fusion temperature control algorithm (PFA) with user-defined indicators and a temperature control accuracy of 1.0 mK is proposed. This algorithm combines basic proportional-integral-derivative (PID), fuzzy PID, and active disturbance rejection control algorithms with a BP neural network algorithm to achieve high-performance blackbody temperature control. The results of Simulink simulation experiments show that, compared with traditional algorithms, the overshoot of the PFA decreases from 3.606% to 0.101%, the response time decreases from 64 min to 14.4 min, and the temperature control accuracy reaches 1.0 mK. A blackbody radiation calibration platform is also built, and the physical experimental results are consistent with the theoretical simulation results. This work lays a theoretical foundation for the practical application of high-precision temperature-controlled blackbodies in space remote sensing and has remarkable significance in the field of temperature control.
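The basic PID branch that the PFA fuses with the fuzzy-PID and active-disturbance-rejection branches can be sketched as a standard discrete controller; the gains below are illustrative, not the paper's tuned values, and the fusion and neural-network logic are outside the sketch.

```python
class PID:
    """Discrete PID controller:
    u = Kp*e + Ki * sum(e)*dt + Kd * de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt          # accumulated error term
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In a fusion scheme such as the PFA, the controller output would be blended with the other branches according to the user-defined priority indicators before driving the blackbody heater.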
This paper proposes a distortion correction method suitable for large-field-of-view infrared cameras. First, the single-parameter division model is selected as the camera distortion model, and an improved speeded-up robust features (SURF) algorithm is used to automatically obtain feature point pairs from two distorted infrared images of the same scene. The nine-point non-iterative algorithm and a kernel density estimation method are then used to obtain the distortion parameters of the image. Finally, according to the obtained distortion parameters, a grayscale interpolation method based on edge preservation is used to correct the image distortion. Throughout the process, the camera parameters and scene information need not be determined in advance, and the distortion correction is completed by inputting two images of the same scene, which provides a new solution for the distortion correction of large-field-of-view infrared cameras. The experimental results show that this method corrects the distortion of large-field-of-view infrared cameras feasibly and robustly.
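The single-parameter division model can be sketched directly: a distorted point at radius r_d from the distortion centre maps to an undistorted point scaled by 1/(1 + λ·r_d²). The helper below is a minimal illustration, assuming the distortion centre is known; estimating λ from point pairs (the nine-point algorithm) is not shown.

```python
def undistort_point(xd, yd, lam, cx=0.0, cy=0.0):
    """Single-parameter division model:
    p_u = c + (p_d - c) / (1 + lambda * r_d^2),
    where r_d is the distance of the distorted point p_d from the
    distortion centre c, and lambda is the single model parameter."""
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    s = 1.0 + lam * r2
    return (cx + dx / s, cy + dy / s)
```

With λ = 0 the mapping is the identity; a negative λ models the barrel distortion typical of large-field-of-view lenses.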
The short- and medium-wave infrared filter is a key device in aerospace optical remote sensing cameras. The spectral response of the high-resolution detector is determined by the spectral characteristics of the short- and medium-wave infrared filter. Owing to the gap between the preparation level and the theoretical values, spectral angle drift or temperature drift occurs, and a mixed superposition of high- and low-frequency spectra forms in the high-resolution detector, resulting in spectral distortion of the restored signal. This study introduces a design method for a working band of 3.5 μm to 4.1 μm and the development of a short- and medium-wave infrared filter for a high-resolution detector. To realize dual-band cut-off color separation on the Si substrate (cut-off bands of 2.4 μm to 3.35 μm and 4.25 μm to 6.4 μm; transmittance of over 98% in the 3.5 μm to 4.1 μm passband), the film-system structure of the F-P band-pass filter is used as the initial structure, which effectively reduces the number of film layers compared with the conventional design concept. TiO2 is used as the high-refractive-index film material and SiO2 as the low-refractive-index material to achieve the dual-band cut-off. The fabricated filter achieves the design goal and exhibits dual-band cut-off and high passband transmittance. In the environmental test, the filter exhibits good stability, and the matching between the film layers is appropriate. The short- and medium-wave infrared filter can be applied in some extreme cases.
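An F-P band-pass stack is built from quarter-wave layers around a spacer, so each layer's physical thickness follows t = λ0/(4n). A minimal sketch, assuming illustrative refractive indices for the TiO2 and SiO2 layers and a hypothetical reference wavelength near the passband centre (the paper's actual indices, reference wavelength, and layer counts are not given in the abstract):

```python
def quarter_wave_thickness(lambda0_nm, n):
    """Physical thickness of a quarter-wave optical layer: t = lambda0 / (4 n)."""
    return lambda0_nm / (4.0 * n)

lam0 = 3800.0  # nm, assumed reference wavelength near the 3.5-4.1 um passband centre
t_high = quarter_wave_thickness(lam0, 2.3)   # TiO2-like high index (assumed value)
t_low = quarter_wave_thickness(lam0, 1.45)   # SiO2-like low index (assumed value)
```

The larger the index contrast between the two materials, the fewer layer pairs the stack needs for a given cut-off depth, which is why the high/low pairing matters for reducing the layer count.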
Adding infrared detectors to a ballistic platform to detect and provide warning of interceptors is an innovative method of improving the penetration and survival capability of the platform. In this study, the infrared radiation characteristics of the interceptor are analyzed, and the parameters of the infrared detector used to detect the interceptor are estimated with respect to the detection capability of the missile-borne infrared detector. A detection probability model is derived from the relationship between the detection probability and the input signal-to-noise ratio, and a signal-to-noise ratio model is derived from the radiation difference. The detection requirements for terminal and mid-course defense interceptors are then analyzed, and the detection capability of the missile-borne infrared detector is analyzed in terms of the detection probability and detection range. The analysis shows that the missile-borne infrared detector has a strong detection capability for terminal-phase interceptors; for mid-course interceptors, detection can be guaranteed in the sunlit region, whereas in the shadow region only certain detection angles provide detection capability.
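The link between input signal-to-noise ratio and detection probability can be sketched under a common Gaussian-noise threshold model; this is a generic illustration of the monotone SNR-to-P_d relationship the abstract describes, not the paper's exact derivation.

```python
import math

def detection_probability(snr, threshold):
    """Probability that signal + unit-variance Gaussian noise exceeds a
    detection threshold: P_d = Q(threshold - SNR), with Q the Gaussian
    tail function, here evaluated via the complementary error function."""
    return 0.5 * math.erfc((threshold - snr) / math.sqrt(2.0))
```

When the SNR equals the threshold the detection probability is exactly 0.5, and it rises monotonically with SNR, which is why the radiation difference (and hence range) drives the achievable detection distance.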
Traditional data augmentation methods are prone to over-fitting. To solve the sample imbalance problem in the field-of-view defect image dataset of the ultraviolet (UV) image intensifier and to improve the deep learning-based recognition accuracy of stripe defects, a field-of-view defect image generation method for the UV image intensifier based on a deep convolutional generative adversarial network (DCGAN) is proposed. By improving the loss function of the DCGAN and optimizing the convolutional attention mechanism, a generation model for the field-of-view defect images of the UV image intensifier is established, and the generation of such images is successfully realized. Image quality evaluation indexes and defect detection models are then used to verify the effectiveness of the generated images. The experimental results show that the generated UV image intensifier field-of-view defect images meet the application requirements, and that the detection accuracy can be improved by merging the generated images with the real images before inputting them into the defect detection model. The research results provide technical support for deep learning-based field-of-view defect detection for third-generation low-light-level image intensifiers and UV image intensifiers.