Photonics Research, Volume 13, Issue 7, 1902 (2025)
Super-wide-field-of-view long-wave infrared gaze polarization imaging embedded in a multi-strategy detail feature extraction and fusion network
Fig. 1. Concept of SWFOV LWIR gaze polarization imaging. (a) Equipment overview. The system integrates a vanadium oxide uncooled IR focal plane detector with an independently designed SWFOV LWIR gaze polarization lens. The IR movement focal plane resolution is
Fig. 2. Performance evaluation of the SWFOV LWIR gaze polarizer head. (a) Spot diagram: diffuse-spot characteristics evaluated from the blur spots at four FOV angles, referenced to the chief ray. (b) Percentage of total enclosed energy at the four FOV angles, compared with the diffraction-limit curve (the aberration-free response). (c) Diffraction MTF computed with the FFT method at the four FOV angles over the spatial frequency range [0, 29] lp/mm (Nyquist frequency 29 lp/mm). (d) Relative illuminance, calculated by integrating the effective area of the exit pupil observed from the image point (performed in cosine space). The effective
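For readers who want to reproduce an MTF-style check, the sketch below shows how an MTF curve can be obtained from a sampled point-spread function with an FFT, which is the general idea behind the FFT method named in (c). The Gaussian blur spot, the 17 µm pixel pitch (which gives the ~29 lp/mm Nyquist frequency quoted above), and the function name `mtf_from_psf` are illustrative assumptions, not the authors' design data.

```python
import numpy as np

def mtf_from_psf(psf, pixel_pitch_mm):
    """MTF along one axis from a sampled 2-D PSF.

    psf            : 2-D array of point-spread-function samples (arbitrary scale)
    pixel_pitch_mm : sample spacing on the image plane, in millimetres
    Returns (spatial_frequency_lp_per_mm, mtf) for non-negative frequencies.
    """
    otf = np.fft.fft2(psf / psf.sum())           # optical transfer function
    mtf2d = np.abs(otf)                          # modulus -> MTF
    mtf2d /= mtf2d[0, 0]                         # normalize to 1 at zero frequency
    n = psf.shape[1]
    freqs = np.fft.fftfreq(n, d=pixel_pitch_mm)  # line pairs per mm
    keep = freqs >= 0
    return freqs[keep], mtf2d[0, keep]           # cut through the frequency origin

# Example: a Gaussian blur spot as a stand-in PSF, 17 um pixel pitch
# (a common uncooled LWIR detector pitch, hence a ~29 lp/mm Nyquist frequency).
x = np.arange(-64, 64)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.5**2))
f, mtf = mtf_from_psf(psf, pixel_pitch_mm=0.017)
print(mtf[f <= 29][:5])
```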
Fig. 3. SWFOV LWIR gaze polarization imaging test. By rotating the holographic wire-grid IR polarizer in the IR polarizer head, image data are captured at four orientations: 0°, 45°, 90°, and 135°. Stokes vectors are calculated for each pixel, where the
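The per-pixel Stokes computation implied by the caption follows the standard linear-Stokes relations for 0°, 45°, 90°, and 135° analyzer orientations. A minimal sketch, assuming four co-registered intensity frames as floating-point arrays (the paper's calibration and registration steps are not shown):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135, eps=1e-6):
    """Per-pixel linear Stokes parameters, DoLP, and AoLP from four
    polarization-analyzer orientations (0, 45, 90, 135 degrees).
    Inputs are co-registered intensity images as float arrays."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)           # total intensity
    s1 = i0 - i90                                # 0/90 difference
    s2 = i45 - i135                              # 45/135 difference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization (rad)
    return s0, s1, s2, dolp, aolp
```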
Fig. 4. Image fusion network. (a) The fusion network comprises an LLRR model, a Detail CNN, an ADFU model, and a decoder network. The LLRR model extracts sparse features from IR images and IR DoLP images (
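The caption names the building blocks (LLRR model, Detail CNN, ADFU model, decoder) but not their internals, so the following PyTorch skeleton only illustrates how two single-channel inputs (IR and IR DoLP) could be encoded, fused, and decoded into one image; every layer here is a placeholder assumption, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Sequential):
    """Placeholder 3x3 conv + activation; stands in for the paper's sub-networks."""
    def __init__(self, c_in, c_out):
        super().__init__(nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.1))

class FusionNetSketch(nn.Module):
    """Schematic of the captioned pipeline: feature extraction from the IR and
    IR-DoLP branches, fusion of the concatenated features, and decoding to a
    single fused image. Module internals are illustrative only."""
    def __init__(self, feat=32):
        super().__init__()
        self.extract_ir = ConvBlock(1, feat)    # stands in for LLRR/Detail CNN on IR
        self.extract_dolp = ConvBlock(1, feat)  # stands in for LLRR/Detail CNN on DoLP
        self.fuse = ConvBlock(2 * feat, feat)   # stands in for the ADFU fusion stage
        self.decode = nn.Conv2d(feat, 1, 3, padding=1)  # decoder to fused image

    def forward(self, ir, dolp):
        f = torch.cat([self.extract_ir(ir), self.extract_dolp(dolp)], dim=1)
        return torch.sigmoid(self.decode(self.fuse(f)))

# Shape check with dummy single-channel inputs
net = FusionNetSketch()
fused = net(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
print(fused.shape)  # torch.Size([1, 1, 256, 256])
```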
Fig. 5. Fusion effects of the ablation study. (a) Fusion results and close-ups from the baseline model, the LLRR model, the ADFU model, and the LLRR + ADFU model. Three scenes are rendered in pseudo-color for viewing convenience. (b) Loss function of the image fusion network. (c) Number of parameters of the four ablation models.
Fig. 6. Qualitative comparison of different fusion methods for three scenes. (a) IR. (b) IR DoLP. (c) RFN-Nest. (d) SwinFusion. (e) YDTR. (f) CDDFuse. (g) CMTFusion. (h) DIF-Fusion. (i) DCSFuse. (j) Ours.
Fig. 7. Testing different fusion methods on our dataset. The YOLOX detection network is used to detect car targets in the SWFOV LWIR images, SWFOV LWIR DoLP images, and SWFOV fused images, and the average precision and confidence of car detection are reported.
Fig. 8. Car detection at different distances. Cars are detected with the YOLOX detection network in the SWFOV LWIR images, SWFOV LWIR DoLP images, and SWFOV fused images.
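Figures 7 and 8 report average precision and confidence for car detection. As a reference point, the sketch below computes per-class average precision at a fixed IoU threshold from pooled (confidence, box) detections and ground-truth boxes; it simplifies real evaluation protocols (no per-image bookkeeping, trapezoidal PR integration) and is not the authors' or YOLOX's evaluation code.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """Simplified AP for one class.
    detections: list of (confidence, box) pooled over the test images
    gt_boxes  : list of ground-truth boxes (greedy one-to-one matching)"""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = [False] * len(gt_boxes)
    tp = np.zeros(len(detections))
    fp = np.zeros(len(detections))
    for i, (_, box) in enumerate(detections):
        ious = [iou(box, g) for g in gt_boxes]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= iou_thr and not matched[j]:
            tp[i], matched[j] = 1, True
        else:
            fp[i] = 1
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(len(gt_boxes), 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # area under the uninterpolated precision-recall curve (approximation of AP)
    return float(np.trapz(precision, recall))
```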
Fig. 9. IR camouflage target recognition experiments. (a) Experimental demonstration. (b) Visible images of a subject wearing the IR camouflage suit in bush, building, and grass environments. (c) Recognition tests of IR targets and IR camouflaged targets in road and grass environments, respectively. The SWFOV LWIR images, SWFOV LWIR DoLP images, and SWFOV fused images are shown from left to right. The gray values of the rectangle-marked regions are counted and annotated with the HSI color model; in the close-ups, height indicates gray value. (d) Pixel distributions of the SWFOV LWIR images, SWFOV LWIR DoLP images, and SWFOV fused images for the four scenes in Fig.
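The gray-value statistics and pixel-distribution analysis described in (c) and (d) can be reproduced generically as sketched below; the rectangle coordinates, the contrast measure, and the histogram binning are assumptions, and the HSI-based visualization used in the figure is not replicated here.

```python
import numpy as np

def roi_gray_stats(image, roi, background):
    """Gray-level statistics for a marked target region versus nearby background.

    image      : 2-D gray-scale array
    roi        : (row0, row1, col0, col1) rectangle around the target
    background : (row0, row1, col0, col1) rectangle of nearby background
    Returns mean/std of both regions and a simple contrast measure."""
    tgt = image[roi[0]:roi[1], roi[2]:roi[3]].astype(np.float64)
    bkg = image[background[0]:background[1], background[2]:background[3]].astype(np.float64)
    contrast = abs(tgt.mean() - bkg.mean()) / (bkg.std() + 1e-9)  # detectability proxy
    return {
        "target_mean": tgt.mean(), "target_std": tgt.std(),
        "background_mean": bkg.mean(), "background_std": bkg.std(),
        "contrast": contrast,
    }

def gray_histogram(image, bins=256):
    """Pixel-value distribution of a full frame (e.g., IR, DoLP, or fused image)."""
    hist, edges = np.histogram(image.ravel(), bins=bins, range=(0, 255))
    return hist / hist.sum(), edges
```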
Dongdong Shi, Jinhang Zhang, Jun Zou, Fuyu Huang, Limin Liu, Li Li, Yudan Chen, Bing Zhou, Gang Li, "Super-wide-field-of-view long-wave infrared gaze polarization imaging embedded in a multi-strategy detail feature extraction and fusion network," Photonics Res. 13, 1902 (2025)
Category: Image Processing and Image Analysis
Received: Feb. 18, 2025
Accepted: Apr. 24, 2025
Published Online: Jul. 1, 2025
Author Emails: Fuyu Huang (hfyoptics@163.com), Limin Liu (lk0256@163.com)
CSTR: 32188.14.PRJ.559833