Objective  Many object detection methods perform excellently under ideal lighting conditions, but their performance degrades in environments with varying illumination, particularly in low light. Applications such as autonomous vehicles at night, surveillance drones, and security systems requiring continuous monitoring often operate under low illumination, so there is a heightened demand for object detection methods that remain reliable in such environments. Low-light conditions degrade image quality, causing reduced brightness, decreased contrast, increased noise, loss of detail, and color deviation; these degradations not only alter the visual appearance of images but also significantly impair the accuracy of object detectors. It is therefore crucial to design an end-to-end object detection method that integrates an image enhancement network with a detector, specifically for low-illumination environments, to maintain high detection accuracy. For this purpose, an end-to-end object detection method for low-illumination images, named FDLIE-YOLO, is designed in this paper; it effectively improves detection accuracy and confidence while reducing missed detections and misjudgments.
Methods  To enhance object detection performance in low-light conditions and reduce missed detections and false positives, an end-to-end low-light image object detection method called FDLIE-YOLO (Fig.1) is proposed. First, FDLIENet (Frequency Domain Low-Illumination Image Enhancement Network) is constructed, whose core component is the frequency-domain processing module FDPB (Frequency Domain Processing Block) (Fig.2). This module extracts global information from the image through the Fourier transform and, exploiting the positive correlation between amplitude and brightness, amplifies the amplitude component to raise both brightness and contrast, effectively improving image quality under low illumination. YOLOv8n is adopted as the detection module, and end-to-end training through a joint loss function jointly optimizes image enhancement and object detection (Fig.3). The joint loss comprises an amplitude difference loss based on the Mahalanobis distance, which precisely controls the image amplitude, and the MPDIoU (Minimum Point Distance based IoU) loss, which replaces the original regression loss to improve detection accuracy.
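The two mechanisms above can be illustrated with a minimal sketch. This is not the paper's implementation: a fixed scalar gain stands in for FDPB's learned frequency-domain processing, and `mpdiou_loss` follows the published MPDIoU formulation; both function names are illustrative.

```python
import numpy as np

def amplify_amplitude(img, gain=2.0):
    """Frequency-domain brightening: scale the Fourier amplitude while
    keeping the phase. FDPB learns this mapping with a network; a constant
    gain is used here only to show the amplitude-brightness correlation."""
    spectrum = np.fft.fft2(img)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    enhanced = (gain * amplitude) * np.exp(1j * phase)
    return np.clip(np.real(np.fft.ifft2(enhanced)), 0.0, 1.0)

def mpdiou_loss(box_pred, box_gt, img_w, img_h):
    """MPDIoU loss for (x1, y1, x2, y2) boxes: 1 - (IoU - d1^2/D - d2^2/D),
    where d1, d2 are the distances between the two top-left and the two
    bottom-right corners, and D = img_w^2 + img_h^2 normalises them."""
    px1, py1, px2, py2 = box_pred
    gx1, gy1, gx2, gy2 = box_gt
    # Intersection over union of the two boxes
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union if union > 0 else 0.0
    # Squared corner distances, normalised by the image diagonal
    d1 = (px1 - gx1) ** 2 + (py1 - gy1) ** 2
    d2 = (px2 - gx2) ** 2 + (py2 - gy2) ** 2
    norm = img_w ** 2 + img_h ** 2
    return 1.0 - (iou - d1 / norm - d2 / norm)
```

Because the FFT is linear, scaling every amplitude by a constant simply brightens the whole image by that factor; FDPB instead predicts learned, frequency-dependent adjustments. For identical boxes MPDIoU equals the plain IoU, so the loss is zero, and the corner-distance penalties grow as the predicted box drifts from the ground truth.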
Results and Discussions  Experiments are conducted on two datasets, LOL-Real and ExDark. Comparing FDLIENet with mainstream low-light image enhancement networks on both datasets (Fig.5-Fig.6), the images enhanced by the proposed network show lower distortion and chromatic aberration than those of the other algorithms and retain more important image information. As the low-illumination image enhancement experiments show (Tab.1-Tab.2), compared with all methods except SNR-Aware, PSNR is improved by at least 1.7 dB and SSIM by at least 0.011; compared with FECNet and PENet, PSNR is improved by 2.51 dB and 1.7 dB, and SSIM by 0.063 and 0.020, respectively. The average NIQE on the ExDark dataset significantly surpasses MBLLEN, ZeroDCE, and KinD, and is lower than that of FECNet and PENet by 0.35 and 0.30, respectively. The parameter count is only 0.14 M, a significant advantage over the other enhancement networks. These results show that FDLIENet has superior enhancement performance. The detection results of the FDLIE-YOLO end-to-end low-illumination object detection method (Fig.7) show that the proposed method reduces missed and false detections and yields higher confidence. As the low-illumination object detection experiments show (Tab.3-Tab.4), the detection accuracy of FDLIE-YOLO under low illumination exceeds that of IA-YOLO and PE-YOLO, with mAP improved by 1.6 and 1.1 percentage points, respectively; the parameter count is only 3.16 M and the FPS is 88.82, demonstrating the excellent detection performance of the proposed method in low-illumination environments.
Conclusions  An end-to-end low-light image object detection method, FDLIE-YOLO, is designed. The method features a simple structure, good image enhancement, and high detection accuracy, achieving state-of-the-art detection performance. Experimental results show that FDLIENet attains a mean peak signal-to-noise ratio (PSNR) of 23.18 dB and a mean structural similarity (SSIM) of 0.858 on the LOL-Real dataset, and a mean natural image quality evaluator (NIQE) score of 3.98 on the ExDark dataset, outperforming the state-of-the-art low-illumination image enhancement networks of recent years. FDLIE-YOLO achieves a mean average precision (mAP) of 80.6% on the ExDark dataset, ahead of YOLOv8 and other end-to-end detection methods, with a parameter count (Param) of only 3.16 M and a frame rate (FPS) of 88.82, demonstrating the excellent detection performance of the proposed method in low-illumination environments.