Chinese Optics Letters, Volume 21, Issue 10, 101102 (2023)

Enhanced imaging through turbid water based on quadrature lock-in discrimination and retinex aided by adaptive gamma function for illumination correction

Riffat Tehseen1, Amjad Ali1, Mithilesh Mane1, Wenmin Ge1, Yanlong Li1, Zejun Zhang1,2,3, and Jing Xu1,2,3,*
Author Affiliations
  • 1Optical Communication Laboratory, Ocean College, Zhejiang University, Zhoushan 316021, China
  • 2Hainan Institute of Zhejiang University, Sanya 572000, China
  • 3Key Laboratory of Ocean Observation-Imaging Testbed of Zhejiang Province, Ocean College, Zhejiang University, Zhoushan 316021, China

    This paper presents an improved method for imaging in turbid water that combines the individual strengths of the quadrature lock-in discrimination (QLD) method and the retinex method. First, high-speed QLD is performed on the captured image sequence to extract the ballistic photons. Then, retinex image enhancement is applied to the QLD-processed images to enhance the contrast. Next, the effect of uneven illumination is suppressed by using a bilateral gamma function for adaptive illumination correction. The experimental results show that the proposed approach achieves better enhancement than existing approaches, even in a high-turbidity environment.


    1. Introduction

    Underwater image restoration plays a crucial role in object detection, object recognition, and video tracking[1]. The visibility of underwater images is degraded by the scattering and absorption of the incident light field. Imaging quality deteriorates as the distance between the target and the sensor increases and as the turbidity rises.

    In recent years, many de-scattering techniques have been put forth to cope with image degradation. These methods are typically divided into two categories: image restoration methods based on a physical model and image recovery methods based on image enhancement[2,3]. The image restoration methods use the atmospheric scattering model or prior knowledge to reverse the degradation caused by the scattering of light; examples include the dark channel prior (DCP) method[4], the polarization imaging method[5], and the intensity modulation of an active light source[6]. The other category builds on image enhancement algorithms, such as histogram equalization (HE)[7], contrast-limited adaptive histogram equalization (CLAHE)[8], and the retinex algorithms[9]. These algorithms can improve image contrast, but they are ineffective at restoring the visibility range.

    A simpler and more competitive approach is to use an intensity-modulated continuous-wave light source[6,10,11]. The theory builds on the hypothesis that the modulating frequency and phase of the captured ballistic photons, in contrast to those of the multiply scattered photons, remain the same as those of the incident modulated light source. This method requires demodulation of the received signal at the modulating frequency. Typical ballistic filtering requires modulation at high frequencies[12]. However, low modulation frequencies can be chosen, at the expense of fewer ballistic or snake-like photons, to meet the requirements of available imaging systems[13]. Sudarsanam et al. used low frequencies to demonstrate imaging through spherical polydisperse scatterers, with the demodulation performed by quadrature lock-in discrimination (QLD)[13]. An instantaneous all-optical single-shot technique demonstrated demodulation at higher frequencies (5 kHz) up to the radio frequency range[14]. However, that technique has a few shortcomings, such as a smaller field of view, the increased cost of optical elements, and system complexity.

    Imaging through real fog has been realized over hectometric distances to validate the performance of the QLD technique[15]. In our previous work, we developed a tracking method for active light beacons to realize underwater docking in highly turbid water. The QLD technique was employed to lock onto the blinking frequency of the light beacons located at the docking station and to successfully suppress the effect of unwanted light and stray noise at other frequencies[16]. Recently, imaging through flame and smoke was demonstrated by employing a blue light-emitting diode (LED) and the QLD algorithm[17]. Although the QLD algorithm is well studied for imaging through scattering media such as polydisperse scatterers, real fog, smoke, and flame, it has not been thoroughly investigated for underwater image restoration where an LED is used to illuminate the target object.

    In this Letter, we present a novel underwater image recovery method based on a cascade of methods. It benefits from the strengths of an image restoration method, the traditional QLD technique, to improve visibility and mitigate noise. A high-speed QLD method is proposed to help implement our cascaded approach in real-world systems. A well-known image enhancement method, the multiscale retinex (MSR) technique, is employed to recover the contrast of the output image. In the MSR, a multiscale guided filter is used instead of the multiscale Gaussian filter to avoid halo artifacts at the boundaries, information loss, and blurring of the output image. Additionally, an adaptive illumination correction algorithm is optimized and incorporated to overcome non-uniform illumination in the output image. A weighted fusion method is then developed to obtain the final enhanced output image.

    2. Proposed Method

    The proposed approach consists of three main steps described in the following sections.

    2.1. High-speed quadrature lock-in discrimination algorithm

    The quadrature lock-in discrimination technique works on the principle of a lock-in amplifier. Consider captured light modulated at a frequency $f_m$ (Hz) with modulation index $M$; the intensity at the receiver is written as $I_r(t) = I_{\mathrm{avg}}[1 + M\sin(2\pi f_m t)]$, where $I_{\mathrm{avg}}$ is the average received intensity. When the signal is multiplied by a sine wave at the known modulating frequency with a relative phase $\Delta\phi$, followed by time averaging over a few cycles, one obtains the in-phase component $I = A\cos\Delta\phi$. Likewise, multiplying the signal by a cosine wave at the known modulating frequency and time averaging over a few cycles gives the quadrature component $Q = A\sin\Delta\phi$. The two components can be squared and added to retrieve the amplitude $A = \sqrt{I^2 + Q^2}$, and the relative phase difference $\Delta\phi = \arctan(Q/I)$ between the source and the detector can also be obtained. In our experiments, we used a scientific complementary metal-oxide-semiconductor (sCMOS) camera to capture the signal as 2D images of the scene over a certain length of time. The images are then processed offline by the QLD algorithm, implemented in MATLAB, to reconstruct an output image by computing the amplitude of the received signal at each pixel.
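    As a concrete illustration, below is a minimal MATLAB sketch of this per-pixel lock-in demodulation. The variable names (frames, fm, fs) and the frame-stack layout are our assumptions for illustration, not code from the paper.

```matlab
% Per-pixel lock-in demodulation sketch; 'frames' is assumed to be a
% rows x cols x L stack of captured images sampled at frame rate fs.
fm = 37;                                  % modulation frequency (Hz), e.g., as in Section 3
fs = 4 * fm;                              % camera frame rate locked to 4*fm
L  = size(frames, 3);
t  = reshape((0:L-1) / fs, 1, 1, L);      % time stamp of each frame
I  = mean(frames .* sin(2*pi*fm*t), 3);   % in-phase component, A*cos(dphi)
Q  = mean(frames .* cos(2*pi*fm*t), 3);   % quadrature component, A*sin(dphi)
A  = sqrt(I.^2 + Q.^2);                   % recovered amplitude image
dphi = atan2(Q, I);                       % relative phase map
```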

    In the traditional QLD method, the frame rate (or sampling frequency $f_s$) of the camera is $N$ times the modulation frequency, i.e., $f_s = N \times f_m$, where $N \ge 2$ in accordance with the Nyquist sampling criterion. When the multiple is four (i.e., $N = 4$), periodic sequences of sine and cosine samples can be written as $S = [0, 1, 0, -1]$ and $C = [1, 0, -1, 0]$, respectively, and the $I$ and $Q$ components can be written in the form of Eqs. (1) and (2)[18]:

    $$I = \mathrm{Im}_{M \times L} \times [\,0\ \ 1\ \ 0\ \ {-1}\ \ \cdots\ \ 0\ \ 1\ \ 0\ \ {-1}\,]^{T}_{1 \times L}, \tag{1}$$

    $$Q = \mathrm{Im}_{M \times L} \times [\,1\ \ 0\ \ {-1}\ \ 0\ \ \cdots\ \ 1\ \ 0\ \ {-1}\ \ 0\,]^{T}_{1 \times L}, \tag{2}$$

    where $\mathrm{Im}_{M \times L}$ is a matrix whose subscript $M$ indicates the total number of pixels in an image, and $L$ is the number of captured images. The sine ($S$) and cosine ($C$) sequences are concatenated $L/4$ times in Eqs. (1) and (2), respectively, to calculate the $I$ and $Q$ components. Because half of the entries of each sequence are zero, the number of multiplications is reduced to half that of the traditional QLD method for the same multiple of the modulation frequency (i.e., $N = 4$). Additionally, the central processing unit (CPU) does not need to allocate memory to store sine and cosine reference signals at the known frequency, which reduces the processor's burden and results in faster calculations. Higher values of $N$ are found to have a negligible effect on the quality of the QLD-processed image.
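    Under the same assumptions as the sketch above, the high-speed variant reduces to one matrix-vector product per component, as sketched below; the reshape into an $M \times L$ pixel matrix follows Eqs. (1) and (2).

```matlab
% High-speed QLD sketch for N = 4; L must be a multiple of 4. Only frames
% aligned with the non-zero entries of S and C contribute to the products.
[rows, cols, L] = size(frames);
Im = reshape(frames, rows*cols, L);       % M x L pixel matrix, M = rows*cols
S  = repmat([0 1 0 -1], 1, L/4).';        % concatenated sine sequence, Eq. (1)
C  = repmat([1 0 -1 0], 1, L/4).';        % concatenated cosine sequence, Eq. (2)
I  = Im * S;                              % in-phase component of every pixel
Q  = Im * C;                              % quadrature component of every pixel
A  = reshape(sqrt(I.^2 + Q.^2), rows, cols);  % QLD amplitude image
```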

    2.2. Multiscale retinex method

    Since the underwater image is degraded by low contrast and uneven illumination, the retinex method is used to overcome these problems[19]. Retinex theory states that the perceived image can be decomposed into illumination and reflection images, as shown in Eq. (3):

    $$I_{\mathrm{QLD}}(x, y) = I_L(x, y) \times R(x, y), \tag{3}$$

    where $I_{\mathrm{QLD}}(x, y)$ is the input image to the retinex method, and $I_L(x, y)$ and $R(x, y)$ denote the illumination and reflection images, respectively. The conventional retinex algorithm uses Gaussian filtering of the perceived image to obtain the illumination image. Later versions proposed the multiscale retinex (MSR) method, which employs a multiscale Gaussian filter with different weights to recover the local dynamics and contrast of the image more efficiently[20]:

    $$I_{L_i}(x, y) = I_{\mathrm{QLD}}(x, y) * G_i(x, y), \tag{4}$$

    where $*$ denotes the convolution operator, and the illumination image at the $i$th scale, $I_{L_i}(x, y)$, is approximated from $I_{\mathrm{QLD}}(x, y)$ by convolving it with a Gaussian filter of the $i$th scale. $G_i(x, y)$ is a multiscale Gaussian filter with standard deviation $\sigma_i$:

    $$G_i(x, y) = \frac{1}{2\pi\sigma_i^2} \exp\left(-\frac{x^2 + y^2}{2\sigma_i^2}\right), \tag{5}$$

    $$r_{\mathrm{MSR}}(x, y) = \sum_{i=1}^{n} w_i \left\{\log[I_{\mathrm{QLD}}(x, y)] - \log[I_{L_i}(x, y)]\right\}, \tag{6}$$

    where $r_{\mathrm{MSR}}(x, y)$ is the logarithm of $R_{\mathrm{MSR}}(x, y)$; $w_i$ is the weighting factor, which should add up to 1.0; and $n$ is the number of scales. We used MSR along with the guided filter to avoid the relatively low contrast and the halo artifacts at the edges of the image obtained with the traditional MSR method[21]. Three different window sizes of the guided filter, 15 × 15, 25 × 25, and 40 × 40, corresponding to $n = 3$, are used in our experiments.
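    A minimal sketch of this step is given below, assuming the QLD amplitude image Iqld has been normalized to the range (0, 1] and using the Image Processing Toolbox routine imguidedfilter (self-guided) in place of the Gaussian filter.

```matlab
% MSR with a multiscale guided filter, per Eqs. (4) and (6); the window
% sizes and equal weights follow the paper's n = 3 configuration.
win = [15 25 40];                         % guided-filter window sizes
w   = [1 1 1] / 3;                        % weights summing to 1.0
r   = zeros(size(Iqld));
for i = 1:numel(win)
    ILi = imguidedfilter(Iqld, 'NeighborhoodSize', win(i) * [1 1]);  % illumination at scale i
    r   = r + w(i) * (log(Iqld) - log(ILi));                         % accumulate Eq. (6)
end
Rmsr = exp(r);                            % reflection image, back from the log domain
```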

    Gaussian filtering of the resulting illumination component is performed at the largest scale to remove noise. We used the simplest color balancing algorithm[22] as a post-processing method; it clips a certain proportion of pixels on either side of the image histogram and stretches the remaining values to the widest possible range, [0, 255].
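    A sketch of this post-processing step follows; the 1% clip fraction at each tail is our illustrative assumption, as the paper does not state the proportion used.

```matlab
% Simplest color balance sketch: clip clipFrac of the pixels at each end
% of the histogram, then stretch the remainder to [0, 255].
function out = simplestColorBalance(img, clipFrac)    % e.g., clipFrac = 0.01 (assumed)
    v   = sort(img(:));
    lo  = v(max(1, round(clipFrac * numel(v))));      % lower clipping bound
    hi  = v(round((1 - clipFrac) * numel(v)));        % upper clipping bound
    out = 255 * (min(max(img, lo), hi) - lo) / (hi - lo);  % stretch to [0, 255]
end
```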

    2.3. Improved bilateral gamma function for adaptive intensity correction

    The resulting image from the previous step is still affected by uneven illumination, especially at high turbidity levels. To reduce this effect, we employed an improved bilateral gamma function in the adaptive intensity correction algorithm[23] to adaptively update the illumination component from the previous step. The equations of the improved bilateral gamma function are as follows:

    $$O_h(x, y) = 255 \cdot \left(\frac{R_{\mathrm{MSR}}(x, y)}{255}\right)^{\gamma}, \tag{7}$$

    $$O_l(x, y) = 255 \cdot \left[1 - \left(\frac{255 - R_{\mathrm{MSR}}(x, y)}{255}\right)^{\gamma}\right], \tag{8}$$

    $$\gamma = \gamma_0^{\,|\mu - I_L(x, y)|/\mu}, \tag{9}$$

    $$I_{\mathrm{adpt}}(x, y) = \alpha \cdot O_h(x, y) + (1 - \alpha) \cdot O_l(x, y), \qquad \alpha = \begin{cases} 1, & I_L(x, y) \le \mu, \\ 0, & I_L(x, y) > \mu, \end{cases} \tag{10}$$

    where $\mu$ is the mean of the illumination image, and $\alpha$ is a binary subsection correction parameter that takes the value 0 or 1. When a pixel value $(x, y)$ of the illumination image is less than or equal to $\mu$, the output of the improved bilateral gamma function $I_{\mathrm{adpt}}(x, y)$ is the gamma function $O_h(x, y)$, which implies that the intensity value of the pixel $(x, y)$ is increased for low-illumination pixels. If the pixel value $(x, y)$ of the illumination image is greater than $\mu$, the output is the gamma function $O_l(x, y)$, which reduces the intensity values of high-illumination pixels. The parameter $\gamma$ varies dynamically and is controlled by the distribution characteristics of the illumination image, which enables adaptive correction of the nonuniformly illuminated underwater image. The base $\gamma_0$ is chosen to be 0.8 since it gives the best illumination distribution for each turbidity level and for the different target objects in our experiments.
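    A vectorized sketch of Eqs. (7)-(10) is given below, assuming the reflection image Rmsr and the illumination image IL are both scaled to [0, 255]; reading Eq. (9) with $\gamma_0$ as the base of a per-pixel exponent is our interpretation.

```matlab
% Adaptive bilateral gamma correction, Eqs. (7)-(10); gamma0 = 0.8 as in
% the paper.
gamma0 = 0.8;
mu     = mean(IL(:));                         % mean of the illumination image
gam    = gamma0 .^ (abs(mu - IL) ./ mu);      % per-pixel adaptive gamma, Eq. (9)
Oh     = 255 .* (Rmsr ./ 255) .^ gam;         % brightens low-illumination pixels, Eq. (7)
Ol     = 255 .* (1 - ((255 - Rmsr) ./ 255) .^ gam);  % darkens high-illumination pixels, Eq. (8)
alpha  = double(IL <= mu);                    % binary subsection parameter
Iadpt  = alpha .* Oh + (1 - alpha) .* Ol;     % corrected image, Eq. (10)
```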

    We propose an adaptive illumination correction algorithm for underwater images based on the bilateral gamma function, using both the reflection and illumination images. The gamma-corrected illumination image $I_{L_c}(x, y)$ is calculated and added back to the reflection image to restore the naturalness of the image[24]. The corrected reflection image $I_c(x, y)$ is expressed as

    $$I_c(x, y) = R_{\mathrm{MSR}}(x, y) \times I_{L_c}(x, y). \tag{11}$$

    The final output image is the weighted fusion of the two illumination-corrected images:

    $$I_{\mathrm{out}}(x, y) = \beta \cdot I_c(x, y) + (1 - \beta) \cdot I_{\mathrm{adpt}}(x, y). \tag{12}$$
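    Continuing the sketch above, one possible reading of these final two steps is the following; the gamma-corrected illumination ILc applies the same bilateral gamma function to IL, and intensity-scale normalization between steps is omitted for brevity.

```matlab
% Final fusion sketch, Eqs. (11) and (12); alpha and gam come from the
% previous block, and beta = 0.5 as chosen in the experiments.
ILc  = alpha .* (255 .* (IL ./ 255) .^ gam) ...
     + (1 - alpha) .* (255 .* (1 - ((255 - IL) ./ 255) .^ gam));  % corrected illumination
Ic   = Rmsr .* ILc;                       % illumination added back, Eq. (11)
beta = 0.5;
Iout = beta .* Ic + (1 - beta) .* Iadpt;  % weighted fusion, Eq. (12)
```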

    We chose β=0.5 in our experiments. The flowchart of the proposed method is shown in Fig. 1.


    Figure 1. Flowchart of the proposed method.

    3. Experiments and Results

    The experimental setup is shown in Fig. 2. We used a 625-nm red LED (M625L4) as the light source, and the current through the LED is modulated (modulation index M = 1.43) using the internal sinusoidal modulation function. Two target objects, a Rubik's cube and a rubber toy, are used underwater, with the corresponding modulation frequencies of the LED adjusted to 37 Hz and 38 Hz, respectively. The modulated LED illuminates the target, and the image, formed by the reflection of light from the target, is captured by a camera (16-bit Dhyana 400BSI sCMOS). The volume of the transparent water tank is 38 cm × 25 cm × 26 cm. We added up to 21 mL of milk into the water tank to simulate a high-turbidity environment. The distance between the target object and the camera is 90 cm. The frame rate of the camera is set to four times the modulating frequency of the LED. The images are captured over a duration of 2 s; we did not observe any improvement in the final results for a longer time series.


    Figure 2. Experimental setup.

    To demonstrate that our approach can realize image restoration, we used an image of a rubber toy, which is a multi-level gray target and is more prone to degradation caused by noise and turbidity. The performance comparison of our approach with other traditional image restoration and image enhancement methods is shown in Figs. 3 and 5. It is important to mention that time averaging over 100 images is performed to minimize the effect of noise prior to applying the traditional methods. It can be seen in Fig. 3 that the grayscale span of the CLAHE output image is more widely distributed and the overall contrast is enhanced. Despite the improved visibility, CLAHE cannot overcome uneven illumination and performs poorly on high-turbidity images. The guided filter used in the MSR method contributes to the high contrast and the elimination of halo artifacts along the boundaries. The adaptive gamma correction adjusts the illumination adaptively by increasing the intensity in low-illumination areas and decreasing it in high-illumination areas. Thus, the shadow caused by uneven illumination is largely eliminated, and the results show that our method is more efficient in a high-turbidity environment. We select a zoomed-in region of the high-turbidity image, and the intensity profiles along the colored dashed lines in the zoomed-in view of Fig. 3 are plotted in Fig. 4. For a fair comparison, the minimum grayscale intensity value of each curve is subtracted from the original values to shift its lowest point to the horizontal axis. The intensity profile of a clear image (captured in clear water) is also plotted. It can be seen that the trend of the curve for our method is similar to that of the clear image. Furthermore, the proposed approach has the highest contrast and signal-to-noise ratio (SNR) compared with the other methods. The MSR method suffers from loss of details, bleaching of image information, and lower contrast compared to our method, while DCP and DehazeNet[25] suffer from low contrast and uneven illumination.


    Figure 3. Comparison results of different methods for the images captured in low turbidity (first row) and high turbidity (second row) and the zoomed-in views (of high turbidity, third row), using the rubber toy as the target object.


    Figure 4. Intensity profiles at the colored dashed lines in the zoomed-in view of Fig. 3.


    Figure 5. Comparison results of different methods for the images captured in low turbidity (first row) and high turbidity (second row) and the zoomed-in views (of high turbidity, third row), using the Rubik's cube as the target object.

    The universality of the proposed method is verified by imaging the Rubik's cube, with handwritten words on it, as a target. The results of different methods at different turbidities and their zoomed-in views are shown in Fig. 5. We compute various image quality metrics for the zoomed-in views in Fig. 5 to quantify and compare the image quality of the various methods in the absence of reference images. The metrics include the standard deviation (STD), the peak signal-to-noise ratio (PSNR), the average gradient (AG)[26], the entropy[27], the measure of enhancement (EME)[28], the blind/referenceless image spatial quality evaluator (BRISQUE)[29], and the natural image quality evaluator (NIQE)[30]. The values of these metrics are reported in Table 1. One may find that our method achieved the best values for most of the metrics, which further supports the effectiveness of the proposed method over the other traditional methods.


      Table 1. Quantitative Comparison of Zoomed-in View of Fig. 5

       Method            Entropy   AG            STD     PSNR (dB)   BRISQUE   NIQE    EME
       Intensity image   6.69      3.35 × 10⁻²   0.105   11.97       40.12     19.49   3.86
       QLD               6.82      1.7 × 10⁻³    0.124   11.03       43.21     12.34   0.47
       CLAHE             5.997     1.6 × 10⁻²    0.072   12.84       35.182    11.7    2.79
       DCP               7.11      1.5 × 10⁻²    0.173   8.128       35.956    11.87   4.877
       MSR               7.35      1.05 × 10⁻¹   0.194   14.49       42.301    14.39   18.95
       DehazeNet         7.51      7.5 × 10⁻³    0.199   10.316      23.594    9.617   1.952
       Ours              7.17      6.51 × 10⁻²   0.26    16.93       33.28     4.798   6.49
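    For reference, most of the metrics in Table 1 can be computed with standard Image Processing Toolbox routines, as sketched below; ref denotes the clear-water image assumed as the PSNR reference, and the AG formula shown is one common definition (the paper follows [26] for its exact form).

```matlab
% Image quality metric sketch; img and ref are grayscale images on a
% common intensity scale.
e = entropy(img);                 % Shannon entropy
s = std2(img);                    % standard deviation (STD)
p = psnr(img, ref);               % peak signal-to-noise ratio (dB)
b = brisque(img);                 % BRISQUE score (lower is better)
q = niqe(img);                    % NIQE score (lower is better)
[gx, gy] = imgradientxy(img);     % horizontal and vertical gradients
ag = mean2(sqrt((gx.^2 + gy.^2) / 2));   % average gradient (AG), one common form
```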

    The performance of the proposed method is also evaluated for the modulated light source with different modulation indexes. The zoomed-in parts of the high-turbidity images of the target object from Fig. 5 (Rubik's cube) are used to evaluate the effect, with the PSNR as the evaluation metric. Two modulation indexes were chosen, M = 1.43 and M = 0.66. The PSNR values of the recovered images are shown at the top of the corresponding images in Fig. 6. It is clear that the smaller modulation index leads to poorer image recovery for high-turbidity images: the PSNR value decreases by 1.42 dB. We deduce that higher values of the modulation index yield better quality of the recovered image in our experiments.


    Figure 6. Comparison results for different modulation indexes in a high-turbidity environment.

    4. Conclusion

    We present a three-stage processing method for recovering underwater images. First, we preprocessed the series of images of the scene, illuminated with a modulated light source, using the high-speed QLD technique at the known modulating frequency. The QLD technique reduced the noise caused by turbidity and increased the visibility of the scene by selecting the small fraction of ballistic photons. Next, we performed retinex enhancement using a guided filter to separate the illumination component, which restores the contrast of the image and reduces uneven illumination at the cost of increased processing noise. The proposed approach makes better use of the retinex method to improve contrast by operating on a QLD-processed image instead of the original underwater image. Finally, the bilateral gamma function for adaptive illumination correction improves the visual quality by reducing over-exposure effects and thereby preserving details in the image. The results show that our method has distinct benefits in contrast enhancement, detail recovery, uneven illumination correction, and noise reduction in both low- and high-turbidity environments.

    [1] F. M. Caimi, D. M. Kocak, F. Dalgleish, J. Watson. Underwater imaging and optics: recent advances. OCEANS, 1(2008).

    [4] K. He, J. Sun, X. Tang. Single image haze removal using dark channel prior. IEEE Conference on Computer Vision and Pattern Recognition, 1956(2009).

    [7] A. K. Jain. Fundamentals of Digital Image Processing(1989).

    [8] K. J. Zuiderveld. Contrast limited adaptive histogram equalization. Graphics Gems, 474(1994).

    [23] D. Wang, W. Yan, T. Zhu, Y. Xie, H. Song, X. Hu. An adaptive correction algorithm for non-uniform illumination panoramic images based on the improved bilateral gamma function. International Conference on Digital Image Computing: Techniques and Applications (DICTA), 1(2017).

    Paper Information

    Category: Imaging Systems and Image Processing

    Received: Mar. 5, 2023

    Accepted: Jun. 9, 2023

    Posted: Jun. 9, 2023

    Published Online: Oct. 11, 2023

    The Author Email: Jing Xu (jxu-optics@zju.edu.cn)

    DOI:10.3788/COL202321.101102
