Photonics Research, Volume 9, Issue 5, B168 (2021)
Deep-learning based denoising and reconstruction of super-resolution structured illumination microscopy images
Fig. 1. Schematics of the deep-learning CNN architectures of the two methods: SR-SIM image denoising (RED-fairSIM) and combined image denoising and reconstruction (SR-REDSIM). In both approaches, a stack of 15 raw (noisy) SIM images (three angles with five phases each) is used as input. The output is the reconstructed SR-SIM image. (a) SR-REDSIM is composed of three main blocks: the encoding block consists mainly of convolutional layers, the decoding block of deconvolutional layers, and the upsampling block of deconvolutional upsampling layers. (b) In the RED-fairSIM method, fairSIM is first used to computationally reconstruct noisy SR-SIM images, which are then propagated through RED-Net for denoising. The RED-Net architecture is composed of encoder and decoder blocks.
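To make the two data flows in Fig. 1 concrete, the following is a minimal PyTorch-style sketch of how a 15-frame raw SIM stack could be routed through either pipeline. The helper names (`stack_raw_frames`, `fairsim_reconstruct`) and the exact tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch

N_ANGLES, N_PHASES = 3, 5   # 3 orientations x 5 phases = 15 raw frames

def stack_raw_frames(frames):
    """Stack the 15 raw SIM frames into a single (1, 15, H, W) input tensor."""
    assert len(frames) == N_ANGLES * N_PHASES
    x = torch.stack([torch.as_tensor(f, dtype=torch.float32) for f in frames])
    return x.unsqueeze(0)

def sr_redsim(raw_frames, net):
    """SR-REDSIM: a single network maps the noisy raw stack directly to the SR image."""
    with torch.no_grad():
        return net(stack_raw_frames(raw_frames))       # denoised SR-SIM output

def red_fairsim(raw_frames, fairsim_reconstruct, rednet):
    """RED-fairSIM: classical fairSIM reconstruction first, RED-Net denoising second."""
    noisy_sr = fairsim_reconstruct(raw_frames)          # noisy SR-SIM image (placeholder call)
    x = torch.as_tensor(noisy_sr, dtype=torch.float32)[None, None]
    with torch.no_grad():
        return rednet(x)                                # denoised SR-SIM output
```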
Fig. 2. The architecture of the networks used in this work. (a) The SR-REDSIM architecture is composed of three different blocks. The encoding and decoding blocks contain 21 convolutional and deconvolutional layers, respectively, whereas the upsampling block consists of only two upsampling layers. This architecture was used in the SR-REDSIM method to denoise and reconstruct the raw SIM images. (b) The complete RED-Net architecture contains 15 convolutional and 15 deconvolutional layers along with additive symmetric skip connections. This architecture was used in the RED-fairSIM and preRED-fairSIM methods for denoising (in preRED-fairSIM, the input and output have a size of only
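As a rough illustration of the RED-Net building block in Fig. 2(b), the sketch below implements an encoder-decoder with additive symmetric skip connections in PyTorch. The 15 + 15 layer count follows the caption; the channel width and the every-second-layer skip spacing are assumptions made for readability, not the exact published model.

```python
import torch
import torch.nn as nn

class REDNetSketch(nn.Module):
    """Encoder-decoder denoiser with additive symmetric skip connections (sketch)."""

    def __init__(self, in_channels=1, features=64, num_pairs=15, skip_step=2):
        super().__init__()
        self.skip_step = skip_step
        # Encoder: num_pairs convolutional layers
        self.encoder = nn.ModuleList()
        for i in range(num_pairs):
            c_in = in_channels if i == 0 else features
            self.encoder.append(
                nn.Sequential(nn.Conv2d(c_in, features, 3, padding=1), nn.ReLU(inplace=True))
            )
        # Decoder: num_pairs deconvolutional (transposed convolution) layers
        self.decoder = nn.ModuleList()
        for i in range(num_pairs):
            c_out = in_channels if i == num_pairs - 1 else features
            layers = [nn.ConvTranspose2d(features, c_out, 3, padding=1)]
            if i < num_pairs - 1:
                layers.append(nn.ReLU(inplace=True))
            self.decoder.append(nn.Sequential(*layers))

    def forward(self, x):
        skips = []
        for i, layer in enumerate(self.encoder):
            x = layer(x)
            if (i + 1) % self.skip_step == 0:          # remember every 2nd encoder output
                skips.append(x)
        for i, layer in enumerate(self.decoder):
            x = layer(x)
            # Additive symmetric skip: add the mirrored encoder feature map
            if (i + 1) % self.skip_step == 0 and skips and x.shape[1] == skips[-1].shape[1]:
                x = x + skips.pop()
        return x
```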
Fig. 3. Super-resolution SIM (SR-SIM) images of three different test samples (U2OS osteosarcoma cells, tubulin cytoskeleton labeled with anti-tubulin-Alexa488) at a high noise level (timestamps 175–199; noise level 4). Each column represents a different reconstruction approach: fairSIM (first column), SR-REDSIM (second column), U-Net-fairSIM (third column), and RED-fairSIM (fourth column). The fifth column depicts the reconstructed reference images, which were generated by applying fairSIM image reconstruction to high-SNR image data at noise level 0 (the lowest noise level, i.e., timestamp 0). All reconstructed SR-SIM images have
Fig. 4. These SR-SIM images show the difference between the output of the SR-REDSIM and RED-fairSIM methods when applied to imaging conditions that the underlying network was not trained for. To evaluate the generalization capabilities of these methods, we again imaged tubulin structures (in U2OS cells), but with a different excitation wavelength. Here, the cell is illuminated by light with a wavelength of 642 nm instead of 488 nm (the latter was used for the images in the training set). The different wavelength also changes the spatial frequency of the SIM patterns. This cell structure with unseen illumination properties is then propagated through the pretrained models of both SR-REDSIM and RED-fairSIM. The resulting SR-SIM images show that RED-fairSIM is more robust against changed microscope settings than SR-REDSIM. Scale bar: 4 μm.
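For readers who want to reproduce this kind of generalization test, the following is a minimal sketch of pushing data recorded at 642 nm through a model trained only on 488 nm data. The checkpoint file names and the `run_pretrained` helper are hypothetical; only the overall procedure (load frozen weights, single forward pass, no retraining) follows the figure.

```python
import torch

def run_pretrained(model, checkpoint_path, input_tensor, device="cpu"):
    """Load frozen weights and run a single inference pass on unseen data."""
    state = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(state)
    model.eval()
    with torch.no_grad():
        return model(input_tensor.to(device))

# e.g., compare both pipelines on the same 642 nm field of view (paths are placeholders):
# sr_out  = run_pretrained(sr_redsim_net, "sr_redsim_488nm.pt", raw_stack_642nm)
# red_out = run_pretrained(rednet,        "rednet_488nm.pt",    fairsim_sr_642nm)
```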
Fig. 5. Reconstruction of SR-SIM images of two different test samples with the fairSIM, U-Net-fairSIM, and RED-fairSIM methods. Each column represents the results of the corresponding method. The first and third rows show the resulting SR-SIM images, whereas the second and fourth rows contain the extracted, enlarged ROIs from the full-size images in the rows directly above. The cell structures reconstructed by RED-fairSIM are smoother than those from U-Net-fairSIM and fairSIM (reference). Furthermore, they are more faithful than the U-Net-fairSIM results when the reference is taken as the “gold standard”. Scale bar: 4 μm.
Fig. 6. The complete pipeline of the preRED-fairSIM method. In this pipeline, the raw SIM images [
Fig. 7. Results for preRED-fairSIM. (a) Three blocks of images, each consisting of six images. The first block depicts the images from phase 0 and orientation 0, the second block from phase 1 and orientation 1, and the third block from phase 2 and orientation 2. The left image in the first row of each block represents a noisy raw SIM image from noise level 4. The second image in each block is the denoised version, whereas the reference image (rightmost in each block) is the ground truth. The Fourier spectrum of each image is shown directly below it. The dimensions of each image in these blocks are
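A minimal sketch of the per-frame order of operations behind Figs. 6 and 7 is given below, assuming a trained RED-Net model (`rednet`) and an external fairSIM reconstruction step (`fairsim_reconstruct`); both names are placeholders rather than the authors' API.

```python
import torch

def prered_fairsim(raw_frames, rednet, fairsim_reconstruct):
    """preRED-fairSIM sketch: denoise each raw SIM frame first, then reconstruct."""
    denoised = []
    with torch.no_grad():
        for frame in raw_frames:                        # 3 angles x 5 phases = 15 frames
            x = torch.as_tensor(frame, dtype=torch.float32)[None, None]
            denoised.append(rednet(x).squeeze().numpy())
    # Parameter estimation and SR reconstruction are then performed on the cleaned stack.
    return fairsim_reconstruct(denoised)
```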
Fig. 8. The reconstructed SR-SIM images of two test samples with different methods. The Fourier spectrum of each SR-SIM image is shown directly below it. Each image contains an enlarged ROI in the lower left. Analysis of the ROIs of all methods clearly shows that the results of RED-fairSIM (sixth column) are smoother and more faithful than those of all other methods. Similarly, the Fourier spectra of the RED-fairSIM results do not show any additional artifacts in Fourier space. The SR-SIM images and ROIs of SR-REDSIM (fifth column) also show good results; however, there are some artifacts in the high-frequency region of the Fourier spectrum. The ROIs of Hessian SIM (fourth column) do not outperform RED-fairSIM and SR-REDSIM, but do show better results than BM3D (third column). BM3D produces a suppressed cell structure in both of the resulting images. Furthermore, the Fourier spectrum of the BM3D result for the second test sample shows artifacts in both the low- and high-frequency regions. Scale bar: 4 μm.
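The Fourier-spectrum panels referred to above can be reproduced for any reconstruction with a few lines of NumPy; the sketch below is one plausible way to compute the displayed log-magnitude spectra and ROI crops, not necessarily the authors' exact display pipeline.

```python
import numpy as np

def log_power_spectrum(img):
    """Centered log-magnitude spectrum for visual inspection of reconstruction artifacts."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))

def crop_roi(img, y, x, size):
    """Extract a square ROI (shown enlarged in the lower left of each panel)."""
    return img[y:y + size, x:x + size]
```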
Fig. 9. Reconstructed SR-SIM images at different noise levels with the SR-REDSIM and RED-fairSIM methods for a single field of view. Noise level 0 represents the reference image at timestamp 0, noise level 1 comprises the images from timestamps 25–50, noise level 2 from timestamps 75–100, noise level 3 from timestamps 125–150, and noise level 4 from timestamps 175–200. Each full image contains an enlarged ROI in the lower left. The images reconstructed by fairSIM in the first row show a significant degradation in quality as the noise level increases. In contrast, the results produced by SR-REDSIM and RED-fairSIM in the second and third rows are far less dependent on the noise level. Scale bar: 4 μm.
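For completeness, a small helper that maps time-lapse timestamps onto the noise levels named in this caption might look as follows; the ranges are taken directly from the caption, and frames between the listed ranges are simply left unassigned.

```python
def noise_level(timestamp):
    """Map a time-lapse timestamp to the noise level defined in Fig. 9."""
    if timestamp == 0:
        return 0                      # reference frame, highest SNR
    ranges = [(25, 50), (75, 100), (125, 150), (175, 200)]
    for level, (lo, hi) in enumerate(ranges, start=1):
        if lo <= timestamp <= hi:
            return level
    return None                       # frames outside the listed ranges
```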
Zafran Hussain Shah, Marcel Müller, Tung-Cheng Wang, Philip Maurice Scheidig, Axel Schneider, Mark Schüttpelz, Thomas Huser, Wolfram Schenck, "Deep-learning based denoising and reconstruction of super-resolution structured illumination microscopy images," Photonics Res. 9, B168 (2021)
Special Issue: DEEP LEARNING IN PHOTONICS
Received: Dec. 1, 2020
Accepted: Jan. 26, 2021
Published Online: Apr. 19, 2021
Author emails: Thomas Huser (thomas.huser@physik.uni-bielefeld.de), Wolfram Schenck (wolfram.schenck@fh-bielefeld.de)