Advanced Photonics, Volume 6, Issue 6, 064001 (2024)
Cross-modality transformations in biological microscopy enabled by deep learning
Fig. 1. Applications of cross-modality transformations across biological scales. At the largest scales, virtual staining is used to enhance imaging contrast. At intermediate scales, virtual staining is used in conjunction with noise reduction techniques. At the smallest scales, superresolution is used to study systems far beyond the optical diffraction limit. Image created with the assistance of BioRender.
Fig. 2. Contrast between physical and virtual approaches to obtain a stained image. In the physical approach, the sample undergoes a series of complex procedures, including preparation, staining, and imaging. Tissue preparation may involve fixing, embedding, and sectioning, among other steps. Similarly, histological staining of an unstained sample requires permeabilization, chemical dye application, washing, counterstaining, and protocol optimization before imaging. In contrast, virtual staining offers a simplified alternative to these protocols, eliminating the need for physical processing37 or staining of the sample.38 In the virtual approach, an unaltered or unstained sample is processed through a virtual staining network to generate a stained image, with results equivalent to physical staining. Physically stained images serve as training data, or input, for the model, especially when transforming between different stains is the objective. Created with the assistance of BioRender. (Tissue image adapted from Berkshire Community College Bioscience Image Library.)
Fig. 3. Representative applications of cross-modality transformations for tissue imaging using DL. (a) Virtual staining of an unlabeled sample image to obtain the equivalent H&E stained image. Adapted from Rana et al.39 (b) Stain-to-stain translation, where the input and output are images from two different staining procedures, in this case H&E to IHC staining for cytokeratin (CK). Adapted from Hong et al.48 (c) Multi-stain model that can transform unlabeled tissue images into several staining options simultaneously: H&E, orcein, and PSR. Adapted from Li et al.40 (d) Cross-modality transform to apply a segmentation method, or potentially a stain, in a previously incompatible modality. In this case, an AI segmentation trained on MRI images is transferred to CT images. Adapted from Dou et al.49 (e) Biopsy-free cross-modality transformation, where not only the staining procedure but also the sample preparation is avoided. Using CRM as a noninvasive technique for
Fig. 4. Virtual cell staining using DL. (a) Helgadottir et al. introduced a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes). The U-Net-based generator processes bright-field image stacks captured at various
Fig. 5. Superresolution physical principles. (a) Illustration of the PSF resulting from imaging an object of diameter
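The diffraction limit that superresolution methods overcome can be quantified with the standard Abbe and Rayleigh criteria. The following minimal sketch (not from the review; the wavelength and numerical aperture values are illustrative assumptions) computes both limits for a typical fluorescence-imaging configuration:

```python
def abbe_limit(wavelength_m: float, numerical_aperture: float) -> float:
    """Abbe diffraction limit: d = lambda / (2 * NA)."""
    return wavelength_m / (2 * numerical_aperture)

def rayleigh_limit(wavelength_m: float, numerical_aperture: float) -> float:
    """Rayleigh criterion for resolving two point sources: d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_m / numerical_aperture

# Example: green emission (520 nm) imaged through a 1.4-NA oil-immersion objective.
wavelength = 520e-9  # meters
na = 1.4
print(f"Abbe limit:     {abbe_limit(wavelength, na) * 1e9:.0f} nm")      # ~186 nm
print(f"Rayleigh limit: {rayleigh_limit(wavelength, na) * 1e9:.0f} nm")  # ~227 nm
```

Structures smaller than roughly 200 nm, such as the systems referenced at the smallest scales in Fig. 1, therefore cannot be resolved by conventional optics, which is the gap superresolution approaches address.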
Fig. 6. Superresolution applied architecture. The superresolution network enhances image resolution by training on pairs of simulated low-resolution (LR) and high-resolution ground-truth images, or on wide-field (WF) and STORM images from a STORM microscope. First, the LR/WF image is preprocessed by a subpixel edge detector to generate an edge map; both the image and the edge map serve as inputs to the network. Training is guided by a multi-component loss function comprising: a multiscale structural similarity index measure and mean absolute error loss (MS-SSIM L1), which captures pixel-level accuracy between the superresolution (SR) and ground-truth/STORM images through multiscale similarity and mean absolute error; a perceptual loss, which assesses feature-map differences via the visual geometry group network; an adversarial loss, which uses a U-Net discriminator to differentiate ground-truth/STORM images from SR images; and a frequency loss, which compares the frequency spectra of the SR and ground-truth/STORM images within a specific frequency range using the fast Fourier transform. This comprehensive loss function helps the superresolution network achieve precise and perceptually accurate superresolution imaging. Image adapted from Chen et al.147
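The multi-component loss described in the caption can be illustrated with a simplified sketch. This is not the authors' code: the perceptual and adversarial terms require trained networks (VGG features and a U-Net discriminator), so only the two directly computable terms are shown, namely the mean absolute error component of MS-SSIM L1 and a frequency loss over a band of the FFT magnitude spectrum. All weights and the frequency band are hypothetical.

```python
import numpy as np

def l1_loss(sr, gt):
    """Pixel-level mean absolute error between the SR output and ground truth."""
    return np.mean(np.abs(sr - gt))

def frequency_loss(sr, gt, band=(0.1, 0.5)):
    """Compare FFT magnitude spectra inside a normalized radial frequency band."""
    F_sr = np.fft.fftshift(np.fft.fft2(sr))
    F_gt = np.fft.fftshift(np.fft.fft2(gt))
    h, w = sr.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.sqrt(fx**2 + fy**2)          # normalized spatial frequency
    mask = (radius >= band[0]) & (radius <= band[1])
    return np.mean(np.abs(np.abs(F_sr[mask]) - np.abs(F_gt[mask])))

def total_loss(sr, gt, w_l1=1.0, w_freq=0.1):
    """Weighted sum of the computable terms; perceptual and adversarial
    terms would be added here in a full training setup."""
    return w_l1 * l1_loss(sr, gt) + w_freq * frequency_loss(sr, gt)

# Toy example: a "ground-truth" image versus a noisy reconstruction of it.
rng = np.random.default_rng(0)
gt = rng.random((64, 64))
sr = gt + 0.05 * rng.standard_normal((64, 64))
print(f"loss(noisy reconstruction) = {total_loss(sr, gt):.4f}")
print(f"loss(perfect reconstruction) = {total_loss(gt, gt):.4f}")  # exactly 0.0
```

Weighting several complementary terms in this way is what lets the network balance pixel fidelity (L1) against recovery of fine, high-frequency detail (frequency loss), rather than optimizing either alone.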
Fig. 7. Potential application perspectives of AI in biological sample imaging. Current developments found in the literature are shown in green boxes, while speculative prospects for the future are shown in yellow boxes. Starting from the top left, AI is extensively used in diagnostics, such as virtual staining and other cross-modality transforms (image in the green panel adapted from Li et al.187). (a) In the future, this could lead to
Dana Hassan, Jesús Domínguez, Benjamin Midtvedt, Henrik Klein Moberg, Jesús Pineda, Christoph Langhammer, Giovanni Volpe, Antoni Homs Corbera, Caroline B. Adiels, "Cross-modality transformations in biological microscopy enabled by deep learning," Adv. Photon. 6, 064001 (2024)
Category: Reviews
Received: Jun. 18, 2024
Accepted: Oct. 28, 2024
Posted: Oct. 28, 2024
Published Online: Nov. 29, 2024
The Author Email: Caroline B. Adiels (caroline.adiels@physics.gu.se)