Chinese Journal of Lasers, Volume. 52, Issue 3, 0307105(2025)

Lightweight Brain Tumor Segmentation Using Semantic Flow and Scale Perception

Chuanqiang Liu1, Xiaoqi Lü1,2,*, Jing Li1, and Yu Gu1
Author Affiliations
  • 1School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, Inner Mongolia, China
  • 2College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010051, Inner Mongolia, China

    Objective

    Brain tumors are highly lethal cancers arising in human brain tissue, with glioma, which originates from glial cells, being among the most prevalent types. Malignant brain tumors can damage normal brain tissue and compress key neural pathways, which may lead to symptoms such as headaches, seizures, vision loss, and limb weakness, significantly affecting patients' quality of life. Early detection and treatment are therefore crucial for managing the patient's condition. Traditional manual segmentation is time-consuming, labor-intensive, and requires professional expertise; consequently, semiautomatic and fully automatic brain tumor segmentation methods are being actively developed. In recent years, owing to its strength in image feature extraction, the convolutional neural network (CNN) has rapidly gained attention in medical imaging. Classic models such as U-Net and V-Net capture local and global features well and handle three-dimensional (3D) data but exhibit considerable computational complexity. Improved models raise segmentation accuracy through attention mechanisms and hybrid architectures but frequently suffer from high memory consumption and slow training. Lightweight networks substantially reduce computational cost by optimizing convolutional structures and reducing parameter counts, making them suitable for resource-constrained scenarios; however, they fall short in segmentation detail and contextual modeling. Research on improved brain tumor segmentation networks is therefore crucial for intelligent healthcare and holds substantial clinical application potential.

    Methods

    This study proposes a lightweight brain tumor segmentation network that balances global and local information to deliver high-precision segmentation with fewer parameters. The network is based on the U-Net architecture and introduces a semantic flow feature alignment mechanism in place of traditional skip connections: by learning a flow field, the encoder and decoder feature maps are spatially aligned so that semantic information and spatial details are preserved during fusion. In the feature extraction stage, the network adopts hierarchical decoupled convolution units as the basic module and introduces shallow scale perception modules as auxiliary branches to integrate multiscale contextual information and adaptively adjust features. The scale perception module comprises two parts: multi-head mixed convolution (MHXC) and scale-aware aggregation (SAA). MHXC combines the multi-head attention mechanism with multiscale residual convolution, effectively uniting the global modeling ability of self-attention with the local feature extraction ability of the convolutional network. SAA dynamically fuses multiscale features and adaptively shifts attention toward large-scale or intricate information according to regional attributes, producing more discriminative feature representations. For deep feature extraction, an improved hierarchical decoupled convolution unit combined with multiscale convolution further enhances feature capture while maintaining low computational complexity.
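    The flow-based alignment described above can be sketched as follows. This is a minimal NumPy illustration of the warping step only, on a single 2D feature map; the function name `flow_warp` and all shapes are hypothetical illustrations, not taken from the paper, and the actual network would predict `flow` with a learned convolution and operate on 3D volumes.

```python
import numpy as np

def flow_warp(feat, flow):
    """Bilinearly sample `feat` (C, H, W) at positions offset by `flow` (2, H, W).

    flow[0] holds x-offsets and flow[1] y-offsets, in pixels. Sampling the
    encoder feature at these shifted positions aligns it spatially with the
    decoder feature before fusion.
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling coordinates: identity grid plus the learned offsets, clipped
    # to the image border.
    sx = np.clip(xs + flow[0], 0, W - 1)
    sy = np.clip(ys + flow[1], 0, H - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = sx - x0, sy - y0
    # Standard bilinear interpolation of the four neighboring samples.
    return (feat[:, y0, x0] * (1 - wx) * (1 - wy)
            + feat[:, y0, x1] * wx * (1 - wy)
            + feat[:, y1, x0] * (1 - wx) * wy
            + feat[:, y1, x1] * wx * wy)
```

    With a zero flow field the warp is the identity, and a constant flow of +1 in x shifts every sample one pixel to the right, which makes the operation easy to sanity-check.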

    Results and Discussions

    We performed comparative experiments against other networks on the BraTS2020 dataset and generalization experiments on the BraTS2018 and BraTS2019 datasets. On BraTS2020, compared with two classical networks (Table 3), our network achieves significantly higher Dice indices in the enhancing tumor (ET), whole tumor (WT), and tumor core (TC) regions, and its 95% Hausdorff distance is significantly lower. In the comparison with four popular brain tumor segmentation networks, the dResUnet model achieves the highest Dice index in the ET region, SwinBTS the lowest, and the proposed method a moderate result. In the WT region, the Dice index of SAHNet is 0.36 percentage points above the average accuracy, whereas in the TC region it is about 2.83 percentage points higher than SwinBTS and about 1.08 percentage points lower than ASTNet. The overall performance of our network is at a medium-to-high level, and its parameter count is considerably smaller than those of the other four networks. Among lightweight networks, the segmentation performance of our network is superior to that of the other four methods. Compared with AD-Net, the Dice index increases by about 1.12 percentage points in the ET region, remains the same in the WT region, and increases by 2.60 percentage points in the TC region, while the parameter count drops dramatically. Compared with the DMF network, our model outperforms on all three indicators with far fewer parameters. Compared with the HDC network, our network has more parameters, but its Dice index increases by 0.43, 0.36, and 1.92 percentage points in the ET, WT, and TC regions, respectively. In the ET and WT regions, the Dice index of our network is slightly lower than that of HMNet, whereas in the TC region it exceeds HMNet by 1.93 percentage points.
This indicates that the segmentation performance of the proposed network in the TC region is much higher than that of other lightweight networks, while remaining comparable in the ET and WT regions. Our network emphasizes detailed information without sacrificing other accuracies, yielding more accurate segmentation than the other networks. The average segmentation accuracy of our network on the BraTS2018 and BraTS2019 datasets reached 85.83% and 83.76%, respectively, demonstrating good generalization ability compared with other lightweight segmentation methods (Table 4).

    Conclusions

    The SAHNet model proposed in this study adopts a hierarchical decoupled convolution module as its basic feature extraction module, which is more lightweight than traditional convolutions while maintaining accuracy. The hierarchical decoupled convolution is further improved into a multiscale hierarchical decoupled convolution to enhance the expressive power of the model. The feature alignment module, guided by semantic flow and applied to the skip connections, effectively improves feature alignment by generating flow fields and spatially warping features. In the scale perception module, local residual convolution in MHXC expands the receptive field of the convolution, enabling it to capture richer contextual information at different scales while preserving local features. The SAA module divides feature information into multiple groups and performs cross-group fusion through lightweight 1×1×1 convolutions to achieve global information exchange. Experiments on BraTS2018, BraTS2019, and BraTS2020 show that our method not only outperforms other lightweight networks in segmentation accuracy but also, owing to its lightweight design, offers better deployment potential on resource-limited devices, promising more efficient solutions for practical clinical applications.
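    Because a 1×1×1 convolution acts independently at every voxel, the cross-group fusion in the SAA module reduces, per voxel, to a single channel-mixing matrix multiply. The sketch below illustrates this equivalence in NumPy; the function name `scale_aware_aggregate` and the shapes are hypothetical illustrations, not the paper's implementation, which would also include the learned attention weighting.

```python
import numpy as np

def scale_aware_aggregate(feats, weight):
    """Cross-group fusion via a pointwise (1x1x1) convolution.

    feats:  list of G group features, each of shape (Cg, D, H, W)
    weight: (C_out, G*Cg) mixing matrix -- the 1x1x1 conv kernel flattened
            over its input channels.
    """
    x = np.concatenate(feats, axis=0)        # stack groups: (G*Cg, D, H, W)
    C, D, H, W = x.shape
    # A 1x1x1 conv has no spatial extent, so it is exactly one matrix
    # multiply applied to the channel vector at each voxel.
    out = weight @ x.reshape(C, -1)
    return out.reshape(weight.shape[0], D, H, W)
```

    An identity mixing matrix returns the concatenated groups unchanged, while a general matrix lets every output channel draw on all groups, which is the "global information exchange" the module is after.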




    Chuanqiang Liu, Xiaoqi Lü, Jing Li, Yu Gu. Lightweight Brain Tumor Segmentation Using Semantic Flow and Scale Perception[J]. Chinese Journal of Lasers, 2025, 52(3): 0307105


    Paper Information

    Category: Biomedical Optical Imaging

    Received: Oct. 9, 2024

    Accepted: Nov. 22, 2024

    Published Online: Jan. 17, 2025

    The Author Email: Lü Xiaoqi (lxiaoqi@imut.edu.cn)

    DOI:10.3788/CJL241254

    CSTR:32183.14.CJL241254
