Journal of Optoelectronics · Laser, Vol. 36, Issue 7, 774 (2025)

A colorectal polyp segmentation algorithm integrating Transformer and dual graph convolution

LIANG Liming, KANG Ting*, ZHONG Yi, LI Yulin, and HE Anjun
Author Affiliations
  • School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou, Jiangxi 341000, China
    References (15)

    [3] RONNEBERGER O, FISCHER P, BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]//Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015: 18th International Conference, October 5-9, 2015, Munich, Germany. Cham: Springer International Publishing, 2015: 234-241.

    [4] DIAKOGIANNIS F I, WALDNER F, CACCETTA P, et al. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 162: 94-114.

    [5] ZHOU Z, SIDDIQUEE M M R, TAJBAKHSH N, et al. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation[J]. IEEE Transactions on Medical Imaging, 2020, 39(6): 1856-1867.

    [6] JHA D, RIEGLER M A, JOHANSEN D, et al. DoubleU-Net: A deep convolutional neural network for medical image segmentation[C]//2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), July 28-30, 2020, Rochester, MN, USA. New York: IEEE, 2020: 558-564.

    [7] IBTEHAZ N, RAHMAN M S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation[J]. Neural Networks, 2020, 121: 74-87.

    [8] DAI Y, GAO Y, LIU F. TransMed: Transformers advance multi-modal medical image classification[J]. Diagnostics, 2021, 11(8): 1384.

    [9] CHEN J, LU Y, YU Q, et al. TransUNet: Transformers make strong encoders for medical image segmentation[EB/OL]. (2021-02-08)[2024-05-20]. https://arxiv.org/abs/2102.04306.

    [10] WANG J, HUANG Q, TANG F, et al. Stepwise feature fusion: Local guides global[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention, September 18-22, 2022, Singapore. Cham: Springer Nature, 2022: 110-120.

    [11] WU C, LONG C, LI S, et al. MSRAformer: Multiscale spatial reverse attention network for polyp segmentation[J]. Computers in Biology and Medicine, 2022, 151: 106274.

    [12] WANG W, XIE E, LI X, et al. PVT v2: Improved baselines with pyramid vision transformer[J]. Computational Visual Media, 2022, 8(3): 415-424.

    [13] FAN J T, ZENG T Y, WANG D Y, et al. DSFNet: Dual-GCN and location-fused self-attention with weighted fast normalized fusion for polyps segmentation[EB/OL]. (2023-08-15)[2024-05-20]. https://arxiv.org/abs/2308.07946.

    [14] PENG Y P, SONKA M, CHEN D Z. U-Net v2: Rethinking the skip connections of U-Net for medical image segmentation[EB/OL]. (2023-11-29)[2024-05-20]. https://arxiv.org/abs/2311.17791.

    [15] LIU H J, LIU F Q, FAN X Y, et al. Polarized self-attention: Towards high-quality pixel-wise regression[EB/OL]. (2021-07-02)[2024-05-20]. https://arxiv.org/abs/2107.00782.

    [16] DU H, WANG J, LIU M, et al. SwinPA-Net: Swin Transformer-based multiscale feature pyramid aggregation network for medical image segmentation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(4): 5355-5366.

    [17] YANG J, QIU P J, ZHANG Y C, et al. D-Net: Dynamic large kernel with dynamic feature fusion for volumetric medical image segmentation[EB/OL]. (2024-03-15)[2024-05-20]. https://arxiv.org/abs/2403.10674.



    Paper Information

    Received: May 22, 2024

    Accepted: June 24, 2025

    Published Online: June 24, 2025

    The Author Email: KANG Ting (1833075267@qq.com)

    DOI: 10.16136/j.joel.2025.07.0284
