Journal of Optoelectronics · Laser, Vol. 36, Issue 2, 216 (2025)

Research on colon polyp segmentation based on dual path feature multi-scale subtraction

XIONG Wei1,2,*, ZHANG Lizhen1, YANG Qian1, MENG Shengzhe1, and LI Lirong1
Author Affiliations
  • 1School of Electrical & Electronic Engineering, Hubei University of Technology, Wuhan, Hubei 430068, China
  • 2Department of Computer Science & Engineering, University of South Carolina, Columbia, SC 29201, USA
    References(27)

    [1] JEMAL A, BRAY F, CENTER M M, et al. Global cancer statistics[J]. CA: A Cancer Journal for Clinicians, 2011, 61(2):69-90.

    [3] BERNAL J, SANCHEZ J, VILARINO F. Towards automatic polyp detection with a polyp appearance model[J]. Pattern Recognition, 2012, 45(9):3166-3182.

    [4] MAMONOV A V, FIGUEIREDO I N, FIGUEIREDO P N, et al. Automated polyp detection in colon capsule endoscopy[J]. IEEE Transactions on Medical Imaging, 2014, 33(7):1488-1502.

    [5] BERNAL J, SANCHEZ F J, FERNANDEZ G, et al. WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians[J]. Computerized Medical Imaging and Graphics, 2015, 43(1):99-111.

    [6] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[EB/OL]. (2015-05-18)[2023-09-25]. https://arxiv.org/abs/1505.04597.

    [7] ZHOU Z, SIDDIQUEE M M R, TAJBAKHSH N, et al. UNet++: a nested U-Net architecture for medical image segmentation[C]//Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, September 20-22, 2018, Granada, Spain. Berlin: Springer, 2018:3-11.

    [8] OKTAY O, SCHLEMPER J, FOLGOC L L, et al. Attention U-Net: learning where to look for the pancreas[EB/OL]. (2018-03-20)[2023-09-25]. https://arxiv.org/abs/1804.03999v3.

    [9] JHA D, SMEDSRUD P H, JOHANSEN D, et al. A comprehensive study on colorectal polyp segmentation with ResUNet++, conditional random field and test-time augmentation[J]. IEEE Journal of Biomedical and Health Informatics, 2021, 25(6):2029-2040.

    [10] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8):2011-2023.

    [11] CHEN L C, PAPANDREOU G, SCHROFF F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. (2017-12-05)[2023-09-25]. https://arxiv.org/abs/1706.05587.

    [12] FAN D P, JI G P, ZHOU T, et al. PraNet: parallel reverse attention network for polyp segmentation[EB/OL]. (2020-07-03)[2023-09-25]. https://arxiv.org/abs/2006.11392v4.

    [13] ZHAO X, ZHANG L, LU H. Automatic polyp segmentation via multi-scale subtraction network[EB/OL]. (2021-08-11)[2023-09-25]. https://arxiv.org/abs/2108.05082.

    [14] HUANG C H, WU H Y, LIN Y L. HarDNet-MSEG: a simple encoder-decoder polyp segmentation neural network that achieves over 0.9 mean Dice and 86 FPS[EB/OL]. (2021-01-20)[2023-09-25]. https://arxiv.org/abs/2101.07172v2.

    [15] CHAO P, KAO C Y, RUAN Y S, et al. HarDNet: a low memory traffic network[EB/OL]. (2019-09-03)[2023-09-25]. https://arxiv.org/abs/1909.00948.

    [16] TOMAR N K, SHERGILL A, RIEDERS B, et al. TransResU-Net: Transformer based ResU-Net for real-time colonoscopy polyp segmentation[EB/OL]. (2022-06-17)[2023-09-25]. https://arxiv.org/abs/2206.08985.

    [17] QUAN Y, ZHANG D, ZHANG L, et al. Centralized feature pyramid for object detection[J]. IEEE Transactions on Image Processing, 2023, 32:4341-4354.

    [18] HOU Q, ZHANG L, CHENG M M, et al. Strip pooling: rethinking spatial pooling for scene parsing[EB/OL]. (2020-03-30)[2023-09-25]. https://arxiv.org/abs/2003.13328.

    [19] GAO S H, CHENG M M, ZHAO K, et al. Res2Net: a new multi-scale backbone architecture[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(2):652-662.

    [20] HOU Q, ZHOU D, FENG J. Coordinate attention for efficient mobile network design[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 20-25, 2021, Nashville, TN, USA. New York: IEEE, 2021:13708-13717.

    [21] HAN K, WANG Y, TIAN Q, et al. GhostNet: more features from cheap operations[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 13-19, 2020, Seattle, WA, USA. New York: IEEE, 2020:1580-1589.

    [22] LIU Z, MAO H, WU C Y, et al. A ConvNet for the 2020s[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 18-24, 2022, New Orleans, LA, USA. New York: IEEE, 2022:11976-11986.

    [23] JHA D, SMEDSRUD P H, RIEGLER M A, et al. Kvasir-SEG: a segmented polyp dataset[C]//26th International Conference on Multimedia Modeling, January 5-8, 2020, Daejeon, South Korea. Berlin: Springer, 2020:451-462.

    [24] VAZQUEZ D, BERNAL J, SANCHEZ F J, et al. A benchmark for endoluminal scene segmentation of colonoscopy images[EB/OL]. (2016-12-02)[2023-09-25]. https://arxiv.org/abs/1612.00799.

    [25] MARGOLIN R, ZELNIK-MANOR L, TAL A. How to evaluate foreground maps?[C]//IEEE Conference on Computer Vision and Pattern Recognition, June 23-28, 2014, Columbus, OH, USA. New York: IEEE, 2014:248-255.

    [26] WEI J, HU Y, ZHANG R, et al. Shallow attention network for polyp segmentation[EB/OL]. (2021-08-02)[2023-09-25]. https://arxiv.org/abs/2108.00882.

    [27] ZHAO X, JIA H, PANG Y, et al. M2SNet: multi-scale in multi-scale subtraction network for medical image segmentation[EB/OL]. (2023-03-20)[2023-09-25]. https://arxiv.org/abs/2303.10894.

    [28] WEI J, WANG S, HUANG Q. F3Net: fusion, feedback and focus for salient object detection[EB/OL]. (2019-11-26)[2023-09-25]. https://arxiv.org/abs/1911.11445.

    Paper Information

    Received: Sep. 25, 2023

    Accepted: Jan. 23, 2025

    Published Online: Jan. 23, 2025

    Corresponding author: XIONG Wei (xw@mail.hbut.edu.cn)

    DOI: 10.16136/j.joel.2025.02.0499