Acta Optica Sinica, Volume. 40, Issue 12, 1210001(2020)

An Improved Method for Retinal Vascular Segmentation in U-Net

Wenxuan Xue1, Jianxia Liu1,*, Ran Liu1, and Xiaohui Yuan2
Author Affiliations
  • 1College of Information and Computer, Taiyuan University of Technology, Jinzhong, Shanxi 030600, China
  • 2Computer Science Department, University of North Texas, Denton, Texas 76201, United States

    Current mainstream methods for retinal vascular segmentation struggle to capture the fine-grained characteristics of blood vessels, and vessel details are easily lost. This paper proposes an improved U-Net model to address these problems. Convolutional layers with a two-cycle recurrent residual structure replace the original convolutional layers in the downsampling and upsampling paths of U-Net to improve feature utilization. A multichannel attention model is introduced in the decoding part to improve the segmentation of small, low-contrast blood vessels. Results show that the accuracies of the algorithm on the DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina) databases are 96.89% and 97.96%, the sensitivities are 80.28% and 82.27%, and the AUC values are 98.41% and 98.65%, respectively, all higher than those of existing state-of-the-art algorithms. The proposed algorithm effectively improves the segmentation accuracy of fine blood vessels in fundus images.
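
    For illustration, the sketch below shows what the two building blocks named in the abstract could look like in a PyTorch-style implementation: a recurrent residual convolution block (standing in for the plain U-Net convolutions) and a simple attention gate for the decoder. Class names, channel counts, and the number of recurrences are illustrative assumptions, not details taken from the paper.

    ```python
    # Minimal sketch of the two blocks described in the abstract (assumptions,
    # not the authors' exact implementation): a recurrent residual convolution
    # and an attention gate applied to encoder features in the decoding path.
    import torch
    import torch.nn as nn

    class RecurrentResidualConv(nn.Module):
        """Conv block applied recurrently (t cycles) with a residual connection."""
        def __init__(self, in_ch, out_ch, t=2):
            super().__init__()
            self.t = t
            self.align = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # match channels for the residual add
            self.conv = nn.Sequential(
                nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            x = self.align(x)
            out = self.conv(x)
            for _ in range(self.t - 1):      # feed the activation back through the same conv
                out = self.conv(x + out)
            return x + out                   # residual connection

    class AttentionGate(nn.Module):
        """Gate encoder features with the decoder signal to emphasize thin vessels."""
        def __init__(self, enc_ch, dec_ch, inter_ch):
            super().__init__()
            self.w_enc = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)
            self.w_dec = nn.Conv2d(dec_ch, inter_ch, kernel_size=1)
            self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

        def forward(self, enc_feat, dec_feat):
            # enc_feat and dec_feat are assumed to share the same spatial size here
            attn = self.psi(torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
            return enc_feat * attn           # suppress background, keep low-contrast vessels
    ```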

    Citation

    Wenxuan Xue, Jianxia Liu, Ran Liu, Xiaohui Yuan. An Improved Method for Retinal Vascular Segmentation in U-Net[J]. Acta Optica Sinica, 2020, 40(12): 1210001

    Paper Information

    Category: Image Processing

    Received: Feb. 10, 2020

    Accepted: Mar. 23, 2020

    Published Online: Jun. 3, 2020

    The Author Email: Liu Jianxia (tyljx@163.com)

    DOI: 10.3788/AOS202040.1210001