Optics and Precision Engineering, Volume 30, Issue 16, p. 2006 (2022)

Fusion of fractal geometric features with ResNet for remote sensing image building segmentation

Shengjun XU1,2,3, Ruoxuan ZHANG1,2,3,*, Yuebo MENG1,2,3, Guanghui LIU1,2,3, and Jiuqiang HAN1
Author Affiliations
  • 1 College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
  • 2 Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
  • 3 Xi'an Key Laboratory of Building Manufacturing Intelligent & Automation Technology, Xi'an 710055, China
    Figures & Tables (16)
    Resnet101 network and residual module
    Overall structure of the proposed network
    Fractal dimension in atrous spatial pyramid pooling
    Fractal feature extraction process
    Depthwise separable convolution attention fusion module
    Loss curves during network training
    Comparison of local results for building extraction
    Local segmentation results of building extraction under road interference
    Local segmentation results of building extraction under tree interference
    Local segmentation results of building extraction under shadow interference
    • Table 1. Improved DBC algorithm steps


      Algorithm 1: Fractal dimension algorithm based on the improved DBC

      Input: feature map Fin; number all elements of Fin consecutively from 1, let smax be the largest index, let fs denote the s-th element of Fin, and let G be the number of gray levels;

      Output: fractal dimension matrix Ffractal, with ds denoting its s-th element;

      Step 1: Pad the edge pixels of the matrix Fin;

      Step 2: For s = 1 to smax, take the W×W sub-region of Fin centered on fs and compute its fractal dimension ds:

      Step 2.1: For w = 2 to W/2, partition the sub-region with a w×w grid and compute the partition scale r = w/W:

      Step 2.1.1: For each grid cell, denote its position within the sub-region by (i, j) and stack boxes of height h = w×G/W over it;

      Step 2.1.2: Find the maximum and minimum gray values gmax and gmin in the cell and compute its differential box count nr(i, j) = ⌈gmax/h⌉ − ⌈gmin/h⌉ + 1;

      Step 2.1.3: Sum the differential box counts of all cells in the sub-region: Nr = Σ(i,j) nr(i, j);

      Step 2.2: Fit a straight line to the pairs (log(1/r), log(Nr)) by least squares; the slope of this line is the fractal dimension ds of the point;

      Step 3: If s = smax, output the fractal dimension matrix Ffractal of the remote sensing image and stop; otherwise return to Step 2 and continue iterating.
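
      The procedure in Table 1 translates directly into a short script. The following is a minimal sketch, assuming a sub-region size W = 8, G = 256 gray levels, and edge padding; the function name and the handling of grid cells that do not divide W evenly are illustrative assumptions, not the authors' released code.

      import numpy as np

      def improved_dbc_fractal_map(f_in, W=8, G=256):
          """Minimal sketch of the improved-DBC fractal-dimension map of Table 1.

          f_in : 2-D array of gray values in [0, G-1] (the feature map Fin).
          W    : side of the W x W sub-region centred on each element (assumed even).
          G    : number of gray levels.
          Returns Ffractal, an array of local fractal dimensions, one per element.
          """
          H, Wd = f_in.shape
          pad = W // 2
          padded = np.pad(f_in, pad, mode="edge")          # Step 1: pad edge pixels
          f_fractal = np.zeros((H, Wd), dtype=float)

          for y in range(H):                               # Step 2: loop over elements fs
              for x in range(Wd):
                  sub = padded[y:y + W, x:x + W]           # W x W sub-region around (y, x)
                  log_nr, log_inv_r = [], []
                  for w in range(2, W // 2 + 1):           # Step 2.1: grid scales w
                      r = w / W
                      h = w * G / W                        # Step 2.1.1: box height at this scale
                      Nr = 0
                      for i in range(0, W - w + 1, w):     # border cells that do not fit are skipped
                          for j in range(0, W - w + 1, w):
                              cell = sub[i:i + w, j:j + w]
                              g_max, g_min = cell.max(), cell.min()
                              # Step 2.1.2: differential box count of cell (i, j)
                              Nr += int(np.ceil(g_max / h) - np.ceil(g_min / h) + 1)
                      log_nr.append(np.log(Nr))            # Step 2.1.3: sum over all cells
                      log_inv_r.append(np.log(1.0 / r))
                  # Step 2.2: least-squares line fit; the slope is the local fractal dimension
                  slope, _ = np.polyfit(log_inv_r, log_nr, 1)
                  f_fractal[y, x] = slope                  # Step 3: fill Ffractal
          return f_fractal

      The double loop is written for clarity; a practical implementation would vectorize the box counting or compute it on the GPU alongside the network.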

    • Table 2. Parameter settings and FD-ASPP output


      Conv | r | σ | FD-ASPP output | Output size
      Conv1 | — | — | — | 128×128
      Conv2 | 3, 5, 11, 15 | 7, 11, 23, 31 | Fout(1) | 128×128
      Conv3 | 3, 5, 11, 15 | 7, 11, 23, 31 | Fout(2) | 64×64
      Conv4 | 3, 5, 11 | 7, 11, 23 | Fout(3) | 32×32
      Conv5 | 3, 5 | 7, 11 | Fout(4) | 16×16
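
      In Table 2 each σ equals 2r + 1, i.e. the receptive field of a 3×3 convolution with dilation rate r, so one plausible reading is that r lists the atrous rates of the parallel branches at each stage. Under that assumption (and without modelling the fractal-dimension path itself), a PyTorch-style sketch of the branch layout could look as follows; the class and argument names are illustrative and are not the authors' code.

      import torch
      import torch.nn as nn

      class FDASPPBranches(nn.Module):
          """Illustrative sketch of the parallel atrous branches suggested by Table 2.

          Assumes r in Table 2 are dilation rates of parallel 3x3 convolutions; the
          fractal-dimension features (window sizes sigma) are not modelled here.
          """
          def __init__(self, in_ch=256, out_ch=256, rates=(3, 5, 11, 15)):
              super().__init__()
              self.branches = nn.ModuleList([
                  nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
                  for r in rates
              ])
              # 1x1 convolution fuses the concatenated branch outputs (Fout in Table 2)
              self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

          def forward(self, x):
              feats = [branch(x) for branch in self.branches]
              return self.project(torch.cat(feats, dim=1))

      # Example: the Conv2 stage keeps all four rates; deeper stages drop the larger ones.
      stage2 = FDASPPBranches(rates=(3, 5, 11, 15))   # Fout(1), 128x128
      stage5 = FDASPPBranches(rates=(3, 5))           # Fout(4), 16x16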
    • Table 3. WHU Building Dataset performance comparison


      Model | Precision | Recall | F1-score | mIoU | Params/M | FLOPs/G
      FCN[6] | 90.56% | 90.40% | 90.48% | 88.87% | 15.3 | 80.51
      SegNet[7] | 92.05% | 91.43% | 91.74% | 90.21% | 29.4 | 160.56
      DeepLab V3[8] | 92.83% | 93.07% | 92.95% | 91.89% | 15.3 | 64.59
      U-Net[9] | 93.69% | 93.14% | 93.41% | 92.57% | 17.2 | 160.33
      SETR[10] | 91.95% | 92.22% | 92.08% | 91.92% | 63.9 | 34.76
      AlignSeg[11] | 94.25% | 94.56% | 94.27% | 94.19% | 66.8 | 123.84
      Our model | 94.48% | 94.62% | 94.55% | 94.15% | 20 | 495.56
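
      The precision, recall, F1-score, and mIoU reported in Tables 3-6 are the standard binary segmentation measures. As a reference for how such figures can be computed, here is a minimal sketch, under the assumption that mIoU averages the building and background IoU; this is not the authors' evaluation code.

      import numpy as np

      def segmentation_metrics(pred, gt):
          """Precision, recall, F1 and mIoU for a binary building mask.

          pred, gt : boolean arrays of the same shape, True = building pixel.
          Zero-division cases are not handled in this sketch.
          """
          tp = np.logical_and(pred, gt).sum()        # building predicted as building
          fp = np.logical_and(pred, ~gt).sum()       # background predicted as building
          fn = np.logical_and(~pred, gt).sum()       # building predicted as background
          tn = np.logical_and(~pred, ~gt).sum()      # background predicted as background

          precision = tp / (tp + fp)
          recall = tp / (tp + fn)
          f1 = 2 * precision * recall / (precision + recall)
          iou_building = tp / (tp + fp + fn)
          iou_background = tn / (tn + fp + fn)
          miou = (iou_building + iou_background) / 2  # assumed two-class mean IoU
          return precision, recall, f1, miou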
    • Table 4. Influence of applying FD-ASPP at different layers on segmentation metrics


      Resnet101 | Layer1 | Layer2 | Layer3 | Layer4 | Precision | Recall | mIoU
      |  |  |  |  | 91.41% | 92.43% | 91.27%
      |  |  |  |  | 91.89% | 91.68% | 91.69%
      |  |  |  |  | 92.56% | 93.63% | 92.23%
      |  |  |  |  | 92.23% | 93.74% | 91.88%
      |  |  |  |  | 92.81% | 93.45% | 92.84%
      |  |  |  |  | 93.26% | 94.03% | 93.80%
    • Table 5. Ablation of the atrous convolution modules on the WHU Building Dataset


      No. | Base-line | FD-ASPP | DS-CAF | Precision | Recall | mIoU
      1 | ✓ | × | × | 91.41% | 92.43% | 91.27%
      2 | ✓ | ✓ | × | 93.26% | 94.03% | 93.80%
      3 | ✓ | × | ✓ | 92.89% | 93.56% | 93.22%
      4 | ✓ | ✓ | ✓ | 94.48% | 94.62% | 94.15%
    • Table 6. Comparative experimental results in different scenarios


      Interference type | Model | Precision | Recall | mIoU
      Road interference | FCN[6] | 88.09% | 87.72% | 85.97%
      | SegNet[7] | 89.44% | 89.25% | 88.53%
      | DeepLab V3[8] | 90.61% | 90.30% | 89.37%
      | U-Net[9] | 92.43% | 92.64% | 92.15%
      | SETR[10] | 90.85% | 91.12% | 90.43%
      | AlignSeg[11] | 93.96% | 94.15% | 93.26%
      | Our model | 94.27% | 94.43% | 93.99%
      Tree interference | FCN[6] | 85.85% | 86.27% | 85.73%
      | SegNet[7] | 87.72% | 86.90% | 86.34%
      | DeepLab V3[8] | 89.54% | 89.16% | 88.42%
      | U-Net[9] | 91.14% | 91.89% | 91.33%
      | SETR[10] | 88.29% | 88.60% | 87.38%
      | AlignSeg[11] | 91.44% | 92.05% | 91.74%
      | Our model | 93.07% | 92.41% | 92.11%
      Building shadow interference | FCN[6] | 87.78% | 87.02% | 85.76%
      | SegNet[7] | 89.19% | 89.34% | 88.55%
      | DeepLab V3[8] | 89.01% | 89.47% | 88.91%
      | U-Net[9] | 91.90% | 92.38% | 91.29%
      | SETR[10] | 89.43% | 90.66% | 90.02%
      | AlignSeg[11] | 93.43% | 94.20% | 93.25%
      | Our model | 93.50% | 94.06% | 93.19%

    Shengjun XU, Ruoxuan ZHANG, Yuebo MENG, Guanghui LIU, Jiuqiang HAN. Fusion of fractal geometric features with ResNet for remote sensing image building segmentation[J]. Optics and Precision Engineering, 2022, 30(16): 2006

    Paper Information

    Category: Information Sciences

    Received: Apr. 21, 2022

    Accepted: --

    Published Online: Sep. 22, 2022

    Corresponding author email: Ruoxuan ZHANG (zrx1997_1@sina.com)

    DOI: 10.37188/OPE.20223016.2006
