Journal of Optoelectronics · Laser, Vol. 33, Issue 3, 264 (2022)

An image semantic segmentation method effectively fusing multi-scale features

XU Guangyu* and TANG Weijian
Author Affiliations
  • [in Chinese]

    Convolutional neural networks show strong feature learning ability in advanced computer vision and have achieved remarkable results in image semantic segmentation tasks. However, making effective use of multi-scale feature information remains a difficulty. This paper proposes an image semantic segmentation method that effectively fuses multi-scale features. The proposed method consists of four basic modules: a feature fusion module (FFM), a spatial information module (SIM), a global pooling module (GPM) and a boundary refinement module (BRM). The FFM adopts an attention mechanism and a residual structure to improve the efficiency of multi-scale feature fusion. The SIM consists of convolution and average pooling operations, and provides additional spatial details to assist in locating object edges. The GPM extracts the global information of the image, which significantly improves the performance of the model. The BRM takes the residual structure as its core to refine the boundaries of the feature map. The four basic modules are added to a fully convolutional network (FCN) to make effective use of multi-scale feature information. Experimental results on the PASCAL VOC 2012 dataset show that the mean intersection over union (mIoU) of the proposed method is 8.7% higher than that of the fully convolutional network. Comparisons with other methods in the same framework also verify the effectiveness of the proposed method.
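    The abstract only names the four modules; the full paper details their designs. The PyTorch sketch below is one possible reading of those descriptions, not the authors' implementation: the class names follow the abstract (FFM, SIM, GPM, BRM), while every layer choice (channel counts, kernel sizes, the particular channel-attention gate) is an assumption added for illustration.

```python
# Hypothetical PyTorch sketch of the four modules named in the abstract.
# All architectural details below are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureFusionModule(nn.Module):
    """FFM: fuses a low-level and a high-level feature map with a
    channel-attention gate wrapped in a residual connection."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(low_ch + high_ch, out_ch, kernel_size=1)
        self.attn = nn.Sequential(                      # channel attention (assumed form)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        x = self.proj(torch.cat([low, high], dim=1))
        return x + x * self.attn(x)                     # residual structure around the gate


class SpatialInformationModule(nn.Module):
    """SIM: convolution plus average pooling, supplying extra spatial detail."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.pool = nn.AvgPool2d(3, stride=2, padding=1)

    def forward(self, x):
        return self.pool(self.conv(x))


class GlobalPoolingModule(nn.Module):
    """GPM: global average pooling broadcast back onto the spatial grid."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.fc = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        g = self.fc(F.adaptive_avg_pool2d(x, 1))
        return F.interpolate(g, size=x.shape[2:], mode="nearest")


class BoundaryRefinementModule(nn.Module):
    """BRM: a residual block that refines object boundaries in the score map."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)
```

    In an FCN-style backbone, the FFM would presumably fuse decoder features with skip connections, the SIM and GPM outputs would feed into the fusion path, and the BRM would refine the final score map before upsampling; the exact wiring is not specified in the abstract.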

    Citation:
    XU Guangyu, TANG Weijian. An image semantic segmentation method effectively fusing multi-scale features[J]. Journal of Optoelectronics · Laser, 2022, 33(3): 264

    Paper Information

    Received: Jun. 7, 2021

    Accepted: --

    Published Online: Oct. 9, 2024

    The Author Email: XU Guangyu (xgy761220@163.com)

    DOI: 10.16136/j.joel.2022.03.0392
