Optics and Precision Engineering, Volume 28, Issue 12, 2665 (2020)

INFNet: Deep instance feature chain learning network for panoptic segmentation

MAO Lin, REN Feng-zhi*, YANG Da-wei, ZHANG Ru-bo
References

    [1] HE K, GKIOXARI G, DOLLAR P, et al. Mask R-CNN[C]. IEEE International Conference on Computer Vision. Piscataway, USA: IEEE, 2017: 2980-2988.

    [2] HE K, ZHANG X, REN S, et al. Deep Residual Learning for Image Recognition[C]. IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2016: 770-778.

    [3] KIRILLOV A, GIRSHICK R, HE K, et al. Panoptic Feature Pyramid Networks[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2019: 6392-6401.

    [4] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]. IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2015: 3431-3440.

    [5] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848.

    [8] XIONG Y, LIAO R, ZHAO H, et al. UPSNet: A Unified Panoptic Segmentation Network[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2019: 8810-8818.

    [9] LIU H, PENG C, YU C, et al. An End-To-End Network for Panoptic Segmentation[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2019: 6165-6174.

    [10] LI J, RAVENTOS A, BHARGAVA A, et al. Learning to fuse things and stuff[J]. arXiv preprint arXiv:1812.01192v2.

    [11] LI Y, CHEN X, ZHU Z, et al. Attention-Guided Unified Network for Panoptic Segmentation[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2019: 7019-7028.

    [12] ZEILER M D, FERGUS R. Visualizing and Understanding Convolutional Networks[C]. European Conference on Computer Vision. Berlin, Germany: Springer, 2014: 818-833.

    [13] TAN M, LE Q V. EfficientNet: Rethinking model scaling for convolutional neural networks[C]. International Conference on Machine Learning, 2019: 6105-6114.

    [14] RIPLEY B D. Pattern Recognition and Neural Networks[M]. Cambridge, UK: Cambridge University Press, 1996.

    [15] NAIR V, HINTON G E. Rectified Linear Units Improve Restricted Boltzmann Machines[C]. International Conference on Machine Learning, 2010: 807-814.

    [16] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common Objects in Context[C]. European Conference on Computer Vision. Berlin, Germany: Springer, 2014: 740-755.

    [17] CORDTS M, OMRAN M, RAMOS S, et al. The Cityscapes Dataset for Semantic Urban Scene Understanding[C]. IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2016: 3213-3223.

    [21] YANG T J, COLLINS M D, ZHU Y, et al. DeeperLab: Single-shot image parser[J]. arXiv preprint arXiv:1902.05093v2.

    Citation
    MAO Lin, REN Feng-zhi, YANG Da-wei, ZHANG Ru-bo. INFNet: Deep instance feature chain learning network for panoptic segmentation[J]. Optics and Precision Engineering, 2020, 28(12): 2665.

    Paper Information

    Received: Apr. 17, 2020

    Accepted: --

    Published Online: Jan. 19, 2021

    The Author Email: REN Feng-zhi (renfz2019@163.cn)

    DOI: 10.37188/ope.20202812.2665
