Optics and Precision Engineering, Vol. 28, Issue 1: 251 (2020)
Multi-type cooperative targets detection using improved YOLOv2 convolutional neural network
[1] HAN J, ZHANG D, CHENG G, et al. Advanced deep-learning techniques for salient and category-specific object detection: a survey[J]. IEEE Signal Processing Magazine, 2018, 35(1): 84-100.
[2] FELZENSZWALB P F, GIRSHICK R B, MCALLESTER D, et al. Object detection with discriminatively trained part-based models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(9): 1627-1645.
[3] LI X D, YANG W D, DEZERT J. An airplane image target's multi-feature fusion recognition method[J]. Acta Automatica Sinica, 2012, 38(8): 1298-1307. (in Chinese)
[5] LUO ZH W, YANG Y L, LI ZH H. Design of vision detection algorithm and system for BGA welding balls[J]. Opt. Precision Eng., 2018, 26(9): 63-70. (in Chinese)
[6] WANG H L, ZHU M, LIN CH B, et al. Ship detection of complex sea background in optical remote sensing image[J]. Opt. Precision Eng., 2018, 26(3): 723-732. (in Chinese)
[7] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Columbus, OH, USA: IEEE, 2014: 580-587.
[8] GIRSHICK R. Fast R-CNN[C]. 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015: 1440-1448.
[9] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[10] REDMON J, DIVVALA S, GIRSHICK R, et al. You Only Look Once: Unified, Real-Time Object Detection[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016: 779-788.
[11] LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single Shot MultiBox Detector[C]. 2016 European Conference on Computer Vision (ECCV). Amsterdam, The Netherlands: Springer, 2016, 9905: 21-37.
[12] REDMON J, FARHADI A. YOLO9000: Better, Faster, Stronger[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017: 6517-6525.
[13] FU C Y, LIU W, RANGA A, et al. DSSD: deconvolutional single shot detector[J]. arXiv preprint arXiv:1701.06659, 2017.
[14] REDMON J, FARHADI A. YOLOv3: an incremental improvement[J]. arXiv preprint arXiv:1804.02767, 2018.
[15] HE K, ZHANG X, REN S, et al. Deep Residual Learning for Image Recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016: 770-778.
[16] ZHOU P, NI B, GENG C, et al. Scale-Transferrable Object Detection[C]. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, UT, USA: IEEE, 2018: 528-537.
[17] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely Connected Convolutional Networks[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017: 2261-2269.
[18] JEONG J, PARK H, KWAK N. Enhancement of SSD by concatenating feature maps for object detection[C]. British Machine Vision Conference (BMVC), 2017.
[19] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[C]. Advances in Neural Information Processing Systems, 2017: 5767-5777.
[20] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative Adversarial Nets[C]. Advances in Neural Information Processing Systems, 2014: 2672-2680.
[21] HE K, ZHANG X, REN S, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916.
WANG Jian-lin, FU Xue-song, HUANG Zhan-chao, GUO Yong-qi, WANG Ru-tong, ZHAO Li-qiang. Multi-type cooperative targets detection using improved YOLOv2 convolutional neural network[J]. Optics and Precision Engineering, 2020, 28(1): 251
Received: Jul. 8, 2019
Published Online: Mar. 25, 2020