Laser Technology, Vol. 45, Issue 5, 675 (2021)
Herd counting based on VDNet convolutional neural network
DU Yongxing, MIAO Xiaowei, QIN Ling, LI Baoshan. Herd counting based on VDNet convolutional neural network[J]. Laser Technology, 2021, 45(5): 675
Received: Sep. 9, 2020
Published Online: Sep. 9, 2021