Optics and Precision Engineering, Volume 31, Issue 2, 246 (2023)

TCS-YOLO model for global oil storage tank inspection

Xiang LI1,2, Rigen TE1,2,*, Feng YI1,2 and Guocheng XU3
Author Affiliations
  • 1Chang Guang Satellite Technology Co., Ltd., Changchun 130000, China
  • 2Key Laboratory of Satellite Remote Sensing Technology of Jilin Province, Changchun 130000, China
  • 3College of Materials Science and Engineering, Jilin University, Changchun 130000, China
    References(42)

    [1] G CHENG, J W HAN. A survey on object detection in optical remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing, 117, 11-28(2016).

    [2] Z KALA. The reliability analysis of welded tanks for oil storage(2014).

    [3] R O DUDA, P E HART. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15, 11-15(1972).

    [4] T J ATHERTON, D J KERBYSON. Size invariant circle detection. Image and Vision Computing, 17, 795-803(1999).

    [5] A O OK. A new approach for the extraction of aboveground circular structures from near-nadir VHR satellite imagery. IEEE Transactions on Geoscience and Remote Sensing, 52, 3125-3140(2014).

    [6] A O OK, E BAŞESKI. Circular oil tank detection from panchromatic satellite images: a new automated approach. IEEE Geoscience and Remote Sensing Letters, 12, 1347-1351(2015).

    [7] Y Q WANG, M TANG, T N TAN et al. Detection of circular oil tanks based on the fusion of SAR and optical images, 524-527(2005).

    [8] H P XU, W CHEN, B SUN et al. Oil tank detection in synthetic aperture radar images based on quasi-circular shadow and highlighting arcs. Journal of Applied Remote Sensing, 8(2014).

    [9] A KRIZHEVSKY, I SUTSKEVER, G E HINTON. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60, 84-90(2017).

    [10] L L FAN, H W ZHAO, H Y ZHAO et al. Survey of target detection based on deep convolutional neural networks. Optics and Precision Engineering, 28, 1152-1164(2020). (in Chinese)

    [11] Z C LI, L ITTI. Saliency and gist features for target detection in satellite images. IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society, 20, 2017-2029(2011).

    [12] A OLIVA, A TORRALBA. Modeling the shape of the scene: a holistic representation of the spatial envelope. International Journal of Computer Vision, 42, 145-175(2001).

    [13] X HUANG, L P ZHANG. An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery. IEEE Transactions on Geoscience and Remote Sensing, 51, 257-272(2013).

    [14] C X ZHU, B LIU, Y H ZHOU et al. Framework design and implementation for oil tank detection in optical satellite imagery, 6016-6019(2012).

    [15] L ZHANG, Z W SHI, J WU. A hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8, 4895-4909(2015).

    [16] N DALAL, B TRIGGS. Histograms of oriented gradients for human detection, 886-893(2005).

    [17] X Y CAI, H G SUI, R P LV et al. Automatic circular oil tank detection in high-resolution optical image based on visual saliency and Hough transform, 408-411(2014).

    [18] X W WU, D SAHOO, S C H HOI. Recent advances in deep learning for object detection. Neurocomputing, 396, 39-64(2020).

    [19] K M HE, X Y ZHANG, S Q REN et al. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 1904-1916(2015).

    [20] R GIRSHICK. Fast R-CNN, 1440-1448(2015).

    [21] S Q REN, K M HE, R GIRSHICK et al. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 1137-1149(2017).

    [22] W LIU, D ANGUELOV, D ERHAN et al. SSD: single shot MultiBox detector. Computer Vision - ECCV 2016, 21-37(2016).

    [23] J REDMON, S DIVVALA, R GIRSHICK et al. You only look once: unified, real-time object detection, 779-788(2016).

    [24] J REDMON, A FARHADI. YOLO9000: better, faster, stronger, 6517-6525(2017).

    [25] M FANG, T T SUN, ZH SHAO. Fast helmet-wearing-condition detection based on improved YOLOv2. Optics and Precision Engineering, 27, 1196-1205(2019). (in Chinese). doi: 10.3788/ope.20192705.1196

    [26] J REDMON, A FARHADI. YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767(2018).

    [27] L MA, X T GONG, H K OUYANG. Improvement of Tiny YOLOV3 target detection. Optics and Precision Engineering, 28, 988-995(2020). (in Chinese)

    [29] M ZALPOUR, G AKBARIZADEH, N ALAEI-SHEINI. A new approach for oil tank detection using deep learning features with control false alarm rate in high-resolution satellite imagery. International Journal of Remote Sensing, 41, 2239-2262(2020).

    [30] D Q XU, Y Q WU. Improved YOLO-V3 with DenseNet for multi-scale remote sensing target detection. Sensors (Basel, Switzerland), 20, 4276(2020).

    [31] A VASWANI, N SHAZEER, N PARMAR et al. Attention is all you need, 6000-6010(2017).

    [32] S WOO, J PARK, J Y LEE et al. CBAM: convolutional block attention module, 3-19(2018).

    [33] S J DU, B F ZHANG, P ZHANG et al. An improved bounding box regression loss function based on CIOU loss for multi-scale object detection, 92-98(2021).

    [34] S ELFWING, E UCHIBE, K DOYA. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107, 3-11(2018).

    [35] C Y WANG, H Y MARK LIAO, Y H WU et al. CSPNet: a new backbone that can enhance learning capability of CNN, 1571-1580(2020).

    [36] S LIU, L QI, H F QIN et al. Path aggregation network for instance segmentation, 8759-8768(2018).

    [37] R COLLOBERT, J WESTON. A unified architecture for natural language processing: deep neural networks with multitask learning, 160-167(2008).

    [38] N CARION, F MASSA, G SYNNAEVE et al. End-to-end object detection with transformers. Computer Vision - ECCV 2020, 213-229(2020).

    [40] A DOSOVITSKIY, L BEYER, A KOLESNIKOV et al. An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929(2020).

    [41] Z LIU, Y T LIN, Y CAO et al. Swin transformer: hierarchical vision transformer using shifted windows, 9992-10002(2022).

    [42] B L ZHOU, A KHOSLA, A LAPEDRIZA et al. Learning deep features for discriminative localization, 2921-2929(2016).

    Paper Information

    Category: Information Sciences

    Received: Jul. 15, 2022

    Accepted: --

    Published Online: Feb. 9, 2023

    Author Email: Rigen TE (terigen@jl1.cn)

    DOI: 10.37188/OPE.20233102.0246
