Optics and Precision Engineering, Volume 31, Issue 2, 246 (2023)

TCS-YOLO model for global oil storage tank inspection

Xiang LI1,2, Rigen TE1,2,*, Feng YI1,2, and Guocheng XU3
Author Affiliations
  • 1Chang Guang Satellite Technology Co., Ltd., Changchun 130000, China
  • 2Key Laboratory of Satellite Remote Sensing Technology of Jilin Province, Changchun 130000, China
  • 3College of Materials Science and Engineering, Jilin University, Changchun 130000, China
    References (42)

    [1] CHENG G, HAN J W. A survey on object detection in optical remote sensing images[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 117, 11-28(2016).

    [2] KALA Z. The reliability analysis of welded tanks for oil storage[J](2014).

    [3] DUDA R O, HART P E. Use of the Hough transformation to detect lines and curves in pictures[J]. Communications of the ACM, 15, 11-15(1972).

    [4] ATHERTON T J, KERBYSON D J. Size invariant circle detection[J]. Image and Vision Computing, 17, 795-803(1999).

    [5] OK A O. A new approach for the extraction of aboveground circular structures from near-nadir VHR satellite imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 52, 3125-3140(2014).

    [6] OK A O, BAŞESKI E. Circular oil tank detection from panchromatic satellite images: a new automated approach[J]. IEEE Geoscience and Remote Sensing Letters, 12, 1347-1351(2015).

    [7] WANG Y Q, TANG M, TAN T N et al. Detection of circular oil tanks based on the fusion of SAR and optical images[C], 524-527(2005).

    [8] XU H P, CHEN W, SUN B et al. Oil tank detection in synthetic aperture radar images based on quasi-circular shadow and highlighting arcs[J]. Journal of Applied Remote Sensing, 8(2014).

    [9] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 60, 84-90(2017).

    [10] FAN L L, ZHAO H W, ZHAO H Y et al. Survey of target detection based on deep convolutional neural networks[J]. Opt. Precision Eng., 28, 1152-1164(2020). (in Chinese)

    [11] LI Z C, ITTI L. Saliency and gist features for target detection in satellite images[J]. IEEE Transactions on Image Processing: a Publication of the IEEE Signal Processing Society, 20, 2017-2029(2011).

    [12] OLIVA A, TORRALBA A. Modeling the shape of the scene: a holistic representation of the spatial envelope[J]. International Journal of Computer Vision, 42, 145-175(2001).

    [13] HUANG X, ZHANG L P. An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 51, 257-272(2013).

    [14] ZHU C X, LIU B, ZHOU Y H et al. Framework design and implementation for oil tank detection in optical satellite imagery[C], 6016-6019(2012).

    [15] ZHANG L, SHI Z W, WU J. A hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8, 4895-4909(2015).

    [16] DALAL N, TRIGGS B. Histograms of oriented gradients for human detection[C], 886-893(2005).

    [17] CAI X Y, SUI H G, LV R P et al. Automatic circular oil tank detection in high-resolution optical image based on visual saliency and Hough transform[C], 408-411(2014).

    [18] WU X W, SAHOO D, HOI S C H. Recent advances in deep learning for object detection[J]. Neurocomputing, 396, 39-64(2020).

    [19] HE K M, ZHANG X Y, REN S Q et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 1904-1916(2015).

    [20] GIRSHICK R. Fast R-CNN[C], 1440-1448(2015).

    [21] REN S Q, HE K M, GIRSHICK R et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 1137-1149(2017).

    [22] LIU W, ANGUELOV D, ERHAN D et al. SSD: single shot MultiBox detector[M]. Computer Vision - ECCV 2016, 21-37(2016).

    [23] REDMON J, DIVVALA S, GIRSHICK R et al. You only look once: unified, real-time object detection[C], 779-788(2016).

    [24] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C], 6517-6525(2017).

    [25] FANG M, SUN T T, SHAO Z. Fast helmet-wearing-condition detection based on improved YOLOv2[J]. Opt. Precision Eng., 27, 1196-1205(2019). (in Chinese). doi: 10.3788/ope.20192705.1196

    [26] REDMON J, FARHADI A. YOLOv3: an incremental improvement[J]. arXiv preprint arXiv:1804.02767(2018).

    [27] MA L, GONG X T, OUYANG H K. Improvement of Tiny YOLOV3 target detection[J]. Opt. Precision Eng., 28, 988-995(2020). (in Chinese)

    [29] ZALPOUR M, AKBARIZADEH G, ALAEI-SHEINI N. A new approach for oil tank detection using deep learning features with control false alarm rate in high-resolution satellite imagery[J]. International Journal of Remote Sensing, 41, 2239-2262(2020).

    [30] XU D Q, WU Y Q. Improved YOLO-V3 with DenseNet for multi-scale remote sensing target detection[J]. Sensors (Basel, Switzerland), 20, 4276(2020).

    [31] VASWANI A, SHAZEER N, PARMAR N et al. Attention is all you need[C], 6000-6010(2017).

    [32] WOO S, PARK J, LEE J Y et al. CBAM: convolutional block attention module[C], 3-19(2018).

    [33] DU S J, ZHANG B F, ZHANG P et al. An improved bounding box regression loss function based on CIOU loss for multi-scale object detection[C], 92-98(2021).

    [34] ELFWING S, UCHIBE E, DOYA K. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning[J]. Neural Networks, 107, 3-11(2018).

    [35] WANG C Y, MARK LIAO H Y, WU Y H et al. CSPNet: a new backbone that can enhance learning capability of CNN[C], 1571-1580(2020).

    [36] LIU S, QI L, QIN H F et al. Path aggregation network for instance segmentation[C], 8759-8768(2018).

    [37] COLLOBERT R, WESTON J. A unified architecture for natural language processing: deep neural networks with multitask learning[C], 160-167(2008).

    [38] CARION N, MASSA F, SYNNAEVE G et al. End-to-end object detection with transformers[M]. Computer Vision - ECCV 2020, 213-229(2020).

    [40] DOSOVITSKIY A, BEYER L, KOLESNIKOV A et al. An image is worth 16x16 words: transformers for image recognition at scale[J]. arXiv preprint arXiv:2010.11929(2020).

    [41] LIU Z, LIN Y T, CAO Y et al. Swin transformer: hierarchical vision transformer using shifted windows[C], 9992-10002(2021).

    [42] ZHOU B L, KHOSLA A, LAPEDRIZA A et al. Learning deep features for discriminative localization[C], 2921-2929(2016).

    Paper Information

    Category: Information Sciences

    Received: Jul. 15, 2022

    Accepted: --

    Published Online: Feb. 9, 2023

    The Author Email: Rigen TE (terigen@jl1.cn)

    DOI: 10.37188/OPE.20233102.0246