Journal of Applied Optics, Vol. 44, Issue 5, 1010 (2023)
Multi-scale oriented object detection based on improved RoI Transformer in remote sensing images
Oriented object detection is a crucial task in remote sensing image processing. The large scale variations and arbitrary orientations of objects bring challenges to automatic object detection. An improved RoI Transformer detection framework was proposed to address the above-mentioned problems. Firstly, the RoI Transformer detection framework was used to obtain rotated regions of interest (RRoIs) for the extraction of robust geometric features. Secondly, the high-resolution network (HRNet) was introduced into the detector to extract multi-resolution feature maps, which could maintain high-resolution features while adapting to multi-scale changes of the targets. Finally, the Kullback-Leibler divergence (KLD) loss was introduced to solve the angle periodicity problem caused by the standard representation of oriented objects and to improve the adaptability of the RoI Transformer to targets in arbitrary orientations. The object localization accuracy was also improved through joint optimization of the oriented bounding box parameters. The proposed method, called HRD-ROI Transformer (HRNet + KLD RoI Transformer), was compared with typical oriented object detection methods on two public datasets, namely DOTAv1.0 and DIOR-R. The results show that the mean average precision (mAP) of the detection results on the DOTAv1.0 and DIOR-R datasets is improved by 3.7% and 4%, respectively.
Introduction
Deep-learning-based object detection techniques have been widely applied in the remote sensing field [1-2].
Figure 1. Comparison between remote sensing images (first row) and natural images (second row)
In recent years, a variety of oriented object detection methods have been developed on the basis of deep-learning-based generic object detection. Generic object detection mainly regresses the horizontal bounding box of the target region.
Oriented object detection based on the RoI Transformer is commonly realized as two-stage detection, which consists of two steps: generating object proposals and then classifying and refining them. Recently, some one-stage oriented object detection methods have also been proposed, such as R3Det (refined rotation RetinaNet) [11].
To address the insufficient accuracy of the RoI Transformer in detecting multi-scale oriented objects in remote sensing images, this paper proposes the HRD-ROI Transformer (HRNet + KLD RoI Transformer). First, the original RoI Transformer detection framework is used to obtain RRoIs for robust geometric feature extraction; second, HRNet [18] is introduced to extract multi-resolution feature maps, which maintains high-resolution features while adapting to the multi-scale variation of objects; finally, the KLD loss [15] is introduced to solve the angle periodicity problem of the oriented-object representation and to jointly optimize the bounding-box parameters, improving localization accuracy.
1 HRD-ROI Transformer
The HRD-ROI Transformer uses the RoI Transformer as its basic framework. It adopts HRNet as the backbone network, connecting high-resolution and low-resolution convolution streams in parallel, which improves the adaptability of the model to multi-scale object detection while preserving high-resolution feature extraction. The KLD loss is used in place of the Smooth L1 loss to solve the angle boundary discontinuity and the square-like problem caused by the periodicity of the oriented-object representation.
1.1 Overall architecture of the detection network
The overall architecture of the HRD-ROI Transformer is shown in Figure 2; it consists of a feature extraction module, an RPN module, an RoI Transformer module, and a KLD-loss-based RCNN module.
Figure 2. Structure diagram of HRD-ROI Transformer
Feature extraction module: HRNet with a feature pyramid is used to extract multi-level high-resolution features (see Section 1.2).
RPN module: the RPN module takes feature maps of arbitrary size as input and generates a series of coarse horizontal RoIs (HRoIs).
RoI Transformer module: the RoI Transformer module generates RRoIs from the feature maps of the HRoIs. First, RoI Pooling or RoI Align is applied to HRoIs of different sizes to obtain fixed-size (7×7 by default) RoI features; each HRoI feature is then fed into a fully connected layer and decoded into the corresponding coarse RRoI (a minimal sketch of this decoding step is given after the module descriptions).
KLD-loss-based RCNN module: similar to the RoI Transformer module, rotated RoI Pooling, rotated RoI warping, or rotated RoI Align is applied to RRoIs of different sizes to obtain fixed-size RoI features, which are fed into fully connected layers for classification and finer bounding-box regression; the bounding-box regression is guided by the KLD loss, and the final detection results are output.
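The HRoI-to-RRoI decoding step can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the offset head, the decoding rule, and the use of torchvision's roi_align are assumptions made purely for illustration.

import torch
import torch.nn as nn
from torchvision.ops import roi_align

class RRoILearner(nn.Module):
    """Decodes horizontal RoIs (HRoIs) into coarse rotated RoIs (RRoIs)."""
    def __init__(self, in_channels=256, roi_size=7):
        super().__init__()
        self.roi_size = roi_size
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * roi_size * roi_size, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 5),            # offsets (dx, dy, dw, dh, dtheta)
        )

    def forward(self, feats, hrois):
        # feats: (N, C, H, W) feature map; hrois: (K, 5) rows of (batch_idx, x1, y1, x2, y2)
        pooled = roi_align(feats, hrois, output_size=self.roi_size, spatial_scale=1.0)
        deltas = self.head(pooled)         # offsets predicted for each HRoI
        x1, y1, x2, y2 = hrois[:, 1], hrois[:, 2], hrois[:, 3], hrois[:, 4]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        w, h = x2 - x1, y2 - y1
        # decode the offsets into a rotated box (cx, cy, w, h, theta)
        rrois = torch.stack([
            cx + deltas[:, 0] * w,
            cy + deltas[:, 1] * h,
            w * torch.exp(deltas[:, 2]),
            h * torch.exp(deltas[:, 3]),
            deltas[:, 4],                  # coarse angle in radians
        ], dim=1)
        return rrois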
1.2 High-resolution network
To improve the adaptability of the detection network to objects of different scales, the high-resolution network HRNet is adopted as the backbone in place of ResNet. The basic structure of HRNet is shown in Figure 3.
Figure 3. Structure diagram of HRNet [18]
The main characteristic of this model is that the feature maps remain high-resolution throughout the whole process: low-resolution sub-networks are gradually added in parallel to the high-resolution main network, and information is continuously exchanged among the different branches, so that strong semantic information and precise location information are maintained simultaneously. In the basic structure of the RoI Transformer network, FPN (feature pyramid networks), an important part of feature extraction, fuses the low-resolution, semantically strong deep features with the high-resolution, semantically weak shallow features in a top-down manner, so that the features at all levels are enhanced.
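The top-down fusion described above can be sketched as follows. This is a generic FPN-style fusion in PyTorch, not the exact HRNet+FPN neck used in the paper, and the branch channel widths (32, 64, 128, 256) are only an assumption in the spirit of HRNet-W32.

import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    """Top-down fusion of multi-resolution backbone features."""
    def __init__(self, in_channels=(32, 64, 128, 256), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats):
        # feats: backbone outputs ordered from high resolution (shallow) to low resolution (deep)
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # top-down path: upsample the semantically strong deep map and add it to the shallower one
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]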
1.3 Joint parameter optimization based on KLD
Although the RoI Transformer achieves good efficiency and accuracy in oriented object detection, the angular periodicity introduced by its oriented-object representation leads to angle boundary discontinuity (Figure 4) and the square-like problem (Figure 5).
Figure 4. Schematic diagram of angle boundary discontinuity
Figure 5. Schematic diagram of square-like problem
1.3.1 Angular periodicity of the oriented-object representation
For square-like objects (Figure 5), the long-edge definition of the rotated box cannot determine the angle uniquely: almost the same box can be parameterized with angles that differ by 90°, so the regression loss between two nearly coincident boxes can still be large.
1.3.2 KLD loss
To solve the angular periodicity of the original RoI Transformer object representation, the KLD loss is introduced into the RoI Transformer framework. First, the rotated box (x, y, w, h, θ) that represents the object is converted into a two-dimensional Gaussian distribution N(μ, Σ) with

μ = (x, y)ᵀ, Σ = RΛRᵀ

where (x, y) is the box centre, w and h are the width and height, θ is the rotation angle, R is the rotation matrix corresponding to θ, and Λ = diag(w²/4, h²/4).
This Gaussian representation has three properties.

Property 1: Σ(w, h, θ) = Σ(h, w, θ + π/2), i.e., exchanging the two edges together with a 90° rotation leaves the distribution unchanged.

Property 2: Σ(w, h, θ) = Σ(w, h, θ + π), i.e., the representation is periodic in the angle with period π.

Property 3: when w ≈ h, Σ is almost independent of θ.

According to Property 1, the long/short-edge exchange problem caused by the OpenCV representation of oriented objects is avoided. According to Properties 2 and 3, the square-like problem caused by the long-edge definition is also resolved. In summary, because the Gaussian representation depends on the angle only through trigonometric functions, the angular periodicity is eliminated and the representation is continuous at the angle boundaries.
The Gaussian distributions corresponding to the predicted box and the ground truth are denoted N(μ_p, Σ_p) and N(μ_t, Σ_t), respectively, and their Kullback-Leibler divergence is

D_kl(N_p ∥ N_t) = ½[(μ_p − μ_t)ᵀ Σ_t⁻¹ (μ_p − μ_t) + tr(Σ_t⁻¹ Σ_p) + ln(|Σ_t| / |Σ_p|) − 2]

Clearly, each term of D_kl couples the centre, size, and angle parameters, so the five parameters of the rotated box are optimized jointly rather than independently.
Finally, to keep the regression loss consistent with the evaluation measure, a nonlinear transformation is applied to D_kl to obtain the regression loss

L_reg = 1 − 1/(τ + f(D_kl))

where f(·) is a nonlinear function of the divergence (e.g., f(D) = ln(D + 1)) and τ is a hyperparameter that modulates the loss.

The above analysis shows that the KLD-based loss guarantees joint optimization of the rotated-box parameters (x, y, w, h, θ) and boundary continuity of the angle, and therefore improves the localization accuracy of oriented objects.
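As a concrete illustration of the conversion and loss above, the following NumPy sketch converts rotated boxes to Gaussians and evaluates the KLD-based loss. It is illustrative only; the experiments in this paper rely on the mmrotate implementation of the KLD loss, and f(D) = ln(D + 1) with τ = 1 is an assumed choice of the nonlinear transform.

import numpy as np

def box_to_gaussian(box):
    """Convert a rotated box (cx, cy, w, h, theta in radians) into a 2-D Gaussian (mu, sigma)."""
    cx, cy, w, h, theta = box
    mu = np.array([cx, cy], dtype=float)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    lam = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])
    return mu, rot @ lam @ rot.T

def kld_loss(pred, target, tau=1.0):
    """KL divergence between the predicted and target Gaussians, mapped to a bounded loss."""
    mu_p, sig_p = box_to_gaussian(pred)
    mu_t, sig_t = box_to_gaussian(target)
    sig_t_inv = np.linalg.inv(sig_t)
    d_mu = (mu_p - mu_t).reshape(2, 1)
    kld = 0.5 * ((d_mu.T @ sig_t_inv @ d_mu).item()
                 + np.trace(sig_t_inv @ sig_p)
                 + np.log(np.linalg.det(sig_t) / np.linalg.det(sig_p))
                 - 2.0)
    return 1.0 - 1.0 / (tau + np.log1p(kld))   # nonlinear transform f(D) = ln(D + 1)

# A nearly square box written with theta = 0, and the same box with its edges swapped and
# theta = 90 deg, map to the same Gaussian, so the loss is (numerically) zero: the square-like
# ambiguity no longer produces a large regression loss.
print(kld_loss((50, 50, 40, 41, 0.0), (50, 50, 41, 40, np.pi / 2)))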
2 Experiments and discussion
2.1 Datasets
This work uses two public datasets annotated with oriented bounding boxes: DOTAv1.0 [22] and DIOR-R.
2.2 Evaluation metrics
The detection results are evaluated mainly in terms of precision (P), recall (R), mean average precision (mAP), and detection speed. Precision and recall are defined as

P = TP / (TP + FP), R = TP / (TP + FN)

where TP (true positives) is the number of correctly detected objects, FP (false positives) is the number of false detections, and FN (false negatives) is the number of missed objects.
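A tiny worked example of these definitions (the counts below are made up for illustration, not results from the paper's experiments):

def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive and false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

print(precision_recall(tp=90, fp=10, fn=20))   # -> (0.9, 0.8181...)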
2.3 Implementation details
The experiments were run on an Intel i9-10920X CPU with 256 GB of memory and four NVIDIA GeForce RTX 2080Ti GPUs, and the models were implemented and trained on the mmrotate platform [25].
For the DOTAv1.0 dataset, the original images of the training and validation sets are cropped into 1024×1024-pixel patches with a stride of 824 (an overlap of 200 pixels is kept between adjacent patches so that objects are not split at the patch boundaries). For the DIOR-R dataset, the images are kept at their original size of 800×800 pixels.
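The cropping scheme can be sketched as a simple sliding window; the function below is an illustration only (the standard DOTA/mmrotate splitting tools are normally used for this step), with an extra border tile added so the right and bottom edges are fully covered.

import numpy as np

def crop_patches(image, patch=1024, stride=824):
    """Crop overlapping patch x patch tiles; overlap = patch - stride (here 200 pixels)."""
    h, w = image.shape[:2]
    ys = list(range(0, max(h - patch, 0) + 1, stride))
    xs = list(range(0, max(w - patch, 0) + 1, stride))
    if ys[-1] + patch < h:               # make sure the bottom border is covered
        ys.append(h - patch)
    if xs[-1] + patch < w:               # make sure the right border is covered
        xs.append(w - patch)
    return [((x, y), image[y:y + patch, x:x + patch]) for y in ys for x in xs]

tiles = crop_patches(np.zeros((3000, 4000, 3), dtype=np.uint8))
print(len(tiles), tiles[0][1].shape)     # -> 20 (1024, 1024, 3)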
The training patches are preprocessed with a set of data-augmentation operations, including image normalization, random flipping, and random cropping, before being fed into the model for training. In the DOTAv1.0 experiments, the model is trained on the training set and evaluated on the validation set; for DIOR-R, the model is trained on the trainval set and evaluated on the test set.
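On the mmrotate platform such preprocessing is usually declared as a training pipeline in the config file. The snippet below is a sketch in the mmrotate 0.x config style, not the authors' actual configuration; the transform names and parameter values are assumptions.

# Image normalization statistics commonly used with ImageNet-pretrained backbones
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='RResize', img_scale=(1024, 1024)),       # keep the cropped patch size
    dict(type='RRandomFlip', flip_ratio=0.5),           # random flipping of rotated boxes
    dict(type='Normalize', **img_norm_cfg),             # image normalization
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]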
2.4 Analysis of experimental results
Table. Comparison of detection results of RoI Transformer [3] and other methods on the DOTAv1.0 dataset
The adaptability of the HRD-ROI Transformer is evaluated on the DIOR-R dataset. According to the characteristics of DIOR-R, the input image size of the model used for DOTAv1.0 is adjusted to 800×800 pixels and the number of object categories to 20, and the model is retrained and tested on DIOR-R. The results are shown in the following table.
Table. Comparison of detection results of different methods on the DIOR-R dataset
Although ReDet uses ReResNet to extract rotation-invariant features, its high-resolution features carry weak semantic information, so its performance on small objects is poor. In contrast, the HRNet used in our method maintains high-resolution representations with strong semantic information, which improves the robustness of the network to objects of various scales. Figure 6 and Figure 7 compare the false-detection and missed-detection cases, respectively.
Figure 6. Comparison of detection results (false detection)
Figure 7. Comparison of detection results (missed detection)
In addition, the RoI Transformer does not localize objects with large aspect ratios precisely enough. Figure 8 compares the detection results for such objects.
Figure 8. Comparison of detection results (objects of large aspect ratios)
2.5 Ablation study
Ablation experiments are used to test the influence of the KLD loss function and HRNet on model performance separately, and to compare the performance of three loss functions used for oriented object detection: GWD, KLD, and KFIoU.
Model (a) is the RoI Transformer framework with only the Smooth L1 loss replaced by the KLD loss; model (b) is the RoI Transformer framework combined with the HRNet feature-extraction network; model (c) is the proposed HRD-ROI Transformer.
Table. Ablation results of the KLD loss and HRNet on the DOTAv1.0 and DIOR-R datasets
On the DOTAv1.0 dataset, the RoI Transformer achieves an mAP of 68.8%, while model (a), which only uses the KLD loss, reaches 70.3%, and model (b), which only uses HRNet, reaches 71.7%, improvements of 1.5% and 2.9% over the RoI Transformer, respectively. This shows that both components contribute to the final detection results. Model (c), which combines the KLD loss and HRNet, further raises the mAP to 72.5%. These results fully verify the effectiveness of the KLD loss and HRNet. On the DIOR-R dataset, the mAPs of models (a), (b), and (c) are 0.8%, 3.2%, and 4% higher than that of the original RoI Transformer, respectively, which also verifies the adaptability of the proposed model.
A comparison of the detection results of model (a) and the original RoI Transformer is shown in Figure 9.
Figure 9. Effectiveness of KLD on DIOR-R dataset
The performance of the GWD, KLD, and KFIoU loss functions is compared in the following table.
Table. Performance comparison of the GWD, KLD, and KFIoU loss functions
2.6 Analysis of false detections of the HRD-ROI Transformer
Figure 10. Detection results of airport
Figure 11. Detection results of golf course
3 Conclusion
This paper proposed the HRD-ROI Transformer, a multi-scale oriented object detection method for remote sensing images based on the RoI Transformer. The method adopts HRNet as the backbone network, which improves the adaptability of the model to object-scale variation and outperforms existing typical oriented object detection methods on small objects. In addition, the introduced KLD loss jointly optimizes the oriented bounding-box parameters, which improves the detection accuracy for oriented objects, especially those with large aspect ratios. Comparative experiments on two public datasets demonstrate that the HRD-ROI Transformer adapts to object-scale variation, resolves the angular periodicity problem, and outperforms current mainstream methods in oriented object detection accuracy.
The proposed method performs less well on the airport (APO) and golf course (GF) categories of the DIOR-R dataset. In future work, data augmentation tailored to the characteristics of these categories will be applied, and SAM (segment anything model) will be embedded into the detection model.
[1] LIU L, OUYANG W, WANG X G et al. Deep learning for generic object detection: a survey[J]. International Journal of Computer Vision, 128, 261-318(2020).
[2] FU Changhong, CHEN Kunhui, LU Kunhan et al. Aviation fastener rotation detection for intelligent optical perception with edge computing[J]. Journal of Applied Optics, 43, 472-480(2022).
[3] DING J, XUE N, LONG Y et al. Learning RoI transformer for oriented object detection in aerial images[C], 2849-2858(2019).
[4] QIAN W, YANG X, PENG S L et al. Learning modulated loss for rotated object detection[C], 2458-2466(2021).
[5] MA J Q, SHAO W Y, YE H et al. Arbitrary-oriented scene text detection via rotation proposals[J]. IEEE Transactions on Multimedia, 20, 3111-3122(2018).
[6] XIE X X, CHENG G, WANG J B et al. Oriented R-CNN for object detection[C], 3520-3529(2021).
[7] HAN J M, DING J, XUE N et al. ReDet: a rotation-equivariant detector for aerial object detection[C], 2786-2795(2021).
[8] HE K M, ZHANG X Y, REN S Q et al. Deep residual learning for image recognition[C], 770-778(2016).
[9] YANG X, YAN J C, MING Q et al. Rethinking rotated object detection with Gaussian Wasserstein distance loss[C], 11830-11841(2021).
[11] YANG X, YAN J C, FENG Z M et al. R3Det: refined single-stage detector with feature refinement for rotating object[C], 3163-3171(2021).
[12] HOU L, LU K, XUE J et al. Shape-adaptive selection and measurement for oriented object detection[C], 923-932(2022).
[13] LI W, CHEN Y, HU K et al. Oriented reppoints for aerial object detection[C], 1829-1838(2022).
[14] WU Liequan, ZHOU Zhifeng, ZHU Zhiling et al. Surface defect detection of patch diode based on improved YOLO-V4[J]. Journal of Applied Optics, 44, 621-627(2023).
[15] YANG X, YANG X J, YANG J R et al. Learning high-precision bounding box for rotated object detection via Kullback-Leibler divergence[C], 18381-18394(2021).
[18] WANG J D, SUN K, CHENG T S et al. Deep high-resolution representation learning for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43, 3349-3364(2021).
[19] CAO Jiale, LI Yali, SUN Hanqing et al. A survey on deep learning based visual object detection[J]. Journal of Image and Graphics, 27, 1697-1722(2022).
[20] YANG X, YAN J C, LIAO W L et al. SCRDet++: detecting small, cluttered and rotated objects via instance-level feature denoising and rotation loss smoothing[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45, 2384-2399(2023).
[21] HAN J, DING J, LI J et al. Align deep features for oriented object detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-11(2022).
[22] XIA G S, BAI X, DING J et al. DOTA: a large-scale dataset for object detection in aerial images[C], 3974-3983(2018).
[23] CHENG G, WANG J B, LI K et al. Anchor-free oriented proposal generator for object detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-11(2022).
[24] LI K, WAN G, CHENG G et al. Object detection in optical remote sensing images: a survey and a new benchmark[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 159, 296-307(2020).
[25] ZHOU Y, YANG X, ZHANG G F et al. MMRotate: a rotated object detection benchmark using PyTorch[C], 7331-7334(2022).
[28] LI J, GONG Y X, MA Z et al. Enhancing feature fusion using attention for small object detection[C], 1859-1863(2022).
[29] YUAN Y, ZHANG Y L. OLCN: an optimized low coupling network for small objects detection[J]. IEEE Geoscience and Remote Sensing Letters, 19, 1-5(2021).
Minhao LIU, Kun WANG, Ruijiao JIN, Tian LU, Zhang LI. Multi-scale oriented object detection based on improved RoI Transformer in remote sensing images[J]. Journal of Applied Optics, 2023, 44(5): 1010
Category: Research Articles
Received: Jul. 7, 2023
Accepted: --
Published Online: Mar. 12, 2024
The Author Email: Zhang LI (李璋)