基于不变特征的多源遥感图像舰船目标检测算法
Invariant Features Based Ship Detection Model for Multi-source Remote Sensing Images
Abstract
Due to domain shift, ship detection in multi-source data suffers from image style variations introduced by different source sensors. In addition, training a dedicated model for each data source consumes substantial computational resources, which severely limits practical application in military and civilian fields. Designing a universal network that can effectively detect ship targets across multi-source remote sensing images has therefore become a research hotspot. To this end, this paper proposes a universal ship detection method based on invariant features, which builds a universal remote sensing object detection network by fully exploiting the knowledge shared among multi-source data. The method consists of two parts: an image-level style transfer network and a feature-level domain adaptation network. Specifically, the former uses a style transfer network to generate pseudo-multi-source images close to the real distribution, narrowing the gap between the distributions of multi-source data and learning their invariant features at the image level. To learn invariant features at the feature level, the latter decouples the multi-source features through an adaptation network and recombines them through the adaptive weight allocation of a domain attention network. Experiments on the NWPU VHR-10, SSDD, HRSC, and SAR-Ship-Dataset datasets show that, by exploiting the complementary information between invariant features, the proposed method alleviates the domain shift problem and can effectively detect targets in multi-source remote sensing data. It achieves an average mAP of 90.8% on these datasets, exceeding existing mainstream ship detection methods by 1.4% to 10.6%.
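The feature-level recombination step described above — adaptive weight allocation over decoupled per-domain features — can be illustrated with a minimal NumPy sketch. All names, shapes, and the linear scoring layer here are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def domain_attention_recombine(features, w, b):
    """Recombine decoupled per-domain features with adaptive weights.

    features: (n_domains, channels) decoupled feature vectors, one per source domain
    w, b:     parameters of a hypothetical scoring layer, shape (channels,) and scalar
    Returns one (channels,) vector: a convex combination of the per-domain features.
    """
    scores = features @ w + b   # one relevance score per domain
    weights = softmax(scores)   # adaptive weight allocation (weights sum to 1)
    return weights @ features   # weighted recombination across domains

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8))   # e.g. optical, SAR, and pseudo-source features
w, b = rng.normal(size=8), 0.0
fused = domain_attention_recombine(feats, w, b)
print(fused.shape)  # (8,)
```

Because the weights are a softmax, the fused feature always lies in the convex hull of the per-domain features, so no single source can be scaled outside the range the inputs span.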
Source
Acta Electronica Sinica (电子学报), 2022, 50(4): 887-899 [Core Collection]

DOI
10.12263/DZXB.20210842
Keywords
ship detection; remote sensing images; deep learning; style transfer; domain adaptation
Affiliations
1. State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, Shaanxi 710071, China
2. Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Language
Chinese

Document type
Research article

ISSN
0372-2112

Subject
Electronic technology, communication technology

Funding
National Natural Science Foundation of China; Key Research and Development Program of Shaanxi Province; Shaanxi Innovative Talents Promotion Plan; Fundamental Research Funds for the Central Universities

Record ID
CSCD:7190615
References (38 in total; first 20 listed)
1. 种劲松. A survey of ship and wake detection in SAR images. Acta Electronica Sinica (电子学报), 2003, 31(9): 1356-1360. [Cited 26 times in CSCD]
2. Wang C. An intensity-space domain CFAR method for ship detection in HR SAR images. IEEE Geoscience and Remote Sensing Letters, 2017, 14(4): 529-533. [Cited 20 times in CSCD]
3. Pappas O. Superpixel-level CFAR detectors for ship detection in SAR images. IEEE Geoscience and Remote Sensing Letters, 2018, 15(9): 1397-1401. [Cited 13 times in CSCD]
4. Huo W. Ship detection from ocean SAR image based on local contrast variance weighted information entropy. Sensors, 2018, 18(4): 1196. [Cited 6 times in CSCD]
5. 王明春. CFAR detection of ship targets at sea in polarimetric SAR images based on whitening filtering under the Beta distribution. Acta Electronica Sinica (电子学报), 2019, 47(9): 77-84. [Cited 1 time in CSCD]
6. Nie J. Enriched feature guided refinement network for object detection. Proceedings of the IEEE International Conference on Computer Vision, 2019: 9537-9546. [Cited 2 times in CSCD]
7. Zhou X. Objects as points. 2019. [Cited 35 times in CSCD]
8. Ren S. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, 39(6): 1137-1149. [Cited 4541 times in CSCD]
9. Pang J. Libra R-CNN: Towards balanced learning for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 821-830. [Cited 1 time in CSCD]
10. Zhu J Y. Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, 2017: 2223-2232. [Cited 168 times in CSCD]
11. Girshick R. Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision, 2015: 1440-1448. [Cited 691 times in CSCD]
12. Lin T Y. Feature pyramid networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2117-2125. [Cited 1 time in CSCD]
13. Jiang B. Acquisition of localization confidence for accurate object detection. Proceedings of the European Conference on Computer Vision, 2018: 784-799. [Cited 6 times in CSCD]
14. Lin T Y. Focal loss for dense object detection. Proceedings of the IEEE International Conference on Computer Vision, 2017: 2980-2988. [Cited 415 times in CSCD]
15. Redmon J. You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788. [Cited 2 times in CSCD]
16. Redmon J. YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 7263-7271. [Cited 1 time in CSCD]
17. Redmon J. YOLOv3: An incremental improvement. 2018. [Cited 688 times in CSCD]
18. Bochkovskiy A. YOLOv4: Optimal speed and accuracy of object detection. 2020. [Cited 4 times in CSCD]
19. Law H. CornerNet: Detecting objects as paired keypoints. Proceedings of the European Conference on Computer Vision, 2018: 734-750. [Cited 26 times in CSCD]
20. Cui Z. Dense attention pyramid networks for multi-scale ship detection in SAR images. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(11): 8983-8997. [Cited 30 times in CSCD]