A Method of Erect Rail Barricade Recognition Based on Forward-Looking 3D Sonar

LI Bao-qi, REN Lu-lu, CHEN Fa, QIAN Bin, HUANG Hai-ning

Citation: LI Bao-qi, REN Lu-lu, CHEN Fa, QIAN Bin, HUANG Hai-ning. A Method of Erect Rail Barricade Recognition Based on Forward-Looking 3D Sonar[J]. Journal of Unmanned Undersea Systems, 2022, 30(6): 747-753. doi: 10.11993/j.issn.2096-3920.2022-0016


doi: 10.11993/j.issn.2096-3920.2022-0016
Article Information
    About the author:

    LI Bao-qi (1985-), male, Ph.D., associate researcher; his main research interests are underwater acoustic signal processing and target detection, recognition, and tracking.

  • CLC number: TJ630.1; U644


  • Abstract: To address the difficulty of detecting and recognizing erect rail barricade targets, this paper uses a forward-looking 3D imaging sonar to improve detection performance and designs a 3D point cloud target recognition method based on the single shot detector (SSD), termed PCSSD. The method first applies threshold filtering and pass-through filtering to the raw beam-domain data; the filtered 3D point cloud is then forward-projected to obtain a depth grayscale image and a pseudo-color depth image; next, an SSD object detection model detects and recognizes targets in the pseudo-color depth image; the depth range of each target is then computed from the detected target features in the depth grayscale image; finally, the erect rail barricade targets in the 3D point cloud are labeled by combining the 2D detection results with the depth range. In addition, a feature extraction module based on a multi-scale attention mechanism is proposed, and an improved object detection model, SSD-MV3ME, is designed with it. On the 3D point cloud erect rail barricade detection dataset GTZ, SSD-MV3ME improves detection accuracy by 1.05% over the lightweight detection model SSD-MV3 while reducing the model parameters by 2 482 KB, with essentially the same detection time. The experimental results show that PCSSD based on SSD-MV3ME is better suited to the erect rail barricade recognition task.
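The abstract describes the PCSSD preprocessing and labeling chain only in prose. The block below is a minimal NumPy sketch, under assumed settings, of how such a chain could be organized: threshold filtering of weak echoes, pass-through filtering on the depth axis, forward projection to a depth grayscale image and a pseudo-color rendering, and recovery of a per-target depth range from a 2D detection box. The function names, amplitude threshold, depth window, grid resolution, and the toy colormap are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def pcssd_preprocess(points, amplitudes,
                     amp_threshold=0.2,         # assumed echo-amplitude threshold
                     depth_limits=(1.0, 30.0),  # assumed pass-through window on z (m)
                     grid_shape=(256, 256)):
    """Filter a beam-domain point cloud and project it to depth images.

    points     : (N, 3) array of (x, y, z) coordinates
    amplitudes : (N,) echo amplitudes in [0, 1]
    """
    # 1) Threshold filtering: drop weak echoes.
    pts = points[amplitudes >= amp_threshold]

    # 2) Pass-through filtering: keep points inside the depth window.
    z_min, z_max = depth_limits
    pts = pts[(pts[:, 2] >= z_min) & (pts[:, 2] <= z_max)]

    # 3) Forward projection: map (x, y) to pixels, keep the nearest return per pixel.
    h, w = grid_shape
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    u = ((x - x.min()) / (np.ptp(x) + 1e-6) * (w - 1)).astype(int)
    v = ((y - y.min()) / (np.ptp(y) + 1e-6) * (h - 1)).astype(int)
    depth = np.zeros(grid_shape, dtype=np.float32)
    order = np.argsort(-z)                      # far points first, near points overwrite
    depth[v[order], u[order]] = z[order]

    # 4) Depth grayscale image and a toy pseudo-color rendering.
    gray = np.zeros_like(depth)
    valid = depth > 0
    gray[valid] = 255.0 * (depth[valid] - z_min) / (z_max - z_min)
    gray = gray.astype(np.uint8)
    pseudo = np.stack([gray, 255 - gray, np.full_like(gray, 128)], axis=-1)
    return pts, gray, pseudo

def depth_range_in_box(gray, box, depth_limits=(1.0, 30.0)):
    """Recover a target's depth range from the grayscale patch inside a 2D box."""
    x1, y1, x2, y2 = box
    z_min, z_max = depth_limits
    patch = gray[y1:y2, x1:x2].astype(np.float32)
    vals = patch[patch > 0] / 255.0 * (z_max - z_min) + z_min
    return (vals.min(), vals.max()) if vals.size else None
```

In this reading of the pipeline, the SSD-based detector runs on `pseudo`, and `depth_range_in_box` turns each 2D detection into a depth slab that selects the corresponding erect rail barricade points in `pts` for labeling.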

     

  • Figure  1.  Schematic diagram of PCSSD

    Figure  2.  Data structure of beam domain

    Figure  3.  Threshold filtered data

    Figure  4.  Pass-through filtered data

    Figure  5.  Depth grayscale image

    Figure  6.  Pseudo color depth image

    Figure  7.  MEIRB module

    Figure  8.  SSD-MV3ME model

    Figure  9.  Deployment of erect rail barricades

    Figure  10.  Detection results of erect rail barricades

    Table  1.   Dataset of erect rail barricades for target detection

                           Training    Validation    Test
    Number of images          444          140         88
    Number of targets       1 128          346        203

    Table  2.   Performance comparison of target detection models

    Model        Attention    mAP/%    Parameter size/kB    Time/ms
    SSD-MV3      SE           87.08        10 825            12.96
    SSD-MV3E     ECA          87.58         8 335            11.96
    SSD-MV3ME    MECA         88.13         8 343            13.28

    Table  3.   Effect of the number of MECA multi-scale convolution kernels on the performance of SSD-MV3ME

    Number of kernels    Attention    mAP/%    Parameter size/kB    Time/ms
    1                    MECA         87.58         8 335            11.96
    2                    MECA         87.60         8 339            12.79
    3                    MECA         88.13         8 343            13.28
    4                    MECA         88.26         8 347            13.94
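Table 3 varies only the number of multi-scale convolution kernels in MECA, which the page does not otherwise specify. The sketch below is one plausible PyTorch rendering of a multi-scale variant of efficient channel attention (ECA [19]): several 1D convolutions with different kernel sizes are applied to the pooled channel descriptor and averaged before the sigmoid gate. The kernel sizes and the averaging fusion are assumptions, not the authors' MECA definition.

```python
import torch
import torch.nn as nn

class MultiScaleECA(nn.Module):
    """Hypothetical multi-scale efficient channel attention (MECA-like) block."""

    def __init__(self, kernel_sizes=(3, 5, 7)):   # number/sizes of kernels are assumed
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
             for k in kernel_sizes]
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                          # x: (B, C, H, W)
        y = self.pool(x)                           # (B, C, 1, 1) global descriptor
        y = y.squeeze(-1).transpose(1, 2)          # (B, 1, C)
        y = torch.stack([conv(y) for conv in self.convs]).mean(dim=0)
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1) channel weights
        return x * y                               # reweight input channels

if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(MultiScaleECA(kernel_sizes=(3, 5, 7))(feat).shape)  # torch.Size([2, 64, 32, 32])
```

Under this reading, the rows of Table 3 would correspond to using 1 to 4 entries in `kernel_sizes`.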
  • [1] Bao Xiao-heng, Du Yong-guo, Bai Shu-xin. Design on a New Smart Breaching Bomb for Rail Obstacles[J]. Engineer Equipment Research, 2006, 25(1): 5-8.
    [2] Yang Zi-qing, Zhang Wan. Status Quo and Characteristics of Foreign Engineer Support Equipment during Assault Landing Operation[J]. Engineer Equipment Research, 2006, 25(1): 56-59.
    [3] Zhang Shao-guang, Sun Yan-feng, Jiang Hong-tao. Development and Research on Anti-landing Mine Obstacles of Taiwan Army[J]. Engineer Equipment Research, 2007, 26(4): 57-60.
    [4] Leng Hua-ping, Zhou Shi-hua, Xiao Li-hui. Research on Construction of Amphibious Landing Force Command Information System[J]. Command Control & Simulation, 2015, 37(1): 11-14.
    [5] Cheng Ya-nan, Liu Xiao-dong, Zhang Dong-sheng. Influence of Sensor Uncertainty on the Direction Estimation of 3D Forward-looking Sonar[J]. Journal of Network New Media, 2020, 9(5): 15-24. doi: 10.3969/j.issn.2095-347X.2020.05.003
    [6] LeCun Y, Bengio Y, Hinton G. Deep Learning[J]. Nature, 2015, 521(7553): 436-444. doi: 10.1038/nature14539
    [7] Kwok R. Deep Learning Powers a Motion-Tracking Revolution[J]. Nature, 2019, 574(7776): 137-138. doi: 10.1038/d41586-019-02942-5
    [8] Zhou Y, Tuzel O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018: 4490-4499.
    [9] Yan Y, Mao Y, Li B. SECOND: Sparsely Embedded Convolutional Detection[J]. Sensors, 2018, 18(10): 3337. doi: 10.3390/s18103337
    [10] Lang A H, Vora S, Caesar H, et al. PointPillars: Fast Encoders for Object Detection from Point Clouds[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019: 12697-12705.
    [11] Yang Z, Sun Y, Liu S, et al. 3DSSD: Point-based 3D Single Stage Object Detector[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020: 11040-11048.
    [12] Redmon J, Divvala S, Girshick R, et al. You Only Look Once: Unified, Real-time Object Detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016: 779-788.
    [13] Ren S Q, He K M, Girshick R, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. doi: 10.1109/TPAMI.2016.2577031
    [14] Liu W, Anguelov D, Erhan D, et al. SSD: Single Shot Multibox Detector[C]//European Conference on Computer Vision. Cham: Springer, 2016: 21-37.
    [15] Howard A G, Zhu M L, Chen B, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications[J/OL]. ArXiv Preprint (2017-04-17)[2022-09-22]. https://arxiv.org/abs/1704.04861.
    [16] Sandler M, Howard A, Zhu M, et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018: 4510-4520.
    [17] Hu J, Shen L, Sun G. Squeeze-and-excitation Networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018: 7132-7141.
    [18] Howard A, Sandler M, Chu G, et al. Searching for MobileNetV3[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul, South Korea: IEEE, 2019: 1314-1324.
    [19] Wang Q, Wu B, Zhu P, et al. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020.
Publication history
  • Received:  2022-07-20
  • Revised:  2022-09-20
  • Accepted:  2022-11-02
  • Published online:  2022-11-07
