Citation: ZHANG Jian, HU Qiao, XIA Yin, SHI Lin, LI Yangyang. Research on Underwater Positioning Method Based on Vision-Inertia-Pressure Fusion[J]. Journal of Unmanned Undersea Systems. doi: 10.11993/j.issn.2096-3920.2024-0061

Research on Underwater Positioning Method Based on Vision-Inertia-Pressure Fusion

doi: 10.11993/j.issn.2096-3920.2024-0061
  • Received Date: 2024-04-03
  • Revised Date: 2024-05-27
  • Accepted Date: 2024-06-04
  • Available Online: 2024-11-11

Abstract: In unstructured underwater environments, robots can rarely rely on external base stations for localization, so autonomous localization through multi-sensor fusion has significant application value. To address the poor stability of visual localization and the large drift of inertial navigation in underwater multi-sensor fusion, this paper proposes a tightly coupled localization method that fuses visual, inertial, and pressure sensors. Graph optimization is used to fuse the sensors, and erroneous visual-inertial data are identified from depth information, improving the quality of the fused data. To counter drift and localization loss during fusion, the depth sensor is used for weight allocation, giving a more refined system initialization, and loop-closure detection and relocalization are introduced to effectively suppress drift and recover from lost localization. Experimental validation shows that the proposed fusion localization algorithm improves accuracy by 48.4% over a visual-inertial fusion baseline, achieving better precision and robustness.
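
The paper's implementation is not reproduced on this page, but two of the ideas in the abstract lend themselves to a short illustration: converting the pressure reading into a depth that constrains the vertical state in the graph optimization, and using that depth to flag erroneous visual-inertial estimates. The Python sketch below is a minimal, hedged rendering of both; the constants, function names, tolerances, and the z-down sign convention are all assumptions made for illustration, not details taken from the paper.

    # Minimal sketch: pressure-derived depth as a unary constraint and as a
    # consistency gate for visual-inertial estimates. All constants are assumed.
    RHO_WATER = 1000.0    # water density in kg/m^3 (~1025 for seawater)
    G = 9.81              # gravitational acceleration in m/s^2
    P_SURFACE = 101325.0  # absolute pressure at the surface in Pa

    def pressure_to_depth(p_abs: float) -> float:
        """Hydrostatic conversion from absolute pressure (Pa) to depth (m, z-down)."""
        return (p_abs - P_SURFACE) / (RHO_WATER * G)

    def depth_residual(z_est: float, p_abs: float, sigma_z: float = 0.02) -> float:
        """Whitened residual between a keyframe's estimated vertical position and
        the pressure-derived depth; one such unary term per keyframe would be
        stacked into the graph-optimization cost alongside the reprojection and
        IMU-preintegration factors."""
        return (pressure_to_depth(p_abs) - z_est) / sigma_z

    def vio_depth_consistent(z_est: float, p_abs: float,
                             gate: float = 3.0, sigma_z: float = 0.02) -> bool:
        """Gate a visual-inertial estimate: treat it as erroneous when its vertical
        component disagrees with the pressure depth by more than `gate` sigmas."""
        return abs(depth_residual(z_est, p_abs, sigma_z)) < gate

    if __name__ == "__main__":
        p = 150_000.0                          # ~4.96 m of fresh water
        print(round(pressure_to_depth(p), 2))  # 4.96
        print(vio_depth_consistent(4.80, p))   # False: off by ~8 sigma at 2 cm

Under these assumptions, sigma_z sets the relative weight of the pressure measurement in the fused cost, which is one plausible reading of the abstract's depth-sensor-based weight allocation.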

     

