To meet the need for navigation position calibration during long-term under-ice cruising of underwater vehicles, this paper proposes a method that computes the imaging error of Aruco marker corners from the navigation position and corrects the navigation position by back-propagating that error. During AUV navigation, the method uses the marker imaging errors observed in multiple frames to calibrate the navigation solution, effectively handling the corner-detection errors caused by brightness fluctuations of under-ice marker imaging. Simulation results show that the average error of the proposed method is only 21.7% of that of IPPE, the positioning method commonly used with Aruco markers. In the pool test and the field trial, the pose-estimation error of the proposed method is 66% and 36.6% of IPPE's, respectively, showing higher stability. In addition, the method is parallelized for GPU platforms, with an average convergence time of 0.018 s, giving good real-time performance.
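The correction loop described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the marker size, camera intrinsics, the finite-difference gradient, and the step-size and iteration settings are all assumptions, and the paper's analytic back-propagation and GPU parallelization are not reproduced here. The sketch projects the known 3-D corners of a square Aruco marker with the translation implied by the navigation estimate (OpenCV's cv2.projectPoints), accumulates the pixel reprojection error over several frames of detected corners, and descends its gradient to refine the position.

import numpy as np
import cv2

MARKER_SIZE = 0.20  # assumed marker side length in metres
# Corners of a square marker centred at the origin of the marker plane (z = 0).
OBJ_PTS = 0.5 * MARKER_SIZE * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float64)

def reprojection_error(t, rvec, corners_px, K, dist):
    # Mean pixel distance between the detected corners and the corners projected
    # with the translation t implied by the navigation estimate (camera frame).
    proj, _ = cv2.projectPoints(OBJ_PTS, rvec, t.reshape(3, 1), K, dist)
    return float(np.mean(np.linalg.norm(proj.reshape(-1, 2) - corners_px, axis=1)))

def correct_position(t0, rvec, frames, K, dist, lr=1e-3, iters=200, eps=1e-5):
    # Refine the translation by descending the reprojection error accumulated
    # over several frames; 'frames' is a list of detected 4x2 corner arrays.
    t = np.asarray(t0, dtype=np.float64).copy()
    for _ in range(iters):
        grad = np.zeros(3)
        for corners_px in frames:
            base = reprojection_error(t, rvec, corners_px, K, dist)
            for k in range(3):  # finite-difference gradient along each axis
                dt = t.copy()
                dt[k] += eps
                grad[k] += (reprojection_error(dt, rvec, corners_px, K, dist) - base) / eps
        t -= lr * grad / len(frames)  # gradient step on the position estimate
    return t

For the IPPE baseline named in the abstract, OpenCV exposes the algorithm through cv2.solvePnP with the cv2.SOLVEPNP_IPPE_SQUARE flag, which is the usual single-frame pose solver for square fiducial markers.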
2025, 47(22): 94-101    Received: 2025-2-11
DOI:10.3404/j.issn.1672-7649.2025.22.014
CLC number: U666.11
Funding: National Key Research and Development Program of China (2021YFC2801101)
About the author: FAN Hu'ao (樊户傲, b. 2000), male, master's degree, research interest: underwater visual positioning
References:
[1] HUANG X T, JIA F X, XIAO Z H, et al. The current status and key technologies of autonomous underwater vehicles in polar applications[J]. Ship Science and Technology, 2024, 46(16): 1-9.
[2] SONG D Y, LIU H. Present status and key technology of autonomous underwater vehicle for investigation in polar region[J]. Marine Electric & Electronic Engineering, 2020, 40(9): 36-39.
[3] DING W D, XU D, LIU X L, et al. A survey on visual odometry for mobile robots[J]. Acta Automatica Sinica, 2018, 44(3): 385-400.
[4] PAN G, LIANG A H, LIU J, et al. 3-D Positioning system based QR code and monocular vision[C]//2020 5th International Conference on Robotics and Automation Engineering (ICRAE). 2020.
[5] ASWAD M F, RUSMIN P H, FATIMAH R N. Marker-based detection and pose estimation of custom pallet using camera and laser rangefinder[C]//2023 International Seminar on Intelligent Technology and Its Applications (ISITIA). Surabaya, Indonesia, 2023.
[6] UZUNOGLU M, ŞAHİN R B, MERCİMEK M. Vision-based position estimation with markers for quadrotors[C]//2022 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA). Ankara, Turkey, 2022.
[7] OLSON E. AprilTag: a robust and flexible visual fiducial system[C]//2011 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2011.
[8] KE T, ROUMELIOTIS S I. An efficient algebraic solution to the perspective-three-point problem[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017.
[9] ZHUANG S, ZHAO Z, CAO L, et al. A robust and fast method to the perspective-n-point problem for camera pose estimation[J]. IEEE Sensors Journal, 2023, 23(11): 11892-11906.
[10] LOURAKIS M, TERZAKIS G. A globally optimal method for the PnP problem with MRP rotation parameterization[C]//2020 25th International Conference on Pattern Recognition (ICPR). Milan, Italy, 2021.
[11] RIKU K, MATSUSHIMA K. Solving PnP problem using improved sparrow search algorithm[C]//2024 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). Kaohsiung, Taiwan, 2024.
[12] GARRIDO-JURADO S, MUÑOZ-SALINAS R, MADRID-CUEVAS F J, et al. Automatic generation and detection of highly reliable fiducial markers under occlusion[J]. Pattern Recognition, 2014, 47(6): 2280-2292.
[13] COLLINS T, BARTOLI A. Infinitesimal plane-based pose estimation[J]. International Journal of Computer Vision, 2014, 109(3): 252-286.