Optoelectronics Letters, Volume 21, Issue 5, 290 (2025)
NeOR: neural exploration with feature-based visual odometry and tracking-failure-reduction policy
[1] YAMAUCHI B. A frontier-based approach for autonomous exploration[C]//Proceedings 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA'97. Towards New Computational Principles for Robotics and Automation, July 10-11, 1997, Monterey, USA. New York: IEEE, 1997: 146-151.
[2] CHAPLOT D S, GANDHI D, GUPTA S, et al. Learning to explore using active neural SLAM[EB/OL]. (2020-4-10) [2024-3-11]. https://arxiv.org/pdf/2004.05155.
[3] RAMAKRISHNAN S K, AL-HALAH Z, GRAUMAN K. Occupancy anticipation for efficient exploration and navigation[C]//16th European Conference on Computer Vision, August 23-28, 2020, Glasgow, UK. New York: Springer International Publishing, 2020: 400-418.
[4] BIGAZZI R, LANDI F, CASCIANELLI S, et al. Focus on impact: indoor exploration with intrinsic motivation[J]. IEEE robotics and automation letters, 2022, 7(2): 2985-2992.
[5] GEORGAKIS G, BUCHER B, ARAPIN A, et al. Uncertainty-driven planner for exploration and navigation[C]//2022 International Conference on Robotics and Automation (ICRA), May 23-27, 2022, Philadelphia, USA. New York: IEEE, 2022: 11295-11302.
[6] YANG X, YU C, GAO J, et al. SAVE: spatial-attention visual exploration[C]//2022 IEEE International Conference on Image Processing (ICIP), October 16-19, 2022, Bordeaux, France. New York: IEEE, 2022: 1356-1360.
[7] LIU S, SUGANUMA M, OKATANI T. Symmetry-aware neural architecture for embodied visual navigation[J]. International journal of computer vision, 2023: 1-17.
[8] PARTSEY R, WIJMANS E, YOKOYAMA N, et al. Is mapping necessary for realistic pointgoal navigation?[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 18-24, 2022, New Orleans, USA. New York: IEEE, 2022: 17232-17241.
[9] CAO Y, ZHANG X, LUO F, et al. Unsupervised visual odometry and action integration for pointgoal navigation in indoor environment[J]. IEEE transactions on circuits and systems for video technology, 2023, 33(10): 6173-6184.
[10] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE transactions on robotics, 2017, 33(5): 1255-1262.
[11] ZHANG J, LIU J, CHEN K, et al. Map recovery and fusion for collaborative augmented reality of multiple mobile devices[J]. IEEE transactions on industrial informatics, 2020, 17(3): 2081-2089.
[12] JIN S, MENG Q, DAI X, et al. Safe-nav: learning to prevent pointgoal navigation failure in unknown environments[J]. Complex & intelligent systems, 2022, 8(3): 2273-2290.
[13] NAVEED K, ANJUM M L, HUSSAIN W, et al. Deep introspective SLAM: deep reinforcement learning based approach to avoid tracking failure in visual SLAM[J]. Autonomous robots, 2022, 46(6): 705-724.
[14] DAI X Y, MENG Q H, JIN S, et al. Camera view planning based on generative adversarial imitation learning in indoor active exploration[J]. Applied soft computing, 2022, 129: 109621.
[15] BARTOLOMEI L, TEIXEIRA L, CHLI M. Semantic-aware active perception for UAVs using deep reinforcement learning[C]//2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 28-30, 2021, Prague, Czech Republic. New York: IEEE, 2021: 3101-3108.
[16] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, December 4-9, 2017, Long Beach, USA. New York: Curran Associates Inc., 2017: 6000-6010.
[17] RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[C]//International Conference on Machine Learning, July 18-24, 2021, Virtual. New York: PMLR, 2021: 8748-8763.
[18] PLACED J A, STRADER J, CARRILLO H, et al. A survey on active simultaneous localization and mapping: state of the art and new frontiers[J]. IEEE transactions on robotics, 2023, 39(3): 1686-1705.
[19] BONETTO E, GOLDSCHMID P, PABST M, et al. iRotate: active visual SLAM for omnidirectional robots[J]. Robotics and autonomous systems, 2022, 154: 104102.
[20] ZHANG H, WANG S, LIU Y, et al. EFP: efficient frontier-based autonomous UAV exploration strategy for unknown environments[J]. IEEE robotics and automation letters, 2024, 9(3): 2941-2948.
[21] HUANG J, ZHOU B, FAN Z, et al. FAEL: fast autonomous exploration for large-scale environments with a mobile robot[J]. IEEE robotics and automation letters, 2023, 8(3): 1667-1674.
[22] YUWEN X, ZHANG H, YAN F, et al. Gaze control for active visual SLAM via panoramic cost map[J]. IEEE transactions on intelligent vehicles, 2022, 8(2): 1813-1825.
[23] YANG Y, ZHANG J, QIAN W, et al. Autonomous exploration for mobile robot in three dimensional multi-layer space[C]//International Conference on Intelligent Robotics and Applications, July 5-7, 2023, Hangzhou, China. Singapore: Springer Nature Singapore, 2023: 254-266.
[24] DENG X, ZHANG Z, SINTOV A, et al. Feature-constrained active visual SLAM for mobile robot navigation[C]//2018 IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia. New York: IEEE, 2018: 7233-7238.
[25] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 26-July 1, 2016, Las Vegas, USA. New York: IEEE, 2016: 770-778.
[26] CHO K, VAN MERRIËNBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[EB/OL]. (2014-9-3) [2024-3-11]. https://arxiv.org/pdf/1406.1078.
[27] YE J, BATRA D, DAS A, et al. Auxiliary tasks and exploration enable objectgoal navigation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, October 10-17, 2021, Montreal, Canada. New York: IEEE, 2021: 16117-16126.
[28] SCHULMAN J, WOLSKI F, DHARIWAL P, et al. Proximal policy optimization algorithms[EB/OL]. (2017-8-28) [2024-3-11]. https://arxiv.org/pdf/1707.06347.
[29] SAVVA M, KADIAN A, MAKSYMETS O, et al. Habitat: a platform for embodied AI research[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, October 27-November 2, 2019, Seoul, Korea. New York: IEEE, 2019: 9339-9347.
[30] XIA F, ZAMIR A R, HE Z, et al. Gibson Env: real-world perception for embodied agents[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 19-21, 2018, Salt Lake City, USA. New York: IEEE, 2018: 9068-9079.
ZHU Ziheng, LIU Jialing, CHEN Kaiqi, TONG Qiyi, LIU Ruyu. NeOR: neural exploration with feature-based visual odometry and tracking-failure-reduction policy[J]. Optoelectronics Letters, 2025, 21(5): 290
Received: Jan. 29, 2024
Accepted: Apr. 11, 2025
Published Online: Apr. 11, 2025