Optical Instruments, Volume 42, Issue 4, 33 (2020)

Research on visual odometry using deep convolution neural network

Jianpeng SU, Yingping HUANG*, Bogan ZHAO, and Xing HU
Author Affiliations
  • School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China

    Visual odometry uses visual cues to estimate the pose parameters of camera motion and localize an agent. Existing visual odometry relies on a complex pipeline of feature extraction, feature matching/tracking, and motion estimation. This paper presents an end-to-end monocular visual odometry based on a convolutional neural network (CNN). The method modifies a classification CNN into a sequential inter-frame variation CNN. By virtue of deep learning, it extracts global inter-frame variation features from video images and outputs pose parameters through three fully connected layers. The method has been tested on the public KITTI dataset. The experimental results show that the proposed Deep-CNN-VO model can estimate the motion trajectory of the camera, demonstrating the feasibility of the method. While simplifying the complex pipeline, the method also improves accuracy compared with traditional visual odometry systems.
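    As the abstract notes, the network outputs pose parameters for each pair of consecutive frames; recovering the full camera trajectory then requires chaining these relative poses together. A minimal NumPy sketch of that composition step, assuming (hypothetically, since the paper does not specify the parameterization) that each output is a 6-vector of Euler angles and a translation expressed in the current camera frame:

```python
import numpy as np

def euler_to_rot(rx, ry, rz):
    # Rotation matrix from Euler angles (ZYX composition order);
    # the convention here is illustrative, not taken from the paper.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def accumulate_trajectory(rel_poses):
    """Chain per-frame-pair outputs (rx, ry, rz, tx, ty, tz)
    into global camera positions, starting from the origin."""
    R = np.eye(3)          # accumulated orientation
    t = np.zeros(3)        # accumulated position
    positions = [t.copy()]
    for rx, ry, rz, tx, ty, tz in rel_poses:
        # Translate in the current camera frame, then update orientation.
        t = t + R @ np.array([tx, ty, tz])
        R = R @ euler_to_rot(rx, ry, rz)
        positions.append(t.copy())
    return np.array(positions)

# Example: three identical forward steps of 1 m along z, no rotation.
traj = accumulate_trajectory([(0, 0, 0, 0, 0, 1.0)] * 3)
print(traj[-1])  # final position: [0. 0. 3.]
```

    This composition step is what turns the per-pair CNN predictions into the estimated trajectory that is compared against KITTI ground truth.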

    Jianpeng SU, Yingping HUANG, Bogan ZHAO, Xing HU. Research on visual odometry using deep convolution neural network[J]. Optical Instruments, 2020, 42(4): 33

    Paper Information

    Category: DESIGN AND RESEARCH

    Received: Oct. 24, 2019

    Accepted: --

    Published Online: Jan. 6, 2021

    The Author Email: HUANG Yingping (huangyingping@usst.edu.cn)

    DOI: 10.3969/j.issn.1005-5630.2020.04.006
