Charika De Alvis, Mao Shan, Stewart Worrall, and Eduardo Nebot
Combining multiple sensors for advanced perception is a crucial requirement for autonomous vehicle navigation. Heterogeneous sensors are used to obtain rich information about the surrounding environment. The combination of camera and lidar sensors enables precise range information to be projected onto the visual image data. This gives a high-level understanding of the scene, which can be used to enable context-based algorithms such as collision avoidance and navigation. The main challenge when combining these sensors is aligning the data into a common domain. This is difficult due to errors in the intrinsic calibration of the camera, the extrinsic calibration between the camera and the lidar, and errors resulting from the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for scanning lidar sensors. The error resulting from projecting the lidar measurements into a consistent odometry frame cannot be removed entirely; as such, it is essential to incorporate the uncertainty of this projection when combining the two different sensor frames. This work proposes a novel framework for predicting the uncertainty of lidar measurements (in 3D) projected into the image frame (in 2D) for moving platforms. The proposed approach fuses the uncertainty of the motion correction with the uncertainty resulting from errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process is better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real-world data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.
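To make the projection and uncertainty-propagation pipeline concrete, the sketch below shows a minimal version of the core operation the abstract describes: projecting a 3D lidar point into the 2D image frame via the extrinsic transform and a pinhole intrinsic model, then propagating the point's 3D covariance into a 2D pixel covariance by first-order (Jacobian-based) propagation. This is an illustrative simplification, not the paper's actual model: the function names, the numerical Jacobian, and the assumption of an undistorted pinhole camera are all choices made for this example, and the paper's framework additionally fuses motion-correction and calibration uncertainty, which is omitted here.

```python
import numpy as np

def project_point(X, K, R, t):
    """Project a 3D point X (lidar frame) to pixel coordinates.

    R, t: extrinsic rotation/translation from lidar to camera frame.
    K: 3x3 pinhole intrinsic matrix (distortion ignored in this sketch).
    """
    Xc = R @ X + t              # lidar frame -> camera frame
    u = K @ (Xc / Xc[2])        # pinhole projection to homogeneous pixels
    return u[:2]

def projection_jacobian(X, K, R, t, eps=1e-5):
    """2x3 Jacobian of the projection w.r.t. the 3D point, by
    central finite differences (an analytic Jacobian would also work)."""
    J = np.zeros((2, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        J[:, i] = (project_point(X + d, K, R, t)
                   - project_point(X - d, K, R, t)) / (2 * eps)
    return J

def propagate_uncertainty(X, Sigma_X, K, R, t):
    """First-order propagation of a 3D covariance to the image plane:
    Sigma_uv = J Sigma_X J^T."""
    J = projection_jacobian(X, K, R, t)
    return J @ Sigma_X @ J.T
```

For example, with identity extrinsics, focal length 500 px, and a point at 5 m depth with isotropic 3D covariance 0.01 * I, the propagated 2D covariance is on the order of 100 px^2 per axis, reflecting how range uncertainty maps into pixel uncertainty at that depth.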