Abstract
We propose a robust 3D feature description and registration method for 3D models reconstructed from various sensor devices. Existing 3D feature detectors and descriptors typically show low distinctiveness and repeatability when matching across data modalities, owing to differences in noise and geometric error. The proposed method considers not only local 3D points but also neighbouring 3D keypoints to improve keypoint matching. We evaluate the method against existing approaches on multi-modal datasets including LiDAR scans, multi-view photographs, spherical images, and RGBD videos.