Optics and Precision Engineering, Vol. 23, Issue 5, 1474 (2015)
Multimodality robust local feature descriptors
Intensity-based local feature matching methods are sensitive to image contrast variations, so their performance declines significantly when they are applied to multimodal image registration. To solve this problem, a multimodality robust local feature descriptor was proposed and a corresponding feature matching method was developed. Firstly, an extraction method for multimodality robust corners and line segments was proposed based on phase congruency and local direction information, both of which are insensitive to contrast variations. Compared with intensity-based methods, more corresponding corners and line segments were extracted between multimodal images with large contrast differences. Then, a feature region consisting of 48 circular sub-regions was selected with each corner as its center, and 96-dimensional feature vectors were generated from the distance values of corners and the length values of line segments located in the feature sub-regions. Finally, a feature matching method based on a normalized correlation function was proposed, and a location-constrained RANdom SAmple Consensus (RANSAC) algorithm was used to remove false matching point pairs. The experimental results indicate that the precision and repeatability of the proposed method on multimodal image matching reach 80% and 13%, respectively. Compared with other intensity-based image matching methods, the precision and repeatability of the proposed method are 2-4 times and 4-7 times, respectively, those of Symmetric Scale Invariant Feature Transform (S-SIFT) and Multimodal Speeded-Up Robust Features (MM-SURF). These results show that the proposed method significantly outperforms these state-of-the-art intensity-based methods.
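The matching stage described in the abstract scores descriptor pairs with a normalized correlation function and then filters outliers with a location-constrained RANSAC. The sketch below illustrates that pipeline under simplifying assumptions: the function names are hypothetical, and the location constraint is reduced to a pure-translation model, which is a simplification of (not a reproduction of) the paper's method.

```python
# Illustrative sketch: NCC-based descriptor matching plus a
# translation-only RANSAC filter. Names and the translation model
# are assumptions for illustration, not the paper's exact algorithm.
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two descriptor vectors."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_descriptors(desc1, desc2, threshold=0.8):
    """Greedy one-way matching: for each descriptor in desc1, keep the
    best-correlated descriptor in desc2 if its NCC exceeds the threshold."""
    matches = []
    for i, d1 in enumerate(desc1):
        scores = [ncc(d1, d2) for d2 in desc2]
        j = int(np.argmax(scores))
        if scores[j] >= threshold:
            matches.append((i, j))
    return matches

def ransac_translation(pts1, pts2, matches, tol=3.0, iters=100, seed=0):
    """Location-constrained outlier removal, here simplified to a pure
    translation hypothesis sampled from one match per iteration."""
    rng = np.random.default_rng(seed)
    best_inliers = []
    for _ in range(iters):
        i, j = matches[rng.integers(len(matches))]
        t = pts2[j] - pts1[i]  # candidate translation from the sampled pair
        inliers = [(a, b) for a, b in matches
                   if np.linalg.norm(pts2[b] - (pts1[a] + t)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

With 96-dimensional vectors (as in the paper's descriptor) the NCC of unrelated random descriptors concentrates near zero, so a threshold around 0.8 separates true correspondences from chance correlations in this toy setting.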
ZHAO Chun-Yang, ZHAO Huai-Ci. Multimodality robust local feature descriptors[J]. Optics and Precision Engineering, 2015, 23(5): 1474
Received: Dec. 9, 2014
Accepted: --
Published Online: Jun. 11, 2015
The Author Email: Chun-Yang ZHAO (zcyneu@sina.com)