Optics and Precision Engineering, Volume 27, Issue 3, 680 (2019)
Overview of hyperspectral image classification
Hyperspectral image classification assigns a class label to every pixel in an image by exploiting the unified image-spectrum character of hyperspectral data together with its rich spectral information, enabling high-precision classification and automatic recognition of ground objects. It therefore plays an important role in Earth observation. Based on an analysis of the characteristics of hyperspectral images, this study summarizes and discusses research progress in pixel-level hyperspectral image classification from two perspectives: general machine learning and deep learning. The advantages and disadvantages of the various algorithms are illustrated by comparing their classification results. Research objectives and development prospects for hyperspectral image classification are then analyzed in two respects. First, the algorithms themselves require further study: a hyperspectral classification algorithm should guarantee the required classification accuracy while reducing algorithmic complexity, for example by fusing multi-source remote sensing data with multi-feature, multi-scale composites. Such an algorithm could improve the classification accuracy of small-sample models with few parameters and meet the demand for intelligent, rapid Earth observation. Second, market applications need to be closely integrated: practical applications of hyperspectral imagery should be considered, and efficient, commercially competitive classification algorithms should be investigated to enhance the applicability of hyperspectral image classification in remote sensing.
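The pixel-level classification described above can be sketched in a few lines: each pixel of a hyperspectral cube is treated as a spectral vector and labeled independently by a classifier. The sketch below uses synthetic data and a nearest-centroid classifier purely for illustration; the two-class spectra, noise level, and classifier choice are assumptions, not methods from the surveyed literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hyperspectral cube: H x W pixels, B spectral bands.
# Two hypothetical ground-object classes with different mean spectra.
H, W, B = 8, 8, 30
class_means = np.stack([np.linspace(0.2, 0.8, B), np.linspace(0.8, 0.2, B)])
labels = rng.integers(0, 2, size=(H, W))                 # ground-truth map
cube = class_means[labels] + 0.05 * rng.standard_normal((H, W, B))

# Flatten the cube: every pixel becomes one sample with B spectral features.
X = cube.reshape(-1, B)
y = labels.reshape(-1)

# Fit a nearest-centroid classifier on a random half of the pixels.
train = rng.random(X.shape[0]) < 0.5
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

# Assign each pixel the class of its nearest centroid, rebuild the class map.
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred_map = dists.argmin(axis=1).reshape(H, W)

accuracy = (pred_map == labels).mean()
```

The surveyed methods replace the toy classifier with SVMs, random forests, or deep networks, and the synthetic spectra with real sensor bands, but the pixel-as-sample framing is the same.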
YAN Jing-wen, CHEN Hong-da, LIU Lei. Overview of hyperspectral image classification[J]. Optics and Precision Engineering, 2019, 27(3): 680
Received: Oct. 30, 2018
Accepted: --
Published Online: May 30, 2019
The Author Email: Jing-wen YAN (jwyan@stu.edu.cn)