Optics and Precision Engineering, Vol. 31, Issue 18, 2752 (2023)
Spatial-spectral Transformer for classification of medical hyperspectral images
The development of hyperspectral imaging (HSI) technology offers new avenues for non-invasive medical imaging. However, medical hyperspectral images are characterized by high dimensionality, high redundancy, and the unity of image and spectrum (each pixel carries a full spectral signature), necessitating high-precision diagnostic algorithms. In recent years, transformer models have been widely applied to medical hyperspectral image processing. However, medical hyperspectral images acquired with different instruments and acquisition methods differ significantly, which considerably hinders the practical application of existing transformer-based diagnostic models. To address these issues, a spatial–spectral self-attention transformer (S3AT) algorithm is proposed to adaptively mine the intrinsic relations between pixels and bands. First, in the transformer encoder, a spatial–spectral self-attention mechanism is designed to capture key spatial information and important bands in hyperspectral images from different viewpoints, and the spatial–spectral self-attention obtained from the different views is then fused. Second, in the classification stage, the predictions from the different views are fused according to learned weights. Experimental results on in-vivo human brain and blood cell HSI datasets indicate that the overall classification accuracies reach 82.25% and 91.74%, respectively, demonstrating that the proposed S3AT algorithm yields enhanced classification performance on medical hyperspectral images.
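The abstract does not include implementation details, but the two-view idea it describes can be illustrated with a minimal NumPy sketch: single-head self-attention applied once across pixels (spatial view) and once across bands (spectral view) of a hyperspectral patch, with the two outputs fused by view weights. The identity Q/K/V projections, the fixed fusion weights, and the toy patch size are simplifying assumptions for illustration, not the actual S3AT architecture, which learns these quantities.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # scaled dot-product self-attention; identity Q/K/V projections for brevity
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ tokens

# toy patch: 9 pixels (a 3x3 spatial window) x 8 spectral bands
rng = np.random.default_rng(0)
patch = rng.standard_normal((9, 8))

# spatial view: pixels are tokens, attention relates pixel to pixel
spatial_out = self_attention(patch)
# spectral view: bands are tokens, attention relates band to band
spectral_out = self_attention(patch.T).T

# fuse the two views with weights (fixed here; learned in S3AT)
w = softmax(np.array([0.6, 0.4]))
fused = w[0] * spatial_out + w[1] * spectral_out
```

The same weighted-fusion pattern applies at the classification stage in the paper, where per-view class predictions (rather than features) are combined according to learned weights.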
Yuan LI, Xu SHI, Zhengchun YANG, Qijuan TAN, Hong HUANG. Spatial-spectral Transformer for classification of medical hyperspectral images[J]. Optics and Precision Engineering, 2023, 31(18): 2752
Category: Information Sciences
Received: Jan. 23, 2023
Accepted: --
Published Online: Oct. 12, 2023
Author emails: TAN Qijuan (hhuang@cqu.edu.cn), HUANG Hong (jiangliao2000@163.com)