Spectroscopy and Spectral Analysis, Volume 45, Issue 9, 2428 (2025)
Reconstruction of Chinese Paintings Based on Hyperspectral Image Fusion Using Res-CAE Deep Learning
Traditional color reproduction methods often suffer from complex preprocessing steps and reliance on subjective selection of spectral features. Moreover, using spectral reflectance data alone neglects spatial information, limiting reconstruction to isolated color points rather than full scenes. To overcome these limitations, this study proposes a deep learning-based method using a Residual-Convolutional Autoencoder (Res-CAE) to jointly extract and reconstruct spatial and spectral features from hyperspectral data cubes. The Res-CAE model was trained on the CAVE hyperspectral dataset and evaluated across five testing scenarios: a standard X-Rite 24-color chart, a custom Chinese painting color chart, in-training and out-of-training random scenes, and a real Chinese painting scene captured under CIE standard observer conditions. Evaluation metrics included color difference (ΔE00), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Experimental results demonstrate that Res-CAE outperforms traditional methods, such as bilinear interpolation and principal component analysis (PCA), in both color fidelity and image quality. On the 24-color chart, the model achieved an average ΔE00 of 0.6945, RMSE of 0.0092, PSNR of 35.92 dB, and SSIM of 0.9956. These results validate the effectiveness of Res-CAE for high-fidelity color reconstruction from hyperspectral data, offering practical value for the digital preservation of traditional Chinese paintings.
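To illustrate the kind of architecture the abstract describes, the following is a minimal PyTorch sketch of a residual convolutional autoencoder that maps a hyperspectral cube to a color image. The layer widths, kernel sizes, number of residual blocks, and the 31-band CAVE-style input with a three-channel color output are assumptions for illustration; the paper's exact configuration is not given here.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """3x3 conv -> ReLU -> 3x3 conv with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class ResCAE(nn.Module):
    """Residual convolutional autoencoder: hyperspectral cube (B, bands, H, W)
    in, color reconstruction (B, 3, H, W) out. Hypothetical configuration."""
    def __init__(self, in_bands=31, out_channels=3, width=64, n_blocks=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, width, 3, padding=1),
            nn.ReLU(inplace=True),
            *[ResidualBlock(width) for _ in range(n_blocks)],
        )
        self.decoder = nn.Sequential(
            *[ResidualBlock(width) for _ in range(n_blocks)],
            nn.Conv2d(width, out_channels, 3, padding=1),
            nn.Sigmoid(),  # keep outputs in [0, 1], like normalized reflectance/color
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Example usage: CAVE scenes provide 31 spectral bands at 512x512 resolution.
model = ResCAE()
cube = torch.rand(1, 31, 512, 512)
rgb = model(cube)  # shape (1, 3, 512, 512)
```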
ZHU Shi-hao, FENG Jie, LI Xin-ting, SUN Li-cun, LIU Jie, YUAN Ping, YANG Ren-xiang, DENG Hong-yang. Reconstruction of Chinese Paintings Based on Hyperspectral Image Fusion Using Res-CAE Deep Learning[J]. Spectroscopy and Spectral Analysis, 2025, 45(9): 2428
Received: Nov. 25, 2024
Accepted: Sep. 19, 2025
Published Online: Sep. 19, 2025
Author Email: FENG Jie (fengjie_ynnu@126.com)