Optics and Precision Engineering, Volume 31, Issue 14, 2135 (2023)
Image reconstruction based on deep compressive sensing combined with global and local features
Effectively recovering the original signal, with high probability and high quality, from a very small number of measurements is the core problem of compressive sensing image reconstruction. Researchers have proposed both traditional and deep learning-based compressive sensing image reconstruction algorithms. Traditional algorithms are based on mathematical derivation; although they are interpretable, their reconstruction quality is relatively poor. In contrast, deep learning-based algorithms achieve relatively high reconstruction quality but cannot guarantee interpretability. Inspired by filter flow, this study proposes G2LNet, a global-to-local compressive sensing image reconstruction model that performs compressed sampling and initial reconstruction with convolutional layers and then applies fast Fourier convolution and convolutional filter flow, thereby accounting for the global contextual information of the image and the local neighborhood information of each pixel simultaneously. G2LNet jointly optimizes the measurement matrix and the convolutional filter flow, forming a complete end-to-end trainable deep image reconstruction network. Verification experiments were performed at a 20% sampling rate on the Set5, Set11, and BSD68 test datasets commonly used in compressive sensing image reconstruction. Compared with the traditional MH algorithm and the deep learning-based CSNet, G2LNet improved the average peak signal-to-noise ratio by 2.29 dB and 0.51 dB, respectively, effectively raising the quality of the reconstructed images.
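To make the described architecture concrete, the following PyTorch code is a minimal illustrative sketch, not the authors' implementation: the class names (FFCBlock, G2LSketch), the block size, and all hyperparameters are assumptions. It shows the two ideas the abstract names: block-based compressed sampling and initial reconstruction expressed as convolutions (so the measurement matrix is learned end-to-end), and a simplified fast Fourier convolution that combines a local spatial branch with a global spectral branch. The filter-flow refinement stage of the real G2LNet is not reproduced.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FFCBlock(nn.Module):
    """Simplified fast Fourier convolution: a 1x1 conv applied to the
    real FFT of the feature map gives a global receptive field in one
    layer, added to an ordinary 3x3 local convolution."""
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)   # local branch
        self.spectral = nn.Conv2d(2 * channels, 2 * channels, 1)   # global branch

    def forward(self, x):
        # Global branch: convolve in the frequency domain.
        ffted = torch.fft.rfft2(x, norm="ortho")
        f = torch.cat([ffted.real, ffted.imag], dim=1)
        f = F.relu(self.spectral(f))
        real, imag = f.chunk(2, dim=1)
        glob = torch.fft.irfft2(torch.complex(real, imag),
                                s=x.shape[-2:], norm="ortho")
        return F.relu(self.local(x) + glob)


class G2LSketch(nn.Module):
    """Block-based compressive sampling and initial reconstruction,
    both expressed as convolutions so the measurement matrix is
    optimized jointly with the rest of the network."""
    def __init__(self, block=32, rate=0.2):
        super().__init__()
        m = max(1, int(rate * block * block))  # measurements per block
        # Sampling: one stride-`block` conv applies the (learned)
        # measurement matrix to each non-overlapping image block.
        self.sample = nn.Conv2d(1, m, block, stride=block, bias=False)
        # Initial reconstruction: 1x1 conv back to block*block values,
        # then PixelShuffle reassembles the image.
        self.init_rec = nn.Conv2d(m, block * block, 1, bias=False)
        self.shuffle = nn.PixelShuffle(block)
        self.refine = FFCBlock(1)  # stand-in for global/local refinement

    def forward(self, x):
        y = self.sample(x)                    # compressed measurements
        x0 = self.shuffle(self.init_rec(y))   # initial reconstruction
        return self.refine(x0)


if __name__ == "__main__":
    net = G2LSketch()
    img = torch.randn(1, 1, 96, 96)           # grayscale test image
    print(net(img).shape)                     # torch.Size([1, 1, 96, 96])

Running the script prints torch.Size([1, 1, 96, 96]), confirming that the sampling and initial-reconstruction stages form a round trip that preserves the image dimensions while compressing each 32x32 block to roughly 20% of its values.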
Yuanhong ZHONG, Qianfeng XU, Yujie ZHOU, Shanshan WANG. Image reconstruction based on deep compressive sensing combined with global and local features[J]. Optics and Precision Engineering, 2023, 31(14): 2135
Category: Information Sciences
Received: Dec. 6, 2022
Accepted: --
Published Online: Aug. 2, 2023
Author Email: Yuanhong ZHONG (zhongyh@cqu.edu.cn)