Opto-Electronic Engineering, Volume 50, Issue 2, 220185 (2023)
Light field image super-resolution network based on angular difference enhancement
[2] Tao M W, Hadap S, Malik J, et al. Depth from combining defocus and correspondence using light-field cameras[C]//Proceedings of 2013 IEEE International Conference on Computer Vision, Sydney, 2013: 673–680. https://doi.org/10.1109/ICCV.2013.89.
[3] Levoy M, Hanrahan P. Light field rendering[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, 1996: 31–42. https://doi.org/10.1145/237170.237199.
[4] Wang Y Q, Wang L G, Yang J G, et al. Spatial-angular interaction for light field image super-resolution[C]//Proceedings of the 16th European Conference on Computer Vision, Glasgow, 2020: 290–308. https://doi.org/10.1007/978-3-030-58592-1_18.
[8] Zhang S, Lin Y F, Sheng H. Residual networks for light field image super-resolution[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, 2019: 11046–11055. https://doi.org/10.1109/CVPR.2019.01130.
[9] Jin J, Hou J H, Chen J, et al. Light field spatial super-resolution via deep combinatorial geometry embedding and structural consistency regularization[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, 2020: 2260–2269. https://doi.org/10.1109/CVPR42600.2020.00233.
[12] Liu J, Tang J, Wu G S. Residual feature distillation network for lightweight image super-resolution[C]//Proceedings of the European Conference on Computer Vision, Glasgow, 2020: 41–55. https://doi.org/10.1007/978-3-030-67070-2_2.
[13] Rerabek M, Ebrahimi T. New light field image dataset[C]//Proceedings of the 8th International Conference on Quality of Multimedia Experience, Lisbon, 2016.
[14] Honauer K, Johannsen O, Kondermann D, et al. A dataset and evaluation methodology for depth estimation on 4D light fields[C]//Proceedings of the 13th Asian Conference on Computer Vision, Cham, 2016: 19–34. https://doi.org/10.1007/978-3-319-54187-7_2.
[15] Wanner S, Meister S, Goldluecke B. Datasets and benchmarks for densely sampled 4D light fields[M]//Bronstein M, Favre J, Hormann K. Vision, Modeling and Visualization. Eurographics Association, 2013: 225–226. https://doi.org/10.2312/PE.VMV.VMV13.225-226.
[17] Vaish V, Adams A. The (new) Stanford light field archive, Computer Graphics Laboratory, Stanford University[EB/OL]. 2008. http://lightfield.stanford.edu.
[18] Kingma D P, Ba J. Adam: a method for stochastic optimization[C]//Proceedings of the 3rd International Conference on Learning Representations, San Diego, 2015.
[19] He K M, Zhang X Y, Ren S Q, et al. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification[C]//Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, 2015: 1026–1034. https://doi.org/10.1109/ICCV.2015.123.
[20] Lim B, Son S, Kim H, et al. Enhanced deep residual networks for single image super-resolution[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, 2017: 136–144. https://doi.org/10.1109/CVPRW.2017.151.
[21] Zhang Y L, Li K P, Li K, et al. Image super-resolution using very deep residual channel attention networks[C]//Proceedings of the 15th European Conference on Computer Vision, Munich, 2018: 294–310. https://doi.org/10.1007/978-3-030-01234-2_18.
Citation: Tianqi Lv, Yingchun Wu, Xianling Zhao. Light field image super-resolution network based on angular difference enhancement[J]. Opto-Electronic Engineering, 2023, 50(2): 220185.
Category: Article
Received: Jul. 28, 2022
Accepted: Oct. 21, 2022
Published Online: Apr. 13, 2023
Corresponding author email: Yingchun Wu (yingchunwu3030@foxmail.com)