Acta Optica Sinica, Volume. 43, Issue 14, 1411001(2023)

Neural Radiance Field-Based Light Field Super-Resolution in Angular Domain

Yuan Miao, Chang Liu, and Jun Qiu*
Author Affiliations
  • Institute of Applied Mathematics, Beijing Information Science and Technology University, Beijing 100101, China

    Objective

    In light field data acquisition, limited data flux forces a trade-off between spatial and angular resolution. To address this, we propose a neural radiance field-based method for high-quality light field super-resolution in the angular domain. Occlusion, depth variation, and background interference make angular-domain super-resolution challenging, and rich textural details are difficult to reproduce. Many novel view synthesis methods based on explicit or implicit scene geometry have been proposed to address these issues. However, both classes of methods generate new viewpoint images from geometric features of the scene and are therefore prone to noise interference and poor reconstruction of textural details. We therefore propose neural radiance field-based light field super-resolution in the angular domain to reconstruct densely sampled light fields from sparse viewpoint sets, which avoids errors and noise that may be introduced during image acquisition and improves the accuracy and quality of subsequent three-dimensional (3D) reconstruction.

    Methods

    By training a neural network on the light field data, the neural radiance field captures complete scene information, even at novel viewpoints, and thus enhances scene representation. To this end, a multilayer perceptron is used to express a five-dimensional vector function describing the geometry and color of the 3D scene, and the image color is then predicted by volume rendering. The light field is represented by this neural radiance field, and dense sampling of the angular dimension is achieved by adjusting the camera pose within the light field to obtain new perspectives between the sub-aperture images. This approach overcomes limitations of prior techniques, including occlusion, depth variation, and background interference in light field scenes. Additionally, positional encoding maps each input variable to its Fourier features, effectively addressing the difficulty of fitting the high-frequency textural information of the scene.
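The two fixed operators described above, positional encoding of the inputs and volume rendering of the predicted densities and colors, can be sketched as follows. This is a minimal illustration of the standard neural radiance field formulation, not the authors' exact network; the array shapes and the frequency count `num_freqs` are illustrative assumptions.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Map each input coordinate p to its Fourier features:
    gamma(p) = (sin(2^0*pi*p), cos(2^0*pi*p), ..., sin(2^(L-1)*pi*p), cos(2^(L-1)*pi*p))."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi            # (L,)
    angles = x[..., None] * freqs                            # (..., D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)                    # (..., 2*L*D)

def volume_render(sigmas, colors, deltas):
    """Numerical quadrature of the volume rendering integral along one ray:
    alpha_i = 1 - exp(-sigma_i * delta_i), T_i = prod_{j<i} (1 - alpha_j),
    C = sum_i T_i * alpha_i * c_i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                  # (N,)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                                 # (N,) compositing weights
    return (weights[:, None] * colors).sum(axis=0), weights
```

In the full pipeline, the multilayer perceptron sits between these two operators: the encoded 5D input (3D position plus viewing direction) is mapped by the network to a density sigma and a color c at each sample along the ray, and the quadrature above composites those samples into a pixel color.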

    Results and Discussions

    We propose neural radiance field-based light field super-resolution in the angular domain by representing the light field with a neural radiance field. The main advantage of the proposed method over the compared methods, such as local light field fusion (LLFF) and light field reconstruction using a convolutional network on EPIs (LFEPICNN), is that it implicitly represents the light field scene with a neural radiance field, which fits an accurate implicit function for the high-resolution four-dimensional light field and accurately represents light field scenes under complex conditions. The experimental results show that the proposed method improves the angular resolution from 5×5 to 9×9, with the peak signal-to-noise ratio (PSNR) improved by 13.8% on average and the structural similarity (SSIM) improved by 9.19% on average (Table 1 and Table 2).
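For reference, PSNR, one of the reported metrics, is computed from the mean squared error between a reconstructed sub-aperture image and its ground truth. A minimal sketch, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(reference, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstruction, dtype=float)
    mse = np.mean((ref - rec) ** 2)  # mean squared error over all pixels
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the ground truth; the percentage improvements above are averages of this quantity over the synthesized views.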

    Conclusions

    We propose a novel method of neural radiance field-based light field super-resolution in the angular domain. By representing the light field with a neural radiance field, new perspective images between sub-aperture images are generated to achieve dense sampling of the angular dimension. In the implicit representation of the scene, positional encoding maps the input variables to their Fourier features to address the difficulty of fitting high-frequency information. Experiments on the HCI simulated light field dataset show that the proposed method achieves the best results on several super-resolution metrics and significantly outperforms the compared methods, and results on the Stanford real light field dataset demonstrate its effectiveness. Overall, the method not only handles occlusion, depth variation, and background interference but also reproduces rich textural details with high quality. In the future, we plan to apply the method to real-time rendering and reconstruction of large scenes. As a new paradigm for scene representation, neural radiance fields provide new ideas and methods for light field computational imaging, and we will further combine the geometric and physical information of scenes to improve computational imaging and scene representation performance.
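Generating new perspectives between sub-aperture images amounts to densifying the grid of camera poses at which the radiance field is rendered. Assuming the sub-aperture cameras lie on a regular planar grid and considering only the camera centers (the real poses also carry orientations), a 5×5 to 9×9 upsampling of the pose grid can be sketched by separable linear interpolation; the function name and grid layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def upsample_camera_grid(positions, out_n=9):
    """positions: (n, n, 3) camera centers of the sparse sub-aperture grid.
    Returns an (out_n, out_n, 3) grid of interpolated camera centers."""
    n = positions.shape[0]
    src = np.linspace(0.0, 1.0, n)      # normalized sparse angular coordinates
    dst = np.linspace(0.0, 1.0, out_n)  # normalized dense angular coordinates
    # First pass: interpolate along the vertical angular axis.
    tmp = np.empty((out_n, n, 3))
    for j in range(n):
        for c in range(3):
            tmp[:, j, c] = np.interp(dst, src, positions[:, j, c])
    # Second pass: interpolate along the horizontal angular axis.
    out = np.empty((out_n, out_n, 3))
    for i in range(out_n):
        for c in range(3):
            out[i, :, c] = np.interp(dst, src, tmp[i, :, c])
    return out
```

Each interpolated center defines a novel viewpoint at which the trained radiance field is queried and volume-rendered, yielding the in-between sub-aperture images of the dense 9×9 light field.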

    Yuan Miao, Chang Liu, Jun Qiu. Neural Radiance Field-Based Light Field Super-Resolution in Angular Domain[J]. Acta Optica Sinica, 2023, 43(14): 1411001

    Paper Information

    Category: Imaging Systems

    Received: Feb. 14, 2023

    Accepted: Mar. 24, 2023

    Published Online: Jul. 13, 2023

    The Author Email: Qiu Jun (qiujun@bistu.edu.cn)

    DOI:10.3788/AOS230549
