Laser & Optoelectronics Progress, Vol. 62, Issue 12, 1211001 (2025)
Unsupervised Large Field of View Light Field Content Generation Based on Depth Perception
Owing to limitations of their optical structure, images captured by light field cameras suffer from a narrow field of view (FOV). This study proposes an unsupervised large-FOV light field content generation method based on depth perception to address this problem. First, the method adopts a light field sub-aperture array center-warping approach, combined with a convolutional neural network, to estimate the depth information of the light field image. The sub-aperture images are warped toward the center view using this depth information, which enhances the correlation among sub-aperture images and effectively reduces angular redundancy. Second, a homography estimation network is developed to extract a multi-scale feature pyramid: deep features predict a global homography, while shallow features perform grid-level adjustments that refine the image registration. A depth-aware loss function is also designed to assist the homography estimation. Finally, a synthetic mask guides the registered views toward a natural fusion, and the previously estimated depth information is used for angular restoration to generate the final large-FOV light field image. Experimental results show that the proposed method achieves high-precision registration while preserving the angular consistency of the stitched light field image, and it performs well on both subjective and objective metrics on a self-built dataset.
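The paper itself does not publish code, but the center-warping step described in the abstract follows the conventional two-plane light-field reprojection model, in which a pixel in view (u, v) maps to the center view (u_c, v_c) as I_c(x) ≈ I_{u,v}(x + d(x)·(u − u_c, v − v_c)), where d is a per-view disparity map. The following is a minimal PyTorch sketch of that step under this assumption; the function name and arguments are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def warp_to_center(sub_aperture, disparity, u, v, u_c, v_c):
    """Warp one sub-aperture view (u, v) toward the center view (u_c, v_c).

    Implements the standard light-field reprojection
    I_c(x) ~= I_{u,v}(x + d(x) * (u - u_c, v - v_c)).

    sub_aperture: (B, C, H, W) image tensor
    disparity:    (B, 1, H, W) disparity between adjacent views, in pixels
    """
    B, _, H, W = sub_aperture.shape
    device = sub_aperture.device
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32, device=device),
        torch.arange(W, dtype=torch.float32, device=device),
        indexing="ij",
    )
    xs = xs.expand(B, H, W)
    ys = ys.expand(B, H, W)
    # Shift each pixel by the disparity scaled by the angular offset.
    x_src = xs + disparity[:, 0] * (u - u_c)
    y_src = ys + disparity[:, 0] * (v - v_c)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * x_src / (W - 1) - 1.0, 2.0 * y_src / (H - 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(sub_aperture, grid, align_corners=True)
```

Applying this warp to every view of the sub-aperture array aligns them with the center view, which is what makes the subsequent views more correlated and reduces the angular redundancy the abstract refers to.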
Changxiang Zhong, Yeyao Chen, Zhongjie Zhu, Mei Yu, Gangyi Jiang. Unsupervised Large Field of View Light Field Content Generation Based on Depth Perception[J]. Laser & Optoelectronics Progress, 2025, 62(12): 1211001
Category: Imaging Systems
Received: Jun. 24, 2024
Accepted: Dec. 12, 2024
Published Online: Jun. 13, 2025
The Author Email: Gangyi Jiang (jianggangyi@126.com)
CSTR:32186.14.LOP241534