Laser & Optoelectronics Progress, Vol. 61, Issue 10, 1011007 (2024)
Quantitative Model for Dynamic Spatial Distortion in Virtual Environments
Fig. 1. 3D content acquisition, display, and perception. (a) Image acquisition by a parallel camera system; (b) the observer perceiving the virtual scene
Fig. 2. Geometric distortions in different virtual spaces. (a) Distortion-free point cloud accurately perceived by the viewer; (b) mismatch between the eye pupil distance and the baseline of the parallel cameras; (c) mismatch between the viewer's viewing distance and the convergence distance of the cameras; (d) pincushion distortion
Fig. 4. Dynamic distortion values in the designed scene. (a) Similarity curve; (b) distortion velocity curve; (c) distortion acceleration curve
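The Fig. 4 caption lists a similarity curve together with distortion velocity and acceleration curves, which suggests the latter two are time derivatives of the first. The sketch below is an assumption, not the paper's published code: it takes a hypothetical per-frame similarity signal and approximates velocity and acceleration by finite differences.

```python
import numpy as np

def distortion_dynamics(similarity: np.ndarray, dt: float):
    """Return (velocity, acceleration) of the distortion signal over time.

    similarity : sampled similarity values between the perceived and the
                 reference scene (hypothetical metric, one value per frame)
    dt         : sampling interval in seconds
    """
    # Rate of change of the similarity curve, i.e. a distortion velocity curve.
    velocity = np.gradient(similarity, dt)
    # Second time derivative, i.e. a distortion acceleration curve.
    acceleration = np.gradient(velocity, dt)
    return velocity, acceleration

# Usage with synthetic data: a similarity curve drifting during a 2 s motion.
t = np.arange(0.0, 2.0, 0.01)
similarity = 1.0 - 0.1 * np.sin(2 * np.pi * t)
vel, acc = distortion_dynamics(similarity, dt=0.01)
```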
Fig. 5. Six reference point clouds selected for the experiment. (a) LongDress; (b) Hhi; (c) Shark; (d) Horse; (e) Dolphin; (f) Cat
Zixiong Peng, Zhenping Xia, Yueyuan Zhang, Chaochao Li, Yuanshen Zhang. Quantitative Model for Dynamic Spatial Distortion in Virtual Environments[J]. Laser & Optoelectronics Progress, 2024, 61(10): 1011007
Category: Imaging Systems
Received: Oct. 23, 2023
Accepted: Nov. 27, 2023
Published Online: Apr. 29, 2024
Author Email: Zhenping Xia (xzp@usts.edu.cn)
CSTR:32186.14.LOP232351