Laser & Optoelectronics Progress, Vol. 61, Issue 10, 1011007 (2024)

Quantitative Model for Dynamic Spatial Distortion in Virtual Environments

Zixiong Peng1, Zhenping Xia1,2,*, Yueyuan Zhang1, Chaochao Li2, and Yuanshen Zhang1
Author Affiliations
  • 1College of Electronics and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, Jiangsu, China
  • 2College of Physical Science and Technology, Suzhou University of Science and Technology, Suzhou 215009, Jiangsu, China
    Figures & Tables (8)
    Fig. 1. 3D content acquisition, display, and perception. (a) Image acquisition by a parallel camera system; (b) the observer perceives the virtual scene
    Fig. 2. Geometric distortions in different virtual spaces. (a) Distortion-free point cloud, accurately perceived by viewers; (b) the inter-pupillary distance does not match the baseline length of the parallel cameras; (c) the viewer's viewing distance does not match the convergence distance of the camera; (d) pincushion distortion
    Fig. 3. Flowchart of the distortion quantification algorithm
    Fig. 4. Dynamic distortion values in the set scene. (a) Similarity curve; (b) distortion velocity curve; (c) distortion acceleration curve (a finite-difference reading of these curves is sketched after these captions)
    Fig. 5. Six reference point clouds selected for the experiment. (a) LongDress; (b) Hhi; (c) Shark; (d) Horse; (e) Dolphin; (f) Cat
    Fig. 6. Visual perception experiment settings
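
    A plausible reading of the Fig. 4 curves is that distortion velocity and acceleration are the first and second time derivatives of the frame-wise similarity score. The sketch below makes that reading concrete with central finite differences; the derivative interpretation, the function name, and the 60 Hz sampling rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def distortion_dynamics(similarity, dt):
    """Estimate distortion velocity and acceleration from a similarity
    time series sampled every dt seconds (central finite differences).
    Hypothetical helper; the paper's exact definition is not given here."""
    velocity = np.gradient(similarity, dt)      # first time derivative
    acceleration = np.gradient(velocity, dt)    # second time derivative
    return velocity, acceleration

# Synthetic example: 100 frames at 60 Hz with a slowly drifting similarity
t = np.arange(100) / 60.0
similarity = 1.0 - 0.05 * np.sin(2 * np.pi * 0.5 * t)
v, a = distortion_dynamics(similarity, dt=1 / 60.0)
```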
    • Table 1. Symbols used in the space coordinate transformations

      Variable      Geometric meaning
      P             Points in the object space
      Xo, Yo, Zo    Object space coordinates
      Xi, Yi, Zi    Image space coordinates
      Xc, Yc        Camera sensor plane coordinates; Xcl for the left camera, Xcr for the right camera
      Xs, Ys        Display screen plane coordinates; Xsl for the left view, Xsr for the right view
      α             Field of view of a single camera
      β             Field of view of a single eye
      t             Baseline length between the cameras
      e             Inter-pupillary distance (IPD)
      Wc            Width of the camera sensor
      Ws            Width of the display screen
      C             Convergence distance of the 3D camera system
      V             Viewing distance of the 3D display system
      f             Focal length of the camera
      h             Camera sensor offset for convergence
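
    To show how the Table 1 symbols fit together, the sketch below chains the textbook parallel-camera relations: a sensor offset h = ft/(2C) gives zero disparity at the convergence distance C, the sensor-to-screen magnification is Ws/Wc, and the perceived depth follows Zi = eV/(e - d) for screen disparity d = Xsr - Xsl. These are standard stereoscopic-geometry formulas assumed for illustration; the paper's exact transformation and sign conventions may differ.

```python
def perceived_point(Xo, Zo, f, t, C, Wc, Ws, e, V):
    """Map an object-space point (Xo, Zo) to the perceived image-space
    point (Xi, Zi) for a parallel stereo camera pair with sensor offset.
    Symbols follow Table 1; the equations are textbook stereoscopy,
    assumed here rather than taken from the paper."""
    h = f * t / (2 * C)               # sensor offset for convergence at C
    Xcl = f * (Xo + t / 2) / Zo - h   # left-camera sensor coordinate
    Xcr = f * (Xo - t / 2) / Zo + h   # right-camera sensor coordinate
    m = Ws / Wc                       # sensor-to-screen magnification
    Xsl, Xsr = m * Xcl, m * Xcr       # screen coordinates of the two views
    d = Xsr - Xsl                     # screen disparity
    Zi = e * V / (e - d)              # perceived depth (eyes at z = 0)
    Xi = -e / 2 + (Zi / V) * (Xsl + e / 2)
    return Xi, Zi

# A point at the convergence distance yields zero disparity and is
# perceived on the screen plane, i.e., Zi == V:
print(perceived_point(Xo=0.0, Zo=2.0, f=0.035, t=0.065, C=2.0,
                      Wc=0.036, Ws=1.0, e=0.065, V=2.0))  # (0.0, 2.0)
```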
    • Table 2. Performance indicators of different objective methods

      Method             PLCC     SROCC    RMSE
      p2point [11]       0.83     0.75     0.34
      p2plane [12]       0.70     0.54     0.45
      3D-2D [16]         0.75     0.49     0.41
      PSNR-geom [14]     0.86     0.71     0.34
      Proposed method    0.93     0.86     0.24
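
    For reference, the Table 2 indicators are conventionally computed between the objective scores and the subjective ratings (e.g., mean opinion scores from the visual perception experiment). A minimal sketch, assuming raw predicted scores and omitting the nonlinear regression that quality-assessment studies often apply before computing PLCC and RMSE:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def performance_indicators(predicted, subjective):
    """PLCC, SROCC, and RMSE between objective and subjective scores."""
    predicted = np.asarray(predicted, dtype=float)
    subjective = np.asarray(subjective, dtype=float)
    plcc, _ = pearsonr(predicted, subjective)    # linear correlation
    srocc, _ = spearmanr(predicted, subjective)  # rank-order correlation
    rmse = float(np.sqrt(np.mean((predicted - subjective) ** 2)))
    return plcc, srocc, rmse
```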
    Citation: Zixiong Peng, Zhenping Xia, Yueyuan Zhang, Chaochao Li, Yuanshen Zhang. Quantitative Model for Dynamic Spatial Distortion in Virtual Environments[J]. Laser & Optoelectronics Progress, 2024, 61(10): 1011007

    Paper Information

    Category: Imaging Systems

    Received: Oct. 23, 2023

    Accepted: Nov. 27, 2023

    Published Online: Apr. 29, 2024

    The Author Email: Zhenping Xia (xzp@usts.edu.cn)

    DOI: 10.3788/LOP232351

    CSTR: 32186.14.LOP232351
