Laser & Optoelectronics Progress, Vol. 56, Issue 4, 040002 (2019)

3D Point Cloud Scene Data Acquisition and Its Key Technologies for Scene Understanding

Yong Li1, Guofeng Tong1,*, Jingchao Yang2, Liqiang Zhang3,**, Hao Peng1, and Huashuai Gao1
Author Affiliations
  • 1 College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning 110819, China
  • 2 Department of Electrical and Information Engineering, Hebei Jiaotong Vocational and Technical College, Shijiazhuang, Hebei 050091, China
  • 3 The State Key Laboratory of Remote Sensing Science, Beijing Normal University, Beijing 100875, China
    Figures & Tables (12)
    Three-dimensional data acquisition systems. (a) Vehicle-mounted system; (b) fixed scanner; (c) trolley-type acquisition system; (d) backpack-type acquisition system
    Examples of scene understanding. (a) Semantic segmentation of an image; (b) target detection in a point cloud; (c) semantic segmentation of a point cloud
    Examples of 2D images affected by environmental factors. (a) Severe occlusion of the target; (b) semantic segmentation of (a); (c) target affected by illumination; (d) semantic segmentation of (c)
    Semantic segmentation of a 3D color point cloud of an outdoor scene
    Color point cloud image of a community
    Trajectory (yellow lines) of the collision-based panorama image
    Complete colored point cloud of an outdoor scene
    Research status of point cloud feature extraction and point cloud segmentation
    • Table 1. Comparison of advantages and disadvantages of different scene data types

      | Name | Instance and advantage characteristics | Disadvantages |
      |------|----------------------------------------|---------------|
      | 2D RGB image | 2D information, color textures, etc. | Image is susceptible to illumination, lacks depth information, and cannot directly provide geometric information. |
      | Point cloud data based on image 3D reconstruction | 3D scenes; color textures and other sparse point clouds, with depth and distance information. | Point cloud is susceptible to light and environment, and reconstruction information is lost. |
      | Kinect-based RGB-D point cloud data | 3D indoor small scenes; dense point cloud, close range only, with depth. | Point cloud is susceptible to light, only suitable for indoor scenes, small field of view. |
      | Point cloud data based on vehicle-mounted lidar | High-precision, dense point clouds with 3D spatial information and intensity information; less affected by environmental factors. | Limited by the platform: only point cloud scenes along linear road trajectories can be generated, and scene color information cannot be extracted. |
      | Point cloud data based on static lidar | 3D outdoor scenes; high-precision, dense point cloud with 3D spatial information and intensity. | Collection device cannot be moved, so the point cloud scene is incomplete. |
      | Point cloud data based on aerial lidar | 3D outdoor scenes; high-precision information, sparse point cloud, suitable for large-scale rough modeling. | Unable to reproduce ground details; lidar cannot extract scene color information. |
      | Collision-based panoramic image and laser point cloud fusion data | 3D outdoor scenes; high-precision information, dense point clouds, mobile modeling, full-frame 3D space, color and intensity. | Scene color information cannot be extracted from the laser point cloud alone. |
    • Table 2. Comparison of noise filtering methods

      | Name | Principle | Characteristic |
      |------|-----------|----------------|
      | Pass-through filter | Set thresholds on the X, Y, and Z axes of the point cloud, then remove points that fall outside the specified range. | Fast, but not accurate enough; mostly used for rough processing in a first step. |
      | Statistical filtering | Noise points are removed according to point density: dense regions carry more information, while noise points are sparse and carry little. The average distance from each point to its k nearest neighbors is computed, and a Gaussian distribution over these distances for the whole cloud is used to eliminate outliers. | Better than the pass-through filter; can accurately filter out noise points inside the bounding box. |
      | Radius filtering | Given a radius threshold, count the number of points within that radius for each point in the cloud. If the count exceeds a given threshold, the point is retained; otherwise it is treated as noise and rejected. | Filters out noise points inside the bounding box at a faster rate. |
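The three noise filters compared in Table 2 can be prototyped directly from their principles. Below is a minimal sketch in Python using NumPy and SciPy (an implementation choice assumed here, not specified by the paper); all thresholds, neighborhood sizes, and the toy data are illustrative.

```python
"""Minimal sketches of the noise filters in Table 2 (pass-through, statistical,
radius); NumPy/SciPy are assumed, and all parameter values are illustrative."""
import numpy as np
from scipy.spatial import cKDTree


def pass_through_filter(points, axis=2, lo=-1.0, hi=3.0):
    """Keep points whose coordinate on `axis` (0=X, 1=Y, 2=Z) lies in [lo, hi]."""
    mask = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]


def statistical_filter(points, k=16, std_ratio=1.0):
    """Drop points whose mean k-nearest-neighbor distance exceeds the global
    mean by more than std_ratio standard deviations (Gaussian assumption)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # k+1: nearest neighbor is the point itself
    mean_dist = dists[:, 1:].mean(axis=1)
    threshold = mean_dist.mean() + std_ratio * mean_dist.std()
    return points[mean_dist <= threshold]


def radius_filter(points, radius=0.5, min_neighbors=5):
    """Keep points that have at least min_neighbors other points within radius."""
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, radius)
    counts = np.array([len(n) - 1 for n in neighbors])   # exclude the point itself
    return points[counts >= min_neighbors]


if __name__ == "__main__":
    cloud = np.random.rand(1000, 3)           # dense toy cluster
    outliers = np.random.rand(20, 3) * 10.0   # sparse far-away noise
    pts = np.vstack([cloud, outliers])
    print(statistical_filter(pts).shape, radius_filter(pts).shape)
```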
    • Table 3. Comparison of ground filtering methods

      | Name | Principle | Characteristic |
      |------|-----------|----------------|
      | Elevation-based filtering method | According to the point distribution in the cloud, manually set or adaptively find a z-direction threshold and label points whose z-value is below the threshold as ground. | Fast, but low robustness. |
      | Model-based filtering method | Select a model to fit the ground (e.g., a RANSAC-based planar model or CSF cloth simulation) and take the fitted inlier points as ground points. | Suited to specific environments; robustness is poor, but the filtering effect is relatively good. |
      | Region-growing-based filtering method | Using the normal vector direction as the growth criterion, first adaptively find the point most likely to be ground; then decide whether to grow based on the angular difference between the normals of neighboring points and the current point, iterating until all ground points are found. | Separates the ground well when the terrain is not undulating, but the time and space costs are relatively large. |
      | Moving-window-based method | Points on the ground should be continuous in nature. Set a suitable window size, find the lowest point in the current window, derive a threshold from that lowest point, and filter out all points whose elevation difference exceeds the threshold. | Fast, but the window size depends heavily on manual settings and only local features are considered. |
      | Triangulation-based filtering method | The discrete points are connected, according to a certain rule, into non-overlapping triangles covering the entire area to form a triangulated irregular network (TIN). A sparse TIN is generated from seed points, its slope is analyzed for initial segmentation, large-slope triangular regions are eliminated, and connectivity analysis then yields features such as per-segment elevation differences. | Avoids data redundancy when the terrain is flat, but the data structure is complex and space complexity is high. |
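Two of the ground filters in Table 3, elevation thresholding and the model-based RANSAC plane fit, are simple enough to sketch in plain NumPy. The parameters below (z threshold, iteration count, inlier distance) are illustrative assumptions, not values from the paper.

```python
"""Minimal sketches of two ground filters from Table 3: elevation thresholding
and a RANSAC plane model. Parameter values are illustrative assumptions."""
import numpy as np


def elevation_ground_filter(points, z_threshold=0.2):
    """Label points whose z-value is below a (manually chosen) threshold as ground."""
    ground = points[:, 2] < z_threshold
    return points[ground], points[~ground]


def ransac_plane_ground_filter(points, n_iter=200, dist_thresh=0.1, seed=0):
    """Fit a plane with RANSAC and treat the inlier points as ground."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # Sample three points and build a candidate plane from them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (nearly collinear) sample
            continue
        normal /= norm
        # Point-to-plane distances; inliers lie within dist_thresh of the plane.
        dist = np.abs((points - p0) @ normal)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return points[best_mask], points[~best_mask]
```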
    • Table 4. Common point cloud feature description methods and their comparison

      | Name | Type | Principle | Characteristic |
      |------|------|-----------|----------------|
      | Spin image | Local feature | Accumulate the projection coordinates of all vertices onto a base plane to obtain the descriptor. | Robust to rigid transformation and background interference; sensitive to density changes. |
      | 3DSC | Local feature | Count the number of points in the different grid cells of a spherical neighborhood to obtain the descriptor. | Strong discrimination, robust to noise. |
      | PFH | Local feature | Parameterize the spatial differences between a point and its neighborhood and form a multi-dimensional histogram. | Highly robust to point cloud density changes; high computational complexity. |
      | FPFH | Local feature | Compared with PFH, recomputes the k neighborhood using only tuples between the query point and its neighbors. | Retains most of the recognition capability of PFH while reducing computational complexity. |
      | SHOT | Local feature | Rasterize the spherical neighborhood, build a histogram from the normal-vector angles in each cell, and concatenate the histograms. | Descriptive and robust to noise; large amount of computation, sensitive to density changes. |
      | ESF | Global feature | Describe the angles, distances, and triangle areas of random point triples in the cloud. | No pre-processing needed, strong feature description. |
      | VFH | Global feature | Extends FPFH by adding the viewpoint direction to the relative normal calculation. | Strongly discriminative. |
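Among the descriptors in Table 4, the ESF idea lends itself to a compact illustration: histogram the distances, angles, and triangle areas of randomly sampled point triples and concatenate the histograms into one global feature. The sketch below is a simplified ESF-style descriptor; the sample size and bin counts are assumptions and do not reproduce the original ESF configuration.

```python
"""Simplified ESF-style global descriptor (Table 4): histograms of distances,
angles, and triangle areas over random point triples. Sample size and bin
counts are illustrative assumptions."""
import numpy as np


def esf_style_descriptor(points, n_samples=2000, bins=32, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(points), size=(n_samples, 3))
    a, b, c = points[idx[:, 0]], points[idx[:, 1]], points[idx[:, 2]]

    # Edge length between the first two sampled points.
    d = np.linalg.norm(a - b, axis=1)

    # Angle at vertex a of each sampled triangle.
    u, v = b - a, c - a
    cos_ang = np.einsum("ij,ij->i", u, v) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1) + 1e-12
    )
    ang = np.arccos(np.clip(cos_ang, -1.0, 1.0))

    # Triangle area from the cross product of two edges.
    area = 0.5 * np.linalg.norm(np.cross(u, v), axis=1)

    # Concatenate the three normalized histograms into one global feature vector.
    feats = []
    for values in (d, ang, area):
        hist, _ = np.histogram(values, bins=bins)
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```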
    Citation: Yong Li, Guofeng Tong, Jingchao Yang, Liqiang Zhang, Hao Peng, Huashuai Gao. 3D Point Cloud Scene Data Acquisition and Its Key Technologies for Scene Understanding[J]. Laser & Optoelectronics Progress, 2019, 56(4): 040002

    Paper Information

    Category: Reviews

    Received: Aug. 6, 2018

    Accepted: Sep. 6, 2018

    Published Online: Jul. 31, 2019

    Author Emails: Guofeng Tong (tongguofeng@ise.neu.edu.cn), Liqiang Zhang (zhanglq@bnu.edu.cn)

    DOI: 10.3788/LOP56.040002
