
To realize the one-to-one correspondence between each wavelength and its intensity information based on the original two-dimensional spectrogram measured by an echelle grating spectrometer, it is necessary to obtain a one-dimensional spectral curve from which the optical signal intensity as a function of wavelength can be read directly. The original spectrogram was analyzed, and the accurate correspondence between the position of each wavelength spot on the receiving image plane and the detector pixels was obtained to realize the reduction processing of the spectrogram. A polynomial fitting method was used to fit the position coordinates of the wavelength spots along the dispersion directions of the prism and the grating, thereby establishing the relationship between wavelength and image-plane position. Experimental results show that the polynomial fitting method can be used to establish a spectrogram reduction model rapidly and accurately, achieving a spectrogram reduction accuracy of better than 1 pixel, as indicated by the maximum computational error of the model (0.023 92 mm). The proposed algorithm has strong flexibility and universality, making it suitable for calculating the spectrogram reduction model for a variety of echelle grating spectrometer designs.
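As a rough illustration of the fitting step (the calibration wavelengths, polynomial degree, and per-order handling below are assumptions, not the paper's exact model), the spot coordinates along the two dispersion directions can be fitted as low-order polynomials in wavelength, and the fit residuals indicate the reduction accuracy:

```python
import numpy as np

# For one diffraction order: fit measured spot coordinates along the grating
# and prism dispersion directions as polynomials in wavelength (example values).
wavelengths = np.array([400.0, 450.0, 500.0, 550.0, 600.0])  # nm, calibration lines
x_spots = np.array([112.4, 238.9, 361.7, 480.2, 595.8])      # px, grating-dispersion direction
y_spots = np.array([512.3, 509.8, 507.6, 505.9, 504.4])      # px, prism-dispersion direction

deg = 3                                      # polynomial degree is a design choice
px = np.polyfit(wavelengths, x_spots, deg)   # x(lambda)
py = np.polyfit(wavelengths, y_spots, deg)   # y(lambda)

# Residuals indicate the reduction accuracy of the model (should stay below 1 pixel).
resid_x = x_spots - np.polyval(px, wavelengths)
print("max |x residual| [px]:", np.abs(resid_x).max())
```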
To analyze whether the combination of flexible materials with low Young's moduli and rigid silica with a high Young's modulus produces practical problems, such as creep or strain transfer differences caused by rigid-flexible strain coupling when a fiber Bragg grating (FBG) senses shape deformation, four soft matrices with different Young's moduli were prepared using silica gel (commonly used in soft robots) and polydimethylsiloxane (PDMS). Three FBGs were implanted in each soft matrix to form four flexible sensors with shape measurement capability, which were then subjected to bending tests. The consistency between the experimental results and the theoretical derivation was verified via the strain transfer model. The results show that a creep-slip problem caused by rigid-flexible coupling occurs when the soft matrix and FBG are combined, with the wavelength drift tending to stabilize after approximately 30 minutes. Following creep stabilization, the wavelength drift of the three FBGs in the four flexible sensors shows good linearity and consistency. In addition, the larger the rigid-flexible difference between the fiber and the substrate, the more severe the coupling creep and the smaller the strain transfer rate. The maximum and minimum strain transfer rates are 0.680 and 0.260, respectively, while the maximum and minimum sensitivities are 56.649 and 35.668, respectively. These results provide a scientific reference for research on the shape measurement of soft robots using implanted FBGs.
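As a rough sketch of how a strain transfer rate could be extracted from such bending tests (not the paper's exact procedure), the fiber strain can be recovered from the Bragg wavelength shift and divided by the strain applied to the soft substrate; the photoelastic coefficient below is a typical value assumed for silica fiber:

```python
# Sketch under assumptions: p_e ~ 0.22 is a typical effective photoelastic
# coefficient for silica fiber; lambda_B is the nominal Bragg wavelength.
lambda_B = 1550.0e-9          # m, nominal Bragg wavelength (assumed)
p_e = 0.22                    # effective photoelastic coefficient (typical value)

def fiber_strain(delta_lambda):
    """Strain seen by the FBG, recovered from the measured wavelength shift."""
    return delta_lambda / (lambda_B * (1.0 - p_e))

def strain_transfer_rate(delta_lambda, substrate_strain):
    """Ratio of fiber strain to the strain applied to the soft matrix."""
    return fiber_strain(delta_lambda) / substrate_strain

# Example: a 0.6 nm shift under 1000 microstrain applied to the matrix
print(strain_transfer_rate(0.6e-9, 1.0e-3))   # ~0.50
```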
This study develops a lens-free microscope system and an algorithm to realize microscopic imaging with a relatively large field of view and high resolution. The proposed system consists of an LED light source, a pinhole, a sample holder, and a CMOS imager. The system parameters, such as the pinhole diameter, the size of the imager, the distance between the LED and the sample, and the distance between the sample and the imager, were studied and optimized. Furthermore, an angular spectrum method was developed for recovering sample images from holograms captured by the CMOS imager. Finally, the developed system was employed to demonstrate lens-free microscopic imaging of a standard-resolution micropatterned testing chip as well as of lung cancer cells in suspension. The developed holographic lens-free microscopic system has a resolution of 4.4 μm and an imaging field of view of up to 5.7 mm×4.3 mm; furthermore, it is capable of imaging the micropatterns in the test chip as well as individual lung cancer cells. Thus, the proposed holographic lens-free microscopic system demonstrated microscopic imaging with a relatively large field of view and high resolution; moreover, it exhibits the advantages of a simple structure and the avoidance of aberration interference.
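A generic angular-spectrum back-propagation step might look like the sketch below; the wavelength, pixel pitch, propagation distance, and the use of the square root of the hologram as the input field are illustrative assumptions, and the paper's full reconstruction (e.g., any twin-image handling) is not reproduced here:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z (meters) using the
    angular spectrum method; dx is the pixel pitch in meters."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Propagation kernel; evanescent components are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(kz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: refocus an in-line hologram recorded ~1 mm from the sample (assumed geometry).
# hologram = ...  # intensity image from the CMOS imager, 2D array
# recon = angular_spectrum_propagate(np.sqrt(hologram), 520e-9, 1.12e-6, -1.0e-3)
```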
Based on imaging three-mirror aberration theory, and using a small field-angle offset and a one-dimensionally tilted plane mirror to fold the optical path in front of the primary mirror, a coaxial, large-relative-aperture imaging optical system with a high compression ratio was optimized. The focal length of the camera is 2.5 m, the image-space F-number is 6.3, and the imaging field of view is 0.6°×0.3°. In the visible-near-infrared band of 400-900 nm, the optical transfer function is greater than 0.41 at a spatial frequency of 91 lp/mm and better than 0.6 at 20 lp/mm; the imaging quality is close to the diffraction limit, and the imaging consistency is high over the full field of view. The optical system is highly compressed: its length is less than 1/5.6 of the system focal length and approximately 1.1 times the diameter of the primary mirror. The three mirrors are all conic surfaces without high-order aspheric coefficients, arranged in a non-off-axis spatial layout. The tolerance analysis shows that the optical system is easy to realize in engineering and has broad application prospects in the field of compact commercial imaging and altimetry optical cameras for multi-satellite networking.
It is necessary to improve the anti-electromagnetic-interference capability of unmanned surface vehicle (USV) navigation; one method is to apply polarization sensors in fluctuant water environments and formation coordination. In this study, an integrated navigation system was designed based on a polarization sensor, a micro inertial measurement unit (MIMU), and a global positioning system (GPS). A gimbal was mounted for the polarization sensor, and a USV experimental platform was built for navigation and formation experiments. First, the principles of polarized light navigation and USV formation were introduced. Subsequently, the integrated navigation system with a polarization sensor was designed based on Kalman filtering. Finally, USV tracking and formation experiments were performed based on the integrated navigation system. The results of the tracking experiment show that the heading angle error and position error of the polarization sensor/MIMU/GPS integrated navigation system are 6.055° and 0.209 m, respectively. The polarized light integrated navigation system could work normally even when the magnetic compass was disturbed. The results of the formation experiment show that the leader tracking error is 0.425 m, and the follower formation error is 0.707 m. The polarization sensor can be used for navigation in fluctuant water environments, and the polarized light integrated navigation system can be used for USV navigation and formation.
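As a minimal sketch of the Kalman-filter fusion idea (the actual filter fuses many more states, including position, velocity, and sensor biases with GPS; the noise values and the single heading state below are illustrative assumptions), a gyro-predicted heading can be corrected by polarization-sensor heading observations:

```python
import numpy as np

def heading_kf(gyro_rates, pol_headings, dt, q=1e-4, r=1e-2):
    """1-state Kalman filter: predict heading from the MIMU gyro rate, then
    correct with the polarization-sensor heading measurement."""
    x, p = pol_headings[0], 1.0          # initial heading estimate and variance
    estimates = []
    for w, z in zip(gyro_rates, pol_headings):
        # Predict: integrate the gyro rate over one time step.
        x = x + w * dt
        p = p + q
        # Update with the polarization-sensor heading.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)
```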
We propose a method using azimuth markers to detect the optical-axis parallelism that affects the accurate measurement range of photoelectric theodolites. First, images of the azimuth markers were taken in the direct and reverse positions of the imaging system. Next, the three-dimensional coordinates of the crosshair corresponding to the range of the azimuth markers and the imaging angles of the crosshair were resolved. Finally, the optical-axis parallelism relative to the ideal collimating axis was detected using the derived optical-axis parallelism formulas, enabling the detection accuracy to be analyzed. Errors associated with the coordinates of the projection center and the range of the azimuth markers influence the detection accuracy, which decreases as the distance to the azimuth markers increases. For an azimuth marker distance of 1 km and a coordinate error of 1 cm, the optical-axis parallelism detection accuracy is 0.01 mrad. At sufficiently long distances, the accuracy of the optical-axis parallelism detection is considerably affected by the error in the imaging angle, but it still satisfies the requirements of photoelectric theodolites. Thus, the problem of detecting the optical-axis parallelism that affects the measurement range of photoelectric theodolites is resolved.
Wind and temperature measurements in near space (20-100 km) play a prominent role in the development of atmospheric physics and space science and are of considerable academic and application value. The atmospheric wind and temperature fields in the stratosphere, mesosphere, and lower thermosphere (40-80 km) can be detected simultaneously using a wide-angle Michelson interferometer observing the limb-viewed O2(a1Δg) airglow near 1.27 μm as the radiation source. Hence, a near-space wind and temperature sensing interferometer was designed in this study, and its modeling and forward simulation were conducted. Based on the characteristics of the radiation spectrum and the principle of spectral line selection, two sets of lines with different intensities were employed for wind and temperature detection. The weak lines were used for low-altitude measurement to avoid the influence of self-absorption on the measurement results, whereas the strong lines were used for high-altitude detection to achieve high measurement accuracy. The forward model comprised an atmospheric radiative transfer module, a Michelson interferometer module, a filter module, and the system parameters of the optical system and the infrared focal-plane sensor array. Through forward modeling, the limb-viewing image was obtained, and the uncertainties of the wind velocity and temperature measurements were analyzed. The numerical simulation results show that the wind measurement accuracy is 1-3 m/s and the temperature measurement accuracy is 1-3 K in the height range of 40-80 km, which meets the requirements for wind and temperature detection accuracy in near space.
To meet the requirement of collinearity between laser beams during thickness measurements, a vision-based detection method for the sensor optical-axis position is proposed based on a double beam-splitting prism. According to the light propagation path and spot variation rules of a single laser beam in two beam-splitting prisms, a mathematical model relating the relative positions of the two lasers to the spot centers is constructed in the beam-splitter coordinate system. With the help of the conversion between the prism coordinate system and the image coordinate system, the relative positions of the laser beams can be quickly calculated from the four fitted spot-center coordinates, which converts the collinearity measurement of the laser beams into the fitting and comparison of spot-center coordinates. The experimental results show that, using the proposed method, the maximum angle between the two laser beams is 0.17° and the maximum distance is 0.05 mm within the measuring range. Under these conditions, measurement of a gauge block before and after adjustment shows that the thickness measurement error is reduced from 12 μm to 4 μm, which indirectly verifies the effectiveness of the collinearity detection method. The proposed method realizes digitalization and visualization of the laser beam adjustment process, enables metrological traceability of the measurement results, and helps to quantitatively analyze the thickness measurement errors caused by the non-coincidence of the measuring lines of the two laser sensors.
In this work, a magnetic field sensor based on a no-core-three-core-no-core fiber structure was proposed and fabricated. Two segments of 2-mm no-core fiber were spliced at both ends of a 50-mm three-core fiber, and the structure was inserted into a 70-mm-long capillary tube; magnetic fluid was then injected into the capillary tube using a needle, such that the no-core-three-core-no-core structure was completely immersed in the magnetic fluid. The no-core fiber was used to excite the cladding mode of the three-core fiber and achieve inter-mode interference. The magnetic field intensity can be determined by measuring the wavelength shift of the transmission spectral dip or by detecting the intensity loss of the transmission spectral dip. The experimental results show that the wavelength shift of the interference spectrum near 1 606 nm has a linear relationship with the change in magnetic field intensity, with a corresponding wavelength shift sensitivity of 68.57 pm/mT when the magnetic field intensity is within the range of 8-16 mT. Within the same range of magnetic field intensity, the intensity loss of the interference spectrum near this wavelength also shows good linearity, and the corresponding intensity sensitivity is 0.828 7 dB/mT. The proposed sensor has the advantages of a simple structure, high sensitivity, and low cost, with potential applications in magnetic field detection.
An electromechanical actuator (EMA) is the main component for the attitude control of robotic systems. This study addresses the problems that existing test equipment cannot measure small-size EMAs, cannot reflect their overall performance comprehensively, and covers only a single measurement item. According to the characteristics of small-size EMAs, a testing machine for comprehensive performance testing was developed to measure the transmission accuracy, electrical parameters, and mechanical characteristics of small-size EMAs. The testing machine mainly comprises a precision mechanical system, a control hardware system, and measurement and testing software, and it adopts a horizontal structure. A special fixture was designed to satisfy the measurement requirements of different types of EMAs while ensuring the accuracy of the measurement system. The developed test software not only enables the comprehensive measurement of multiple performance indicators of small-size EMAs but can also analyze the measured results. Test results show that the testing machine operates stably, with a repeatable measurement deviation of less than 3%, and accommodates a wide range of small-size EMAs. The precision conforms to the design requirements, and the testing machine can perform automatic precision measurements. The development of this testing machine facilitates the establishment of an overall quality evaluation system for small-size EMAs.
Silicon-glass bonding technology is critical to the development of durable micro-inertial devices. Because the thermal expansion coefficients of silicon and glass are different, thermal stress is produced on the silicon-glass contact surface when the operating temperature of the inertial device changes. This can have a serious effect on the performance of the device. Therefore, an understanding of the extent of thermal mismatch stress between these heterogeneous materials and of the effect of bonding anchor size on the stress is important for improving the structure and process design of these devices. In this paper, a method of anchor deformation measurement and data processing using a cantilever beam as the test structure is proposed to characterize the process-induced thermal mismatch stress of the device. The simulation results indicate that anchor points designed in block form reduce the maximum stress and structural deformation. For anchor points with side lengths of 600 μm, 400 μm, and 200 μm, the average out-of-plane displacements of the cantilever beam relative to the anchor points are 0.43 nm/℃, 0.30 nm/℃, and 0.20 nm/℃, respectively, and these results show good repeatability. Our results show that the thermal deformation of anchor points is directly related to their size, which has important significance for improving MEMS inertial structures and process design.
The traditional framework structure of airborne photoelectric platforms limits their load ratio. A new serial spherical mechanism could solve this problem, but no kinematic model exists to support its design. Based on the D-H parameter method, the forward and inverse kinematics models of the serial spherical mechanism were established. The kinematic model was solved using MATLAB, and its correctness was verified by comparison with ADAMS, a multi-body dynamics analysis software package. An experimental device was set up to collect data and perform the kinematic model verification experiment. The results show that the error between the values calculated by the kinematics model and the values measured on the experimental equipment is within 5% and mostly results from machining and adjustment errors and sensor measurement errors in the experimental system. The established model provides a solid foundation for research on optoelectronic platform visual-axis pointing control based on a spherical mechanism.
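As a generic sketch of the D-H approach (the paper's actual link parameters and joint layout are not given here; the table below uses placeholder values for a three-revolute spherical-type chain), each joint contributes one homogeneous transform, and the forward kinematics is their product:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform from link i-1 to link i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-joint transforms; dh_table rows are (d, a, alpha)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder parameters: all link offsets zero, joint axes intersecting at a common center.
dh_table = [(0.0, 0.0, np.pi / 2), (0.0, 0.0, np.pi / 2), (0.0, 0.0, 0.0)]
print(forward_kinematics([0.1, 0.2, 0.3], dh_table))
```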
The underwater path planning of amphibious spherical robots is currently a research challenge in the field of amphibious robot motion control. In this study, two types of robot motion control algorithms based on visual servoing, namely the generalized constraint optimization (GCOP) algorithm and the sequential quadratic programming (SQP) algorithm, were compared and analyzed. Optimal path planning of the amphibious spherical robot was realized by combining these algorithms with visual servo sensors. Dynamic target calibration, moving target monitoring, underwater obstacle recognition, and target tracking functions were also developed. Furthermore, this study exploited the symmetrical structure of spherical robots (using Archimedes' buoyancy principle) and combined fuzzy control algorithms to control the water level of the water tank, enabling the spherical amphibious robot to achieve multi-DOF underwater motion. Finally, algorithm simulations and underwater motion experiments were performed to verify the feasibility of the proposed method. The results show that, in terms of the distance kept from the obstacle, the path planned by the SQP algorithm is more reasonable than that planned by the GCOP algorithm. In reaching the target coordinate position, the difference between the two algorithms reaches 167.5 mm, showing that the SQP algorithm is superior to the GCOP algorithm for underwater path planning.
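A toy SQP-style planning problem is sketched below using SciPy's SLSQP solver; the geometry, safety margin, and waypoint parameterization are illustrative assumptions and not the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Choose intermediate waypoints between a start and a goal that minimize path
# length while keeping a safety distance from a circular obstacle.
start, goal = np.array([0.0, 0.0]), np.array([2.0, 0.0])
obstacle, r_safe = np.array([1.0, 0.0]), 0.4
n_wp = 5                                   # number of free waypoints

def path(xflat):
    wps = xflat.reshape(n_wp, 2)
    return np.vstack([start, wps, goal])

def length(xflat):
    p = path(xflat)
    return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

def clearance(xflat):
    # >= 0 when every waypoint is outside the safety radius
    wps = xflat.reshape(n_wp, 2)
    return np.linalg.norm(wps - obstacle, axis=1) - r_safe

x0 = np.linspace(start, goal, n_wp + 2)[1:-1].ravel() + 0.1  # nudged initial guess
res = minimize(length, x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": clearance}])
print(res.x.reshape(n_wp, 2))
```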
Pathogen detection conventionally requires additional driving by a peristaltic pump and a centrifuge. Hence, a self-driven microfluidic chip with micro-V-groove channels was designed, and its topologic structure and precision grinding were studied in relation to flow. Because it is difficult for physical processing methods such as laser processing to ensure the topologic microform accuracy, diamond grinding was employed to machine the micro-V-groove channels precisely on a quartz glass surface. The key was to develop efficient and precise on-machine truing of a wheel V-tip with the same grain-tip angle through multi-axis control and mechanical physical removal, and subsequently to perform ductile-mode microgrinding with mechanical precision copying. Furthermore, the influences of the micro-V-groove angle, roughness, gradient, and other factors on the micro-liquid flow velocity were experimentally investigated. Finally, a microfluidic chip was manufactured for pathogen detection. It was found that a larger gradient, a smaller angle, a finer surface roughness, and nanochannels distributed at the V-tip lead to a much larger flow velocity in the microfluidic chip. Accordingly, the micro-V-grooves can be ground to attain a surface roughness of 30 nm and a tip radius of 15 μm, which induces micro-liquid flow. As a result, the developed self-driven microfluidic chip can detect Brucella pathogen nucleic acids with a detection accuracy of 100 ag/μL or less without a centrifuge.
To solve the problem of stress concentration in the flexible supporting structures of omnidirectional accelerometers, an optimization model was established herein that incorporates a constraint on the stress distribution. By optimizing the curvatures and diameters of different parts of a piezoelectric curved beam using a genetic algorithm, a novel axially polarized omnidirectional accelerometer with a curved beam of variable cross-section was obtained. The numerical results show that the omnidirectional sensitivity of the optimized curved-beam accelerometer is 2.5 times that of the initial straight-beam accelerometer, and its maximum stress is 0.7-0.9 times that of the initial straight-beam accelerometer. Therefore, the linear measurement range of the curved-beam accelerometer is 1.2 times that of the initial straight-beam accelerometer. The omnidirectional sensitivity of the curved-beam accelerometer is 0.7-1.3 V/g under acceleration in the range of 1-81 Hz. Overall, the proposed design method for the curved-beam accelerometer overcomes the difficulty of optimizing the sensitivity and the stress distribution simultaneously.
To reveal the factors responsible for the generation of nano-vibration in aerostatic bearings, computational fluid dynamics and three-dimensional large eddy simulations were employed herein to analyze the air-film flow field from the perspective of the microscopic flow field. First, five simulated experimental groups were designed to investigate the effect of the internal gas volume on the microscopic flow field using different single variables and different relative gas capacities. Subsequently, the simulation results for different structural parameters were analyzed, from which the excitation sources that can generate nano-vibrations were determined. Finally, the influence of the internal pressure on the flow field was illustrated via simulations with different supply pressures. A certain internal gas volume leads to the nano-vibration of aerostatic bearings when the relative gas capacity is approximately 1%. The pressure fluctuation near the equalizing cavity may be the excitation source responsible for inducing nano-vibration, and the internal pressure also influences the amplitude of the vibration to a certain extent. In conclusion, changes in the microscopic flow field directly influence nano-vibration, and the vortex generated upon flow-field transition is the major factor causing it.
To measure the relative pose of moving, non-cooperative, textured objects in real time, a monocular simultaneous modeling and pose calculation method was proposed. A 3D covisibility model was incrementally constructed from the frames with the highest feature covisibility and best feature distribution, effectively turning the non-cooperative object into a cooperative one. Subsequently, the relative pose of the object was calculated via feature tracking driven by a motion prediction model. The mesh of the model was used to restore the 3D information of feature points distributed in unknown areas of the object surface. To reduce model error and improve the accuracy of pose estimation, bundle adjustment optimization was performed using a facet normal field, and scale drift was reduced using closed-loop optimization. Experiments show that the method forms a real-time online system that can recover the 3D information of an object in unstructured environments and accurately estimate relative poses, providing technical support for 3D sensing and measurement modeling based on monocular vision. The mean reprojection error (MRE) of the features using the proposed method is less than 1.5 pixels, the mean absolute errors (MAE) of the pose calculation are 4.29 mm and 1.54°, and the average time consumption is less than 120 ms.
To address the limitations of flotation performance recognition methods driven by visible-image features, a new flotation performance recognition method based on dual-modality multiscale image CNN features and an adaptive deep autoencoder kernel extreme learning machine was proposed. First, the visible and infrared images of the foam were decomposed by the nonsubsampled shearlet multiscale transform, and a two-channel CNN was developed to extract and fuse the features of the dual-modality multiscale images. The CNN features were then abstracted layer by layer in a deep network composed of a series of two-hidden-layer autoencoder extreme learning machines, and the decision was made by mapping to a higher-dimensional space through the kernel extreme learning machine. Finally, an improved quantum bacterial foraging algorithm was applied to optimize the recognition model parameters. The experimental results show that the recognition accuracy using dual-modality multiscale CNN features is clearly better than that using single-modality multiscale or dual-modality single-scale CNN features, with an improvement of 2.65%. Furthermore, the adaptive deep autoencoder kernel extreme learning machine has better classification accuracy and generalization performance. The average recognition accuracy over all working conditions reaches 95.98%, and the accuracy and stability of flotation performance recognition are considerably improved compared with existing methods.
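The final classification stage can be illustrated with a generic kernel extreme learning machine (KELM); the sketch below uses an RBF kernel and a regularization constant C as assumptions, and the paper's autoencoder ELM layers and quantum-bacterial-foraging parameter tuning are not reproduced:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def kelm_train(X, T, C=100.0, sigma=1.0):
    """Closed-form KELM output weights: beta = (I/C + K)^-1 T."""
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_train, beta, X_test, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ beta

# Here T would hold one-hot working-condition labels, and X the fused
# dual-modality CNN/autoencoder features from the preceding pipeline.
```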
In traditional target detection methods, there is a trade-off between detection accuracy and real-time performance, and the recognition accuracy is poor in actual, complex production scenarios. To address this, a deep learning detection method based on an Inception-SSD framework was proposed herein. In this framework, an Inception network structure was introduced into the extra layers of the SSD network, and batch normalization (BN) and residual connections were used to capture target information without increasing network complexity. As a result, the detection accuracy was improved without affecting real-time performance, and the algorithm became more robust. Subsequently, an exclusion loss term was added to the original loss function to improve it. Furthermore, a weighted non-maximum suppression method was used to overcome the insufficient expression ability of the model. Finally, the improved SSD algorithm was trained and tested on a self-made dataset and compared with the original SSD and the latest Inception-SSD algorithms. Experimental results show that the detection accuracy of the proposed method is 97.8% in an actual production process, which is an improvement of 11.7 percentage points over the original SSD algorithm, and the detection speed is 41 fps. Therefore, the proposed method exhibits superior real-time performance, thereby meeting actual production demands.
The KAZE algorithm typically extracts low-accuracy feature points and produces mismatches in remote sensing images. Thus, this paper proposed a preprocessing algorithm, based on an entropy constraint, to accelerate KAZE feature extraction. The method first used a non-overlapping sliding window to traverse the remote sensing image and segment it into window areas, and the entropy of each window area was calculated in turn. According to the histogram formed by the obtained entropies, an appropriate threshold was then selected to retain the high-entropy local areas of the image for KAZE feature extraction. Finally, the RANSAC algorithm was used to remove mismatches and optimize the matching results. Experiments on SPOT and GH-2 satellite data indicate that, compared with the KAZE algorithm alone, the accuracy of the KAZE algorithm coupled with the proposed algorithm is improved by 0.2% and 0.3%, and the running time is reduced by 70% and 53%, respectively.
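A bare-bones version of the entropy-constrained preprocessing might look like the following; the window size and the median-based threshold rule are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def window_entropy(win, bins=256):
    """Shannon entropy of the gray-level histogram of one window (8-bit image)."""
    hist, _ = np.histogram(win, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def high_entropy_windows(img, win=64, thresh=None):
    """Tile the image with non-overlapping windows and flag the high-entropy
    ones; KAZE features would then be extracted only inside flagged regions."""
    h, w = img.shape
    ent = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            ent[i, j] = window_entropy(img[i*win:(i+1)*win, j*win:(j+1)*win])
    if thresh is None:
        thresh = np.median(ent)      # one possible way to pick the threshold
    return ent, ent > thresh         # entropy map and per-window keep mask
```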
To improve the contrast, clarity, and color fidelity of traffic images captured in hazy weather and to reduce the negative impact of image degradation, a haze traffic image enhancement algorithm combining a fast-guided-filtering smoothing constraint for Retinex with an adaptive fractional differential was proposed herein. First, the original image was converted from RGB space to YCbCr space, and the brightness component Y was extracted to construct the initial image. Second, a variational model was constructed, and the fast guided filter was used to build the smoothing constraint of the objective function so as to accurately estimate the initial illumination component. Then, the Retinex model was used to obtain the initial reflection component, and an adaptive fractional differential mask was applied to enhance it, yielding the enhancement result of the Y component; this step performed well in terms of noise suppression and detail enhancement. Finally, the enhanced component was combined with the color difference information of the Cb and Cr channels and converted from YCbCr back to RGB space to obtain the final enhanced image. Contrast experiments on different hazy traffic images were conducted. The experimental results indicate that the standard deviation and average gradient of the new method are at least 1.12 times and 4 times those of the original image, respectively, and the information entropy is at least 4.76% higher. The comprehensive performance of the proposed method is better than that of the other comparison algorithms. Thus, the method performs well in image enhancement and detail retention; it effectively improves the color fidelity, intensity contrast, and texture clarity of road traffic images in hazy weather, making the image more visible and its colors more natural.
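A bare-bones version of the Retinex decomposition step on the luminance channel is sketched below; it uses OpenCV's guided filter (from the opencv-contrib ximgproc module) as the smoothing operator with illustrative radius/eps values, and the variational formulation and adaptive fractional differential mask of the actual method are not reproduced:

```python
import cv2
import numpy as np

def retinex_y_channel(bgr):
    """Rough sketch: estimate the illumination of the luminance channel with a
    guided filter and take the reflectance as the log-domain difference."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0].astype(np.float32) / 255.0
    # Guided filter from opencv-contrib; radius and eps are illustrative choices.
    illum = cv2.ximgproc.guidedFilter(y, y, 30, 1e-3)
    illum = np.clip(illum, 1e-3, 1.0)
    reflect = np.log(y + 1e-3) - np.log(illum)
    # Normalize the reflectance back to [0, 255] and recombine with the chroma channels.
    reflect = cv2.normalize(reflect, None, 0, 255, cv2.NORM_MINMAX)
    ycrcb[:, :, 0] = reflect.astype(np.uint8)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```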
Images taken under low-light conditions suffer from low visibility and noise, which reduce the visual quality and cause the loss of important information. This article proposed a low-light image enhancement method that combines smooth clustering with an improved Retinex algorithm. An image was first separated into a detail layer and a base layer via smooth clustering. Then, max-RGB was used to take the maximum value of the three channels at each pixel to construct the initial illumination map, which was optimized based on local consistency and the alternating direction minimization technique. Adaptive gamma correction then performed a non-linear remapping of the optimized illumination map to yield the final illumination map. The input image was enhanced using the information of the final illumination map, and the enhanced image was fused with the detail layer to obtain a clearer and more detailed image. The proposed model exhibited better performance than the LE, GC, HE, SSR, MSR, MSRCR, and MSRCP algorithms: the edge intensity is 1.00e+02, the average gradient is 10.520 6, and the spatial frequency is 52.050 8. The highest image definition achieved is 14.656 2, which is superior to the other algorithms considered in this study in both subjective and objective evaluations. The experimental results show that the proposed algorithm can generate images with higher definition, clearer edges, and richer textures.
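The illumination-map steps can be sketched as follows; the smooth-clustering layer separation and the structure-aware optimization are omitted, and the particular adaptive-gamma rule below is only an illustrative heuristic, not the paper's formula:

```python
import numpy as np

def max_rgb_illumination(img):
    """Initial per-pixel illumination: channel-wise maximum (img in [0, 1], shape HxWx3)."""
    return img.max(axis=2)

def adaptive_gamma(L, eps=1e-3):
    # Illustrative heuristic: tie gamma to the mean brightness of the map,
    # so darker scenes receive a stronger correction.
    gamma = 1.0 - np.mean(L)
    return np.power(np.clip(L, eps, 1.0), gamma)

def enhance(img):
    """Retinex-style enhancement: divide the input by the refined illumination map."""
    L = adaptive_gamma(max_rgb_illumination(img))
    return np.clip(img / L[..., None], 0.0, 1.0)
```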
To address the limited detection targets, slow processing speed, and low accuracy of existing methods for driving obstacle prediction, this paper proposed an improved convolutional neural network called Coll-Net that incorporates spatial attention, together with a suitable speed control policy and an obstacle direction determination method based on Coll-Net. Coll-Net imitated the visual mechanism of judging obstacles during driving: it preprocessed the input monocular images to obtain the region of interest and extracted spatial features using a deep residual network. After collecting the spatial features, Coll-Net recalibrated the original features on the spatial feature channels using the spatial attention mechanism, which evaluated the features of every channel, strengthened the important ones, rescaled the weights of every channel, and assigned the normalized weights to the corresponding spatial features in order to select the critical features. The output feature map was connected to a fully connected layer, and a normalized obstacle probability between 0 and 1 was generated by a sigmoid function. Moreover, this paper proposed a driving policy that controls the driving speed and predicts the obstacle direction according to the probability generated by Coll-Net. Experimental results indicate that the prediction accuracy of Coll-Net on standard datasets reaches 96.01% and the F1 score reaches 0.915. Coll-Net performs well in detecting diverse obstacles such as cars, pedestrians, guardrails, and walls in real time (24 ms for inference), as well as under low-contrast conditions. Moreover, the driving policy based on Coll-Net was validated using the Udacity Self-Driving datasets.
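One possible shape of the probability-driven driving policy is sketched below; the linear slow-down law, the stop threshold, and the left/right crop comparison are assumptions for illustration, not the thresholds or control law used in the paper:

```python
def speed_command(p, v_max=10.0, p_stop=0.9):
    """Scale the commanded speed down as the collision probability p from
    Coll-Net rises; hard stop above p_stop (all values assumed)."""
    if p >= p_stop:
        return 0.0
    return v_max * (1.0 - p / p_stop)

def obstacle_direction(p_left, p_right):
    """Steer away from the image half whose crop yields the higher probability."""
    return "left" if p_left > p_right else "right"
```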
It is challenging to reconstruct 3D terrain models from sketches drawn from a first-person point of view. Owing to the missing dimension of depth information, most studies focus on terrain-independent design and texture mapping methods to improve realism; however, the spatial location is not considered in such methods. To address this limitation, we propose a method to generate 3D models that conform to the real spatial position. First, based on the established shape rules and visual occlusion relations, the relative depth relationship is determined according to geometric rules derived from photographs; then, a contour map is established by sampling the contour data points of different depth layers for ellipse expansion. Furthermore, fractal interpolation of the circumferential sampling points is performed to calculate the terrain units, and the non-rigid deformation of spatial terrain elements is realized via rotation mapping. Finally, Perlin noise is added to improve the randomness of the 3D terrain display. Experimental results show that, for the same 2D sketch perspective, 3D terrain models corresponding to different real spatial locations can be constructed; in this regard, the proposed method differs from the other three methods and solves the problem of real-space location reconstruction. Our method is more suitable for expressing the intended design purpose of users: it realizes real-space 3D terrain reconstruction from any 2D sketch and obtains a variety of real-space layouts of multi-layer occluded terrains while displaying the terrain features, which bridges the gap between artistic intent and the actual result.
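The fractal interpolation step can be illustrated with a generic 1D midpoint-displacement scheme; the paper's exact interpolation of the circumferential samples and its noise parameters are not specified here, so the subdivision depth and roughness below are assumptions:

```python
import numpy as np

def fractal_interpolate(samples, levels=4, roughness=0.5, rng=None):
    """1D midpoint displacement: repeatedly insert perturbed midpoints between
    neighboring samples; 'roughness' controls how fast the jitter decays."""
    rng = np.random.default_rng() if rng is None else rng
    h = np.asarray(samples, dtype=float)
    amp = (h.max() - h.min()) * 0.5
    for _ in range(levels):
        mids = 0.5 * (h[:-1] + h[1:]) + rng.normal(0.0, amp, size=h.size - 1)
        out = np.empty(h.size + mids.size)
        out[0::2], out[1::2] = h, mids
        h, amp = out, amp * roughness
    return h

# Example: refine 8 coarse height samples taken around one contour ellipse.
print(fractal_interpolate([0, 2, 3, 2, 1, 2, 4, 3], levels=3))
```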