
The loss measurement errors caused by the difference between the test-cavity and initial-cavity lengths and by the detection coupling scheme are analyzed. Measurement results show that even in a class 1 000 clean-room laboratory, the air loss coefficient at 635 nm is approximately 10 ppm/m. When the test cavity is approximately 20 cm longer than the initial cavity, the air loss introduces an error of approximately 2 ppm into the measured optical loss of the high-reflectivity optics. For cavity ring-down (CRD) signal detection, an alternating-current (AC) coupling scheme distorts the measured ring-down waveform and produces a loss measurement error of approximately 2 ppm compared with a direct-current (DC) coupling scheme. The analysis and measurement results show that, when measuring the optical loss of ultra-high-reflectivity optics with the CRD technique, the initial cavity and the test cavity should have the same length and a DC-coupled detection scheme with sufficient bandwidth should be employed.
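As a quick consistency check on the figures quoted above (a back-of-the-envelope estimate, not an additional result), the excess loss scales linearly with the excess air path:

```latex
\Delta\delta_{\mathrm{air}} \;\approx\; \alpha_{\mathrm{air}}\,\Delta L
\;\approx\; 10~\mathrm{ppm/m} \times 0.2~\mathrm{m} \;=\; 2~\mathrm{ppm}.
```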
In order to meet the high-precision requirement when grinding a large annular thrust plane, surface sampling is required for flatness evaluation. A joint measurement model based on structured light was proposed. A projector casts parallel fringes onto the measured plane, and a CCD camera captures images containing the fringes. If the measured plane is perfect, the camera captures straight fringes; if the plane contains convex or concave regions, the captured fringes are distorted. The distortion is proportional to the relative height of the measured point. This paper derives the combined structured-light model and analyzes the effect of the incident angle on the measurement precision. Experimental results show that the larger the incident angle, the higher the precision, and that as the incident angle increases the required precision can be reached. However, at large incident angles the image becomes severely distorted and difficult to process. An improved model was therefore proposed: the CCD camera is placed above the measured plane, and an array laser scans the plane at a fixed incident angle. With sub-pixel interpolation, a measured plane of 400 mm×400 mm, a 4 096 pixel×4 096 pixel CCD, and an incident angle of 85°, sub-micron precision can be achieved.
Key words: large annular plane; flatness; surface sampling; line structured light
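The quoted sub-micron figure can be reproduced with a simple triangulation estimate. The sketch below assumes (the abstract does not spell out this geometry) that a surface-height change h shifts the imaged fringe laterally by roughly h·tan(θ), with θ the incident angle measured from the surface normal, and that sub-pixel detection resolves about one tenth of a pixel; all parameter names are illustrative.

```python
import numpy as np

# Rough sensitivity estimate for the triangulation geometry described above.
plane_size_mm = 400.0          # measured plane: 400 mm x 400 mm
sensor_pixels = 4096           # CCD: 4096 x 4096
incident_angle_deg = 85.0      # incident angle of the array laser
subpixel_factor = 10.0         # assumed sub-pixel interpolation (~1/10 pixel)

pixel_size_mm = plane_size_mm / sensor_pixels            # ~0.098 mm footprint on the plane
effective_px_mm = pixel_size_mm / subpixel_factor        # with sub-pixel detection
height_res_um = 1e3 * effective_px_mm / np.tan(np.radians(incident_angle_deg))

print(f"pixel footprint:   {pixel_size_mm * 1e3:.1f} um")
print(f"height resolution: {height_res_um:.2f} um")      # ~0.85 um, i.e. sub-micron
```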
An improved illumination is developed to reduce the false acceptance rate of an existing steel-ball surface inspection system. A mathematical model was established to identify the factors that influence illumination uniformity, and LightTools simulation was applied to evaluate the influence of the light-source shielding-screen shape and the guide-rail transmittance on illumination uniformity. In our experiment, a new illumination device designed on the basis of the simulation results was implemented in the inspection system. The results show that the improved illumination reduces the false acceptance rate to 0.02% and increases the inspection accuracy. The improved illumination agrees with the theoretical derivation and the expectations from the computer simulation.
To reduce the effect of main-lens installation error and diffuse consistency on the accuracy of micro-lens array center calibration, we present a coarse-to-fine micro-lens array center calibration method based on local searching. It precisely determines which pixels belong to which micro-lens. On the basis of the coarse center, alternative centers are acquired by locally searching its neighboring pixels. To locate the fine-tuned center accurately, the Euclidean distance between the coarse center and the alternative centers is calculated. The calibration results demonstrate that, compared with other advanced calibration algorithms, the distance error of our approach is smaller and the accuracy is improved by 3.88%. Further, we conduct color-correction and refocusing experiments. The experimental results demonstrate that the color image produced by our method is more realistic and natural and has higher information entropy, and that the refocused image has higher clarity for both indoor and outdoor complex natural scenes.
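A minimal sketch of the local-search refinement, under the assumption (not stated in the abstract) that the fine-tuned center is the intensity-weighted centroid of a small window around the coarse center, accepted only when its Euclidean distance to the coarse center is small; refine_center and its parameters are illustrative names.

```python
import numpy as np

def refine_center(img, coarse_rc, win=2, max_shift=1.5):
    """Refine one micro-lens center: intensity-weighted centroid of a
    (2*win+1)^2 window around the coarse center, kept only if its Euclidean
    distance to the coarse center is at most max_shift pixels."""
    r0, c0 = coarse_rc
    rows = np.arange(r0 - win, r0 + win + 1)
    cols = np.arange(c0 - win, c0 + win + 1)
    patch = img[r0 - win:r0 + win + 1, c0 - win:c0 + win + 1].astype(np.float64)
    w = patch.sum()
    if w == 0:
        return float(r0), float(c0)
    rc = (patch.sum(axis=1) @ rows) / w, (patch.sum(axis=0) @ cols) / w
    dist = np.hypot(rc[0] - r0, rc[1] - c0)   # Euclidean distance to coarse center
    return rc if dist <= max_shift else (float(r0), float(c0))

# Example on a synthetic image of one micro-lens spot whose true center is (5, 6):
img = np.zeros((11, 11))
img[4:7, 5:8] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
print(refine_center(img, (5, 5)))             # -> approximately (5.0, 6.0)
```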
In order to solve the problem of edge diffusion in binary patterns, an improved 3D measurement method based on structured light is proposed. Edges with pixel accuracy are obtained by an optimized preprocessing approach that exploits the pixel neighborhood relations and the logical relations of the images. Considering the influence of edge diffusion on edge detection, a line-shift strategy is applied in which the line-shift patterns are translated symmetrically about the centerlines of the stripes of the last Gray code pattern. At the same time, the width of the line-shift stripes is set equal to the minimum Gray code stripe width, and their direction is the same as that of the Gray code patterns. Finally, the whole-field codes are calculated by projecting vertical and horizontal patterns respectively. Experiments show that the relative error of reconstructing a plane is 0.07% and the reconstruction time is 5.41 s, which meets the accuracy and real-time demands. When the method is applied to measure the surfaces of mixed targets, the results prove that the proposed strategy adapts well to different parts.
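For the whole-field coding step, a generic Gray-code decoder such as the sketch below turns the binarized patterns into per-pixel stripe indices; the symmetric line-shift refinement described above is not reproduced here, and gray_to_binary is an illustrative name.

```python
import numpy as np

def gray_to_binary(gray_bits):
    """Decode a stack of binarized Gray-code patterns (n_bits, H, W),
    most significant bit first, into the per-pixel stripe index."""
    bits = np.asarray(gray_bits, dtype=np.uint8)
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for k in range(1, bits.shape[0]):          # b_k = b_{k-1} XOR g_k
        binary[k] = binary[k - 1] ^ bits[k]
    weights = 2 ** np.arange(bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary, axes=1)

# 3-bit example for a single row of pixels:
gray = np.array([[[0, 0, 0, 0, 1, 1, 1, 1]],
                 [[0, 0, 1, 1, 1, 1, 0, 0]],
                 [[0, 1, 1, 0, 0, 1, 1, 0]]], dtype=np.uint8)
print(gray_to_binary(gray))    # [[0 1 2 3 4 5 6 7]]
```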
We propose a new 3D laser Doppler motion measurement technology. Based on the Doppler effect, a He-Ne laser provides polarized light, and the vibration information of the object is obtained through carrier modulation and demodulation of the signal. Through the vibration unit, the auto-focusing system, a five-beam 3D information algorithm, and the synchronous data acquisition subsystem, non-contact, cross-scale, high-precision 3D measurement is finally realized. The technology overcomes the weaknesses of traditional non-contact sensors, namely the narrow frequency band, low vibration measurement precision, and unsuitability for testing lightweight or soft objects, and it can realize 3D vibration measurement in narrow spaces and under conditions of electromagnetic interference, micro-vibration, and impact.
To incorporate the motion information in a video into the shot boundary detection process, and thereby prevent target motion and camera motion from degrading the detection, a shot boundary detection method based on optical flow is proposed. Firstly, the motion information in the video is detected with an optical flow method and used to compute a motion-corrected entropy of adjacent images, and an optical flow quantization method is proposed. The quantization value and the corrected entropy are used to determine the continuity of the images, and candidate boundaries where a shot switch may exist are retained. Secondly, the photometric information of the images and the mutual information between images are extracted to exploit the structural difference between shots, and a model matching method is proposed to identify the shot-transition type. Salient regions of the images are extracted for computing the mutual information, which increases the difference between images and ensures the accuracy of the model matching. The results show that the algorithm can accurately detect shot boundaries and effectively eliminate the interference of motion.
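A sketch of the two measurements the method relies on, using standard stand-ins (Farneback dense flow for the optical-flow step and a joint-histogram estimate of mutual information, via OpenCV and NumPy); the quantization rule, corrected entropy, and model matching of the paper are not reproduced.

```python
import cv2
import numpy as np

def flow_magnitude(prev_gray, curr_gray):
    """Dense optical flow between two gray frames (Farneback),
    returned as the per-pixel motion magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.hypot(flow[..., 0], flow[..., 1])

def mutual_information(a, b, bins=32):
    """Mutual information between two gray images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```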
In the cavity ring-down technique, a signal smoothing method based on spatial filters, which have long been used in digital image processing, is presented for accurate extraction of the cavity decay time from low signal-to-noise ratio decay signals. Its smoothing procedure was derived and its smoothing efficiency was compared with that of other smoothing methods. The derivation showed that the mean-value spatial filter has the highest smoothing efficiency. Its application to decay-signal smoothing was also analyzed. The combination of the mean-value spatial filter with the weighted least-squares method is recommended when processing low signal-to-noise ratio decay signals. The method was tested experimentally and achieved almost the same results as the Levenberg-Marquardt algorithm.
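A minimal sketch of the recommended combination, assuming the mean-value spatial filter acts on the 1-D ring-down trace as a simple moving average and that the weighted least-squares fit is performed on the logarithm of the smoothed trace with amplitude-proportional weights; both are assumptions about implementation details not given in the abstract.

```python
import numpy as np

def mean_filter(y, width=5):
    """Moving-average smoothing of the decay trace."""
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")

def fit_decay_time(t, y):
    """Weighted least-squares fit of y = A*exp(-t/tau) via log-linearization."""
    w = np.clip(y, 1e-12, None)                 # weights ~ signal amplitude
    slope, _ = np.polyfit(t, np.log(w), 1, w=w)
    return -1.0 / slope

# Synthetic low-SNR ring-down signal with tau = 10 us:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50e-6, 2000)
y = np.exp(-t / 10e-6) + 0.05 * rng.standard_normal(t.size)
tau = fit_decay_time(t[5:-5], mean_filter(y)[5:-5])   # drop filter edge samples
print(f"tau = {tau * 1e6:.2f} us")                    # roughly 10 us
```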
In order to improve the robustness of digital image watermarking, a digital image watermarking algorithm based on the discrete cosine transform (DCT) and Hamming coding is proposed. Firstly, the Arnold transform and Hamming coding are used to scramble and encode the binary watermark image. Then, the carrier image is divided into 8×8 blocks, and a two-dimensional DCT is applied to each sub-block. Finally, the encrypted and encoded watermark is embedded into the middle-frequency coefficients of the two-dimensional DCT, with the embedding strength determined adaptively according to the characteristics of the carrier image. The experimental results show that the watermark has good security and imperceptibility: it not only effectively resists attacks such as noise, resampling, filtering, compression, rotation, and shearing, but is also robust against many kinds of combined attacks, and it implements adaptive embedding and blind extraction of the watermark.
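A minimal sketch of the block-DCT embedding step; the Arnold scrambling, Hamming coding, and image-adaptive embedding strength are omitted, and the quantization-index-modulation rule used here is a generic stand-in for the paper's embedding rule.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(3, 4), step=24.0):
    """Embed one watermark bit into an 8x8 block by quantization-index
    modulation of a single mid-frequency DCT coefficient."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    q = np.floor(coeffs[pos] / step)
    coeffs[pos] = (q + (0.25 if bit == 0 else 0.75)) * step
    return idctn(coeffs, norm="ortho")

def extract_bit(block, pos=(3, 4), step=24.0):
    """Blind extraction: read the bit back from the same coefficient."""
    c = dctn(block.astype(np.float64), norm="ortho")[pos]
    return int((c % step) > step / 2)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(np.float64)
print(extract_bit(embed_bit(block, 1)))   # -> 1
print(extract_bit(embed_bit(block, 0)))   # -> 0
```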
For hyperspectral remote-sensing image classification, this paper introduces extreme learning theory and proposes a novel classification approach for hyperspectral images (HSI) using a hierarchical local receptive field (LRF) based extreme learning machine (ELM). Considering the local correlations of spectral features, the two-layer hierarchical architecture can extract abstract representations and invariant features for better classification performance. The influence of the different parameters of the algorithm on classification performance is also analyzed. Experimental results on two widely used real hyperspectral data sets confirm that, compared with several current advanced methods, the proposed HSI classification approach has a faster training speed and better classification performance.
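For reference, the basic (non-hierarchical) ELM building block can be written in a few lines; the local-receptive-field convolution and pooling layers of the proposed architecture are not reproduced in this sketch, and the function names, regularization, and activation choice are assumptions.

```python
import numpy as np

def elm_train(X, Y, n_hidden=500, seed=0):
    """Basic ELM: random input weights, sigmoid hidden layer, output weights
    solved by regularized least squares.
    X: (n_samples, n_bands) spectra; Y: (n_samples, n_classes) one-hot labels."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                     # hidden activations
    beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)                         # predicted class index
```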
In order to overcome the problems that the dictionary training process is time-consuming and that the reconstruction quality cannot meet application requirements, we propose a super-resolution reconstruction algorithm based on supervised K-SVD multi-dictionary learning and class-anchored neighborhood regression. Firstly, a Gaussian mixture model clustering algorithm is employed to cluster the low-resolution training features. Then the supervised K-SVD algorithm is used to generate each subclass dictionary and a discriminative linear classifier simultaneously. Finally, each input feature block is categorized by the classifier and reconstructed with the corresponding subclass dictionary and class-anchored neighborhood regression. Experimental results show that our method obtains better results both subjectively and objectively compared with other methods, and has better adaptability to face images.
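A minimal sketch of the cluster-then-regress pipeline, with a GMM for clustering and one ridge regressor per cluster standing in for the supervised K-SVD dictionaries and the learned classifier (function names and parameters are illustrative).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train(lr_feats, hr_patches, n_classes=8, lam=0.1):
    """Cluster LR features with a GMM, then learn one ridge projection per
    cluster that maps LR features to HR patches."""
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(lr_feats)
    labels = gmm.predict(lr_feats)
    projections = {}
    for k in range(n_classes):
        L, H = lr_feats[labels == k], hr_patches[labels == k]
        projections[k] = np.linalg.solve(L.T @ L + lam * np.eye(L.shape[1]), L.T @ H)
    return gmm, projections

def reconstruct(lr_feats, gmm, projections):
    """Classify each input feature, then apply its class projection."""
    labels = gmm.predict(lr_feats)
    return np.stack([lr_feats[i] @ projections[k] for i, k in enumerate(labels)])
```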
How to evaluate stereoscopic video quality accurately and effectively plays an important role in the development of video communication systems. We propose a stereoscopic video quality assessment method that focuses on the structural information extracted from adjacent frames of a single view. Spatio-temporal structural information is sensitive to both spatial and temporal distortions. For the two views of a stereoscopic video, we first calculate a spatio-temporal-structure-based local quality from spatio-temporal gradient characteristics and chrominance information, and these local quality scores are integrated to obtain frame-level scores for each view. Then the energy-ratio map of the two views is used as a weight to fuse the two view scores into a single frame score. Finally, all frame-level scores are combined via the asymmetric tracking effect. Experiments on the NAMA3DS1-COSPAD1 database demonstrate that the proposed method achieves highly competitive prediction accuracy with very low computational complexity.
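A sketch of a spatio-temporal-gradient local quality term in the spirit described above; the exact gradient operator, the chrominance weighting, the energy-ratio view fusion, and the asymmetric-tracking pooling of the paper are not reproduced, so treat this only as an illustration of the adjacent-frame structure comparison.

```python
import cv2
import numpy as np

def st_gradient(prev, curr):
    """Spatio-temporal gradient magnitude of a frame pair: spatial Sobel
    gradients of the current frame plus the temporal difference to the
    previous frame."""
    gx = cv2.Sobel(curr, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(curr, cv2.CV_64F, 0, 1, ksize=3)
    gt = curr.astype(np.float64) - prev.astype(np.float64)
    return np.sqrt(gx ** 2 + gy ** 2 + gt ** 2)

def local_quality(ref_prev, ref_curr, dis_prev, dis_curr, c=160.0):
    """Per-pixel similarity of the spatio-temporal gradients of reference and
    distorted frame pairs (an SSIM-style ratio), averaged into a frame score."""
    gr = st_gradient(ref_prev, ref_curr)
    gd = st_gradient(dis_prev, dis_curr)
    return float(np.mean((2 * gr * gd + c) / (gr ** 2 + gd ** 2 + c)))
```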
Traditional color night-vision fusion methods usually suffer from blurry visual effects and low color contrast between the target and the background. To obtain a more desirable color fusion effect, an improved color fusion method based on the non-subsampled shearlet transform (NSST) and color contrast enhancement was proposed. Firstly, NSST was employed to decompose the infrared and visible source images, and the gray-level fusion image was obtained according to self-adaptive fusion rules based on the S function and the local directional contrast. Secondly, the gray fusion image was assigned to the Y component, the differences of the source images were assigned to the U and V components, and the false-color fusion image was generated in YUV space. Finally, a natural daylight color image with color features similar to the gray fusion image was selected as the reference image, and its color features were transferred to the false-color fusion image with a nonlinear color-transfer technique in the uncorrelated YUV space, so as to enhance the color contrast between the hot target and the cold background. Experimental results showed that, compared with methods of recent years, our color fusion result contains more abundant details and highlights the hot target. Applying this method in the field of color night vision can enhance situational awareness and improve target detectability.
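The color-transfer step can be illustrated with a linear (Reinhard-style) statistics-matching sketch in a decorrelated YUV space; the paper's transfer is nonlinear, so this only conveys the underlying idea of imposing the reference image's channel statistics on the false-color fusion image.

```python
import numpy as np

def transfer_color(false_color_yuv, reference_yuv):
    """Shift and scale each channel of the false-color fusion image so that its
    mean and standard deviation match those of the daylight reference image."""
    out = np.empty_like(false_color_yuv, dtype=np.float64)
    for ch in range(3):
        src = false_color_yuv[..., ch].astype(np.float64)
        ref = reference_yuv[..., ch].astype(np.float64)
        out[..., ch] = (src - src.mean()) / (src.std() + 1e-8) * ref.std() + ref.mean()
    return out
```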