
Ultrafast laser filamentation is an attractive nonlinear phenomenon arising from the dynamic balance between Kerr self-focusing and the defocusing effect of the electron plasma generated through ionization. Regulating this non-diffractive, ultra-long propagation will play an important role in the development of novel ultrafast laser material processing technologies. In this paper, research on industrial applications of ultrafast lasers based on filamentation is reviewed. Starting from the physical features, basic mechanisms and characteristic advantages of filamentation, representative research achievements on laser applications of filamentary propagation in gaseous, liquid and solid media are presented. Open problems and prospects of the technique are also discussed.
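As a rough illustration of the self-focusing threshold that underlies filamentation, the critical power can be estimated from the Marburger formula P_cr = 3.77 λ² / (8π n₀ n₂). The sketch below uses an order-of-magnitude literature value for the nonlinear refractive index n₂ of air; it is not a figure taken from the paper.

```python
import math

def critical_power(wavelength, n0, n2):
    """Marburger estimate of the critical power for Kerr self-focusing:
    P_cr = 3.77 * lambda^2 / (8 * pi * n0 * n2), all quantities in SI units."""
    return 3.77 * wavelength ** 2 / (8 * math.pi * n0 * n2)

# Illustrative values for air at 800 nm; n2 is an order-of-magnitude
# literature value, not a figure from this paper.
print(critical_power(800e-9, 1.0, 3.2e-23))  # a few gigawatts
```

For air at 800 nm this gives a value on the order of a few gigawatts, consistent with commonly quoted critical powers for femtosecond filamentation.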
Seven common optical methods for gas concentration detection are described. The basic principles, advantages and disadvantages of each method are given in detail, improvements and some novel ideas are presented, and applications of combined methods are discussed. These optical methods include conventional gas concentration detection technologies, such as the optical interferential method, photoacoustic spectroscopy (PAS) and correlation spectroscopy, as well as novel technologies, such as tunable diode laser absorption spectroscopy (TDLAS), evanescent-wave field sensing, hollow-core photonic bandgap fiber (HC-PBF) sensing and fiber loop ring-down spectroscopy (FLRDS). The prospects of optical gas sensing are outlined at the end of the paper, mainly covering miniaturization, intelligence, portability, low power consumption, high accuracy, fast response and distributed multi-component telemetry.
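Most absorption-based methods in this survey, TDLAS in particular, ultimately rest on the Beer-Lambert law. A minimal sketch of inverting it for concentration follows; the absorption coefficient and path length are hypothetical calibration values, not data from any of the surveyed systems.

```python
import math

def transmitted_intensity(I0, alpha, C, L):
    """Beer-Lambert law: I = I0 * exp(-alpha * C * L)."""
    return I0 * math.exp(-alpha * C * L)

def gas_concentration(I, I0, alpha, L):
    """Invert the Beer-Lambert law for concentration C.
    alpha (absorption per unit concentration and length) and the path
    length L are assumed to be known from calibration."""
    return -math.log(I / I0) / (alpha * L)
```

A longer absorption path L increases the measured attenuation for the same concentration, which is why hollow-core fibers and multi-pass cells are attractive for trace-gas sensing.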
To deal with low-contrast, noisy natural images, an image enhancement method based on the internal generative mechanism (IGM) and an improved pulse coupled neural network (PCNN) is proposed. First, the original image is decomposed into a rough sub-graph and a detail sub-graph by the theory of IGM. Then, an improved PCNN method is adopted to make the rough sub-graph clearer. At the same time, an enhancement method that incorporates PCNN with fuzzy sets is introduced for the detail sub-graph, so as to sharpen the image edges and remove outliers. Finally, the final image is reconstructed from the processed rough and detail sub-graphs. Experimental results show that the proposed algorithm can effectively enhance image contrast and contours, as well as filter out some noise without any loss of image edges.
It is difficult to select an appropriate evaluation index for current image fusion. To solve this problem, a synthesis evaluation index is proposed based on the correlation between subjective and objective evaluations. First, a variety of fusion results are evaluated subjectively in terms of edge clarity, natural appearance, information quantity and overall quality, respectively. Secondly, 14 commonly used objective indexes are applied to evaluate the fusion results. Then, the subjective and objective results are normalized, and the Spearman correlation coefficient is used to analyze the correlation between the four subjective evaluations and each objective evaluation. Finally, according to the correlations, a comprehensive index is constructed from the 14 objective indexes in the 4 aspects. The experimental results show that the synthesis index is more relevant to subjective evaluation than any individual evaluation index or other comprehensive indexes.
A subdividing method for non-orthogonal grating moiré signals is studied, and a circuit scheme for a non-orthogonal grating moiré signal digital subdividing system is proposed, based on signal sampling, pre-processing and a subdivision structure, to achieve 32~512 times signal subdivision on an FPGA platform. The signal amplitude ratio and the sampling rate, two key factors in the circuitry, are analyzed. A model of signal amplitude deviance is constructed, and the quantitative relation between the signal amplitude ratio k and the subdivision value N is established. Test results suggest that the required compensation for signal amplitude deviance grows steadily as the subdivision value N increases. A model of signal frequency versus sampling frequency is also constructed, and the quantitative relation between the ratio of signal frequency to sampling frequency f/fs and the subdivision value N is established. Test results suggest that the required sampling frequency decreases steadily as the subdivision value N grows.
To detect far-field spot centroids with high precision, a multiple-spot centroid detecting method is proposed. Using a two-dimensional orthogonal diffraction grating, a single spot on the far-field focal plane is developed into a multiple-spot array. By increasing the input information of the detected far-field spots, the centroid detection accuracy can be improved. The experimental results show that the centroid detecting accuracy for multiple spots is 4 times higher than that for a single spot: the root mean square (RMS) of the single-spot centroid detecting error is 0.0385 pixels, while the RMS of the 16-spot centroid detecting error is 0.0099 pixels. Compared with the conventional centroid detecting method, the proposed far-field multiple-spot centroid detecting method is simpler and more convenient.
Machine vision is introduced into the plug-in robot system as a new type of sensor: environmental visual information (the color, shape and attitude of the target workpiece) is acquired by machine vision to achieve fast grasping and precise positioning. This method enables fully automated plug-in insertion, reduces the insertion error rate and improves the efficiency of plug-in operations, which is of great significance. The system uses a SCARA robot, a mechanical gripper and a CCD camera as the hardware base, building a monocular-vision SCARA robot platform for automatic identification and positioning of plug-in parts, with fuse pieces of various colors mixed in a circular feeding tray. Under the vibration of the tray motor, the fuse pieces are sent to a linear feeder in turn; the CCD camera then acquires images of each fuse piece, the contour shape and coordinate information are extracted from the image, the camera parameters are calibrated, and a parameterized model is established. The workpiece image coordinates are transformed into grasping position information in the robot coordinate system. Visual Studio is used as the development platform, and the visual recognition and positioning algorithm is developed with OpenCV library functions. The visual algorithm performs image preprocessing of the fuse piece, image segmentation, color recognition, corner detection and center point extraction, and the center point of the workpiece is determined. Finally, the coordinates of the target point are obtained by calculating the scale ratio and converting the coordinates.
The visual algorithm can recognize the color of the workpiece, obtain its position information, and control the robot gripper to grasp the target workpiece accurately, meeting the real-time requirements of general industrial production. In field debugging, the visual algorithm identified the color of the workpiece, obtained the workpiece coordinates, and controlled the robot gripper for fast target positioning and accurate grasping. The results show that the system has high positioning accuracy, speed and stability, and can meet the high-precision, high-reliability requirements of automatic plug-in insertion under robot operation. It can carry out fully automated plug-in operations for multiple colors and stations without manual participation, reduces the number of recycling passes, and improves plug-in efficiency.
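The color-segmentation, centroid-extraction and coordinate-conversion steps described above can be sketched in a few lines. This is a minimal NumPy illustration standing in for the system's full OpenCV pipeline; the color ranges, scale ratio and origin below are illustrative assumptions, not the system's calibrated values.

```python
import numpy as np

def locate_workpiece(rgb, lo, hi):
    """Per-channel color thresholding followed by centroid extraction:
    return the (x, y) pixel centroid of the region inside [lo, hi], or None."""
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def pixel_to_robot(px, py, scale, origin):
    """Map image coordinates to robot coordinates with a scale ratio and a
    translation (a simplified stand-in for the full calibrated camera model)."""
    return origin[0] + scale * px, origin[1] + scale * py
```

In the real system, the scale and origin would come from the camera calibration, and corner detection would refine the grasp point before the coordinate transform.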
To improve the detection accuracy of far-field spot centroids, a new optical path structure based on a two-dimensional orthogonal diffraction grating is proposed. Using two orthogonal one-dimensional diffraction gratings, a single spot on the far-field focal plane is developed into a multiple-spot array. A corresponding experimental setup was built to compare the centroid detection accuracy of the new and conventional methods under the same conditions. The experimental results show that, by increasing the input information of the detected far-field spots, the centroid detection accuracy can be improved. First, the far-field imaging principle of the two-dimensional orthogonal diffraction grating is introduced. In this paper, the beam-splitting characteristic of the two-dimensional diffraction grating is used to improve the detection accuracy of the incidence optical axis. The two-dimensional diffraction grating is composed of two orthogonal one-dimensional diffraction gratings. The incident beam is divided into a number of beams with the same phase but different intensities, forming a set of diffraction spots with different intensities but the same row and column spacing and distribution; these images are captured by a CCD camera. Second, the spot centroid detecting error is analyzed theoretically, and the results show that the random centroid error is one of the main error sources. Third, a formula for decreasing the random centroid detecting error based on far-field multiple spots is established. Finally, centroid detecting experiments are carried out. The experimental results show that the centroid detecting accuracy for multiple spots is 4 times higher than that for a single spot.
The root mean square (RMS) of the single-spot centroid detecting error is 0.0385 pixels, while the RMS of the 16-spot centroid detecting error is 0.0099 pixels. In conclusion, a modified method with a two-dimensional orthogonal diffraction grating is proposed to improve the detection accuracy of far-field spot centroids. The basic principle and processing of the new method are described, and the experimental setup with the diffraction grating is illustrated in detail. It is only necessary to place the diffraction grating in front of the imaging lens, so the structure is simple. Under the same conditions, experiments are performed to validate the high-precision centroid detection. Compared with the far-field single-spot centroid detecting method, the proposed far-field multiple-spot method is simpler and more convenient, especially for optical axis alignment with larger axis offsets. It can be used for optical axis detection in adaptive optics systems.
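The roughly 4× gain for 16 spots is consistent with averaging N independent centroid estimates, which reduces random error by about 1/√N. A toy Monte-Carlo sketch of this effect follows, using a synthetic Gaussian spot with additive noise rather than the paper's optical data.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid(img):
    """Intensity-weighted centroid (x, y) of a 2-D spot image."""
    ys, xs = np.indices(img.shape)
    s = img.sum()
    return (xs * img).sum() / s, (ys * img).sum() / s

def rms_error(n_spots, n_trials=800, noise=0.05):
    """RMS x-centroid error when averaging n_spots independent noisy copies
    of the same synthetic Gaussian spot (toy model, not the paper's data)."""
    ys, xs = np.indices((15, 15))
    spot = np.exp(-((xs - 7.0) ** 2 + (ys - 7.0) ** 2) / 8.0)  # true centroid at (7, 7)
    errs = []
    for _ in range(n_trials):
        est = np.mean([centroid(spot + noise * rng.standard_normal(spot.shape))[0]
                       for _ in range(n_spots)])
        errs.append(est - 7.0)
    return float(np.sqrt(np.mean(np.square(errs))))
```

Averaging 16 spots should cut the RMS error by roughly a factor of 4, mirroring the 0.0385 → 0.0099 pixel improvement reported above.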
A grating is a kind of photoelectric sensor widely used in defense technology, industrial production and daily life. To improve the measuring resolution, subdivision is applied to the grating moiré signals. Traditional subdivision methods, such as the phase-shifting resistance chain method, the phase-locked frequency multiplication method, the carrier modulation method and the amplitude segmentation method, all require that the two signals output by the grating reading head be strictly orthogonal. Actually, because of limits on grating precision and adjustment errors, the two signals usually cannot be completely orthogonal, and the phase difference fluctuates. Therefore, the non-orthogonal deviation of the grating moiré signals is a key factor affecting grating measurement accuracy. A subdividing method for non-orthogonal grating moiré signals is studied, and a circuit scheme for a non-orthogonal grating moiré signal digital subdividing system is proposed to achieve 32~512 times signal subdivision on an FPGA platform. In the process of subdivision, the amplitudes of the two grating signals are sampled to determine whether the interval of the signal sampling point has changed, realizing dynamic tracking of the intersections of the signal amplitudes. Then, according to the amplitudes of the starting and end points of the measurement signal, the corresponding phase points are calculated and the interval is recorded. Combining the intersections of the two signals, the phase change can be calculated. For the signal amplitude ratio and sampling rate parameters of the circuit system, mathematical modeling and quantitative analysis were performed, and the validity of the models was demonstrated by experiment.
The results of the study are as follows. 1) A circuit realization scheme based on signal collection, pre-processing and subdivision is presented, and formulas for calculating the phase change of non-orthogonal grating moiré signals are given. 2) A model of signal amplitude deviance is constructed, and the quantitative relation between the signal amplitude ratio k and the subdivision value N is established; test results suggest that the required compensation for signal amplitude deviance grows steadily as the subdivision value N increases. 3) A model of signal frequency versus sampling frequency is constructed, and the quantitative relation between the ratio of signal frequency to sampling frequency f/fs and the subdivision value N is established; test results suggest that the required sampling frequency decreases steadily as the subdivision value N grows. As proved by experiment, the method adapts well to the non-orthogonal deviations encountered in actual working conditions. The study results have guiding significance and reference value for the design and realization of grating moiré signal subdividing systems.
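The phase-recovery idea behind non-orthogonal subdivision can be sketched in a few lines: given two channels u1 = A·sin θ and u2 = B·sin(θ + φ) with a known, not-necessarily-90° phase difference φ, θ is recovered algebraically and then mapped to one of N subdivision intervals. This is an illustrative software model only, not the paper's FPGA circuit.

```python
import math

def recover_phase(u1, u2, A, B, phi):
    """Recover theta from two non-orthogonal moire channels
    u1 = A*sin(theta), u2 = B*sin(theta + phi), phi not necessarily 90 deg."""
    s = u1 / A                                          # sin(theta)
    c = (u2 / B - s * math.cos(phi)) / math.sin(phi)    # cos(theta)
    return math.atan2(s, c)

def subdivision_count(theta, N):
    """Map a phase in [0, 2*pi) to one of N subdivision intervals."""
    return int((theta % (2 * math.pi)) / (2 * math.pi / N))
```

Note that the correction degrades as φ approaches 0 or 180° (sin φ → 0), which is one reason the amplitude ratio and residual phase deviation must be modeled carefully as N grows.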
Image fusion is an important branch of multi-sensor information fusion, which synthesizes several images, or sequential detected images, of one scene into a more complete and thorough image. At present, this technology is universally used in remote sensing, computer vision, target detection and recognition, etc. However, because fusion image types vary, there is no standard evaluation method. Researchers have to select appropriate evaluation indicators from a number of objective indicators by experience. As a result, different studies select different evaluation indicators, making comparison difficult and weakening the persuasiveness of theoretical studies. A current research focus is to choose relevant evaluation indicators according to the evaluation targets and synthesize the chosen indicators into a comprehensive indicator; indicator accuracy can be improved through complementary advantages among indicators. An evaluation method for multiband fusion images is proposed based on the correlation between subjective and objective evaluations. This evaluation method includes the following steps. First, subjectively evaluate a variety of fusion results from four aspects: edge clarity, natural appearance, information quantity and overall quality. The evaluation is divided into five levels: "good", "better", "normal", "poor" and "bad". Secondly, calculate the 14 objective evaluation indicators of the fusion results. Thirdly, normalize the subjective and objective evaluation results. Fourthly, use the Spearman correlation coefficient to calculate the correlation between each subjective evaluation aspect and the 14 objective indicators. Fifthly, use the correlations to calculate the weight of each objective indicator in the comprehensive indicator.
Finally, a comprehensive index is constructed from the 14 indexes according to their correlations for every evaluation aspect. The experimental results show that the synthesis indicator based on the correlation between subjective and objective evaluation is more relevant to the subjective evaluations than any individual evaluation indicator, CMSVD (complex matrix singular value decomposition) or MSA (multi-hierarchical synthesis analysis). The correlations for edge clarity, natural appearance, information quantity and overall quality are 0.634, 0.630, 0.737 and 0.661, respectively. For different evaluation aspects, the correlations between objective and subjective evaluation differ; however, the correlations of AG (average gradient), SF (spatial frequency) and VIFF (visual information fidelity for fusion) are relatively higher than those of other indicators.
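The correlation-weighting scheme described above can be sketched as follows: each objective index is min-max normalized and weighted by its Spearman correlation with the subjective scores. Clipping negative correlations to zero is an illustrative choice here, not a detail specified by the paper.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie handling; enough for illustration)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

def synthesis_index(objective, subjective):
    """Combine objective indexes into one comprehensive score: each index is
    min-max normalized and weighted by its Spearman correlation with the
    subjective scores (negative correlations clipped to zero).
    objective: (n_images, n_indexes); subjective: (n_images,)."""
    w = np.array([max(spearman(col, subjective), 0.0) for col in objective.T])
    w = w / w.sum()
    lo, hi = objective.min(0), objective.max(0)
    norm = (objective - lo) / (hi - lo + 1e-12)
    return norm @ w
```

Indexes that track the subjective ranking (such as AG, SF and VIFF in the paper's results) thereby dominate the comprehensive score, while uncorrelated indexes contribute little.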
Image enhancement is an important and fundamental problem in image processing. However, some images acquired by a vision system lose a mass of effective features, appearing low in contrast and high in noise, which hinders image enhancement and the subsequent processing in computer vision applications. To deal with low-contrast, noisy natural images, an image enhancement method based on the internal generative mechanism (IGM) and an improved pulse coupled neural network (PCNN) is proposed. First, in the decomposition step, an image is segmented into two parts using the theory of IGM: a rough sub-graph, which contains the basic information of the image, and a detail sub-graph, which contains the image details. Second, to make the rough sub-graph clearer, an improved PCNN enhancement method with fuzzy sets is adopted. The Lij in PCNN represents the working state of each neuron, and every neuron has its own Lij, so we use Lij as the input of the fuzzy function to obtain the fuzzy membership. Subsequently, through successive iteration of the fuzzy membership, Lij is stretched non-linearly, and the contrast between target and background is enhanced accordingly. At the same time, βij in PCNN affects the ignition cycle between the central neuron and its neighborhood neurons, which in turn affects the gray values of the pixels. By improving the calculation of βij in PCNN, the image edges of the detail sub-graph are sharpened and its noise removed. Finally, the final image is reconstructed from the processed rough and detail sub-graphs.
To verify the effectiveness and superiority of the method, we design three sets of controlled experiments against several PCNN enhancement algorithms, including the original PCNN method in Ref.[7] and the improved PCNN methods in Ref.[8] and Ref.[9]. We choose three classic images to show the qualitative experiment results, presented in Fig. 4, Fig. 5 and Fig. 6. Then, to present the quantitative results, we also choose five reference and no-reference image quality assessment measures, namely DV/BV, SSIM, entropy, SNR and EPI, to compare the effects of the various image enhancement methods. Experimental results show that the proposed algorithm can effectively enhance image contrast and contours, as well as filter out some noise without any loss of image edges.
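As an illustration of the kind of fuzzy-set contrast stretching applied to the rough sub-graph, the classic fuzzy intensification (INT) operator can stand in for the membership iteration: values below the mid-level are pushed darker and values above it brighter. This is a simplified sketch, not the proposed PCNN method itself.

```python
import numpy as np

def fuzzy_intensify(img, iterations=1):
    """Classic fuzzy-set intensification (INT) operator: normalize to a
    membership in [0, 1], then repeatedly apply the piecewise quadratic
    mu' = 2*mu^2 (mu <= 0.5) else 1 - 2*(1 - mu)^2 to stretch contrast."""
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)
    return mu
```

Each iteration steepens the mapping around the mid-gray level, which is the same qualitative effect the paper obtains by iterating the fuzzy membership of Lij.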
Pedestrian detection is a key technique for various applications, such as surveillance, tracking systems and autonomous driving. Although the topic has been intensively investigated and significant improvement has been achieved in recent years, pedestrian detection is still a challenging task, limited by occluded appearances, cluttered backgrounds and low image resolution. Besides, since most recent research focuses on detecting pedestrians in visible-spectrum images, it often fails on images captured at night or under bad lighting. Ambient lighting, however, has less effect on thermal imaging: thermal images usually present clear human silhouettes but lose the fine visual details of pedestrians that visible cameras capture. To overcome the drawbacks of visible images, it is helpful to fuse the information of visible images and long-wavelength infrared images. The aggregate channel feature is a simple but useful way to detect pedestrians; however, it only uses the information of visible-spectrum images. For the above reasons, an improved pedestrian detection algorithm based on multispectral aggregate channel features is proposed. First, the aggregate channel features of the visible image and the infrared image are extracted, respectively. Specifically, the channel features extracted from the visible images include three LUV color channels, one normalized gradient magnitude channel and six histogram of oriented gradients (HOG) channels; the channel features extracted from the infrared images include one brightness channel and nine HOG channels. All the channel features make up the aggregate multispectral channel features. Then, to exploit the symmetry of pedestrians in infrared images, an improved center-symmetric local binary pattern is proposed. The improved pattern feature is obtained by changing the pixel contrast rule and comparing the contrast result with an adaptive threshold.
The improved central symmetric local binary pattern feature is added to the feature channels to obtain the aggregate multispectral channel features. Next, to learn more local features and observe the effect of filters, different filter banks are designed to filter the aggregate multispectral channel features. Finally, the real adaptive boosting learning method is used to train the classifier and realize multispectral pedestrian detection. Experiments show that the improved local binary pattern feature better describes the symmetry of pedestrians in infrared images and that the intermediate filter layer enriches the candidate feature pool. The algorithm makes use of the complementary information provided by color and thermal images, which can effectively detect pedestrians in various scenes and improve pedestrian detection accuracy. Compared with previous multispectral aggregate channel detection work, the algorithm reduces the log-average miss rate.
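For context, a minimal center-symmetric LBP over a 3x3 neighbourhood can be sketched as below. The pairing of opposite neighbours is the standard CS-LBP scheme; the adaptive-threshold rule used here (a fixed fraction of the local mean) is a hypothetical stand-in, as the paper's own contrast rule and adaptive threshold are not reproduced.

```python
import numpy as np

def cs_lbp(image):
    """Center-symmetric LBP on a 3x3 neighbourhood (radius 1).

    Each of the 4 opposite-neighbour pairs contributes one bit, set when
    the pair difference exceeds a threshold; codes lie in 0..15. The
    threshold below (5% of the local 3x3 mean) is illustrative only.
    """
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8 neighbour offsets, ordered so that index i and i+4 are opposite
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    # assumed adaptive threshold: 5% of the local 3x3 mean intensity
    local_mean = sum(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                     for dy, dx in offs + [(0, 0)]) / 9.0
    t = 0.05 * local_mean
    for bit, (dy, dx) in enumerate(offs[:4]):
        oy, ox = offs[bit + 4]
        a = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        b = img[1 + oy:h - 1 + oy, 1 + ox:w - 1 + ox]
        out |= ((a - b > t).astype(np.uint8) << bit)
    return out
```

Because only opposite pairs are compared, the code length is 4 bits instead of the 8 of plain LBP, which keeps the feature channel compact while still responding to the left-right symmetry that pedestrian silhouettes exhibit in infrared images.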
Satellite cloud image processing is widely used in meteorology, and convective clouds attract great attention in meteorological monitoring. Generally speaking, convective clouds play a pivotal role in governing rainfall, and they are also responsible for modulating the radiation budget of the earth-atmosphere system. In particular, the emergence of cumulonimbus, which forms at the beginning of convection, often indicates thunder and lightning or torrential rain, and may even accompany typhoons and other natural disasters. Hence, convective cloud detection is a key factor for weather forecasting and climate monitoring, and it helps to prevent natural disasters. In this paper, a modified support vector machine (SVM) is proposed to detect convective clouds. The traditional SVM is easily affected by noise and outliers, and its training time increases dramatically with the growth in the number of training samples. Moreover, satellite cloud images are easily deteriorated by noise and intensity non-uniformity, and a huge amount of data needs to be processed regularly, so it is hard to detect convective clouds in satellite images using the traditional SVM. To deal with this problem, a novel method for the detection of convective clouds based on a fast fuzzy support vector machine (FFSVM) is proposed. FFSVM is constructed along two lines: eliminating feeble samples and designing a new membership function. First, according to the distribution characteristics of the fuzzy inseparable sample set and the fact that the classification hyperplane is determined only by support vectors, this paper uses SVDD, a Gaussian model and a border vector extraction model comprehensively to design a three-step sample selection method, which eliminates most redundant samples while keeping possible support vectors.
Then, by defining adaptive parameters related to the attenuation rate and the critical membership on the basis of the distribution characteristics of the training set, an adaptive membership function is designed. Finally, the FFSVM is trained on the remaining samples using the adaptive membership function to detect convective clouds. Experiments on FY-2D satellite images show that the proposed method, compared with a traditional FSVM in which no samples are eliminated, not only remarkably reduces training time but also further improves the accuracy of convective cloud detection.
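The general idea of a fuzzy-SVM membership function, namely down-weighting samples that are far from their class centre so that noise and outliers influence the decision boundary less, can be sketched as follows. The attenuation and floor parameters here stand in for the paper's adaptive attenuation-rate and critical-membership parameters; the paper's actual adaptive rules are not reproduced, so treat this purely as an illustration of the mechanism.

```python
import numpy as np

def fuzzy_memberships(X, attenuation=2.0, floor=0.1):
    """Illustrative fuzzy-SVM membership for one class of samples X.

    Membership decays exponentially with distance from the class centre
    and is clipped from below by `floor` (a stand-in for the critical
    membership), so no sample is discarded entirely. `attenuation`
    stands in for the adaptive attenuation rate.
    """
    centre = X.mean(axis=0)
    d = np.linalg.norm(X - centre, axis=1)
    d_max = d.max() if d.max() > 0 else 1.0
    # exponential decay with normalized distance, then apply the floor
    m = np.exp(-attenuation * d / d_max)
    return np.maximum(m, floor)
```

In practice such memberships are passed to the SVM trainer as per-sample weights (e.g. `sample_weight` in common SVM implementations), so an outlier with low membership contributes little to the margin optimization.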
With the problem of air pollution and the improvement of life quality, people have become increasingly concerned about the surrounding air quality in recent years, which has also promoted the development of gas concentration detection technologies. At present, these technologies focus on electrochemical methods, catalytic combustion, gas chromatography and optical methods. Among them, optical gas concentration detection has unique advantages, such as high sensitivity and high accuracy. Combined with optical fiber sensing technology, it can detect gas concentration in extreme environments, with the advantages of immunity to electromagnetic interference, flame retardancy and intrinsic safety. In contrast, the non-optical detection methods perform poorly, with low sensitivity, poor accuracy and low reproducibility, which makes them unsuitable for industrial sites. Seven common optical methods for gas concentration detection are described, comprising three conventional gas concentration detection technologies and four novel methods. The former consist of the optical interferential method, photoacoustic detection (PAS) and correlation spectroscopy. The latter consist of tunable diode laser absorption spectroscopy (TDLAS), evanescent wave field sensing technology, hollow-core photonic bandgap fiber (HC-PBF) sensing technology and fiber loop ring-down spectroscopy (FLRDS). The basic principles, advantages and disadvantages of each method are given and compared in detail. Improvement work and some novel ideas are presented. The applications of combined methods are also discussed.
The prospects of optical gas sensing are listed, mainly miniaturization, intelligence, portability, low power consumption, high accuracy, fast response and distributed multi-component telemetry technology. Finally, some new ideas and technologies for gas concentration sensing are pointed out, such as seeking sensitive films that can interact with the measured gas, adopting special fibers, or using claddings doped with rare-earth elements. In addition, on the basis of evanescent-field sensing, researchers have combined physics with optics to form surface plasmon resonance fiber gas sensing. Using a hollow-core photonic bandgap fiber as the gas chamber, which shares the fiber with the optical path, increases the utilization of optical power and can achieve distributed sensing, although the gas diffusion cycle time must be considered. The emergence of laser technology has made optical detection methods even more capable, and with the development of wireless communication technology and people's growing health awareness, harmful gas detection devices will become more and more common in households. Simultaneously, distributed telemetry sensing technology for gas concentration will also become increasingly common in factories.
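As a small worked example of the principle underlying absorption-based methods such as TDLAS, direct absorption follows the Beer-Lambert law, and the transmitted intensity can be inverted to recover the absorber number density. The numbers below are illustrative round values, not data for any real absorption line, and practical TDLAS instruments fit wavelength-modulated line shapes rather than performing this single-point inversion.

```python
import math

def number_density(I0, I, sigma, L):
    """Invert the Beer-Lambert law, I = I0 * exp(-sigma * N * L), for N.

    I0, I : incident and transmitted intensities (same arbitrary units)
    sigma : absorption cross-section, cm^2 per molecule (assumed value)
    L     : optical path length, cm
    Returns N, the absorber number density in molecules per cm^3.
    """
    return math.log(I0 / I) / (sigma * L)

# Illustrative numbers only: 5% absorption over a 100 cm path with an
# assumed sigma = 1e-19 cm^2 gives N = ln(1/0.95) / 1e-17 molecules/cm^3.
N = number_density(1.0, 0.95, 1e-19, 100.0)
```

The same inversion underlies why HC-PBF gas cells and multipass arrangements help: increasing the effective path length L raises the measurable absorbance for a given concentration.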
The research progress of ultrafast laser industrial applications based on the filamentation effect is introduced. Ultrafast laser filamentation is an attractive nonlinear phenomenon arising from the dynamic balance between Kerr self-focusing and the defocusing effect of the electron plasma generated through the ionization process. It has been observed for laser wavelengths from the ultraviolet to the infrared and for pulse durations from several tens of femtoseconds to picoseconds. The optical intensity in the filamentary volume can become high enough to induce permanent structural modifications, which can be exploited for material processing with high precision and some special features. The basic characteristics and theoretical models of filament propagation are described briefly for a better understanding of the effect. However, the main emphasis of the paper is on laser industrial applications of the filamentation effect, which has proven a promising and growing research field in recent years. Achieving non-diffractive ultra-long transmission of filament propagation will play an important role in the development of novel ultrafast laser material processing technology. Starting from the physical features, basic mechanism and characteristic advantages of filamentation effects, representative research achievements on laser applications of filamentation induced in different media, namely gas, liquid and solid, are presented. It is demonstrated that laser filamentation in gas provides high-intensity plasma strings of micrometric diameter and lengths of tens of centimeters, which can achieve remote drilling, cutting and milling of metals, biological materials, ceramics and single crystals (sapphire). Complex 3D shapes can be machined without any adjustment of the technique because the processing is carried out under defocusing conditions.
Micromachining techniques of cutting and welding with water acting as the medium for filament formation are introduced afterwards. Filament formation in water decreases the focal spot diameter and increases the fluence and axial focal length, which makes it possible to drill holes in thick soda-lime and hardened glasses, even for complex-shape fabrication. Filament formation at the interface of two glass samples has also been used for welding applications. By varying the repetition rate, scanning speed and focal position to find optimal conditions, strong glass welding via filamentation was obtained. The development problems and prospects of the technique are also considered and discussed. Ultrafast laser processing using filamentation should become a versatile technique in future industrial material machining, because the material modification is initiated by nonlinear absorption, with advantages quite different from common ablation.
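To give a sense of scale for the self-focusing side of the balance described above, the Marburger estimate of the critical power for Kerr self-focusing, P_cr = 3.77 λ² / (8π n₀ n₂), can be evaluated directly. The n₂ value for air used below is an assumed order-of-magnitude figure; reported values vary with wavelength and pulse duration.

```python
import math

def critical_power(wavelength_m, n0, n2):
    """Marburger critical power for Kerr self-focusing, in watts:
        P_cr = 3.77 * lambda^2 / (8 * pi * n0 * n2)
    where n0 is the linear and n2 the nonlinear refractive index (m^2/W).
    """
    return 3.77 * wavelength_m ** 2 / (8.0 * math.pi * n0 * n2)

# Air at 800 nm with n2 ~ 3e-23 m^2/W (assumed order of magnitude):
# the result is a few gigawatts, which is why filamentation in air
# requires amplified femtosecond pulses rather than oscillator output.
p_cr_air = critical_power(800e-9, 1.0, 3e-23)
```

Only pulses whose peak power exceeds P_cr collapse into filaments, so this single number largely decides which laser sources can drive the remote machining applications discussed above.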
Low-cost, high-efficiency fiber-optic communication requires a simple and compact solution that enables ultra-high bandwidth modulation. A direct detection system built around a directly modulated semiconductor laser provides such a solution.
Classical dielectric waveguides (DWs) offer single-mode operation over a very broad bandwidth with low propagation loss. The main shortcoming of conventional DWs, especially those comprised of lightweight materials with a low dielectric constant, is poor power confinement, which causes significant crosstalk and interference between neighboring DWs, thereby hampering their deployment in a densely packed layout for effective system integration and miniaturization.
Quantum key distribution (QKD) uses individual light quanta in quantum superposition states to guarantee unconditional communication security between distant parties. In practice, the achievable distance for QKD has been limited to a few hundred kilometers, owing to the channel loss of fibers or terrestrial free space, which exponentially reduces the photon rate. Satellite-based QKD promises to establish a global-scale quantum network by exploiting the negligible photon loss and decoherence in empty outer space.