
In order to achieve effective and reliable video transmission, a joint video coding scheme based on dictionary learning and the concatenation of LT codes and LDPC codes is proposed for an underwater single-photon communication system. Sparse coding based on dictionary learning greatly compresses the amount of video data. According to the deletion characteristic of the underwater single-photon channel, the LT-LDPC concatenated channel coding method overcomes the excessive decoding overhead of LT codes. To address the nonzero decoding-failure probability of LT coding, a double-feedback mechanism confirming decoding success is proposed. The experimental results show that when the channel error rate is on the order of 10⁻² and the video compression rate is 75.6%, the video frames can be reconstructed with an average peak signal-to-noise ratio (PSNR) of 37.4921 dB.
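For reference, the PSNR used to evaluate the reconstructed frames is the standard metric computed from the mean squared error between an original and a reconstructed frame; a minimal sketch (assuming 8-bit frames stored as NumPy arrays, not the authors' code) is:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two 8-bit frames, in dB."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: average PSNR over a sequence of reconstructed video frames
# (frames_in and frames_out are hypothetical lists of np.uint8 arrays)
# avg_psnr = np.mean([psnr(a, b) for a, b in zip(frames_in, frames_out)])
```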
Atmospheric polarization has broad application prospects in navigation and other fields. However, limited by the physical characteristics of atmospheric-polarization acquisition devices, only local and discontinuous polarization information can be obtained at any one time, which restricts practical application. To solve this problem, by exploiting the continuity of the atmospheric polarization pattern, this paper proposes a network that generates the full atmospheric polarization pattern from local polarization information. In addition, polarization information is affected by weather conditions, the geographic environment, and other factors, and such polarization data are difficult to collect in the real environment. To address this, this paper mines the diversity relationship among few-shot data collected under different weather and geographic conditions, by which the generated atmospheric polarization pattern is generalized to different conditions. Experiments are carried out on both simulated and measured data. Compared with other recent methods, the experimental results demonstrate the superiority and robustness of the proposed method.
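As background, simulated atmospheric polarization patterns of the kind used above are commonly generated with the single-scattering Rayleigh model, in which the degree of polarization depends only on the scattering angle between the sun and the viewing direction; the sketch below is our assumption about how such simulated data can be produced, not the paper's own code.

```python
import numpy as np

def rayleigh_dop(sun_zenith, sun_azimuth, view_zenith, view_azimuth, dop_max=1.0):
    """Degree of polarization of skylight under the single-scattering Rayleigh model.

    All angles are in radians; dop_max is the maximum observable degree of polarization.
    """
    # Scattering angle gamma between the sun direction and the viewing direction
    cos_gamma = (np.cos(sun_zenith) * np.cos(view_zenith)
                 + np.sin(sun_zenith) * np.sin(view_zenith)
                 * np.cos(view_azimuth - sun_azimuth))
    cos_gamma = np.clip(cos_gamma, -1.0, 1.0)
    sin2_gamma = 1.0 - cos_gamma ** 2
    return dop_max * sin2_gamma / (1.0 + cos_gamma ** 2)

# Example: polarization pattern over the sky hemisphere for a sun at 60 deg zenith
zen, azi = np.meshgrid(np.linspace(0, np.pi / 2, 91), np.linspace(0, 2 * np.pi, 361))
dop_map = rayleigh_dop(np.deg2rad(60.0), 0.0, zen, azi)
```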
A stacked liquid lens based on electrowetting-on-dielectric (EWOD) is designed, and its ability to correct distorted wavefronts caused by curvature, tilt, and piston is analyzed. A model of the stacked liquid lens is built in COMSOL and used to simulate how the liquid interfaces change under different voltage combinations and over what range they can be tuned. The correction ability of the stacked liquid lens at a given point on the wavefront is assessed from the wavefront map and the point spread function (PSF) obtained with ZEMAX. The results show that different types of distorted wavefront can be compensated by the stacked liquid lens: the peak-to-valley (PV) value decreases from 19.7853λ to 0.18λ, the root-mean-square (RMS) value decreases from 5.6638λ to 0.0355λ, and the Strehl ratio (SR) increases from nearly 0 to 0.962. These results have broad prospects in the field of wavefront correction.
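The reported PV, RMS, and SR values are related by standard wavefront statistics; the sketch below uses the Maréchal approximation for the Strehl ratio, which is our assumption for illustration rather than the paper's stated procedure.

```python
import numpy as np

def wavefront_metrics(wavefront: np.ndarray):
    """PV and RMS errors (in waves) and the Marechal-approximation Strehl ratio.

    `wavefront` is the residual wavefront error in units of the wavelength.
    """
    w = wavefront[np.isfinite(wavefront)]
    pv = w.max() - w.min()                       # peak-to-valley error
    rms = np.sqrt(np.mean((w - w.mean()) ** 2))  # root-mean-square error
    strehl = np.exp(-(2.0 * np.pi * rms) ** 2)   # Marechal approximation
    return pv, rms, strehl

# e.g. an RMS residual of about 0.0355 waves corresponds to a Strehl ratio near 0.95
print(wavefront_metrics(np.random.normal(scale=0.0355, size=(128, 128)))[2])
```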
We demonstrate a new mode-locking method: multimode-interference mode-locking. The method is simple to construct: it is only necessary to fuse two short pieces of graded-index multimode fiber into a single-mode fiber laser, using the mode-interference effect of the single-mode-multimode-single-mode (SMS) structure as the saturable-absorption mechanism. To realize mode-locking with the SMS structure, the length of the multimode fiber must be controlled precisely; we therefore propose coiling the SMS structure into the polarization controller. A theoretical derivation shows that adjusting the polarization controller changes the phase of the light transmitted in the multimode fiber, so that the saturable-absorption effect can be achieved. Under a pump power of 263 mW, a stable fundamental-frequency mode-locked pulse train with a repetition rate of 24.83 MHz was obtained, with a pulse interval of 40.12 ns, a signal-to-noise ratio of 50.8 dB, and a center wavelength of 1881.7 nm. The conversion between soliton molecules and conventional solitons can be realized by adjusting the polarization controller and the pump power. At a pump threshold of 410 mW, a stable soliton-molecule mode-locked pulse train with a repetition rate of 25 MHz was obtained, with a pulse interval of 40.3 ns, a signal-to-noise ratio of 54.4 dB, and a center wavelength of 1887.60 nm.
An all-silicon PIN photodetector based on a black-silicon microstructure is reported. The device combines the broadband, high-absorption characteristics of the black-silicon structure with the high quantum efficiency and high response speed of PIN photodetectors. By adding a black-silicon microstructure layer to the traditional silicon PIN photodetector structure, the response of the detector in the near-infrared band is improved without affecting the response speed, offering a way to resolve the trade-off between quantum efficiency and response speed in vertical-structure PIN photodetectors. Test results show that the quantum efficiency of the device can reach 80% with a peak wavelength of 940 nm, the responsivity reaches 0.55 A/W, the dark current is about 700 pA, and the response time is 200 ns.
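For context, responsivity and external quantum efficiency are linked by the textbook relation R = ηqλ/(hc); the sketch below illustrates this relation only and is not the authors' measurement procedure.

```python
# Relation between responsivity R (A/W) and external quantum efficiency eta
# at wavelength lam (m): R = eta * q * lam / (h * c)
Q = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

def responsivity(quantum_efficiency: float, wavelength_m: float) -> float:
    """Photodiode responsivity in A/W from external quantum efficiency."""
    return quantum_efficiency * Q * wavelength_m / (H * C)

# Illustration only: responsivity of an 80%-efficient detector at 940 nm
print(responsivity(0.8, 940e-9))
```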
With the development and application of blue semiconductor lasers, obtaining a high-brightness blue light source by beam-combining technology has become a research hotspot. To obtain high-brightness blue output, 48 single-emitter semiconductor lasers, each with a wavelength of 450 nm and an output power of 3.5 W, are collimated along the fast and slow axes, spatially combined, and focused into a 105 μm/0.22 NA fiber. Blue light with a power of 144.7 W and a brightness of 11 MW/(cm²·sr) is obtained; the coupling efficiency is 93.78%, and the optical-to-optical conversion efficiency of the whole system is 86.13%.
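The quoted brightness follows from the fiber-coupled power, the core area, and the solid angle set by the numerical aperture, B = P/(A·Ω) with Ω ≈ π·NA²; the check below uses this definition, which is our assumption, and it is consistent with the quoted value.

```python
import math

def fiber_brightness(power_w: float, core_diameter_um: float, na: float) -> float:
    """Brightness in W/(cm^2*sr) for power coupled into a step-index fiber."""
    radius_cm = core_diameter_um * 1e-4 / 2.0
    area_cm2 = math.pi * radius_cm ** 2   # fiber core area
    solid_angle_sr = math.pi * na ** 2    # solid angle subtended by the numerical aperture
    return power_w / (area_cm2 * solid_angle_sr)

# 144.7 W in a 105 um / 0.22 NA fiber -> about 1.1e7 W/(cm^2*sr), i.e. ~11 MW/(cm^2*sr)
print(fiber_brightness(144.7, 105.0, 0.22))
```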
Person re-identification suffers from problems such as the difficulty of labeling datasets, small sample sizes, and the loss of detailed features after feature extraction. To address these issues, a joint discriminative and generative learning method for person re-identification with deep dual attention is proposed. Firstly, a joint learning framework is constructed in which the discriminative module is embedded into the generative module, realizing end-to-end training of image generation and discrimination; the generated images are then fed back to the discriminative module so that the generative and discriminative modules are optimized simultaneously. Secondly, by exploiting the connections between the channels of the attention modules and the spatial connections between the attention modules, all channel features and spatial features are merged to construct a deep dual-attention module. By embedding this module in the teacher model, the model can better extract fine-grained features of the objects and improve recognition ability. Experimental results show that the algorithm has better robustness and discriminative capability on the Market-1501 and DukeMTMC-ReID datasets.
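A minimal sketch of a dual attention block that fuses channel attention and spatial attention in the spirit described (a CBAM-style construction assumed here for illustration; the paper's exact module may differ) could look like this:

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Channel attention followed by spatial attention on a feature map (B, C, H, W)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, re-weight channels
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 convolution over pooled channel statistics
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel weights from average- and max-pooled channel descriptors
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial weights from channel-wise average and max maps
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(stats))

# features = torch.randn(8, 256, 32, 16)   # hypothetical re-ID feature map
# refined = DualAttention(256)(features)
```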
To address issues such as varying object scales, complicated illumination conditions, and the lack of reliable distance information in driverless applications, this paper proposes a multi-modal fusion method for object detection using convolutional neural networks. A depth map is generated by projecting the LiDAR point cloud onto the image plane and is taken as input together with the RGB image; the input is also processed with a sliding window to reduce information loss. Two feature-extraction networks extract features from the image and the depth map respectively, and the resulting feature maps are fused through a connection layer. Objects are detected by applying position regression and object classification to the fused feature map, and non-maximum suppression is used to refine the detection results. Experimental results on the KITTI dataset show that the proposed method is robust under various illumination conditions and is especially effective at detecting small objects; compared with other methods, it offers combined advantages in detection accuracy and speed.
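The depth-map generation step amounts to projecting each LiDAR point through the camera calibration onto the image plane and recording its depth; a minimal sketch (assuming a KITTI-style combined 3x4 projection matrix as a hypothetical input, not the paper's implementation) is:

```python
import numpy as np

def lidar_to_depth_map(points_xyz: np.ndarray, proj: np.ndarray,
                       height: int, width: int) -> np.ndarray:
    """Project LiDAR points (N, 3) to a sparse depth map using a 3x4 projection matrix."""
    # Homogeneous LiDAR coordinates -> camera image plane
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    uvw = pts_h @ proj.T                      # (N, 3): [u*w, v*w, w]
    depth = uvw[:, 2]
    valid = depth > 0                         # keep points in front of the camera
    u = np.round(uvw[valid, 0] / depth[valid]).astype(int)
    v = np.round(uvw[valid, 1] / depth[valid]).astype(int)
    z = depth[valid]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth_map = np.zeros((height, width), dtype=np.float32)
    # Keep the nearest point when several points project to the same pixel
    order = np.argsort(-z[inside])
    depth_map[v[inside][order], u[inside][order]] = z[inside][order]
    return depth_map

# depth = lidar_to_depth_map(velo_points, P2_times_Tr, 375, 1242)  # hypothetical KITTI frame
```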
Apodization has found many important applications in imaging and optical communication. Traditional apodization methods are based on phase or amplitude modulation and suffer from either a narrow working bandwidth or reduced spatial resolution. Here, a broadband achromatic metasurface filter is proposed to realize apodization imaging without sacrificing spatial resolution. With this filter, a nearly dispersionless phase modulation across the entire visible waveband can be achieved. Simulations indicate that the focusing efficiency of the metasurface filter is twice that of the phase filter, and the imaging contrast is improved threefold compared with the Gaussian filter. The sidelobes of the point spread function are suppressed to the 10⁻⁵ level over the whole visible spectrum from 400 nm to 700 nm, and diffraction-limited or even sub-diffraction resolution can be achieved with this method.
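For intuition, the sidelobe suppression that apodization provides can be illustrated by comparing the PSF of a uniform circular pupil with that of a conventionally amplitude-apodized (Gaussian-weighted) pupil via Fourier optics; the sketch below is a generic scalar-diffraction illustration under our own assumptions, not the paper's metasurface simulation.

```python
import numpy as np

n = 512
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)
r = np.sqrt(xx ** 2 + yy ** 2)

pupil_uniform = (r <= 0.5).astype(float)                   # clear circular aperture
pupil_apodized = pupil_uniform * np.exp(-(r / 0.25) ** 2)  # Gaussian-apodized aperture

def psf(pupil: np.ndarray) -> np.ndarray:
    """Normalized intensity PSF of a pupil function in the Fraunhofer approximation."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    intensity = np.abs(field) ** 2
    return intensity / intensity.max()

# The amplitude-apodized pupil trades a slightly wider main lobe for suppressed sidelobes,
# which is the traditional resolution penalty the metasurface filter is designed to avoid.
cut_uniform = psf(pupil_uniform)[n // 2]
cut_apodized = psf(pupil_apodized)[n // 2]
```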
To reduce the nonuniformity caused by process variations in the sCMOS readout circuit, an adaptive multipoint nonuniformity correction method is presented. The algorithm first determines the optimal segment points and the optimal number of segments by searching for the minimum-norm point and by threshold comparison, and then applies a two-point correction within each segment according to the segment information. This adaptive method effectively avoids the performance degradation that traditional multipoint methods suffer when the segment parameters are chosen improperly. In addition, to achieve real-time nonuniformity correction, a matching embedded data-stream correction scheme is proposed based on the characteristics of the adaptive multipoint algorithm, which performs nonuniformity correction without affecting the existing camera acquisition structure or acquisition rate.
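The per-segment correction reduces to a per-pixel two-point (gain/offset) correction between each segment's reference levels; the sketch below is a generic piecewise two-point correction under our own assumptions, not the authors' adaptive segmentation or embedded implementation.

```python
import numpy as np

def piecewise_two_point_nuc(frame, seg_points, low_refs, high_refs):
    """Piecewise two-point nonuniformity correction.

    frame      : raw image, shape (H, W)
    seg_points : sorted response levels [s0, ..., sK] delimiting the K segments
    low_refs   : per-segment reference frames at the low calibration level, shape (K, H, W)
    high_refs  : per-segment reference frames at the high calibration level, shape (K, H, W)
    """
    corrected = np.zeros_like(frame, dtype=np.float32)
    for k in range(len(seg_points) - 1):
        lo, hi = seg_points[k], seg_points[k + 1]
        upper = (frame <= hi) if k == len(seg_points) - 2 else (frame < hi)
        mask = (frame >= lo) & upper
        # Per-pixel gain maps each pixel's two reference responses onto their means
        target_lo = low_refs[k].mean()
        target_hi = high_refs[k].mean()
        gain = (target_hi - target_lo) / np.maximum(high_refs[k] - low_refs[k], 1e-6)
        corrected[mask] = gain[mask] * (frame[mask] - low_refs[k][mask]) + target_lo
    return corrected
```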