Polymer-dispersed liquid crystal (PDLC) materials have been widely applied in smart windows, privacy glass, automotive sunroofs, and display technologies due to their unique electro-optical properties. However, conventional PDLC films, typically white or gray, only switch between transparent and scattering states, limiting their potential in personalized designs and high-value applications. To enrich PDLC coloration and enhance performance, this study incorporates three anthraquinone dyes (Disperse Orange, Solvent Green 28, and Solvent Blue 104) and investigates their concentration-dependent effects. Key findings demonstrate that Disperse Orange increases driving voltage while significantly improving overall performance. At 0.5% dye concentration (mass fraction) and 15 μm cell thickness, the PDLC achieves a contrast ratio (CR) of 203.53, on-state transmittance of 81%, and peel strength of 194.74 N/m. Solvent Green 28 and Solvent Blue 104 reduce driving voltages, with Solvent Green 28 decreasing saturation voltage (Vsat) from 28.67 V to 17.13 V, while Solvent Blue 104 lowers Vsat from 28.67 V to 17.14 V. However, excessive dye loading degrades mechanical properties, reducing peel strength from 150.45 N/m (pristine) to 124.47 N/m and 133.07 N/m, respectively. These results establish a dual-optimization strategy for PDLC systems, simultaneously enhancing electro-optical and mechanical performance through rational dye selection and concentration control.
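If CR follows the usual electro-optical convention for PDLC films, namely the ratio of on-state to off-state transmittance (an assumption; the abstract does not state the definition), the reported figures imply an off-state transmittance of roughly 0.4%:

$$\mathrm{CR}=\frac{T_{\mathrm{on}}}{T_{\mathrm{off}}}\quad\Rightarrow\quad T_{\mathrm{off}}\approx\frac{81\%}{203.53}\approx 0.40\%$$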
To generate an electric field distribution that closely matches a parabola and thereby achieve a more ideal phase distribution in a liquid crystal (LC) lens, a dedicated electrode structure was designed. First, the electrode structure was derived from the resistance law and Ohm's law, and the influence of the electrode line length on the LC lens was analyzed. Electric field simulations of the structure were conducted, and four groups of LC lenses with different electrode line lengths and a 50 μm-thick LC layer were fabricated. Next, the wavefront patterns and oblique fringes of the LC lenses were acquired using the polarization interference method. One-dimensional phase information was extracted from the wavefront patterns, the spherical aberration was calculated from the oblique fringes, and the variation of the lens power with the electric field was determined. Finally, the response time of the lens was measured and imaging experiments were performed. The experimental results show that the acquired phase profiles fit a parabola with a degree of fit greater than 99.5%, so the design realizes the parabolic phase distribution required for LC lenses. Within the power range of -3.74 D to +3.86 D, the spherical aberration of the LC lens is less than 0.05λ, demonstrating a good lensing effect.
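For context, the parabolic target is the standard ideal thin-lens phase profile (a textbook relation, not taken from the paper):

$$\phi(r)=\phi_0-\frac{\pi r^{2}}{\lambda f},\qquad P=\frac{1}{f}=\frac{\lambda\,\Delta\phi_{\max}}{\pi r_{\max}^{2}}$$

where Δφ_max is the center-to-edge phase difference across an aperture of radius r_max; the >99.5% degree of fit quoted above refers to fitting the measured one-dimensional phase to this parabolic form.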
OLED flexible screens have become the core development direction of mobile-terminal display technology owing to their ultra-thin profile, high contrast, and excellent color performance. However, during assembly they are prone to "impression" (press-mark) defects caused by external forces, which directly affect product yield and user experience. Taking a flexible watch as an example, this article systematically analyzes the causes of two typical impressions (FPC edge-pulling type and device compressive-stress type) and, through two rounds of experiments, verifies how different materials, laminated structures, and foam combinations suppress them. The experiments show that using harder materials (e.g., replacing PI with PET), optimizing the stacking sequence (placing hard materials closer to the stress point), and increasing the NFC substrate thickness and optimizing the cover-film design can significantly improve the anti-impression capability. The proposed stacking optimization scheme raises the impression pressure threshold by up to 128.99%, providing a theoretical basis for the industrial design of flexible screens.
Traditional backlight acquisition designs in Mini-LEDs suffer from several drawbacks, including pronounced halo effects, overly bright backlighting that undermines energy savings, and suboptimal display quality. To address these issues, we propose a low-partition hierarchical weight assignment design for backlight control. Building on the traditional average dimming method, the design introduces hierarchical weight optimization for the backlight brightness of each partition. The image pixels within each partition are compared against a predetermined grayscale threshold and binary-labeled; the proportion of pixels exceeding the threshold is then computed and categorized into multiple levels, and grading weights are assigned to the levels according to a reverse compensation strategy. Using the assigned level weight of each partition, the partition's backlight brightness is updated from its average brightness value, thereby adjusting the displayed image brightness. The effectiveness of the design has been verified through Matlab simulation, and the design has been implemented on a Gowin FPGA. Results from 10 sets of practical tests indicate that the average structural similarity (SSIM) of the images is 0.967 19, closer to 1 than the 0.931 26 obtained by the average method and the 0.938 96 obtained by the maximum method. Compared with the average method and the maximum method, the average peak signal-to-noise ratio (PSNR) is increased by 6.000 69 dB and 13.842 22 dB, and the average energy consumption is reduced by 0.63 W and 2.71 W, respectively. The design not only reduces energy consumption but also significantly enhances display quality.
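The per-partition update can be sketched as follows. This is a minimal illustration with a hypothetical grayscale threshold, level boundaries, and weight table, since the abstract does not specify the actual values or the exact form of the reverse-compensation weights:

```python
import numpy as np

def hierarchical_backlight(gray_img, rows, cols, gray_thresh=128,
                           level_edges=(0.25, 0.5, 0.75),
                           level_weights=(0.7, 0.85, 1.0, 1.15)):
    """Per-partition backlight from average dimming plus level weights.

    gray_img      : 2-D uint8 array (grayscale frame).
    gray_thresh   : preset grayscale threshold for binary labeling (assumed).
    level_edges   : boundaries on the fraction of above-threshold pixels (assumed).
    level_weights : hypothetical reverse-compensation weights per level.
    """
    h, w = gray_img.shape
    bl = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = gray_img[i*h//rows:(i+1)*h//rows,
                             j*w//cols:(j+1)*w//cols]
            avg = block.mean()                    # traditional average dimming
            ratio = (block > gray_thresh).mean()  # fraction of bright pixels
            level = np.searchsorted(level_edges, ratio)
            bl[i, j] = np.clip(avg * level_weights[level], 0, 255)
    return bl
```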
To explore the glasses-free 3D display effect on mobile phones, a glasses-free 3D display solution based on lenticular-lens grating technology is proposed for mainstream OLED phones with the diamond-pentile sub-pixel configuration. The solution covers lens grating design, pixel mapping analysis, display system design, crosstalk reduction, and color-balance correction. A method is introduced that reduces color deviation by turning off half of the red sub-pixels or half of the blue sub-pixels when a lenticular lens grating covers the OLED screen of a mobile phone for glasses-free 3D display. Finally, the display method was verified on a mobile phone. The evaluation results show that illuminating 50% of the red or blue sub-pixels while retaining 70% of the green sub-pixels effectively reduces the color deviation of glasses-free 3D display on OLED mobile phones.
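The channel-gating idea can be sketched as below. The column-periodic masks are purely illustrative stand-ins: the real on/off pattern must follow the diamond-pentile layout and the lens pitch, neither of which is detailed in the abstract:

```python
import numpy as np

def subpixel_gate(rgb):
    """Illustrative channel gating: keep ~50% of red and ~70% of green.

    rgb : H x W x 3 float image in [0, 1]. A simple column-periodic mask
    stands in for the real diamond-pentile sub-pixel mapping.
    """
    out = rgb.copy()
    cols = np.arange(rgb.shape[1])
    out[:, cols % 2 == 1, 0] = 0.0    # switch off half of the red sub-pixels
    out[:, cols % 10 >= 7, 1] = 0.0   # retain 7 of every 10 green columns
    return out
```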
Due to space constraints, the primary and secondary drivers typically view the in-vehicle control system from an oblique angle, and in-vehicle displays are also susceptible to interference from ambient light; both factors often reduce display quality, make it difficult for drivers to discern the displayed information, and increase driving risk. To this end, this paper proposes a dual-perspective directional optimization optical micro-nano film (OMNF) technology for in-vehicle displays, aiming to optimize the viewing-angle characteristics for primary and secondary drivers. By designing a specific OMNF structure and attaching it to the surface of the in-vehicle display, the technology modulates the main light emitted by the display into the viewing-angle range of the primary and secondary drivers. Simulation results reveal that with this technology the display brightness increases by approximately 35% at the primary and secondary drivers' perspectives, the dark-state light leakage is reduced by 50%, and the contrast ratio is improved by more than 1.5 times. These results indicate that the proposed technology significantly improves display quality from the driver's perspective, providing drivers with a clearer and more readable in-vehicle display experience and enhancing driving safety.
To solve the problems of zoom-process stutter and image-quality inconsistency caused by lens switching in visible-light multi-lens cameras, a new dedicated test chart and measurement method is proposed. The dedicated test chart contains several specific digital patches in different colors. The color information of each frame in the zoom-process video is analyzed, and a mapping between pixel coordinates and actual distances in real space is constructed. By calculating the frame-to-frame changes in brightness, white balance, color rendering, and the actual spatial distance represented by each unit pixel over the entire zoom process, the smoothness and image-quality consistency are evaluated. Experimental results demonstrate the effectiveness of the new measurement method. For evaluating zoom image quality, subjective observation by human eyes is common industry practice; the proposed measurement method was verified against human observation experiments on more than 40 zoom videos from various mobile terminal devices, with an agreement of over 97.3%. The method has been applied in the testing of Xiaomi smartphones and tablets, and cumulative sales of products using this solution have exceeded one million devices.
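A minimal sketch of the frame-to-frame metric extraction, assuming a single gray patch located by a hypothetical bounding box (the actual chart uses several colored digital patches plus a calibrated pixel-to-distance mapping, which is omitted here):

```python
import numpy as np

def frame_deltas(frames, patch_box):
    """Per-frame brightness and white-balance changes over a zoom video.

    frames    : list of H x W x 3 float arrays (decoded video frames).
    patch_box : (y0, y1, x0, x1) locating a gray patch of the test chart
                (hypothetical placement).
    Returns brightness deltas and R/G balance deltas between consecutive
    frames; spikes indicate a non-smooth lens switch.
    """
    y0, y1, x0, x1 = patch_box
    lum, wb = [], []
    for f in frames:
        patch = f[y0:y1, x0:x1]
        r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
        lum.append(0.299 * r + 0.587 * g + 0.114 * b)  # Rec.601 luma
        wb.append(r / max(g, 1e-6))                     # simple R/G balance proxy
    return np.diff(lum), np.diff(wb)
```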
In particle image velocimetry (PIV), neural-network-based methods often struggle with high-speed or complex nonlinear flows: rapid changes in particle positions make tracking and matching difficult, feature extraction is limited in scale, and effective features are insufficiently captured. To address these issues, a novel flow-field estimation and dynamic particle tracking enhancement model, LiteFlowNet-CL, is proposed by combining ConvLSTM with the LiteFlowNet architecture. The study first enhances the ability of the LiteFlowNet model to identify and represent complex flow patterns, and then leverages the temporal modeling of the ConvLSTM network to suppress tracking errors of high-speed moving particles across time steps, significantly reducing the likelihood of losing track of particle image features. To validate the proposed model, comparative performance tests and ablation experiments were conducted on simulated particle images. The improved velocity-field estimation model achieved a root mean square error of 0.100 4, a 10.52% reduction relative to the classical LiteFlowNet optical-flow model and a further 1.463% reduction relative to the widely adopted high-performance LiteFlowNet-en model in the PIV domain. The proposed model effectively enhances the capture of complex flow-field characteristics in particle image velocimetry, with an error precision that meets the experimental requirements of turbulence analysis, providing a new technical pathway for PIV algorithm optimization and promoting fluid-mechanics experimental measurement toward higher spatiotemporal resolution.
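A minimal ConvLSTM cell of the kind combined with LiteFlowNet here might look as follows (hyperparameters are illustrative; this is not the paper's exact module):

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four gates come from one convolution over
    the concatenated input and hidden state, giving LSTM-style temporal
    memory while preserving spatial structure."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state                                  # hidden and cell state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)                 # update cell memory
        h = o * torch.tanh(c)                         # new hidden state
        return h, (h, c)
```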
The in-situ aberration detection of optical systems is of great significance for the fabrication and alignment of optical systems, the development of lithography machines, and the on-orbit adjustment of space cameras. Traditional in-situ detection methods, such as Phase Retrieval (PR) and Phase Diversity (PD), perform excellently under specific conditions, but they are limited under complex conditions such as large numerical apertures or when the Nyquist sampling condition of the optical system is not satisfied. Therefore, a method is proposed that combines the extended Nijboer-Zernike (ENZ) diffraction physical model with a deep neural network. Firstly, a deep residual network with a squeeze-and-excitation (SE) attention mechanism is constructed. Secondly, a mapping from the intensity image of the point spread function (PSF) to the phase distribution is established, achieving feature extraction from the diffracted intensity and prediction of the phase-description coefficients. Finally, the predicted coefficients are combined with the ENZ diffraction model to obtain the predicted PSF image, realizing wavefront detection of the optical system. Experimental results show that when the numerical aperture (NA) is large and Nyquist sampling is not satisfied, the residual wavefront RMS between the real and reconstructed wavefronts is about 0.02λ, better than other methods. Compared with other deep learning methods, this is an unsupervised method, which both reduces the dependence on large amounts of training data and improves the accuracy of wavefront detection.
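The SE channel-attention block referenced above has a standard form; a minimal sketch (reduction ratio illustrative):

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention: globally pool each channel,
    pass through a bottleneck, and reweight channels with sigmoid gates."""
    def __init__(self, ch, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                  # squeeze: global average
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid(),  # excitation: channel weights
        )

    def forward(self, x):
        return x * self.fc(x)                         # reweight feature channels
```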
For HDR (High Dynamic Range) image generation, to overcome the long acquisition time of multi-exposure images, inter-frame offset in dynamic scenes, and the large parameter counts and computation of existing methods, this paper proposes a lightweight HDR image fusion algorithm based on a linear-logarithmic response camera and acquires a multi-gain grayscale image dataset. Firstly, an improved multi-scale residual module extracts the multilevel features of the input image and enhances the feature dimension. Secondly, the multilevel features are fed into an Attention-Unet structure with depthwise-separable convolution to extract and fuse the multilevel information. Thirdly, point-wise convolution fuses the deep features of the image and outputs high dynamic range images compatible with standard display devices without additional tone mapping. Finally, the performance, parameter count, and computation of each ablation structure are compared, yielding the optimal solution that preserves the fusion effect while keeping the network lightweight. Experimental results show that the proposed algorithm performs better in both subjective visual effect and objective evaluation indices, with an MEF-SSIM of 0.986 6, VIF of 1.76, AG of 3.94, and SF of 14.32. The proposed high-dynamic-range fusion algorithm maintains excellent fusion quality and robustness even with significant differences between multi-gain images, and remains lightweight, with only 0.612M model parameters and a computational complexity of 7.254 GFLOPs.
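The depthwise-separable convolution used in the Attention-Unet factorizes a standard convolution into a per-channel spatial filter followed by a 1×1 point-wise fusion, which is where the parameter and FLOP savings come from; a minimal sketch:

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, k=3):
    """Depthwise-separable convolution: a per-channel spatial filter
    (groups=in_ch) followed by a 1x1 point-wise channel mixer."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1),                               # point-wise
    )
```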
Existing single-image super-resolution reconstruction methods based on diffusion probabilistic models are deficient in extracting spatial feature information, failing to fully mine the relevant information, and are redundant in computation. This paper designs a single-image super-resolution reconstruction method incorporating a multidimensional attention network. First, multidimensional attention is built on the SRDiff diffusion model, combining channel attention, self-attention, and spatial attention to enhance the model's ability to capture features at different scales, so that more detail and better global consistency are retained when recovering high-resolution images. Second, PConv partial convolution is introduced to accurately extract the spatial features of the image, improving the quality of the super-resolution results while significantly reducing computation and thus improving the model's operational efficiency. At a magnification factor of 4, the method is compared with other methods on five test sets; the results show that its peak signal-to-noise ratio is 0.762 dB higher and its structural similarity 0.082 higher than the averages of the compared methods. Subjectively, the proposed method produces more delicate details and better visual effects; objectively, it attains higher peak SNR and structural similarity values.
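Assuming PConv here refers to the FasterNet-style partial convolution, which convolves only a fraction of the channels and passes the rest through unchanged (hence the computation saving), a minimal sketch follows; the split ratio is illustrative, as the abstract does not give the paper's setting:

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution (FasterNet style): spatially convolve the first
    fraction of channels and forward the remainder untouched."""
    def __init__(self, ch, ratio=0.25, k=3):
        super().__init__()
        self.cp = max(1, int(ch * ratio))            # channels actually convolved
        self.conv = nn.Conv2d(self.cp, self.cp, k, padding=k // 2)

    def forward(self, x):
        xa, xb = x[:, :self.cp], x[:, self.cp:]      # split along channels
        return torch.cat([self.conv(xa), xb], dim=1)
```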
A remote sensing image super-resolution reconstruction algorithm based on conditional prior enhancement and a diffusion model is proposed to address the blurry reconstruction of small targets in remote sensing images and the loss of high-frequency details during reconstruction. Firstly, a shallow feature enhancement module integrating multi-branch standard convolution, dilated convolution, and coordinate attention enhances the perception of small targets. Secondly, stacked residual dense blocks extract more representative features while maintaining training stability. Subsequently, a multi-scale depthwise-separable convolution module is designed to extract multi-scale prior information and prevent the loss of high-frequency details. Finally, the outputs of the above modules are fed as prior information into the diffusion model, guiding it to iteratively refine and generate high-resolution images. Experiments on the publicly available remote sensing datasets RSCNN7 and NWPU-RESISC45 show good performance at scale factors of ×2, ×4, and ×8. On RSCNN7 at a scale factor of ×4, the proposed model significantly reduces PI and FID compared with methods of different network architectures; relative to the SOTA diffusion-based algorithm, it reduces them by 1.43 and 20.56, respectively. In subjective visual terms, the results are closer to the ground truth than those of the compared algorithms.
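The coordinate attention referenced above pools along the two spatial directions separately, so the positional cues that small remote-sensing targets depend on are preserved; a minimal sketch (reduction ratio illustrative):

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Coordinate attention: direction-aware pooling along H and W, a shared
    1x1 bottleneck, then per-direction sigmoid gates applied to the input."""
    def __init__(self, ch, r=8):
        super().__init__()
        mid = max(8, ch // r)
        self.reduce = nn.Sequential(nn.Conv2d(ch, mid, 1), nn.ReLU(inplace=True))
        self.att_h = nn.Conv2d(mid, ch, 1)
        self.att_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        ph = x.mean(dim=3, keepdim=True)                      # pool along W -> N,C,H,1
        pw = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # pool along H -> N,C,W,1
        y = self.reduce(torch.cat([ph, pw], dim=2))           # shared transform
        yh, yw = y.split([h, w], dim=2)
        ah = torch.sigmoid(self.att_h(yh))                            # N,C,H,1
        aw = torch.sigmoid(self.att_w(yw.permute(0, 1, 3, 2)))        # N,C,1,W
        return x * ah * aw
```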
For the diagnosis of diabetic retinopathy with deep-learning domain adaptation, the diffusion-enhanced domain-attention transfer learning model proposed in this paper consists of two main modules. Firstly, a denoising-diffusion-probabilistic retinopathy generation module synthesizes abundant and diverse target-domain samples, enabling the model to learn more comprehensive target-domain features. Secondly, a multi-source-free attention ensemble module achieves weighted attention integration of multiple source-domain pre-trained models without accessing the source-domain data. The model thereby strikes a good balance between instance-specific features and domain-consistent features. Experimental results demonstrate that the model achieves an accuracy of 90.66%, a precision of 87.47%, a sensitivity of 85.41%, a specificity of 91.63%, and an F1 score of 86.42% on the referable diabetic retinopathy diagnosis task. In the normal/abnormal retinopathy recognition task, the model reaches an accuracy of 96.75%, a precision of 99.23%, a sensitivity of 90.47%, a specificity of 99.27%, and an F1 score of 94.65%. The proposed model performs effective retinopathy diagnosis without access to source-domain data and without target-domain labels.
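A minimal sketch of the source-free weighted-ensembling idea, with a single learnable weight vector standing in for the paper's attention module (a deliberate simplification; only the target data and the frozen source classifiers are needed):

```python
import torch
import torch.nn as nn

class AttentionEnsemble(nn.Module):
    """Weight several source-domain pre-trained classifiers on target data
    without touching the source data: softmax-normalized weights fuse the
    per-model logits."""
    def __init__(self, source_models):
        super().__init__()
        self.models = nn.ModuleList(source_models)   # frozen source classifiers
        self.w = nn.Parameter(torch.zeros(len(source_models)))

    def forward(self, x):
        logits = torch.stack([m(x) for m in self.models], dim=0)  # S,N,K
        a = torch.softmax(self.w, dim=0).view(-1, 1, 1)           # ensemble weights
        return (a * logits).sum(dim=0)                            # weighted fusion
```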