Acta Optica Sinica, Volume. 44, Issue 9, 0933001(2024)

Hue-Subregion Weighted Constrained Hue-Plane Preserving Camera Characterization

Yongheng Yin1, Long Ma2,*, and Peng Li1
Author Affiliations
  • 1School of Computer Science and Engineering, Shenyang Jianzhu University, Shenyang 110168, Liaoning, China
  • 2School of Science, Shenyang Jianzhu University, Shenyang 110168, Liaoning, China

    Objective

    Color reproduction plays an important role in the textile, printing, telemedicine, and other industries, but owing to the manufacturing processes and color rendering mechanisms of digital image acquisition devices, color images transmitted between digital devices often suffer color distortion. Once such distortion appears, these industries can incur losses or even irreversible damage. In color image acquisition, the most commonly employed device is the digital camera, and camera characterization is the principal method of converting the color image captured by a digital camera into the image seen by the human eye. Although existing nonlinear camera characterization methods currently achieve the best characterization performance, they suffer from hue distortion. To retain the important hue-plane preserving property while further improving characterization performance, we propose a hue-subregion weighted constrained hue-plane preserving camera characterization (HPPCC-NWCM) method.

    Methods

    The proposed method improves weighted constrained hue-plane preserving camera characterization by optimizing the hue subregions. First, the camera response values (RGBs) and colorimetric values (XYZs) of the training samples are preprocessed synchronously, hue angles are calculated, and hue subregions are preliminarily divided. Then, within each hue subregion, the minimum hue-angle difference between each training sample and the samples in the subregion is used in a weighting power function, and a pre-calculated camera characterization matrix (pre-calculation matrix) is computed for each sample. The weighted constrained normalized characterization matrix of the subregion is then obtained by weighted averaging of the pre-calculation matrices with the weighting power function. By combining the characterization results of the samples within each hue subregion and of all samples, the number and positions of the hue subregions are optimized, and the best-performing configuration is obtained. To verify the performance improvement, we conduct simulation experiments. First, a hue-subregion number selection experiment is carried out with three cameras and three object-reflectance datasets under the D65 illuminant. Then, two of these cameras are compared with existing methods in further experiments, and the exposure independence of each method is verified by changing the exposure level. Finally, the methods are compared on the SFU dataset with 42 cameras under three illuminants.
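The pipeline above — hue-angle computation, subregion division, and hue-distance-weighted matrix fitting — can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's exact constrained formulation: the hue-angle definition (angle in a chromaticity plane relative to the white point), the per-subregion weighted least-squares fit, and all function names are assumptions for illustration; the paper's method instead averages per-sample constrained pre-calculation matrices.

```python
import numpy as np

def hue_angle(rgb, white=(1.0, 1.0, 1.0)):
    # Hue angle in a chromaticity plane relative to the white point
    # (a simplified stand-in for the paper's hue-angle definition).
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb / rgb.sum()
    wr, wg, wb = np.asarray(white, dtype=float) / np.sum(white)
    return np.arctan2(b - wb, r - wr)

def weighted_subregion_matrices(rgbs, xyzs, n_sub=9, power=2.0):
    """For each of n_sub hue subregions, fit a 3x3 RGB->XYZ matrix by
    least squares, weighting samples by an inverse power of their
    circular hue-angle distance to the subregion centre (a sketch of
    the hue-distance weighting idea, not the exact method)."""
    rgbs, xyzs = np.asarray(rgbs, float), np.asarray(xyzs, float)
    angles = np.array([hue_angle(c) for c in rgbs])
    edges = np.linspace(-np.pi, np.pi, n_sub + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mats = []
    for c in centres:
        # circular distance between each sample's hue angle and the centre
        d = np.abs(np.angle(np.exp(1j * (angles - c))))
        # inverse-power weighting, floored to keep the fit well conditioned
        w = 1.0 / np.maximum(d, 1e-3) ** power
        W = np.sqrt(w)[:, None]
        M, *_ = np.linalg.lstsq(W * rgbs, W * xyzs, rcond=None)
        mats.append(M.T)  # stored so that mats[i] @ rgb gives xyz
    return edges, np.array(mats)

def characterize(rgb, edges, mats):
    # Pick the subregion matrix by the sample's own hue angle.
    idx = np.clip(np.searchsorted(edges, hue_angle(rgb)) - 1, 0, len(mats) - 1)
    return mats[idx] @ np.asarray(rgb, float)
```

In the actual method the hue-subregion count and boundary positions would themselves be searched over (the paper reports the best performance at 9 subregions); here they are fixed inputs for clarity.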

    Results and Discussions

    The method is verified by extensive simulation experiments. In the hue-subregion number selection experiment, characterization performance generally improves as the number of hue subregions increases (Fig. 7), stabilizes at 6 subregions, and is best at 9. Performance with 2 subregions is worse than with 1; our analysis is that too few subregions yield characterization matrices with poor universality and low specificity within each subregion, which degrades performance. In the comparison experiments, the proposed method outperforms the existing hue-plane preserving characterization methods by about 10% to 20% and is better than or close to the nonlinear methods (Table 1). In the variable-exposure experiment, the performance of the linear and root-polynomial methods remains close to that under fixed exposure, confirming their exposure independence, whereas the polynomial method is clearly worse and is not exposure-independent (Tables 1 and 2). In the supplementary experiments with additional illuminants and cameras, the comparative trend is largely the same as before, and the proposed method performs even better: besides surpassing the existing hue-plane preserving methods, it matches or exceeds the nonlinear methods in many settings (Table 3).
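The exposure-independence behavior reported above follows from how each model's terms scale with exposure, and can be checked numerically. The sketch below uses an illustrative 3x3 matrix and test color (values are assumptions, not taken from the paper): linear and root-polynomial terms scale by the exposure factor k, so the characterized chromaticity is unchanged, while plain polynomial terms scale by k squared and lose exposure independence.

```python
import numpy as np

# Illustrative 3x3 characterization matrix and test color
# (assumed values, chosen only to demonstrate the scaling argument).
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
rgb = np.array([0.4, 0.3, 0.2])
k = 2.5  # simulated exposure change

# Linear mapping: scaling the input by k scales the output by k, so the
# chromaticity (and hence hue) of the characterized color is unchanged.
assert np.allclose(M @ (k * rgb), k * (M @ rgb))

# Root-polynomial terms such as sqrt(r*g) also scale by k, because
# sqrt((k*r)*(k*g)) = k*sqrt(r*g) — exposure independence is preserved.
root = np.sqrt([rgb[0]*rgb[1], rgb[1]*rgb[2], rgb[0]*rgb[2]])
root_k = np.sqrt([(k*rgb[0])*(k*rgb[1]), (k*rgb[1])*(k*rgb[2]),
                  (k*rgb[0])*(k*rgb[2])])
assert np.allclose(root_k, k * root)

# Plain polynomial terms scale by k**2, not k, so a polynomial fit made
# at one exposure level is wrong at another: independence is lost.
assert not np.allclose((k * rgb) ** 2, k * rgb ** 2)

print("linear and root-polynomial terms are exposure independent")
```

This is the standard scaling argument behind the variable-exposure results: any method built solely from degree-1-homogeneous terms behaves identically at every exposure level.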

    Conclusions

    The weighted constrained hue-plane preserving camera characterization method is improved by optimizing the hue subregions: the number and positions of the subregions are tuned so that a more accurate characterization transformation is obtained for each subregion. Through theoretical derivation and experimental verification of the characterization transformation, the method is shown to feature exposure independence, excellent hue-plane preservation, and a combination of the stability of low-order methods with the accuracy of high-order methods. In the simulation experiments it outperforms the existing hue-plane preserving methods and is better than or close to other nonlinear methods. In the multi-camera supplementary experiments, the improvement in the 95th-percentile error shows that the method has strong robustness and practical value.

    Yongheng Yin, Long Ma, Peng Li. Hue-Subregion Weighted Constrained Hue-Plane Preserving Camera Characterization[J]. Acta Optica Sinica, 2024, 44(9): 0933001

    Paper Information

    Category: Vision, Color, and Visual Optics

    Received: Sep. 11, 2023

    Accepted: Feb. 5, 2024

    Published Online: May 6, 2024

    Corresponding Author: Long Ma (malong1229@gmail.com)

    DOI: 10.3788/AOS231545
