Laser & Optoelectronics Progress, Volume 59, Issue 14, 1415014 (2022)

Efficient Material Editing of Single Image Based on Inverse Rendering

Kunliang Xie1, Renjiao Yi1, Haifang Zhou1, Chenyang Zhu1, Yuwan Liu2, and Kai Xu1,*
Author Affiliations
  • 1School of Computer Science, National University of Defense Technology, Changsha 410005, Hunan, China
  • 2Dongbu Zhanqu Zhanqinju, Nanjing 210000, Jiangsu, China

    As an inverse rendering problem, image-based material editing is essential for augmented reality and interaction design. Herein, we propose a method that edits the material of an object in a single image and converts it into a series of new materials with widely varying characteristics. The approach comprises specular highlight separation, intrinsic image decomposition, and specular highlight editing. Using a parametric material model, we synthesize a large-scale dataset of objects under various illumination conditions and material shininess parameters, and on this dataset we train a deep convolutional network to convert the source material into the target material. Qualitative and quantitative experiments show that all three components are effective, and material editing results are demonstrated on both synthetic and real test images. This novel material-editing method, based on directly editing the specular highlight layer of a single image, supports a variety of materials, such as plastic, wood, stone, and metal, and efficiently produces realistic results for both synthetic and real images.
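
    The pipeline outlined in the abstract can be illustrated with a minimal numpy sketch: separate the specular highlight layer, treat the diffuse part as albedo times shading, edit the specular layer toward a target shininess, and recompose. The Blinn-Phong-style exponent re-mapping and all function names below are illustrative assumptions, not the paper's implementation; the paper performs the separation and material conversion with deep convolutional networks trained on the synthesized dataset.

# Minimal sketch of the editing pipeline described in the abstract.
# All helpers and the Blinn-Phong-style specular model are illustrative
# assumptions; the actual method learns these steps with deep networks.
import numpy as np

def separate_specular(image, diffuse_estimate):
    """Specular layer as the non-negative residual over a diffuse estimate."""
    return np.clip(image - diffuse_estimate, 0.0, None)

def edit_shininess(specular, src_exponent, dst_exponent):
    """Re-shape highlights: larger exponents give tighter, brighter lobes.
    Treat the normalized specular layer as cos(theta)**src_exponent and
    re-exponentiate it to the target exponent (a rough approximation)."""
    peak = specular.max() + 1e-8
    cos_term = (specular / peak) ** (1.0 / src_exponent)
    return peak * cos_term ** dst_exponent

def edit_material(image, albedo, shading, src_exponent=10.0, dst_exponent=100.0):
    """Recompose the image with the edited specular highlight layer."""
    diffuse = albedo * shading
    specular = separate_specular(image, diffuse)
    new_specular = edit_shininess(specular, src_exponent, dst_exponent)
    return np.clip(diffuse + new_specular, 0.0, 1.0)

# Toy usage on random data standing in for network predictions.
rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 0.8, (64, 64, 3))
shading = rng.uniform(0.3, 1.0, (64, 64, 1))
image = np.clip(albedo * shading + rng.uniform(0.0, 0.2, (64, 64, 3)), 0.0, 1.0)
edited = edit_material(image, albedo, shading)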

    Kunliang Xie, Renjiao Yi, Haifang Zhou, Chenyang Zhu, Yuwan Liu, Kai Xu. Efficient Material Editing of Single Image Based on Inverse Rendering[J]. Laser & Optoelectronics Progress, 2022, 59(14): 1415014

    Paper Information

    Category: Machine Vision

    Received: Dec. 24, 2021

    Accepted: Feb. 21, 2022

    Published Online: Jul. 1, 2022

    The Author Email: Xu Kai (kevin.kai.xu@gmail.com)

    DOI: 10.3788/LOP202259.1415014
