Chinese Journal of Lasers, Volume. 51, Issue 10, 1002319(2024)

Powder‑Spreading Defect Detection in Laser Powder Bed Fusion Based on Large Vision Model

Kunpeng Tan1, Jiafeng Tang1, Zhibin Zhao1,*, Chenxi Wang1, Xingwu Zhang1, Weifeng He2, and Xuefeng Chen1
Author Affiliations
  • 1Institute of Aero-Engine, School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, Shaanxi, China
  • 2National Key Lab of Aerospace Power System and Plasma Technology, Air Force Engineering University, Xi’an 710038, Shaanxi, China


    Objective

    To date, laser powder bed fusion (LPBF) is considered the most advanced metal additive manufacturing technology. It has been widely adopted for producing critical metal components in the aerospace and healthcare industries. However, achieving stable and consistent quality is challenging because of the coupled effects of various factors during LPBF. Powder-spreading quality is a crucial aspect of LPBF process monitoring, as defects that arise during powder spreading can propagate into the formed components. In recent years, the application of computer vision to powder-spreading defect detection has shown promising results; however, the limited availability of annotated data constrains its performance. Large vision models, such as the segment anything model (SAM), exhibit remarkable generalization capabilities owing to pre-training on an extremely large dataset, which allows them to transfer to various downstream tasks with minimal training data. However, owing to a lack of defect knowledge, the absence of category information, and a dependence on manual prompts, SAM cannot be directly applied to powder-spreading defect segmentation. This study addresses these requirements by improving SAM, achieving excellent defect segmentation performance with minimal training samples and exploring the potential of large vision models for monitoring the additive manufacturing process.


    Methods

    In this study, the powder-spreading defect segment anything model (PSAM), based on SAM, was introduced. The overall structure of PSAM was similar to that of SAM, consisting of an image encoder, an auto-prompt generator, and a mask decoder. Compared with the original SAM, PSAM incorporated the following improvements. To transfer the knowledge held in SAM's pre-trained parameters, four Adapter modules were introduced into the SAM image encoder. These Adapter modules, composed of linear and convolutional layers and inserted behind the multi-head attention layers of the transformer blocks, enabled efficient adjustment of the image feature encoding. To satisfy the requirement for category information in the powder-spreading segmentation task, PSAM used an improved mask decoder that output, in a single pass, as many segmentation masks as there were categories, with each output corresponding to one category's segmentation result; these outputs were then integrated to obtain a classification output. To overcome the impracticality of manual prompting in industrial settings, an auto-prompt generator was designed: a fully convolutional neural network with residual connections that extracted features from the input image and generated prompt embeddings usable by the mask decoder. A combination of cross-entropy loss, focal loss, and Dice loss was used as the final loss function, and mean intersection over union (mIoU) was employed as the evaluation metric.
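The Adapter idea above can be illustrated with a minimal sketch, assuming PyTorch. The bottleneck ratio, layer sizes, and the use of a depth-wise convolution are illustrative assumptions, not the paper's exact implementation; only the adapter's parameters would be trained while the surrounding SAM weights stay frozen.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter placed after a transformer attention layer.

    Combines linear and convolutional layers, as described for PSAM;
    exact dimensions here are hypothetical.
    """
    def __init__(self, dim: int, ratio: float = 0.25):
        super().__init__()
        hidden = int(dim * ratio)
        self.down = nn.Linear(dim, hidden)   # project tokens to a small bottleneck
        self.act = nn.GELU()
        # depth-wise conv mixes local spatial context over the 2D token grid
        self.conv = nn.Conv2d(hidden, hidden, kernel_size=3,
                              padding=1, groups=hidden)
        self.up = nn.Linear(hidden, dim)     # project back to the token dimension

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) token grid, as in the SAM image encoder
        h = self.act(self.down(x))
        h = self.conv(h.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        return x + self.up(h)                # residual: the adapter learns a delta
```

Because the output is a residual correction to the input, initializing such a module near zero leaves the pre-trained encoder's behavior almost unchanged at the start of fine-tuning.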
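The combined loss can be sketched as follows, assuming PyTorch. The weighting coefficients, the focal exponent, and the smoothing constant are placeholders rather than the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, gamma=2.0,
                  w_ce=1.0, w_focal=1.0, w_dice=1.0, eps=1e-6):
    """Cross-entropy + focal + Dice loss for multi-class segmentation.

    logits: (B, C, H, W) raw class scores; target: (B, H, W) class indices.
    """
    ce = F.cross_entropy(logits, target)

    # focal term: down-weight easy pixels by (1 - p_t)^gamma
    logp = F.log_softmax(logits, dim=1)
    pt = logp.gather(1, target.unsqueeze(1)).exp().squeeze(1)
    focal = (-(1 - pt) ** gamma * torch.log(pt + eps)).mean()

    # Dice term: soft overlap between predictions and one-hot targets
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = 1 - ((2 * inter + eps) / (union + eps)).mean()

    return w_ce * ce + w_focal * focal + w_dice * dice
```

Cross-entropy drives per-pixel classification, the focal term counteracts the heavy class imbalance between background and small defects, and the Dice term optimizes region overlap directly.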

    Results and Discussions

    This study utilizes an off-axis industrial camera to acquire powder-spreading images during the formation of several components. A subset of these images is selected for pixel-level annotation and categorized into six classes: background, super-elevation, incomplete, hopping, streaking, and lattice. The images and their corresponding labels are organized into a dataset and divided into training and test sets in fixed proportions. Even when trained with only 50 images, PSAM exhibits excellent segmentation performance: evaluation on the test set yields an mIoU of 65.02%, an improvement of 8.51 percentage points over DeepLab v3 and 5.31 percentage points over U-Net (Table 1). The limited amount of data restricts deeper feature learning and thus richer feature representation; however, because the pre-trained SAM possesses strong image-feature extraction capabilities, excellent defect segmentation can be achieved without extensive training data. Ablation experiments evaluate the proposed improvements and indicate that the Adapter modules successfully transfer feature representation capabilities and that automatic prompting effectively guides the mask decoder's output. Compared with the original SAM, PSAM improves mIoU by 11.33 percentage points (Table 2).
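The mIoU metric used above can be computed with a short sketch, assuming integer class maps; the class count of six matches the defect taxonomy described in this study, and the convention of skipping classes absent from both maps is an assumption.

```python
import numpy as np

def mean_iou(pred: np.ndarray, label: np.ndarray, num_classes: int = 6) -> float:
    """Mean intersection-over-union across classes present in pred or label."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:               # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

Averaging per-class IoU rather than per-pixel accuracy keeps rare defect classes from being swamped by the dominant background class.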


    Conclusions

    The PSAM design based on SAM achieves excellent segmentation performance even with a small amount of training data. First, introducing the Adapter modules transfers the image encoder's feature extraction capabilities from natural images to powder-spreading images. Second, the mask decoder is modified to output per-category masks. Finally, an auto-prompt generator encodes the input images, enabling automatic generation of visual prompt embeddings. Large artificial intelligence (AI) models are advancing rapidly; however, owing to the unique characteristics and complex operating conditions of industrial settings, they still face challenges in practical industrial applications. This study provides a preliminary exploration of large vision models for powder-spreading defect detection in additive manufacturing, yet their full potential has not been realized. The outstanding feature extraction, zero-shot generalization, and multimodal knowledge fusion capabilities of these large models can provide new solutions and approaches for additive manufacturing process monitoring, which are worth further exploration in future research.


    Paper Information

    Category: Laser Additive Manufacturing

    Received: Jan. 2, 2024

    Accepted: Mar. 6, 2024

    Published Online: Apr. 27, 2024

    The Author Email: Zhao Zhibin