Chinese Journal of Liquid Crystals and Displays, Vol. 35, Issue 12, 1270 (2020)
TFT array defect detection based on AttentionGAN and morphological reconstruction
[4] CHEN L F, SU C T, CHEN M H, et al. A neural-network approach for defect recognition in TFT-LCD photolithography process [J]. IEEE Transactions on Electronics Packaging Manufacturing, 2009, 32(1): 1-8.
[8] MA L, LU Y, NAN X F, et al. Defect detection of mobile phone surface based on convolution neural network [J]. DEStech Transactions on Computer Science and Engineering, 2018.
[9] AKCAY S, ATAPOUR-ABARGHOUEI A, BRECKON T P, et al. GANomaly: semi-supervised anomaly detection via adversarial training [C]//Proceedings of the 14th Asian Conference on Computer Vision. Perth: Springer, 2018.
[10] The CIFAR-10 dataset [EB/OL]. https://www.cs.toronto.edu/~kriz/cifar.html.
[11] LECUN Y, CORTES C, BURGES C J C. The MNIST database of handwritten digits [EB/OL]. http://yann.lecun.com/exdb/mnist/.
[12] ZHAO Z X, LI B, DONG R, et al. A surface defect detection method based on positive samples [C]//Proceedings of the 15th Pacific Rim International Conference on Artificial Intelligence. Nanjing: Springer, 2018: 473-481.
[13] HU G H, HUANG J F, WANG Q H, et al. Unsupervised fabric defect detection based on a deep convolutional generative adversarial network [J]. Textile Research Journal, 2020, 90(3-4): 247-270.
[14] TANG H, XU D, SEBE N, et al. Attention-guided generative adversarial networks for unsupervised image-to-image translation [C]//2019 International Joint Conference on Neural Networks. Budapest: IEEE, 2019: 1-8.
[15] TANG H, LIU H, XU D, et al. AttentionGAN: unpaired image-to-image translation using attention-guided generative adversarial networks [J]. arXiv preprint arXiv:1911.11897, 2019.
[16] ITTI L, KOCH C, NIEBUR E. A model of saliency-based visual attention for rapid scene analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.
[17] VINCENT L. Morphological grayscale reconstruction in image analysis: applications and efficient algorithms [J]. IEEE Transactions on Image Processing, 1993, 2(2): 176-201.
[20] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets [C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: ACM, 2014: 2672-2680.
[21] ISOLA P, ZHU J Y, ZHOU T H, et al. Image-to-image translation with conditional adversarial networks [C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 5967-5976.
[22] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks [C]//2017 IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2242-2251.
[23] YI Z L, ZHANG H, TAN P, et al. DualGAN: unsupervised dual learning for image-to-image translation [C]//2017 IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2868-2876.
[24] KIM T, CHA M, KIM H, et al. Learning to discover cross-domain relations with generative adversarial networks [C]//Proceedings of the 34th International Conference on Machine Learning. Sydney: ACM, 2017.
[25] LIANG X D, ZHANG H, XING E P, et al. Generative semantic manipulation with contrasting GAN [J]. arXiv preprint arXiv:1708.00315, 2017.
[26] MO S, CHO M, SHIN J, et al. InstaGAN: instance-aware image-to-image translation [J]. arXiv preprint arXiv:1812.10889, 2018.
[27] ZHANG H, GOODFELLOW I, METAXAS D, et al. Self-attention generative adversarial networks [J]. arXiv preprint arXiv:1805.08318, 2018.
[28] ZHU J Y, ZHANG R, PATHAK D, et al. Toward multimodal image-to-image translation [C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: ACM, 2017: 465-476.
CHEN Wei-wei, YAN Qun, YAO Jian-min. TFT array defect detection based on AttentionGAN and morphological reconstruction[J]. Chinese Journal of Liquid Crystals and Displays, 2020, 35(12): 1270
Received: Jul. 10, 2020
Published Online: Dec. 28, 2020
Author email: CHEN Wei-wei (ischan@foxmail.com)