INFRARED, Vol. 44, Issue 6, p. 12 (2023)

of Improved ViBe Algorithm in Moving Target Detection

Peng-fei LI, Zhi-jia WU, and Zong-lin JIANG
References (19)

    [1] [1] Cloutre L, Demers M. Figr: Few-shot image generation with reptile [J]. arXiv: 1901.02199, 2019.

    [2] [2] Antoniou A, Storkey A, Edwards H. Data augmentation generative adversarial networks [J]. arXiv: 171104340, 2017.

    [3] [3] Hong Y, Niu L, Zhang J, et al. Deltagan: Towards diverse few-shot image generation with sample-specific delta[C]. Tel Aviv: European Conference on Computer Vision, 2022.

    [4] [4] Hong Y, Niu L, Zhang J, et al. Matchinggan: Matching-based few-shot image generation [C]. London: 2020 IEEE International Conference on Multimedia and Expo (ICME), 2020.

    [5] [5] Hong Y, Niu L, Zhang J, et al. F2gan: Fusing-and-filling gan for few-shot image generation [C]. Seattle: 28th ACM International Conference on Multimedia, 2020.

    [6] [6] Saito K, Saenko K, Liu M Y. Coco-funit: Few-shot unsupervised image translation with a content conditioned style encoder [C]. Glasgow: European Conference on Computer Vision, 2020.

    [7] [7] Gu Z, Li W, Huo J, et al. Lofgan: Fusing local representations for few-shot image generation [C]. Montreal: IEEE/CVF International Conference on Computer Vision, 2021.

    [8] [8] Li T, Li Z, Luo A, et al. Prototype memory and attention mechanisms for few shot image generation [C]. Vienna: International Conference on Learning Representations, 2021.

    [9] [9] Yang M, Wang Z, Chi Z, et al. WaveGAN: Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation[C]. Tel Aviv: European Conference on Computer Vision, 2022.

    [10] [10] Li Z, Wang C, Zheng H, et al. FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs [C]. Tel Aviv: European Conference on Computer Vision, 2022.

    [11] [11] Huang J, Liao J, Kwong S. Unsupervised image-to-image translation via pre-trained stylegan2 network [J]. IEEE Transactions on Multimedia, 2021, 24: 1435-1448.

    [12] [12] Park T, Efros A, Zhang R, et al. Contrastive learning for unpaired image-to-image translation [C]. Glasgow: European Conference on Computer Vision,2020.

    [13] [13] Karras T, Lainel S, Aila T. A style-based generator architecture for generative adversarial networks [C]. Long Beach: IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.

    [14] [14] Li S, Han B, Yu Z, et al. I2v-gan: Unpaired infrared-to-visible video translation [C]. Chengdu: 29th ACM International Conference on Multimedia, 2021.

    [15] [15] Hore A, Ziou D. Image quality metrics: PSNR vs. SSIM [C]. Istanbul: 20th International Conference on Pattern Recognition, 2010.

    [16] [16] Winkler S, Mohandas P. The evolution of video quality measurement: From PSNR to hybrid metrics [J]. IEEE Transactions on Broadcasting, 2008, 54(3): 660-668.

    [17] [17] Ssara U, Akter M, Uddin M S. Image quality assessment through FSIM, SSIM, MSE and PSNR — a comparative study [J]. Journal of Computer and Communications, 2019, 7(3): 8-18.

    [18] [18] Setiadi D. PSNR vs SSIM: imperceptibility quality assessment for image steganography [J]. Multimedia Tools and Applications, 2021, 80(6): 8423-8444.

    [19] [19] Davis J W, Sharma V. Background-subtraction using contour-based fusion of thermal and visible imagery [J]. Computer Vision and Image Understanding, 2007, 106(2-3): 162-182.

    Paper Information

    Received: Jan. 13, 2023

    Published Online: Jan. 15, 2024

    DOI:10.3969/j.issn.1672-8785.2023.06.003
