OPTICS & OPTOELECTRONIC TECHNOLOGY, Volume 18, Issue 4, 47 (2020)
Data Augmentation Based on Generative Adversarial Networks
[1] Goodfellow I J, Warde-Farley D, Mirza M, et al. Maxout networks[C]. Proceedings of the 30th International Conference on Machine Learning (ICML), 2013: 1319-1327.
[2] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]. Advances in Neural Information Processing Systems, 2014: 2672-2680.
[3] Bergstra J, Bengio Y. Random search for hyper-parameter optimization[J]. Journal of Machine Learning Research, 2012, 13(1): 281-305.
[4] Coates A, Ng A Y. Learning feature representations with K-means[J]. Lecture Notes in Computer Science, 2012, 7700: 561-580.
[5] Dosovitskiy A, Fischer P, Springenberg J, et al. Discriminative unsupervised feature learning with exemplar convolutional neural networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015: 1-1.
[6] Gregor K, Danihelka I, Graves A, et al. DRAW: A recurrent neural network for image generation[C]. Proceedings of the 32nd International Conference on Machine Learning, 2015, 37: 1462-1471.
[7] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]. Computer Vision - ECCV 2014: 818-833.
[8] Vincent P, Larochelle H, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11: 3371-3408.
[9] Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[J]. arXiv preprint arXiv:1502.03167, 2015.
[10] Freeman W T, Jones T R, et al. Example-based super-resolution[J]. IEEE Computer Graphics and Applications, 2002, 22(2): 56-65.
[11] Zhao J, Mathieu M, Goroshin R, et al. Stacked what-where autoencoders[J]. arXiv preprint arXiv:1506.02351, 2015.
[12] Yu F, Zhang Y, Song S, et al. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop[J]. arXiv preprint arXiv:1506.03365, 2015.
[13] Xu B, Wang N, Chen T, et al. Empirical evaluation of rectified activations in convolutional network[J]. arXiv preprint arXiv:1505.00853, 2015.
[14] Sohl-Dickstein J, Weiss E A, Maheswaranathan N, et al. Deep unsupervised learning using nonequilibrium thermodynamics[J]. arXiv preprint arXiv:1503.03585, 2015.
[15] Rasmus A, Valpola H, Honkala M, et al. Semi-supervised learning with ladder networks[J]. arXiv preprint arXiv:1507.02672, 2015.
[16] Hauberg S, Freifeld O, Larsen A B L, et al. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation[J]. arXiv preprint arXiv:1510.02795, 2015.
[17] Hardt M, Recht B, et al. Train faster, generalize better: Stability of stochastic gradient descent[J]. arXiv preprint arXiv:1509.01240, 2015.
WU Tian-yu, XU Ying-chao, CHAO Peng-fei. Data Augmentation Based on Generative Adversarial Networks[J]. OPTICS & OPTOELECTRONIC TECHNOLOGY, 2020, 18(4): 47
Received: Oct. 20, 2019
Accepted: --
Published Online: Nov. 2, 2020
The Author Email: Tian-yu WU (756738690@qq.com)
CSTR:32186.14.