Journal of Terahertz Science and Electronic Information Technology, Vol. 22, Issue 2, 209 (2024)

Deep learning algorithm featuring continuous learning for modulation classifications in wireless networks

WU Nan*, SUN Yu, and WANG Xudong
Author Affiliations
  • School of Information Science and Technology, Dalian Maritime University, Dalian, Liaoning 116000, China
    References (39)

    [1] HINTON G E, SALAKHUTDINOV R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507. doi:10.1126/science.1127647.

    [2] DAYHOFF J E, DELEO J M. Artificial neural networks[J]. Cancer, 2001, 91(S8): 1615-1635. doi:10.1002/1097-0142(20010415)91:8+<1615::AID-CNCR1175>3.0.CO;2-L.

    [3] FREAN M, ROBINS A. Catastrophic forgetting in simple networks: an analysis of the pseudorehearsal solution[J]. Network: Computation in Neural Systems, 1999, 10(3): 227-236. doi:10.1088/0954-898X_10_3_302.

    [4] SRIVASTAVA A, HAN E H, KUMAR V, et al. Parallel formulations of decision-tree classification algorithms[J]. Data Mining and Knowledge Discovery, 1999, 3(3): 237-261. doi:10.1023/A:1009832825273.

    [5] WANG Lanxun, REN Yujing. Recognition of digital modulation signals based on high order cumulants and support vector machines[C]// 2009 ISECS International Colloquium on Computing, Communication, Control, and Management. Sanya, China: IEEE, 2009: 271-274. doi:10.1109/CCCM.2009.5267733.

    [6] DOBRE O A, ABDI A, BAR-NESS Y, et al. Survey of automatic modulation classification techniques: classical approaches and new trends[J]. IET Communications, 2007, 1(2): 137-156. doi:10.1049/iet-com:20050176.

    [7] DOBRE O A, HAMEED F. Likelihood-based algorithms for linear digital modulation classification in fading channels[C]// 2006 Canadian Conference on Electrical and Computer Engineering. Ottawa, ON, Canada: IEEE, 2006: 1347-1350. doi:10.1109/CCECE.2006.277525.

    [8] PENG Shengliang, SUN Shujun, YAO Yudong. A survey of modulation classification using deep learning: signal representation and data preprocessing[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7020-7038. doi:10.1109/TNNLS.2021.3085433.

    [9] RUMELHART D E, HINTON G E, WILLIAMS R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536. doi:10.1038/323533a0.

    [10] KIM B, KIM J, CHAE H, et al. Deep neural network-based automatic modulation classification technique[C]// 2016 International Conference on Information and Communication Technology Convergence (ICTC). Jeju, Korea (South): IEEE, 2016: 579-582. doi:10.1109/ICTC.2016.7763537.

    [11] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016: 770-778. doi:10.1109/CVPR.2016.90.

    [12] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017: 2261-2269. doi:10.1109/CVPR.2017.243.

    [13] CHEN Shiyao, ZHANG Yan, HE Zunwen, et al. A novel attention cooperative framework for automatic modulation recognition[J]. IEEE Access, 2020, 8: 15673-15686. doi:10.1109/ACCESS.2020.2966777.

    [15] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484-489. doi:10.1038/nature16961.

    [16] MCCLOSKEY M, COHEN N J. Catastrophic interference in connectionist networks: the sequential learning problem[J]. Psychology of Learning and Motivation, 1989, 24: 109-165. doi:10.1016/S0079-7421(08)60536-8.

    [17] FRENCH R M. Using semi-distributed representations to overcome catastrophic forgetting in connectionist networks[C]// Proceedings of the 13th Annual Cognitive Science Society Conference. Chicago: LEA, 1991: 173-178.

    [18] PULITO B L, DAMARLA T R, NARIANI S. A two-dimensional shift invariant image classification neural network which overcomes the stability/plasticity dilemma[C]// 1990 IJCNN International Joint Conference on Neural Networks. San Diego, CA, USA: IEEE, 1990: 825-833. doi:10.1109/IJCNN.1990.137798.

    [19] PARISI G I, KEMKER R, PART J L, et al. Continual lifelong learning with neural networks: a review[J]. Neural Networks, 2019, 113: 54-71. doi:10.1016/j.neunet.2019.01.012.

    [20] KADIRKAMANATHAN V, NIRANJAN M. A function estimation approach to sequential learning with neural networks[J]. Neural Computation, 1993, 5(6): 954-975. doi:10.1162/neco.1993.5.6.954.

    [21] CHAUDHRY A, DOKANIA P K, AJANTHAN T, et al. Riemannian walk for incremental learning: understanding forgetting and intransigence[C]// Computer Vision-ECCV 2018. Munich, Germany: Springer, 2018: 556-572. doi:10.1007/978-3-030-01252-6_33.

    [22] CARUANA R. Multitask learning[M]. Boston, MA: Springer, 1998: 95-133. doi:10.1007/978-1-4615-5529-2_5.

    [23] FRENCH R M. Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference[C]// Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society. New York: Routledge, 1994.

    [24] ROLNICK D, AHUJA A, SCHWARZ J, et al. Experience replay for continual learning[EB/OL]. (2019-11-26). https://doi.org/10.48550/arXiv.1811.11682. doi:10.48550/arXiv.1811.11682.

    [25] ANS B, ROUSSET S. Avoiding catastrophic forgetting by coupling two reverberating neural networks[J]. Comptes Rendus de l'Académie des Sciences-Series III-Sciences de la Vie, 1997, 320(12): 989-997. doi:10.1016/S0764-4469(97)82472-9.

    [26] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958. doi:10.5555/2627435.2670313.

    [27] SILVER D L, MERCER R E. The task rehearsal method of life-long learning: overcoming impoverished data[C]// Advances in Artificial Intelligence. Calgary, Canada: Springer, 2002: 90-101. doi:10.1007/3-540-47922-8_8.

    [28] KEMKER R, MCCLURE M, ABITINO A, et al. Measuring catastrophic forgetting in neural networks[EB/OL]. (2017-11-09)[2021-12-27]. https://arxiv.org/abs/1708.02072. doi:10.48550/arXiv.1708.02072.

    [29] ATKINSON C, MCCANE B, SZYMANSKI L, et al. Pseudo-Recursal: solving the catastrophic forgetting problem in deep neural networks[EB/OL]. (2018-05-07)[2021-12-27]. https://arxiv.org/abs/1802.03875. doi:10.48550/arXiv.1802.03875.

    [30] REBUFFI S A, KOLESNIKOV A, SPERL G, et al. iCaRL: incremental classifier and representation learning[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017: 5533-5542. doi:10.1109/CVPR.2017.587.

    [31] ALJUNDI R, LIN Min, GOUJAUD B, et al. Gradient based sample selection for online continual learning[EB/OL]. (2019-10-31)[2021-12-27]. https://arxiv.org/abs/1903.08671. doi:10.48550/arXiv.1903.08671.

    [32] LI Zhizhong, HOIEM D. Learning without forgetting[EB/OL]. (2017-02-14)[2021-12-27]. https://arxiv.org/abs/1606.09282. doi:10.48550/arXiv.1606.09282.

    [33] ZENKE F, POOLE B, GANGULI S. Continual learning through synaptic intelligence[EB/OL]. (2017-06-12)[2021-12-27]. https://arxiv.org/abs/1703.04200. doi:10.48550/arXiv.1703.04200.

    [34] KIRKPATRICK J, PASCANU R, RABINOWITZ N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the National Academy of Sciences of the United States of America, 2017, 114(13): 3521-3526. doi:10.1073/pnas.1611835114.

    [35] ALJUNDI R, BABILONI F, ELHOSEINY M, et al. Memory aware synapses: learning what (not) to forget[EB/OL]. (2018-10-05)[2021-12-27]. https://arxiv.org/abs/1711.09601. doi:10.48550/arXiv.1711.09601.

    [36] MALLYA A, LAZEBNIK S. PackNet: adding multiple tasks to a single network by iterative pruning[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 7765-7773. doi:10.1109/CVPR.2018.00810.

    [37] LOPEZ-PAZ D, RANZATO M A. Gradient episodic memory for continual learning[EB/OL]. (2022-09-13)[2021-12-27]. https://arxiv.org/abs/1706.08840. doi:10.48550/arXiv.1706.08840.

    [38] O'SHEA T J, WEST N. Radio machine learning dataset generation with GNU radio[C]// Proceedings of the GNU Radio Conference. Boulder, CO: [s.n.], 2016.

    [39] MANI I, ZHANG Jianping. KNN approach to unbalanced data distributions: a case study involving information extraction[C]// Proceedings of the International Conference on Machine Learning (ICML 2003), Workshop on Learning from Imbalanced Data Sets. Washington, DC: ICML, 2003.

    [40] NOCEDAL J, WRIGHT S J. Quadratic programming[M]. New York: Springer, 1999: 438-486. doi:10.1007/0-387-22742-3_16.

    Paper Information

    Received: Dec. 27, 2021

    Published Online: Aug. 14, 2024

    Author Email: WU Nan (wu.nan@dlmu.edu.cn)

    DOI: 10.11805/tkyda2021436
