Chinese Journal of Quantum Electronics, Volume. 42, Issue 1, 70(2025)

Performance optimization of quantum Otto cycle via deep reinforcement learning

LI Jiansong1,*, LI Hai1, YU Wenli2, and HAO Yaming1
Author Affiliations
  • 1School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China
  • 2School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China

    In response to the challenge that complicated control fields are generally required to realize a high-performance shortcuts-to-adiabaticity quantum Otto cycle (QOC), this work studies the performance of a QOC under a linear driving field that is easy to implement experimentally. Using policy-based deep reinforcement learning, the driving field applied during the expansion and compression strokes of a QOC with a single qubit as the working medium is optimized, so that a high-performance QOC under linear driving can be realized. Compared with a QOC undergoing non-adiabatic free evolution, the QOC under the optimized additional driving scheme exhibits significant advantages in output work, power, and efficiency. In particular, for short cycle periods, the free-evolution QOC produces no positive work at all because a large amount of irreversible work is generated, whereas the QOC under the optimized driving scheme still operates normally (outputting positive work). This work provides a preliminary test of the validity of deep reinforcement learning for optimizing the performance of quantum engines.
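    As background for the cycle being optimized, the sketch below computes the output work and efficiency of an ideal (fully adiabatic) qubit Otto cycle, which is the benchmark that the learned driving field tries to approach in finite time. It is a minimal illustration only, assuming a qubit working medium with Hamiltonian H = (ω/2)σ_z and units ħ = k_B = 1; the function names and numerical values are illustrative and not taken from the paper.

```python
import numpy as np

def thermal_excited_population(omega, T):
    # Excited-state population of a qubit with H = (omega/2) * sigma_z
    # in thermal equilibrium at temperature T (hbar = k_B = 1).
    return 1.0 / (1.0 + np.exp(omega / T))

def ideal_otto(omega_h, omega_c, T_h, T_c):
    """Output work and efficiency of the ideal adiabatic qubit Otto cycle.

    Strokes: cold isochore at omega_c, adiabatic compression omega_c -> omega_h,
    hot isochore at omega_h, adiabatic expansion omega_h -> omega_c.
    In the ideal adiabatic limit the populations are frozen during the
    expansion/compression strokes, so no irreversible work is generated.
    """
    p_h = thermal_excited_population(omega_h, T_h)  # after the hot isochore
    p_c = thermal_excited_population(omega_c, T_c)  # after the cold isochore
    Q_h = omega_h * (p_h - p_c)          # heat absorbed from the hot bath
    W = (omega_h - omega_c) * (p_h - p_c)  # net output work (> 0: engine regime)
    eta = W / Q_h if Q_h > 0 else 0.0    # reduces to 1 - omega_c/omega_h
    return W, eta

if __name__ == "__main__":
    W, eta = ideal_otto(omega_h=2.0, omega_c=1.0, T_h=2.0, T_c=0.5)
    print(f"W = {W:.4f}, eta = {eta:.2f}")
```

    In the non-adiabatic (finite-time) cycle studied in the paper, population transfer between the instantaneous energy levels adds irreversible work, reducing W below this ideal value; the reinforcement-learning agent tunes the additional driving field so that the finite-time strokes come close to this adiabatic benchmark.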



    Jiansong LI, Hai LI, Wenli YU, Yaming HAO. Performance optimization of quantum Otto cycle via deep reinforcement learning[J]. Chinese Journal of Quantum Electronics, 2025, 42(1): 70

    Paper Information


    Received: Feb. 6, 2023

    Accepted: --

    Published Online: Mar. 13, 2025

    The Author Email: LI Jiansong (ljs1019@foxmail.com)

    DOI:10.3969/j.issn.1007-5461.2025.01.007
