Acta Optica Sinica, Volume. 45, Issue 14, 1420002(2025)

Key Technologies and Advances in Photonic Neural Networks (Invited)

Qipeng Yang1, Ye Tian1, Shuhan Yue1, Xueling Wei1, Zenan Wu1, Bowen Bai1, Haowen Shu1, Weiwei Hu1, and Xingjun Wang1,2,3,*
Author Affiliations
  • 1State Key Laboratory of Photonics and Communications, School of Electronics, Peking University, Beijing 100871, China
  • 2Frontiers Science Center for Nano-Optoelectronics, Peking University, Beijing 100871, China
  • 3Yangtze Delta Institute of Optoelectronics, Peking University, Nantong 226010, Jiangsu, China

    Significance

    The rapid advancement of artificial intelligence, particularly deep learning, has created increasingly demanding requirements for hardware performance. Traditional electronic computing architectures encounter substantial limitations—including the deceleration of Moore’s Law and persistent challenges from the “memory wall” and “power wall”—restricting their capacity to maintain performance improvements for large-scale, highly concurrent AI tasks. This widening gap between computational requirements and hardware capabilities necessitates the exploration of alternative computing paradigms to overcome these fundamental constraints. Optical computing, utilizing the inherent properties of photons, emerges as a highly promising solution. Among various optical computing approaches, photonic neural networks (PNNs) have attracted considerable attention. PNNs employ photons directly to perform the mathematical operations fundamental to neural networks, such as vector-matrix multiplication, convolution, and nonlinear activation functions. This natural capability to execute computation in the optical domain provides significant advantages over conventional electronic methods, including ultra-high processing speed, extensive bandwidth for data throughput, inherent parallelism, and substantially reduced energy consumption and latency owing to minimized data movement. Consequently, PNNs have emerged as a critical research frontier bridging photonics, information science, and artificial intelligence, offering an innovative solution for next-generation high-performance AI hardware. This review thoroughly examines PNNs’ core concepts, technological developments, and future directions.

    Progress

    This review systematically summarizes recent key technologies and progress in PNN physical implementations, organized by primary architectural types that have driven significant advancements in the field.

    PNNs based on diffractive optical elements, often referred to as diffractive optical neural networks (DONNs), harness the wave propagation of light through structured diffractive layers to perform all-optical deep learning inference. This architecture has demonstrated remarkable performance in tasks like complex image classification and reconstruction. Recent breakthroughs include the development of reconfigurable and programmable DONNs for multi-task learning, the integration of multi-dimensional multiplexing to significantly boost computational throughput, enhanced robustness against fabrication errors and environmental noise, and successful on-chip integration, paving the way for compact and efficient devices.
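    The layer-by-layer operation described above can be sketched numerically. Below is a minimal, illustrative simulation of a DONN forward pass, assuming scalar diffraction via the angular spectrum method and idealized phase-only masks; the grid size, wavelength, pixel pitch, and layer spacing are arbitrary placeholder values, not parameters from any specific demonstration.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex field over distance z (scalar angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies of the grid
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2    # longitudinal frequency squared
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))  # evanescent waves clipped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def donn_forward(field, phase_masks, wavelength, dx, z):
    """One all-optical inference pass: free-space diffraction alternating with
    trainable phase-only masks, then intensity detection at the output plane."""
    for phi in phase_masks:
        field = angular_spectrum_propagate(field, wavelength, dx, z)
        field = field * np.exp(1j * phi)         # diffractive layer = learned phase shifts
    field = angular_spectrum_propagate(field, wavelength, dx, z)
    return np.abs(field)**2                      # detectors measure optical intensity

rng = np.random.default_rng(0)
n = 64
field = np.zeros((n, n), dtype=complex)
field[28:36, 28:36] = 1.0                        # a simple square "input image"
masks = [rng.uniform(0.0, 2.0 * np.pi, (n, n)) for _ in range(3)]
out = donn_forward(field, masks, wavelength=1.55e-6, dx=4e-6, z=2e-3)
```

In a trained DONN the phase masks would be optimized (e.g. by gradient descent on a digital model) rather than random as here.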

    PNNs based on Mach-Zehnder interferometer (MZI) arrays utilize reconfigurable MZI units to implement arbitrary linear optical transformations, establishing highly adaptable computational layers. Early theoretical designs have evolved into large-scale integrated MZI meshes that achieve high-accuracy classification and regression tasks, including complex-valued computations. Key advances include innovative architectural designs for enhanced scalability and energy efficiency, robust configurations addressing hardware imperfections and crosstalk, and sophisticated on-chip training methods for precise weight loading and adaptive operation in real-time.
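    The building block of such meshes is the 2x2 MZI, whose transfer matrix is unitary by construction; larger linear transformations are composed from these units, as in the Reck/Clements decompositions. A minimal sketch, assuming ideal lossless 50:50 couplers and arbitrary placeholder phase settings:

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of one MZI: two ideal 50:50 couplers around an internal
    phase shifter theta, preceded by an external phase shifter phi on one input."""
    bs = np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2.0)   # 50:50 coupler
    inner = np.diag([np.exp(1.0j * theta), 1.0])
    outer = np.diag([np.exp(1.0j * phi), 1.0])
    return bs @ inner @ bs @ outer

def embed(u2, n, i):
    """Embed a 2x2 MZI acting on adjacent waveguides i and i+1 into an n-port system."""
    u = np.eye(n, dtype=complex)
    u[i:i+2, i:i+2] = u2
    return u

# A tiny 3-port mesh; the phase values are arbitrary placeholders. Composing such
# units (Reck/Clements-style) can reach any unitary, i.e. any lossless linear layer.
U = embed(mzi(0.4, 1.1), 3, 1) @ embed(mzi(0.9, 0.2), 3, 0) @ embed(mzi(0.3, 0.7), 3, 1)
```

Because each stage is unitary, the composed mesh is lossless by construction; arbitrary (non-unitary) weight matrices are typically realized by combining two such meshes with a diagonal attenuation stage, following the singular value decomposition.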

    PNNs leveraging microring resonator (MRR) arrays utilize the distinctive wavelength-selective properties of microring resonators, particularly in wavelength division multiplexing (WDM) systems, to enable high-throughput parallel processing. The “broadcast-and-weight” architecture establishes a fundamental paradigm for MRR-based PNNs, enabling dynamic weight modulation and optical summation. Notable advances include sophisticated weight bank control for high-precision tuning, innovative architectural designs for integrated tensor computations and optical convolutions at impressive computation densities, and the integration of diverse functionalities for specialized applications, demonstrating their potential for ultra-compact and high-performance computing.
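    The broadcast-and-weight idea can be illustrated with a toy model: each input is broadcast on its own wavelength, a microring's detuning sets a signed weight via balanced through/drop detection, and a single photodetector sums the channels. The Lorentzian line shape and lossless add-drop ring below are simplifying assumptions, and all numbers are placeholders:

```python
import numpy as np

def through_port(detune, fwhm=1.0):
    """Through-port power transmission of an add-drop microring near resonance,
    modeled as a simple Lorentzian notch (an idealization)."""
    return 1.0 - 1.0 / (1.0 + (2.0 * detune / fwhm) ** 2)

def mrr_weight(detune, fwhm=1.0):
    """Balanced detection (through minus drop) gives a signed weight in [-1, 1];
    a lossless ring is assumed, so the drop port carries the complementary power."""
    t = through_port(detune, fwhm)
    return t - (1.0 - t)

def broadcast_and_weight(x, detunings, fwhm=1.0):
    """Each input x_k rides its own wavelength, one tuned ring weights it, and a
    balanced photodetector sums all weighted channels into one output current."""
    w = mrr_weight(np.asarray(detunings, dtype=float), fwhm)
    return float(np.dot(w, np.asarray(x, dtype=float)))

# On resonance (detune 0) the weight is -1; half a linewidth away it is 0.
y = broadcast_and_weight([0.5, -0.2, 1.0], [0.0, 0.5, 5.0])
```

Wavelength division multiplexing is what makes this architecture parallel: one waveguide and one detector handle an entire row of the weight matrix at once.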

    PNNs based on cascaded modulator architectures achieve complex optical transformations through the sequential modulation of optical signals, offering structural simplicity and high integration potential. These architectures have demonstrated ultra-low energy consumption per operation and high accuracy in classification tasks like MNIST digit recognition. Recent advancements focus on direct cascaded modulator systems, robust hybrid optoelectronic integration for versatile control and nonlinearity, coherent processing architectures for high-precision complex-valued computations, and programmable signal processors for reconfigurable and high-speed inference, pushing the boundaries of compact integrated photonic circuits.
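    As a toy illustration of sequential modulation, each Mach-Zehnder modulator stage multiplies the optical power by its cos²-shaped transmission, so a cascade performs a chain of analog multiplications. The ideal, lossless transfer function below is an assumption, not a model of any particular device:

```python
import numpy as np

def mzm_intensity(p_in, v, v_pi=1.0):
    """Ideal intensity transmission of a Mach-Zehnder modulator driven at voltage v."""
    return p_in * np.cos(np.pi * v / (2.0 * v_pi)) ** 2

def cascade(p0, voltages, v_pi=1.0):
    """Cascaded-modulator layer: each stage multiplies the optical power by its own
    cos^2-shaped transmission, realizing sequential analog multiplications."""
    p = p0
    for v in voltages:
        p = mzm_intensity(p, v, v_pi)
    return p
```

For example, `cascade(1.0, [0.0, 0.0])` returns 1.0 (both stages fully transmitting), while a single stage driven at v = v_pi extinguishes the light.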

    Finally, the implementation of optical nonlinear activation functions is crucial for enabling deep learning capabilities in PNNs, allowing networks to learn and process complex, nonlinear relationships. Two primary categories are distinguished: optoelectronic hybrid methods, which convert optical signals to electrical for nonlinear processing before re-converting, and all-optical methods, which directly exploit intrinsic material nonlinearities or specific device effects (Figs. 20–22). Progress in this area is vital for constructing truly multi-layered PNNs that can break linearity and achieve high accuracy across diverse and challenging AI tasks.
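    A representative optoelectronic hybrid scheme can be sketched as follows: a photodiode's square-law detection converts the optical signal to a photocurrent, which drives a modulator whose cos-shaped transmission reshapes a copy of the signal, yielding an overall nonlinear response. The specific transfer function and gain below are illustrative assumptions, not a particular published design:

```python
import numpy as np

def oe_activation(e_in, gain=2.0, v_pi=1.0):
    """Optoelectronic hybrid activation: a photodiode detects |E|^2 (square-law,
    O->E), the photocurrent is amplified (assumed linear), and the resulting
    voltage drives a modulator whose cos-shaped field transmission reshapes
    a copy of the optical signal (E->O)."""
    photocurrent = np.abs(e_in) ** 2
    v = gain * photocurrent
    return e_in * np.cos(np.pi * v / (2.0 * v_pi))

x = np.linspace(0.0, 1.5, 5)
y = oe_activation(x)   # near-linear for small inputs, strongly nonlinear for larger ones
```

All-optical alternatives replace the detection-modulation round trip with intrinsic effects such as saturable absorption or Kerr nonlinearity, trading electronic flexibility for lower latency.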

    Conclusions and Prospects

    While PNN research has achieved significant progress, substantial challenges remain. These include achieving a high level of integration and scalability for complex tasks, improving the power efficiency of active photonic components, enhancing robustness against manufacturing errors and environmental noise, realizing efficient all-optical nonlinear activation for deep networks, and developing practical on-chip optical memory. Future development requires multidisciplinary innovation, emphasizing novel materials and computing elements, co-design of hardware and algorithms, advanced photonic integration platforms, and expanding PNN applications into scientific computing, optimization, simulation, and advanced sensing. Addressing these challenges will enable PNNs to evolve from prototypes to practical solutions, establishing their position in post-Moore computing.

    Qipeng Yang, Ye Tian, Shuhan Yue, Xueling Wei, Zenan Wu, Bowen Bai, Haowen Shu, Weiwei Hu, Xingjun Wang. Key Technologies and Advances in Photonic Neural Networks (Invited)[J]. Acta Optica Sinica, 2025, 45(14): 1420002

    Paper Information

    Category: Optics in Computing

    Received: Apr. 23, 2025

    Accepted: Jun. 12, 2025

    Published Online: Jul. 22, 2025

    The Author Email: Xingjun Wang (xjwang@pku.edu.cn)

    DOI:10.3788/AOS250986

    CSTR:32393.14.AOS250986
