Magnetic resonance imaging (MRI), a noninvasive and powerful method in modern diagnostics, has been advancing by leaps and bounds. Conventional methods to improve MRI based on increasing the static magnetic field strength are restricted by safety concerns, cost issues, and the impact on patient experience; as such, innovative approaches are required. Metamaterials featuring subwavelength unit cells can be used to take full control of electromagnetic waves and redistribute electromagnetic fields, to achieve abundant counterintuitive phenomena, and to construct versatile devices. Recently, metamaterials with exotic effective electromagnetic parameters, peculiar dispersion relations, or tailored field distributions of resonant modes have shown promising capabilities in MRI. Herein, we outline the principle of the MRI process, review recent advances in enhancing MRI through the unique physical mechanisms of metamaterials, and demystify the ways in which metamaterial designs can improve MRI, such as by enhancing imaging quality, reducing scanning time, alleviating field inhomogeneities, and increasing patient safety. We conclude by providing our vision for the future of improving MRI with metamaterials.
We propose an approach for recognizing the pose and surface material of diverse objects by leveraging diffuse-reflection principles and data fusion. Through theoretical analysis and derivation of the factors influencing diffuse reflection from object surfaces, the method concentrates on and exploits surface information. To validate the feasibility of the theoretical analysis, the depth and active infrared intensity data obtained from a single time-of-flight camera are first combined. These data are then processed using feature extraction and lightweight machine-learning techniques. In addition, an optimization method is introduced to improve the accuracy of the intensity fitting. The experimental results not only visually demonstrate that the proposed method accurately detects the positions and surface materials of targets with varying sizes and spatial locations but also show that the vast majority of the sample data achieve a recognition accuracy of 94.8% or higher.
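As a minimal sketch of how diffuse-reflection corrections of this kind can feed a lightweight classifier (the Lambertian model, synthetic data, and random-forest classifier below are illustrative assumptions, not the authors' exact pipeline):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def albedo_like(depth, intensity, cos_theta):
        # Assumed Lambertian diffuse model: I ~ rho * cos(theta) / d^2,
        # so rho ~ I * d^2 / cos(theta) is a roughly material-dependent quantity.
        return intensity * depth**2 / np.clip(cos_theta, 1e-3, 1.0)

    rng = np.random.default_rng(0)
    # Stand-in per-pixel measurements for 300 labeled surface patches (3 materials).
    depth = rng.uniform(0.5, 2.0, size=(300, 64))
    cos_t = rng.uniform(0.3, 1.0, size=(300, 64))
    rho_true = np.repeat([0.2, 0.5, 0.8], 100)[:, None]
    intensity = rho_true * cos_t / depth**2 + 0.01 * rng.normal(size=(300, 64))

    rho_est = albedo_like(depth, intensity, cos_t)
    X = np.stack([rho_est.mean(1), rho_est.std(1), depth.mean(1), depth.std(1)], axis=1)
    y = np.repeat([0, 1, 2], 100)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))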
Leveraging an optical system for image encryption is a promising approach to information security, since it offers parallel, high-speed transmission and low-power-consumption encryption. However, most existing optical encryption systems share a critical issue: the ciphertext has the same dimensions as the plaintext, which may expose the encryption to cracking based on matched plaintext–ciphertext forms. Inspired by recent advances in computational neuromorphic imaging (CNI) and speckle correlography, a neuromorphic encryption technique is proposed and demonstrated through proof-of-principle experiments. The original images can be optically encrypted into event-stream ciphertext with a high-level information conversion form. To the best of our knowledge, the proposed method is the first implementation of event-driven optical image encryption. Owing to the high-level conversion of the encrypted data under the CNI paradigm and the simple optical setup with a complex inverse scattering process, our solution has great potential for practical security applications. This method gives impetus to the encryption of visual information and paves the way for CNI-informed applications of speckle correlography.
Two mainstream approaches for solving inverse sample-reconstruction problems in programmable illumination computational microscopy rely on either deep models or physical models. Solutions based on physical models possess strong generalization capabilities but struggle with global optimization of the inverse problem due to a lack of sufficient physical constraints. In contrast, deep-learning methods have strong problem-solving abilities, but their generalization ability is often questioned because of their unclear physical principles, and conventional deep models are difficult to apply to certain scenes because high-quality training data are hard to acquire. To combine the advantages of deep models and physical models, we propose a hybrid framework consisting of three subnetworks (two deep-learning networks and one physics-based network). We first obtain a result with rich semantic information through a light deep-learning network and then use it as the initial value of the physical network so that its output complies with the constraints of the physical process. These two results are then used as the input of a fusion deep-learning network that exploits the paired features between the reconstruction results of the two different models to further enhance imaging quality. The proposed hybrid framework integrates the advantages of both deep models and physical models, quickly solves the computational reconstruction inverse problem in programmable illumination computational microscopy, and achieves better results. We verified the feasibility and effectiveness of the proposed hybrid framework with theoretical analysis and experiments on resolution targets and biological samples.
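A minimal structural sketch of such a three-stage pipeline (the toy networks, the blur-type forward operator, and the loss below are illustrative assumptions, not the authors' implementation; the physics stage is realized here as direct optimization of the estimate under the assumed forward model):

    import torch
    import torch.nn as nn

    class LightNet(nn.Module):          # stage 1: quick deep estimate from raw measurements
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 3, padding=1))
        def forward(self, meas):
            return self.net(meas)

    def forward_model(sample):
        # Placeholder physical forward operator (a simple blur); in practice this would
        # encode the programmable-illumination imaging model.
        kernel = torch.ones(1, 1, 5, 5) / 25.0
        return nn.functional.conv2d(sample, kernel, padding=2)

    class FusionNet(nn.Module):         # stage 3: fuse deep and physics-refined results
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 3, padding=1))
        def forward(self, deep_est, phys_est):
            return self.net(torch.cat([deep_est, phys_est], dim=1))

    meas = torch.rand(1, 4, 64, 64)                      # stand-in multi-illumination stack
    deep_est = LightNet()(meas)                          # stage 1
    phys_est = deep_est.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([phys_est], lr=0.1)
    for _ in range(50):                                  # stage 2: enforce the forward model
        opt.zero_grad()
        loss = ((forward_model(phys_est) - meas.mean(1, keepdim=True)) ** 2).mean()
        loss.backward()
        opt.step()
    fused = FusionNet()(deep_est, phys_est.detach())     # stage 3
    print(fused.shape)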
Recent breakthroughs in the field of non-Hermitian physics present unprecedented opportunities, from fundamental theories to cutting-edge applications such as multimode lasers, unconventional wave transport, and high-performance sensors. The exceptional point, a spectral singularity widely existing in non-Hermitian systems, provides an indispensable route to enhancing the sensitivity of optical detection. However, the exceptional point of the aforementioned systems is fixed once the system is built or fabricated, and machining errors make it hard to reach such a state precisely. To this end, we develop a highly tunable and reconfigurable exceptional-point system, i.e., a single spoof plasmonic resonator suspended above a substrate and coupled with two freestanding Rayleigh scatterers. Our design offers great flexibility in controlling exceptional-point states, enabling us to dynamically reconfigure the exceptional points formed by various multipolar modes across a broadband frequency range. Specifically, we experimentally implement five distinct exceptional points by precisely manipulating the positions of two movable Rayleigh scatterers. In addition, the enhanced perturbation strength offers remarkable sensitivity enhancement for detecting deep-subwavelength particles with a minimum dimension down to 0.001λ (where λ is the free-space wavelength).
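For orientation, a textbook two-mode effective Hamiltonian (not the specific model of this work) illustrates why operating at an exceptional point boosts sensitivity:

    H = \begin{pmatrix} \omega_0 + i\gamma & \kappa \\ \kappa & \omega_0 - i\gamma \end{pmatrix},
    \qquad
    \omega_\pm = \omega_0 \pm \sqrt{\kappa^2 - \gamma^2}.

The two eigenfrequencies coalesce at the exceptional point κ = γ, and a small perturbation ε applied near this point splits them by an amount proportional to √ε rather than ε, so for weak perturbations the frequency splitting can greatly exceed that of a conventional (diabolic-point) sensor.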
Deep learning has transformed computational imaging, but traditional pixel-based representations limit the ability of such models to capture continuous, multiscale object features. To address this gap, we introduce a local conditional neural field (LCNF) framework, which leverages a continuous neural representation to provide flexible object representations. LCNF’s unique capabilities are demonstrated on the highly ill-posed phase retrieval problem of multiplexed Fourier ptychographic microscopy. Our network, termed neural phase retrieval (NeuPh), enables continuous-domain, resolution-enhanced phase reconstruction, offering scalability, robustness, accuracy, and generalizability that outperform existing methods. NeuPh integrates a local conditional neural representation and a coordinate-based training strategy. We show that NeuPh can accurately reconstruct high-resolution phase images from low-resolution intensity measurements. Furthermore, NeuPh consistently applies continuous object priors and effectively eliminates various phase artifacts, demonstrating robustness even when trained on imperfect datasets. Moreover, NeuPh improves accuracy and generalization compared with existing deep-learning models. We further investigate a hybrid training strategy combining experimental and simulated datasets, elucidating the impact of the domain shift between experiment and simulation. Our work underscores the potential of the LCNF framework for solving complex large-scale inverse problems, opening up new possibilities for deep-learning-based imaging techniques.
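A minimal sketch of the local conditional neural field idea, i.e., a coordinate MLP conditioned on locally interpolated features from a measurement encoder (the architecture and dimensions are illustrative assumptions, not the NeuPh design):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LocalConditionalField(nn.Module):
        # Maps (coordinate, local feature) -> value, so the object is represented
        # continuously and can be queried at any output resolution.
        def __init__(self, feat_dim=16, hidden=64):
            super().__init__()
            self.encoder = nn.Conv2d(1, feat_dim, 3, padding=1)   # stand-in measurement encoder
            self.mlp = nn.Sequential(nn.Linear(2 + feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))
        def forward(self, measurement, coords):
            # coords in [-1, 1]^2 with shape (N, 2); grid_sample interpolates local features.
            feats = self.encoder(measurement)                       # (1, C, H, W)
            grid = coords.view(1, -1, 1, 2)
            local = F.grid_sample(feats, grid, align_corners=True)  # (1, C, N, 1)
            local = local.squeeze(-1).squeeze(0).T                  # (N, C)
            return self.mlp(torch.cat([coords, local], dim=1))      # (N, 1)

    model = LocalConditionalField()
    measurement = torch.rand(1, 1, 32, 32)                          # low-resolution intensity
    coords = torch.rand(4096, 2) * 2 - 1                            # dense query coordinates
    print(model(measurement, coords).shape)                         # torch.Size([4096, 1])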
Phase recovery, i.e., calculating the phase of a light wave from intensity measurements, is essential for various applications, such as coherent diffraction imaging, adaptive optics, and biomedical imaging. It enables the reconstruction of an object’s refractive-index distribution or topography as well as the correction of imaging-system aberrations. In recent years, deep learning has proven highly effective in addressing phase recovery problems. The two most direct deep-learning phase recovery strategies are data-driven (DD), with a supervised learning mode, and physics-driven (PD), with a self-supervised learning mode. DD and PD achieve the same goal in different ways, yet the research needed to reveal their similarities and differences has been lacking. We therefore comprehensively compare these two deep-learning phase recovery strategies in terms of time consumption, accuracy, generalization ability, ill-posedness adaptability, and prior capacity. Moreover, we propose a co-driven strategy that combines datasets and physics to balance high- and low-frequency information.
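As a compact illustration of the two loss formulations being compared (the angular-spectrum forward model, toy network, and parameter values are our assumptions for the sketch, not the paper's specific implementation):

    import math
    import torch
    import torch.nn as nn

    def angular_spectrum(field, wavelength=633e-9, dx=4e-6, z=5e-3):
        # Free-space propagation of a complex field by a distance z (assumed forward model).
        n = field.shape[-1]
        fx = torch.fft.fftfreq(n, d=dx)
        fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
        kz = 2 * math.pi * z * torch.sqrt(torch.clamp(1 / wavelength**2 - fx2, min=0.0))
        H = torch.polar(torch.ones_like(kz), kz)
        return torch.fft.ifft2(torch.fft.fft2(field) * H)

    net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))   # toy phase-recovery network

    intensity = torch.rand(1, 1, 64, 64)        # measured diffraction intensity
    phase_gt = torch.rand(1, 1, 64, 64)         # ground-truth phase (available only for DD)
    phase_pred = net(intensity)

    # Data-driven (DD), supervised: compare the prediction against paired ground truth.
    loss_dd = ((phase_pred - phase_gt) ** 2).mean()

    # Physics-driven (PD), self-supervised: re-propagate the estimate and match the measurement.
    phase = phase_pred.squeeze(1)
    field = torch.polar(torch.ones_like(phase), phase)
    intensity_sim = angular_spectrum(field).abs() ** 2
    loss_pd = ((intensity_sim.unsqueeze(1) - intensity) ** 2).mean()
    print(float(loss_dd), float(loss_pd))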
Fiber sensors are commonly used to detect environmental, physiological, optical, chemical, and biological factors. Thermally drawn fibers offer numerous advantages over other commercial products, including enhanced sensitivity and accuracy, improved functionality, and ease of manufacturing. Multimaterial, multifunctional fibers encapsulate the essential internal structures within a microscale fiber, unlike macroscale sensors that require separate electronic components. The compact size of fiber sensors enables seamless integration into existing systems, providing the desired functionality. We present a multimodal fiber antenna that monitors, in real time, both the local deformation of the fiber and environmental changes caused by foreign objects in proximity to the fiber. Time-domain reflectometry propagates an electromagnetic wave through the fiber, allowing spatial changes along the fiber to be determined with exceptional resolution and sensitivity. Local changes in impedance reflect fiber deformation, whereas proximity is detected through alterations in the evanescent field surrounding the fiber. The fiber antenna operates as a waveguide that detects local deformation through the antisymmetric mode and environmental changes through the symmetric mode. This multifunctionality broadens its application areas from biomedical engineering to cyber–physical interfacing. In the antisymmetric mode, the device can sense local changes in pressure and, potentially, temperature, pH, and other physiological conditions. In the symmetric mode, it can be used in touch screens, environmental detection for security, cyber–physical interfacing, and human–robot interactions.
Dynamically tunable metasurfaces employing chalcogenide phase-change materials (PCMs) such as Ge2Sb2Te5 alloys have garnered significant attention and research effort. However, the use of chalcogenide PCMs in dynamic metasurface devices necessitates protection, owing to their susceptibility to volatilization and oxidation. Conventional protective-layer materials such as Al2O3, TiO2, and SiO2 present potential drawbacks, including diffusion, oxidation, or a thermal-expansion-coefficient mismatch with chalcogenide PCMs during the high-temperature phase transition, severely limiting the durability of chalcogenide-PCM-based devices. In this paper, we propose, for the first time to our knowledge, the use of chalcogenide glass, characterized by high thermal stability, as a protective material for chalcogenide PCMs. This approach addresses the durability challenge of current dynamic photonic devices based on chalcogenide PCMs by virtue of the closely matched optical and thermal properties of the two chalcogenide materials. Building upon this innovation, we introduce an all-chalcogenide dynamically tunable metasurface filter and comprehensively simulate and analyze its characteristics. This pioneering work paves the way for the design and practical implementation of optically dynamically tunable metasurface devices leveraging chalcogenide PCMs, ushering in new opportunities in the field.
Quantum microwave photonics (QMWP) is an innovative approach that combines energy–time entangled biphoton sources as the optical carrier with time-correlated single-photon detection for high-speed radio frequency (RF) signal recovery. This groundbreaking method offers unique advantages, such as nonlocal RF signal encoding and robust resistance to dispersion-induced frequency fading. We explore the versatility of processing the quantum microwave photonic signal by utilizing coincidence window selection on the biphoton coincidence distribution. The demonstration includes finely tunable RF phase shifting, flexible multitap transversal filtering (with up to 14 taps), and photonically implemented RF mixing, leveraging the nonlocal RF mapping characteristic of QMWP. These accomplishments significantly enhance the capability of microwave photonic systems in processing ultraweak signals, opening up new possibilities for various applications.
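For reference, the frequency response of an N-tap transversal (finite-impulse-response) microwave photonic filter, the operation implemented here via coincidence-window selection, follows the standard form H(f) = Σ_k a_k exp(-j 2π f k T). A short numeric sketch with illustrative (not experimental) parameters:

    import numpy as np

    taps = np.ones(14) / 14          # uniform 14-tap weights (illustrative)
    T = 100e-12                      # assumed tap delay -> free spectral range 1/T = 10 GHz
    f = np.linspace(0, 20e9, 2001)   # RF frequency axis
    k = np.arange(14)[:, None]
    H = np.abs(np.sum(taps[:, None] * np.exp(-2j * np.pi * f[None, :] * k * T), axis=0))
    print("DC response:", H[0], "| passband spacing (GHz):", 1 / T / 1e9)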
Dual-comb interferometric systems with high time accuracy have been realized for various applications. The flourishing ultralow-noise dual-comb system promotes the measurement and characterization of relative timing jitter, thus improving time accuracy. Among optical solutions, introducing an optical reference enables harmonic measurements up to the 10^5th order, thereby breaking the limit set by electrical methods; nonlinear processes or spectral interference schemes have also been employed to track the relative timing jitter. However, such approaches, operating in the time domain, either require additional continuous references or impose stringent requirements on the amount of timing jitter. We propose a scheme to correct the relative timing jitter of free-running dual-comb interferometry, assisted by a Fabry–Pérot (F–P) cavity, in the frequency domain. With the high wavelength thermal stability provided by the F–P cavity, the absolute wavelength deviation in the operating bandwidth is compressed to <0.4 pm, corresponding to a subpicosecond sensitivity to pulse-to-pulse relative timing jitter. Furthermore, an Allan deviation of 10^-10 is obtained under multiple coherent averaging, which lays the foundation for mode-resolved molecular spectroscopic applications. The spectral absorption features of hydrogen cyanide gas molecules at ambient temperature were measured and matched to the HITRAN database. Our scheme promises to provide new ideas for sensitive measurements of relative timing jitter.
Neural networks have provided faster and more straightforward solutions for laser modulation. However, their effectiveness in handling diverse structured light fields and various output resolutions remains limited by specialized end-to-end training and static models. Here, we propose a redefinable neural network (RediNet), realizing customized modulation on diverse structured light arrays through a single general approach. The network input format features a redefinable dimension designation, which gives RediNet wide applicability and removes the burden of processing pixel-wise light distributions. We first demonstrate the ability to generate arbitrary-resolution holograms with a fixed network. The versatility is showcased in the generation of 2D/3D foci arrays, Bessel and Airy beam arrays, (perfect) vortex beam arrays, and even snowflake-intensity arrays with arbitrarily built phase functions. A standout application is producing multichannel compound vortex beams, where RediNet empowers a spatial light modulator (SLM) to offer comprehensive multiplexing functionalities for free-space optical communication. Moreover, RediNet has the highest efficiency reported to date, consuming only 12 ms (faster than the mainstream SLM frame rate of 60 Hz) for a 1000 × 1000-resolution hologram, which is critical in scenarios requiring real-time operation. Considering its fine resolution, high speed, and unprecedented universality, RediNet can serve extensive applications, such as next-generation optical communication, parallel laser direct writing, and optical traps.
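As a simple example of the kind of arbitrarily built phase functions mentioned above (not RediNet itself), a vortex-beam phase hologram can be composed analytically at any chosen resolution, with a blazed-grating term to shift the modulated beam off the zero order:

    import numpy as np

    def vortex_hologram(resolution, l=3, grating_period_px=16):
        # Spiral phase l*phi for a charge-l vortex plus a blazed grating along x,
        # wrapped to [0, 2*pi) for display on a phase-only SLM.
        y, x = np.mgrid[-resolution // 2:resolution // 2, -resolution // 2:resolution // 2]
        spiral = l * np.arctan2(y, x)
        grating = 2 * np.pi * x / grating_period_px
        return np.mod(spiral + grating, 2 * np.pi)

    phase_1000 = vortex_hologram(1000)     # same function, any output resolution
    phase_512 = vortex_hologram(512)
    print(phase_1000.shape, phase_512.shape)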
Single-pixel imaging (SPI) enables an invisible target to be imaged onto a photosensitive surface without a lens, emerging as a promising route to indirect optical encryption. However, due to its linear and broadcast imaging principles, SPI encryption has long been confined to a single-user framework. We propose a multi-image SPI encryption method and combine it with orthogonal frequency-division multiplexing-assisted key management to achieve a multiuser SPI encryption and authentication framework. Multiple images are first encrypted as a composite intensity sequence containing the plaintexts and authentication information, simultaneously generating different sets of keys for the users. The SPI keys for encryption and authentication are then asymmetrically isolated into independent frequency carriers and encapsulated into a Malus metasurface, so as to establish an individually private and content-independent channel for each user. Users can receive different plaintexts privately and verify their authenticity, eliminating the broadcast transparency of SPI encryption. The improved security against linear attacks is also verified by simulated attacks. By combining direct key management and indirect image encryption, our work achieves encryption and authentication functionality within a multiuser computational imaging framework, facilitating its application in optical communication, imaging, and security.
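For context, the underlying single-pixel measurement and a basic correlation-type (differential ghost imaging) reconstruction can be sketched as follows; the random binary patterns and toy object are illustrative assumptions, and the multi-image encryption and OFDM-assisted key management of this work build on top of such measurements:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 32                                          # image side length
    obj = np.zeros((n, n)); obj[10:22, 12:20] = 1   # toy plaintext image

    m = 4000                                        # number of structured illumination patterns
    patterns = rng.integers(0, 2, size=(m, n, n)).astype(float)
    y = np.einsum("kij,ij->k", patterns, obj)       # bucket (single-pixel) intensity sequence

    # Correlation reconstruction from the pattern/intensity pairs.
    recon = np.einsum("k,kij->ij", y - y.mean(), patterns - patterns.mean(0)) / m
    print("correlation with object:", np.corrcoef(recon.ravel(), obj.ravel())[0, 1])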
Structurally anisotropic materials are ubiquitous across many application fields, yet their accurate optical characterization remains challenging due to the lack of general models linking their scattering coefficients to macroscopic transport observables, and due to the need to combine multiple measurements to retrieve their direction-dependent values. Here, we present an improved method for the experimental determination of the light-transport tensor coefficients from the diffusive rates measured along all three directions, based on transient transmittance measurements and a generalized Monte Carlo model. We apply our method to the characterization of light-transport properties in two common anisotropic materials, polytetrafluoroethylene tape and paper, highlighting the magnitude of the systematic deviations that are typically incurred when anisotropy is neglected.
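A minimal Monte Carlo sketch of how direction-dependent diffusion rates can be estimated from a random walk with axis-dependent step lengths (this simplified step model and its parameters are illustrative assumptions, not the generalized model used in this work):

    import numpy as np

    rng = np.random.default_rng(1)
    mfp = np.array([1.0, 1.0, 3.0])        # assumed scattering mean free paths along x, y, z (a.u.)
    n_photons, n_steps = 2000, 200

    pos = np.zeros((n_photons, 3))
    for _ in range(n_steps):
        # Isotropic new direction at each scattering event...
        u = rng.normal(size=(n_photons, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        # ...with an exponentially distributed free path scaled per axis (simplified anisotropy).
        step = rng.exponential(1.0, size=(n_photons, 1))
        pos += u * step * mfp

    t = n_steps                             # time in units of mean scattering intervals
    D = pos.var(axis=0) / (2 * t)           # diffusive rate per axis from <x_i^2> = 2 D_i t
    print("diffusion rates (x, y, z):", D)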
We present what we believe is the first conjugate adaptive optics (AO) extension that can be retrofitted to a commercial microscope by positioning it between the camera port and the image sensor. The extension features a deformable phase plate (DPP), a refractive wavefront modulator, and indirect wavefront sensing to form a completely in-line architecture. This allows the axial position of the DPP to be optimized by maximizing an image-quality metric, a cumbersome task when deformable mirrors are used as the correction element. We demonstrate the performance of the system on a Zeiss AxioVert 200M microscope equipped with a 20×, 0.75-NA air objective. To simulate complex sample-induced aberrations, transparent custom-made arbitrary phase plates were introduced between the sample and the objective. We demonstrate that the extension can provide high-quality full-field correction even for large aberrations when the DPP is placed at the plane conjugate to the phase plates. We also demonstrate that both the DPP position and its surface profile can be optimized blindly, which can pave the way for plug-and-play conjugate-AO systems.
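A minimal sketch of the kind of blind, metric-based search used to tune a wavefront corrector (the modal basis, the normalized-variance sharpness metric, and the capture function are placeholders, not the actual instrument interface):

    import numpy as np

    def capture_image(coeffs):
        # Placeholder for acquiring a camera frame with the corrector driven by modal
        # coefficients 'coeffs'; here a synthetic image whose sharpness peaks at coeffs_opt.
        coeffs_opt = np.array([0.4, -0.2, 0.1])
        blur = 1.0 + np.sum((coeffs - coeffs_opt) ** 2)
        return np.random.default_rng(0).random((64, 64)) / blur

    def sharpness(img):
        # Normalized-variance image-quality metric (one common choice).
        return img.var() / img.mean()

    coeffs = np.zeros(3)
    for mode in range(3):                          # simple mode-by-mode (coordinate) search
        trials = np.linspace(-1, 1, 21)
        scores = [sharpness(capture_image(coeffs + np.eye(3)[mode] * t)) for t in trials]
        coeffs[mode] += trials[int(np.argmax(scores))]
    print("optimized modal coefficients:", coeffs)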