Chinese Optics Letters, Volume 22, Issue 6, 060009 (2024)

Information-theoretic perspective on performance assessment and fundamental limit of quantum imaging [Invited]

Na Li1, Chenyu Hu2,*, and Xiao-Ming Lu1,**
Author Affiliations
  • 1School of Sciences, Hangzhou Dianzi University, Hangzhou 310018, China
  • 2Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China

    Performance assessment of an imaging system is important for optimizing its design across various technologies. Information-theoretic viewpoints based on communication theory or statistical inference theory can provide objective and operational measures of imaging performance. These approaches can be further developed by combining them with quantum statistical inference theory, which allows the imaging performance to be optimized over all measurements and its quantum limits to be analyzed; such an analysis is needed for improving an imaging system when photon shot noise is the dominant noise source in the measurement. The aim of this review is to discuss and analyze recent developments in this branch of quantum imaging.


    1. Performance Assessment of Imaging Systems

    Imaging science has become a rapidly evolving multidisciplinary field in the past few years. In most imaging scenarios, the intensity of the light used is relatively high, and thus the quantum nature of the optical field does not manifest itself[1]. However, at very low light levels, the imaging quality is limited by quantum effects. The field of quantum imaging encompasses a broad spectrum of imaging disciplines in which the inherent quantum fluctuations of light play a crucial role during the imaging process and significantly impact its performance[2,3]. In these cases, assessing the performance and determining its quantum limit through powerful approaches are of great importance for guiding the practical optimization of the imaging system.

    As a typical topic of quantum imaging, ghost imaging (GI) is an imaging modality that utilizes the high-order correlations of light fields to acquire imaging information[4,5]. The technique initially emerged as an unconventional imaging method that uses quantum-entangled photons as the illuminating source and retrieves the image through the two-photon correlation obtained from photon coincidence counting[6,7]. Subsequent work, however, demonstrated its feasibility with classical sources[8,9]. Along with the proof-of-principle verification and application of GI techniques, the performance evaluation of its imaging results has become of significant importance[10].

    Resolving power, a key indicator for most imaging systems, can be defined in many ways[11,12]. The most famous traditional definition is Rayleigh’s criterion[13] on the minimum distance at which two incoherent point sources are resolvable from the image. Although Rayleigh’s criterion is still widely used today due to its simplicity, it was a heuristic from the outset. The minimum resolvable distance behind Rayleigh’s criterion can be put on a more rigorous foundation in terms of a statistical phase transition, where the sample complexity for statistically resolving closely spaced point sources goes from polynomial to exponential, at the expense of a more complicated calculation[14].

    Resolving power can be better defined from information-theoretic perspectives, which we divide into three categories: overall assessment, image quality assessment, and task-oriented assessment. Overall assessment focuses on the capability of an imaging system to transfer information from object to image. For instance, based on the Nyquist–Shannon sampling theorem, the degrees of freedom of an image were proposed to evaluate an imaging system[15,16]. Here, it is essentially the mutual information and the capacity[17] of the optical channel that are considered. Similar thoughts exist in the quantum imaging field. For example, in GI, by taking the optical system as a communication process, the mutual information between the detection signal and the imaging object is analyzed as a performance measure to indicate the acquired information content for quantitative assessment of the system[18]. Image quality assessment focuses on the faithfulness of the image to the object and, explicitly or implicitly, involves a comparison between the image (or the observation data) and the object. For instance, the mean square error of the estimates for radiance values at each point[19] is an explicit object-image comparison, while the mutual information is an implicit comparison using the correlation between the object and its image or observation data[20]. For example, the image mutual information between the imaging result and the ground-truth image has been investigated to assess the imaging quality of a GI system[21]. Task-oriented assessment is based on tasks of inferring specific features of the objects, such as the brightness, the number, and the separation of optical point sources[22–24]. The tasks used to assess resolving power are often closely tied to realistic scenarios, e.g., astronomical observation[25] and molecular imaging[26], and thus have clear operational significance.
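The channel picture behind overall assessment can be made concrete with a toy calculation. The sketch below is a deliberately simplified model (not taken from the reviewed works): each object pixel is a binary value read through a noisy detector, modeled as a binary symmetric channel with flip probability eps, and the mutual information transferred per pixel is computed.

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_mutual_information(eps, p1=0.5):
    """Mutual information (bits per pixel) between a binary object pixel
    (probability p1 of being bright) and its noisy reading, when the
    detector flips the pixel value with probability eps."""
    q1 = p1 * (1 - eps) + (1 - p1) * eps      # output distribution
    # I(X;Y) = H(Y) - H(Y|X), and H(Y|X) = h(eps) for a symmetric channel
    return binary_entropy(q1) - binary_entropy(eps)

print(bsc_mutual_information(0.0))   # noiseless channel: 1.0 bit/pixel
print(bsc_mutual_information(0.5))   # useless channel: 0.0 bits
print(bsc_mutual_information(0.1))   # partially informative channel
```

Larger detector noise monotonically lowers the transferable information, mirroring how mutual-information measures grade an imaging system as a communication channel.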

    2. Quantum Statistical Description of Imaging

    Statistical inference theory, including parameter estimation and hypothesis testing, is the main mathematical foundation of task-oriented assessment of resolving power and is also applicable to image quality assessment[22,27–30].

    To apply statistical inference theory to guide the optimization of an imaging system and analyze its quantum limit, various fluctuations, including the extrinsic noise and the intrinsic quantum noise, should be taken into account. The extrinsic noise, like the background light and dark count, comes from technical limitations. The intrinsic quantum noise, e.g.,  the photon shot noise, comes from the random nature of quantum measurement.

    The imaging process is usually treated as a map from the object space to the observation space with additive noise[31]. Instead, we should consider the complex amplitude of the optical field,

    E(r, t) = E_s(r, t) + E_b(r, t),

    where the signal field E_s(r, t) propagates from the object plane and the noise field E_b(r, t) is due to random background light. The stochastic fields E_s(r, t) and E_b(r, t) can be described by their auto- and cross-correlation spectra[32]. These correlation functions of the optical field play essential roles in quantum imaging. For example, GI, which first emerged as an unconventional imaging technique using quantum-entangled photons, utilizes the high-order correlation function of the fluctuations of light. It has been theoretically derived that the image information can be obtained from the second-order correlation function of the intensity fluctuations between the two light paths,

    ΔG^(2)(r_r, r_t) = ⟨ΔI_r(r_r) ΔI_t(r_t)⟩ ∝ |∫ d²r h_r^*(r_r; r) h_t(r_t; r)|²,

    where h_r(r_r; r) and h_t(r_t; r) are the transfer functions of the light fields in the reference and object paths, respectively. The image information is contained in the transfer function h_t(r_t; r) of the object-beam path. Under general conditions where the light level is high, this result holds whether the optical fields are treated classically or quantum mechanically. In superresolution imaging, such as fluorescence microscopy and stellar imaging, photon shot noise in the measurement becomes the dominant noise source[33–35], for which a quantum treatment may give new insights into further improvements in resolving power[36,37].
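The second-order correlation retrieval described above can be sketched numerically. In this minimal one-dimensional simulation (the object, pattern statistics, and sizes are illustrative assumptions), thermal-like speckle patterns illuminate a transmissive object, a bucket detector records the total transmitted intensity, and correlating the fluctuations of the bucket signal with those of the reference patterns recovers the object.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D transmissive object (transmittance values in [0, 1])
obj = np.zeros(32)
obj[10:14] = 1.0
obj[22:25] = 0.7

m = 20000                                  # number of samplings
patterns = rng.exponential(1.0, (m, 32))   # thermal-like speckle intensities
bucket = patterns @ obj                    # bucket (single-pixel) detection

# DeltaG2(r) = <DeltaI_r(r) DeltaI_t>: correlate the fluctuations
dI = patterns - patterns.mean(axis=0)
db = bucket - bucket.mean()
ghost = dI.T @ db / m

ghost /= ghost.max()                       # normalize for display
print(np.argmax(ghost))                    # peaks where the object transmits
```

For spatially independent speckle, the correlation image is proportional to the object's transmittance plus statistical noise that shrinks with the number of samplings m.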

    The quantum description of the optical field splits into two parts, the field operators and the quantum states, both represented on the Fock space associated with a set of orthonormal spatiotemporal modes. To focus on spatial resolving power, assume that the light is quasimonochromatic, scalar, and paraxial. Then, within one temporal mode, the quantum state for the spatial degree of freedom can be expressed with the Sudarshan–Glauber representation as[38]

    ρ = ∫ Φ(α, α*) |α⟩⟨α| d^{2m}α,

    where |α⟩ is the multimode coherent state in a fixed set {φ_j}_{j=1}^m of orthonormal spatial modes with amplitudes α ≡ (α_1, α_2, …, α_m), α* ≡ (α_1*, α_2*, …, α_m*), and d^{2m}α ≡ d²α_1 d²α_2 ⋯ d²α_m. A general quantum measurement is characterized by a positive-operator-valued measure (POVM),

    M ≡ {M_q | M_q ≥ 0, Σ_q M_q = I},

    where q denotes the measurement outcomes and I the identity operator. When a quantum measurement is performed, the probability of obtaining the outcome q is p(q) = tr(ρM_q) according to Born’s rule in quantum mechanics. Most traditional approaches to resolving power are based on a fixed measurement, direct imaging, whose outcomes are the optical intensities or photon counts at each pixel[12,31].
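Born's rule p(q) = tr(ρM_q) is easy to verify on a small example. The sketch below uses an arbitrary single-qubit state and a three-outcome POVM chosen purely for illustration; it checks POVM completeness and computes the outcome probabilities.

```python
import numpy as np

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # a mixed qubit state

# A three-outcome POVM: elements are positive and sum to the identity
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)
M = [0.5 * P0, 0.5 * P1, 0.5 * np.eye(2, dtype=complex)]

assert np.allclose(sum(M), np.eye(2))                     # completeness
probs = [np.trace(rho @ Mq).real for Mq in M]             # Born's rule
print(probs)        # [0.35, 0.15, 0.5]
print(sum(probs))   # 1.0
```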

    3. Diffraction Limit and State Discrimination

    The diffraction limit of imaging can be understood from the perspective of modal transformation during light propagation. For instance, a typical 4f optical system is a low-pass filter, which allows only the low-spatial-frequency components of the optical field near the optical axis to pass through and causes blurring of the image even for an optical point source[39]. The modal transformation underlying the 4f optical system is illustrated in Fig. 1, where the modal spaces[40,41] successively experience the unitary Fourier transformation U, the projection P of the low-pass filter, and the inverse Fourier transformation U†. The modal space is a Hilbert space, in which orthogonal modal functions possess maximal distinguishability from each other. The total modal transformation T ≡ U†PU is a projection, which may map a pair of orthogonal modal functions to nonorthogonal ones. In such a case, part of the distinguishability between the input states is lost, which is the source of the diffraction limit.


    Figure 1. Diffraction limit understood from the perspective of modal transformation. Here, U is the Fourier transform, P stands for the projection that only allows low-spatial-frequency components at the Fourier plane to pass through, and U† is the adjoint of U. The red and green grids represent the coordinate space and the frequency space in the transverse plane, respectively.

    The above perspective on the diffraction limit becomes more pronounced when considering a single-photon wave packet propagating from the object plane to the image plane. For a single-photon state, the spatial modal function ψ(x) can be considered the wave function for the quantum state of the spatial degree of freedom on transverse planes, i.e., |ψ⟩ = ∫ ψ(x)|x⟩ dx, with |x⟩ being the photon transverse-plane position eigenket. Let ψ(x; x′) be the point spread function for a point source located at position x′ on the object plane. A single-photon wave packet then propagates from the object plane to the image plane as

    |x′⟩_ob ↦ ∫ ψ(x; x′) |x⟩_im dx.

    Correspondingly, the inner product of the single-photon states transforms as

    ⟨x_1|x_2⟩_ob = δ(x_1 − x_2) ↦ ∫ ψ*(x; x_1) ψ(x; x_2) dx.

    The nonzero overlap between ψ(x; x_1) and ψ(x; x_2) for x_1 ≠ x_2 manifests the loss of distinguishability that causes image blurring. Note that the norm of ∫ ψ(x; x′)|x⟩_im dx may be less than unity, meaning that the projection causes photon losses[42].
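This loss of orthogonality can be quantified for a concrete PSF. Assuming a normalized Gaussian PSF of width sigma (an illustrative choice), the overlap of the image-plane wave functions of two point sources separated by d is exp(−d²/(8σ²)), which the sketch below checks numerically.

```python
import numpy as np

sigma = 1.0

def psf(x, x0):
    """Normalized Gaussian PSF amplitude for a point source at x0."""
    return (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-(x - x0) ** 2 / (4 * sigma**2))

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
d = 1.5                                            # source separation

# Overlap integral of the two image-plane wave functions
overlap = np.sum(psf(x, -d / 2) * psf(x, d / 2)) * dx

print(overlap, np.exp(-d**2 / (8 * sigma**2)))     # numeric vs analytic
```

The overlap approaches unity as d → 0: the two single-photon states become nearly indistinguishable, which is precisely the diffraction limit in state-discrimination language.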

    For thermal sources, the quantum state of the optical fields in the object plane can be described by the Sudarshan–Glauber representation of Eq. (3) with[38]

    Φ(α, α*) = [1/det(πΓ)] exp(−α† Γ^{−1} α),

    where Γ is the mutual coherence matrix defined by Γ_jk = tr(a_j ρ a_k†), with a_j and a_j† denoting the annihilation and creation operators on the jth spatial mode, respectively. In the image plane, the point spread functions from different point sources may overlap, so they cannot be taken as the modal basis for representing the density operators of the image-plane quantum states. A method for finding the minimal set of modal functions to represent the density operator of a multimode state can be found in Ref. [40].

    4. Quantum Statistical Inference Toolbox

    The advantage of the quantum description for imaging is that it is convenient for utilizing quantum statistical inference theory, which is about how to effectively extract the information encoded in the quantum state by optimizing quantum measurements and data processing. An abstract procedure of quantum statistical inference is illustrated in Fig. 2. After an encoding process, the quantum state depends on an unknown parameter θ that stands for some features of interest. This quantum state can undergo some transformations before the measurement is performed. Denote by θ^ the estimation algorithm used to infer θ, which can be considered a function of the observation data obtained by performing the same quantum measurement on n independent and identically distributed (i.i.d.) samples.


    Figure 2. Quantum statistical inference process.

    The ultimate purpose of quantum statistical inference theory is to find a good measurement M and algorithm θ^. To do so, we need a figure of merit to evaluate the performance of a quantum inference strategy, denoted by (θ^, M). Without loss of generality, let f(θ^, M|Q) be a function standing for the error of the inference, where Q stands for the quantum statistical model for imaging. Usually, Q is composed of a parametric family {ρ_θ} of states, a prior probability distribution p_prior(θ) of the unknown parameter θ, and the number n of available samples. We take f(θ^, M|Q) as an objective function to be minimized in defining the optimal inference strategy for a given quantum statistical model Q.

    In practice, direct minimization of an objective function f(θ^, M|Q) is often formidable, especially for imaging problems whose number of degrees of freedom is very large. Instead, one often resorts to lower bounds on f(θ^, M|Q) for assessing the potential for improving the performance of conventional schemes. A good lower bound f(θ^, M|Q) ≥ B(Q) should satisfy the following criteria. (i) It should be solely determined by the statistical model Q so that it is valid for any inference scheme. (ii) It should be easy to calculate. (iii) It should be close to the true limit of the inference error, at least for large samples. In addition, an intermediate lower bound B[M|Q] such that f(θ^, M|Q) ≥ B[M|Q] ≥ B(Q) is helpful for separating the optimization over measurements from that over data processing, as it can take full advantage of classical statistical inference. In what follows, we give a brief review of two main kinds of lower bounds used in quantum statistical inference problems and their applications to resolving power.

    5. Imaging Performance Evaluation Based on Parameter Estimation

    When the unknown parameter θ takes continuous values, the statistical inference problem belongs to the class of parameter estimation, for which the classical and quantum Cramér–Rao bounds (CRBs) provide an easy-to-use framework. For any unbiased estimator and any quantum measurement, the covariance matrix of the estimators for the components θ_j of the vectorial parameter θ satisfies the matrix inequalities

    Cov(θ^, M) ≥ n^{−1} F[M]^{−1} ≥ n^{−1} F^{−1},

    where F[M] and F are the classical and quantum Fisher information (FI) matrices[41,43,44], respectively, and n is the number of i.i.d. samples. A matrix inequality A ≥ B should be interpreted as A − B being positive semidefinite. The classical FI matrix is defined as

    F[M]_jk = Σ_q [1/p(q|θ)] [∂p(q|θ)/∂θ_j] [∂p(q|θ)/∂θ_k],

    where p(q|θ) = tr(ρ_θ M_q). The quantum FI matrix is defined as

    F_jk = (1/2) tr[ρ_θ (L_j L_k + L_k L_j)],

    where L_j is a Hermitian operator (the symmetric logarithmic derivative) determined by (L_j ρ_θ + ρ_θ L_j)/2 = ∂ρ_θ/∂θ_j. The first inequality above, i.e., Cov(θ^, M) ≥ F[M]^{−1}/n, is the classical CRB[45,46]. The inequality Cov(θ^, M) ≥ F^{−1}/n is called the quantum CRB or Helstrom’s bound[44,47,48], which is the most popular theoretical tool in quantum metrology[49–52].
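The two bounds can be checked numerically on a minimal model. The sketch below uses a single-qubit phase-estimation model (a standard textbook example, not an imaging model): for |ψ_θ⟩ = (|0⟩ + e^{iθ}|1⟩)/√2 the quantum FI equals 1, and measuring in the |±⟩ basis attains it, so F[M] = F.

```python
import numpy as np

theta, eps = 0.7, 1e-6

def state(th):
    """Qubit phase model |psi> = (|0> + e^{i th} |1>)/sqrt(2)."""
    return np.array([1.0, np.exp(1j * th)]) / np.sqrt(2)

# Quantum FI of a pure state: F = 4(<dpsi|dpsi> - |<psi|dpsi>|^2)
psi = state(theta)
dpsi = (state(theta + eps) - state(theta - eps)) / (2 * eps)
qfi = 4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi)) ** 2)

# Classical FI of the |+>, |-> measurement: p(+|-) = (1 ± cos th)/2
def probs(th):
    return np.array([(1 + np.cos(th)) / 2, (1 - np.cos(th)) / 2])

dp = (probs(theta + eps) - probs(theta - eps)) / (2 * eps)
cfi = np.sum(dp**2 / probs(theta))

print(qfi, cfi)   # both ≈ 1: this measurement attains the quantum CRB
```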

    5.1. CRB in GI with respect to imaging quality

    The classical CRB has been used for image quality assessment of GI[53,54]. Generally speaking, the CRB is a reasonable theoretical means for bounding the uncertainty of parameter estimation. By extending it to the scenario where the image is regarded as a set of multiple parameters, the CRB for image information retrieval in a GI system was first explored. From the likelihood function p(y; A, x) of the detection model, the CRB of each pixel in the retrieved image is obtained through the FI matrix J = −E_{y,A}{∂² ln[p(y; A, x)]/∂x∂xᵀ} as

    CRB(x_i) ≥ (1/(m·SNR)) (N T̄)²,

    where m is the number of samplings, SNR is the detection signal-to-noise ratio, N is roughly the pixel number of the retrieved image, and T̄ is the mean transmittance of the object. The CRB is the same for each pixel, meaning that the uncertainty distribution over the spatial pixels is independent of the spatial distribution of the object’s transmittance. The CRB is also inversely proportional to the sampling number and the SNR, which is reasonable, since more samplings or a lower noise level yield higher imaging quality. However, the bound is not always attainable, so imaging results with uncertainty achieving the CRB may be hard to obtain in practice. Thus, this kind of study on the CRB still needs further investigation to give a more reliable evaluation of imaging uncertainty.

    Because of the specific assumptions about the light fields in the above study of the CRB of GI systems, it is restricted to imaging uncertainty assessment for light fields subject to Gaussian statistics. However, as various kinds of optimized light fields have been developed[55], a more general method for CRB evaluation is desirable. Recently, a practical numerical method for obtaining the CRB, as well as the fundamental uncertainty of the GI imaging result[54], has been proposed by incorporating the Bayesian filtering paradigm[56].

    In particular, considering the multiple varying light-field patterns and the detection noise, the detection process of GI is first modeled as the forward process of a Bayesian filtering problem consisting of state evolution and observation. Then, via the prediction and update procedure of Bayesian filtering, the probability distribution of the desired image information is gradually estimated, given an appropriate initialization. The estimate, as well as a lower bound on the uncertainty of the imaging result, can be obtained accordingly. Figure 3(a) illustrates the image information estimates for different sampling numbers, with the evaluated CRB distribution shown in Fig. 3(b). The CRB shows behavior similar to that in the analytical study: it is essentially the same for each pixel and decreases as the sampling number increases. To better show the effect of uncertainty estimation using the CRB, Fig. 3(c) plots the variation of both the reconstruction mean square error (MSE) and the evaluated average CRB with the sampling number under different SNRs. The MSE matches the CRB well, which indicates that the uncertainty represented by the CRB can be adopted for quantitative imaging quality assessment.


    Figure 3. Results of the Bayesian filtering method. (a) Estimation results of the image information under different numbers of measurements; (b) the evaluated CRB distribution corresponding to (a); (c) variation of both the reconstruction MSE and the evaluated average CRB with the sampling number under different SNRs (adapted from Fig. 2 in Ref. [54]).

    Besides quality assessment, the CRB and FI have also been utilized to enhance the quality of GI systems. For example, in a corresponding Fourier-transform GI method[57], better image retrieval is achieved by selecting detection signals that contain more information and thus yield a lower CRB. Specifically, the conditional probability distribution of the detected light-field signals in the two paths of the system is

    p(I_r, I_t | β_1⟨I_t⟩ < I_t < β_2⟨I_t⟩) = [1/(e^{−β_1} − e^{−β_2})] · [1/(⟨I_r⟩⟨I_t⟩(1 − γ))] × exp[−(I_t⟨I_r⟩ + I_r⟨I_t⟩)/(⟨I_r⟩⟨I_t⟩(1 − γ))] I_0( [2√γ/(1 − γ)] √(I_r I_t/(⟨I_r⟩⟨I_t⟩)) ),

    where γ denotes the underlying Fourier spectrum component of the object, and I_0(·) is the zeroth-order modified Bessel function of the first kind. The FI conditioned on the detection signal fluctuations is numerically calculated from this distribution. The relationship between the FI and the signal intensity (fluctuation) is shown in Fig. 4, indicating that signals with larger fluctuations essentially contain more information. Inspired by this, the reference light fields corresponding to detection signals with larger fluctuations are selected for conditional averaging. It has been both theoretically and experimentally verified that this information-inspired scheme retrieves the desired Fourier spectrum pattern and enhances the imaging quality under the same number of samplings.


    Figure 4. (a) The FI of different parts of the detection signals as a function of relative intensity β (with respect to the mean intensity of the detection signals); (b) comparison of experimental results reconstructed from 15,000 samplings using the proposed scheme (left) and the vanilla correlation scheme (right) (adapted from Figs. 7 and 8 in Ref. [57]).

    5.2. CRB in resolving incoherent point sources

    In the 1970s, Helstrom, the pioneer of quantum detection and estimation theory, calculated the quantum CRB for the strength, frequency, and position of an incoherent point source in the presence of background light[41]. These works focus on single-point resolution limited by background-light noise. In 2006, Ram et al. took the classical CRB on the estimation error of the separation between two incoherent optical point sources as a fundamental resolution measure and studied the two-point resolution of direct imaging limited by photon shot noise[24]. They showed that the classical FI about the separation under direct imaging abruptly decreases to zero as the two point sources approach each other. This result is consistent with the basic idea behind Rayleigh’s criterion, that is, the significant overlap between the intensity profiles of the light from the two sources damages the distinguishability of the sources’ features.
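This collapse of the Fisher information can be reproduced in a few lines. The sketch below (Gaussian PSF of width sigma, centroid assumed known, per-photon FI; an illustrative model in the spirit of Ref. [24]) computes the classical FI of direct imaging about the separation d.

```python
import numpy as np

sigma = 1.0
x = np.linspace(-15, 15, 6001)
dx = x[1] - x[0]

def intensity(d):
    """Image-plane photon distribution for two equal incoherent sources
    at +-d/2 imaged with a Gaussian PSF (centroid assumed known)."""
    g = lambda x0: np.exp(-(x - x0) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return 0.5 * (g(-d / 2) + g(d / 2))

def direct_imaging_fi(d, eps=1e-5):
    """Per-photon classical FI about the separation d under direct imaging."""
    p = intensity(d)
    dp = (intensity(d + eps) - intensity(d - eps)) / (2 * eps)
    return np.sum(dp**2 / p) * dx

# The FI collapses as the sources merge
for d in (3.0, 1.0, 0.3, 0.1):
    print(d, direct_imaging_fi(d))
```

At large separations the per-photon FI approaches 1/(4σ²); in the sub-Rayleigh regime it falls toward zero, so the sample size required for a fixed precision diverges.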

    In 2016, Tsang et al.[37] revealed substantial room for improving the two-point resolving power by calculating the quantum FI of the single-photon states about the separation between two closely placed incoherent point sources. They also demonstrated that spatial-mode demultiplexing (SPADE) with respect to the Hermite–Gaussian modes can attain the quantum-limited two-point resolution. Advanced tools like moment estimation[58–60] and quantum semiparametric estimation[61,62] have been used to generalize the quantum superresolution analysis from the two-point problem to extended sources. A recent review of this branch of quantum superresolution can be found in Ref. [63].
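The contrast with direct imaging can be checked from the mode-counting distribution derived in Ref. [37]: for a Gaussian PSF of width sigma and two equal incoherent sources separated by d, the probability that an image-plane photon is found in the qth Hermite–Gaussian mode is p_q = e^{−Q}Q^q/q! with Q = d²/(16σ²). The sketch below computes the resulting classical FI numerically.

```python
import numpy as np
from math import factorial

sigma = 1.0

def spade_probs(d, qmax=60):
    """Hermite-Gaussian mode-counting distribution for two equal incoherent
    sources separated by d (Gaussian PSF): p_q = e^{-Q} Q^q / q!."""
    Q = d**2 / (16 * sigma**2)
    return np.array([np.exp(-Q) * Q**q / factorial(q) for q in range(qmax)])

def spade_fi(d, eps=1e-6):
    """Classical FI about the separation d for the SPADE measurement."""
    p = spade_probs(d)
    dp = (spade_probs(d + eps) - spade_probs(d - eps)) / (2 * eps)
    mask = p > 1e-300                       # skip numerically empty modes
    return np.sum(dp[mask] ** 2 / p[mask])

# Unlike direct imaging, the FI stays at 1/(4 sigma^2) for any separation
for d in (0.1, 0.5, 2.0):
    print(d, spade_fi(d))
```

The FI is separation-independent and equal to the quantum FI 1/(4σ²), which is the sense in which SPADE attains the quantum-limited two-point resolution.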

    6. Imaging Performance Evaluation Based on Hypothesis Testing

    Another easy-to-use framework of quantum statistical inference is quantum hypothesis testing, which corresponds to the case where the unknown parameter θ takes discrete values. In binary hypothesis testing, either hypothesis H_1 or H_2 is true, and we have to decide which is true based on the observation data. The error probability, namely, the probability of making a wrong decision, is

    p_err = p_1 Pr(H_2|H_1) + p_2 Pr(H_1|H_2),

    where p_1 and p_2 are the prior probabilities of the two hypotheses, respectively, and Pr(H_j|H_k) stands for the probability of choosing H_j given that H_k is true. Assume that the state is ρ_1^{⊗n} if hypothesis H_1 is true and ρ_2^{⊗n} if hypothesis H_2 is true, where n is the number of i.i.d. samples. The minimum error probability over all quantum measurements was obtained by Helstrom[64] as

    min p_err = (1/2)(1 − ‖p_2 ρ_2^{⊗n} − p_1 ρ_1^{⊗n}‖_1),

    where ‖A‖_1 = tr√(A†A) is the trace norm. The formal optimal measurement and decision rule, known as the Helstrom test, is given by the POVM constituted by the projection onto the nonnegative-eigenvalue subspace of p_2 ρ_2^{⊗n} − p_1 ρ_1^{⊗n} and its complement. However, the measurement of the Helstrom test is in general a collective measurement on all the samples and is thus hard to implement in realistic situations.
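For low-dimensional states, Helstrom's bound is straightforward to evaluate. The sketch below (single-qubit examples with n = 1 and equal priors, chosen for illustration) computes the minimum error probability from the trace norm.

```python
import numpy as np

def helstrom_perr(rho1, rho2, p1=0.5):
    """Minimum discrimination error p_err = (1 - ||p2 rho2 - p1 rho1||_1)/2."""
    gamma = (1 - p1) * rho2 - p1 * rho1
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(gamma)))
    return 0.5 * (1 - trace_norm)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

rho0 = np.outer(ket0, ket0)

print(helstrom_perr(rho0, np.outer(ket1, ket1)))   # orthogonal states: 0.0
print(helstrom_perr(rho0, rho0))                   # identical states: 0.5
print(helstrom_perr(rho0, np.outer(plus, plus)))   # overlapping states: ≈ 0.146
```

Orthogonal states are perfectly distinguishable, identical states force a random guess, and partial overlap gives the intermediate value (1 − √(1 − |⟨ψ_1|ψ_2⟩|²))/2.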

    The Chernoff exponent[65] and its quantum version[66,67] concern the asymptotic efficiency of hypothesis testing and are much easier to apply than the minimum error probability together with the Helstrom test. For problems with a large number of samples, we usually perform the same measurement on the individual systems, so the observation data are i.i.d. random variables. According to classical hypothesis testing theory[65], the minimum error probability decreases exponentially with the number of samples, that is, min p_err ≈ exp(−nξ[M]), where ξ[M] is the error rate defined as

    ξ[M] ≡ −lim_{n→∞} (log min p_err)/n.

    This error rate, called the (classical) Chernoff exponent, can be calculated through the expression[65]

    ξ[M] = −log min_{0≤s≤1} ∫ p_1(q)^s p_2(q)^{1−s} dq,

    with p_j(q) = tr(ρ_j M_q) for j = 1, 2. The classical Chernoff exponent is further bounded from above by the quantum Chernoff exponent ξ_Q as[66,67]

    ξ[M] ≤ ξ_Q ≡ −log min_{0≤s≤1} tr(ρ_1^s ρ_2^{1−s}).
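The quantum Chernoff exponent is directly computable for small systems. The sketch below (two full-rank qubit states chosen for illustration; a grid search over s) evaluates ξ_Q = −log min_{0≤s≤1} tr(ρ_1^s ρ_2^{1−s}); because the example states commute, the result coincides with the classical Chernoff exponent of their eigenvalue distributions.

```python
import numpy as np

def mat_pow(rho, s):
    """Power of a positive semidefinite matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return (v * np.clip(w, 0, None) ** s) @ v.conj().T

def chernoff_exponent(rho1, rho2, ngrid=1001):
    """Quantum Chernoff exponent xi_Q = -log min_s tr(rho1^s rho2^(1-s))."""
    ss = np.linspace(0.0, 1.0, ngrid)
    vals = [np.trace(mat_pow(rho1, s) @ mat_pow(rho2, 1 - s)).real for s in ss]
    return -np.log(min(vals))

# Two noisy, full-rank qubit states (eigenvalues 0.95 and 0.05 each)
rho1 = 0.9 * np.diag([1.0, 0.0]) + 0.05 * np.eye(2)
rho2 = 0.9 * np.diag([0.0, 1.0]) + 0.05 * np.eye(2)

xi = chernoff_exponent(rho1, rho2)
print(xi)   # the n-copy error probability decays as exp(-n * xi)
```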

    Therefore, the classical Chernoff exponent assesses the asymptotic efficiency of a measurement for hypothesis testing, while the quantum Chernoff exponent gives the quantum limit. They play roles similar to those of the classical and quantum FI in parameter estimation problems.

    Using hypothesis testing theory, in 1964, Harris[22] analyzed resolving power by considering the task of deciding whether the optical field in the image plane is generated by one incoherent source or by two, under conditions of high background radiation. In 1973, Helstrom[68] investigated the quantum limit on the minimum error probability of this one-versus-two-source problem. However, Helstrom noted that the optimal measurement (the Helstrom test) “though possible in principle, cannot be carried out by any known apparatus.” In addition, Helstrom’s work[68] does not compare the quantum-limited performance with that of direct imaging, so it is unclear how much room there is for improvement.

    In 2018, Lu et al.[69] utilized the Chernoff exponent to compare the asymptotic efficiencies of the binary SPADE measurement[37], the image inversion interferometry (SLIVER) measurement[70], direct imaging, and the quantum limit. They showed that both SPADE and SLIVER attain the quantum limit in the sub-Rayleigh regime, without knowing the separation in advance. In 2021, Huang and Lupo[71], using asymmetric quantum hypothesis testing, showed that SPADE and SLIVER are superior to direct imaging for the exoplanet detection problem. In 2022, Zanforlin et al.[72] experimentally demonstrated hypothesis-testing-based optical quantum superresolution imaging for detecting a weak secondary source.

    7. Conclusion

    Imaging science is a rapidly evolving multidisciplinary field, and various technologies have been used to optimize imaging systems. A good criterion for evaluating imaging performance is of great importance for guiding the design and optimization of imaging systems. The advantage of the quantum-information-theoretic perspective is that the resolution optimization problem can be tackled in a unified formalism. It often provides an operational resolution measure and an asymptotically attainable lower bound that reveals the quantum limit of that measure. The quantum treatment of imaging may give new insights into the ultimate room for improving resolving power. More importantly, new measurement strategies like SPADE are superior to direct imaging in the sub-Rayleigh regime. This branch of quantum imaging is developing rapidly, and many proof-of-principle experiments have been reported. It is hoped that, with these information-theoretic perspectives, quantum-optimal measurements can be implemented for realistic imaging problems in the near future.

    For GI, studies so far have mainly focused on evaluating imaging performance with information-based measures and on optimizing imaging quality. For evaluation, the mutual information and the CRB are used as reference-free indicators for assessment in advance of imaging. It has also been demonstrated that increasing the detected information contributes to improving the imaging quality. These studies have shown the power of information theory for assessing and optimizing GI systems, while also suggesting several directions for further theoretical research. First, the fundamental theoretical model of GI for information-theoretic study could be further investigated. For example, studies involving optical coherence would be desirable, since optical coherence plays an essential role in GI (as well as in several other quantum imaging modalities), while probability modeling in existing GI studies rarely involves it. Also, though existing studies have shown that the information measures behave similarly to the mean square error and the imaging SNR in quality assessment, the analysis of performance evaluation should be extended by further connecting the information measures with commonly recognized evaluation indicators of an imaging system. In addition, the information-theoretic approaches could be applied to adaptive and task-oriented imaging system design. There have been several studies on GI techniques for dynamic imaging or specific non-imaging tasks that exploit the flexible information encoding and mapping of the GI modality; the information-measure and CRB analysis frameworks described above are expected to be incorporated into them to put more system design studies on a solid theoretical basis. Finally, current studies on GI focus mostly on classical sources. Nonetheless, quantum GI techniques exploiting nonclassical sources, as well as the quantum description of the interaction between light and objects, are fascinating and can be expected to have better prospects when combined with quantum statistical inference methods.

    [1] M. I. Kolobov. Quantum Imaging(2007).

    [3] A. Gatti, E. Brambilla, L. Lugiato. Quantum imaging. Prog. Opt., 51, 251(2008).

    [5] Y. Shih. The physics of ghost imaging. Classical, Semi-classical and Quantum Noise, 169(2012).

    [14] S. Chen, A. Moitra. Algorithmic foundations for the diffraction limit. Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, 490(2021).

    [16] D. Gabor. Light and information. Progress in Optics, 1, 109(1961).

    [17] R. E. Blahut. Principles and Practice of Information Theory(1987).

    [31] H. H. Barrett, K. J. Myers. Foundations of Image Science(2003).

    [38] L. Mandel, E. Wolf. Optical Coherence and Quantum Optics(1995).

    [39] J. W. Goodman. Introduction to Fourier Optics(2004).

    [41] C. W. Helstrom. Quantum Detection and Estimation Theory(1976).

    [43] S. M. Kay. Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory(1993).

    [45] H. Cramér. Mathematical Methods of Statistics(1946).

    [46] C. R. Rao. Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc., 37, 81(1945).

    [54] L.-K. Du, C. Hu, S. Liu et al. Bayesian recursive information optical imaging: a ghost imaging scheme based on Bayesian filtering(2023).

    [56] S. Särkkä, L. Svensson. Bayesian Filtering and Smoothing(2023).

    Na Li, Chenyu Hu, Xiao-Ming Lu, "Information-theoretic perspective on performance assessment and fundamental limit of quantum imaging [Invited]," Chin. Opt. Lett. 22, 060009 (2024)

    Paper Information

    Special Issue: SPECIAL ISSUE ON QUANTUM IMAGING

    Received: Mar. 19, 2024

    Accepted: Apr. 25, 2024

    Published Online: Jun. 24, 2024

    The Author Email: Chenyu Hu (huchenyu@ucas.ac.cn), Xiao-Ming Lu (lxm@hdu.edu.cn)

    DOI:10.3788/COL202422.060009
