Financial products have become increasingly complex, and the limits of classical computing power now constrain the development of the financial industry. Here, we present a photonic chip that implements the unary approach to European option pricing, combined with the quantum amplitude estimation algorithm, to achieve a quadratic speedup over classical Monte Carlo methods. The circuit consists of three modules: one loading the distribution of asset prices, one computing the expected payoff, and a third performing quantum amplitude estimation to introduce the speedup. In the distribution module, a generative adversarial network is embedded for efficient learning and loading of asset distributions, which precisely captures market trends. This work is a step forward in the development of specialized photonic processors for applications in finance, with the potential to improve the efficiency and quality of financial services.

1. INTRODUCTION

The pricing of financial derivatives is a prominent problem that requires extensive computational resources, as the stochastic nature of the underlying assets demands precise modeling. One typical financial derivative is the option, a contract that allows the holder to buy or sell an asset at a pre-established price (strike) at or before a specified date (maturity date). The payoff of an option relies heavily on the stochastic evolution of the asset price. The traditional option pricing model, Black–Scholes–Merton (BSM) [1], usually oversimplifies market dynamics, which limits its practical application to real-life scenarios. As such, numerical methods such as the Monte Carlo method are typically employed to handle more realistic stochastic fluctuations. However, the Monte Carlo method requires extensive computational resources and converges slowly for complicated options. Reducing the computational resources required and speeding up option pricing could therefore have significant implications for the financial industry.

Recently, quantum algorithms have shown promise in facilitating computationally hard financial problems such as trading, portfolio optimization, and risk profiling [2,3]; in particular, quantum amplitude amplification can accelerate option pricing with quadratic speedups [4–10]. The unique advantages of quantum algorithms can compensate for the shortcomings of classical algorithms to a certain extent, enabling massive high-speed data services in the financial industry. However, current experimental demonstrations using binary approaches and standard quantum circuit models on superconducting devices [11] require dense chip connections and high gate fidelity, making practical applications difficult in the near future without a universal quantum computer [12,13]. In addition, superconducting devices require bulky, energy-intensive, and expensive peripherals such as cooling systems, which dims the prospects for industrial-scale applications.

For specialized application tasks such as option pricing, there is no need for a universal quantum computer. Photonic circuits can provide fundamental functions that can be combined to implement specific algorithms [14–20], which is practical and efficient for use-case-driven application scenarios. Moreover, the reduced energy costs of photonic computing have been a driving force behind works on dedicated photonic chips for machine learning and algebra [16,21–24]. Therefore, we demonstrate a unary (versus binary) approach to option pricing in a photonic chip. Compared to the binary approach, the unary approach [9] has a remarkably simplified structure and reduced quantum circuit depth, and is especially suitable for linear optical realizations in photonic chips. The unary scheme also allows a post-selection strategy for error mitigation. Additionally, we demonstrate generative adversarial learning to upload the probability distribution implicitly given by data samples into the photonic chip; generative adversarial learning has previously been demonstrated only in superconducting and optoelectronic devices [25–30]. Compared with traditional Monte Carlo methods, our approach shows high accuracy and a significant speedup. It provides a promising avenue for interdisciplinary research in quantum machine learning and financial problems, paving the way for practical photonic processors for quantitative financial applications that could greatly improve the efficiency and quality of financial services.

2. CHIP DESIGN FOR UNARY OPTION PRICING

In this work, we focus on European option pricing, where the expected payoff of an option is given by $$C({S}_{T},K)={\int}_{K}^{\infty}({S}_{T}-K)p({S}_{T})\mathrm{d}{S}_{T},$$where ${S}_{T}$ is the asset price at time $T$, $K$ is the strike price, and $p({S}_{T})$ is the probability density of the asset price; see the European option pricing model in Appendix A. Figure 1 shows the overall scheme of our photonic-chip-based unary approach. The photonic chip [Fig. 1(a)] consists of a generative adversarial network (GAN) and an option pricing part that includes payoff computation and amplitude estimation. In contrast to the classical Monte Carlo approach [Fig. 1(b)], which requires huge computing power to simulate future asset prices to obtain an accurate solution, our approach is expected to show a speedup in the convergence of the standard error of the estimated payoff [Fig. 1(c)], as demonstrated experimentally in the results section.

Figure 1.Schematic of the unary approach to option pricing, compared to the classical Monte Carlo method. (a) Integrated photonic chip with the unary algorithm, consisting of a generator of the generative adversarial network (GAN), payoff calculation, and quantum amplitude estimation for acceleration. (b) Monte Carlo simulation on a classical computer, which first generates the future asset price paths based on random variables, and then calculates the payoff. The accuracy relies on extensive simulations of random walk asset paths. (c) Expected acceleration of the convergence of payoff errors, compared to classical Monte Carlo simulations. Shaded areas in the top inset indicate statistical uncertainty.

The unary approach to option pricing encodes an asset price distribution into the unary basis of a quantum register, as shown in Fig. 2. A binning scheme is applied such that Monte Carlo paths that would belong to the same interval of asset prices end up in the same bin. Each bin is then mapped to an element of the unary basis, whose coefficient is the ratio of the number of Monte Carlo paths in that bin to the total number. The accuracy of unary encoding is bounded by the number of bins that can be stored in a quantum state, i.e., the usable dimension of the high-dimensional unary state. Based on the unary basis, Fig. 3(a) depicts the algorithmic model for unary option pricing, which consists of three modules: a distribution loading module $\mathcal{D}$ that loads the asset price distribution into a quantum state, a payoff calculation module $\mathcal{P}$ that computes the expected return, and a quantum amplitude estimation module $\mathcal{Q}$ to gain quadratic speedup over classical sampling to reach a target accuracy.
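The binning step described above can be sketched in a few lines of numpy: terminal Monte Carlo prices are partitioned into $n$ bins, and each bin's relative frequency becomes the squared amplitude of one unary basis element. The three-bin count and log-normal parameters below are illustrative assumptions, not the values used on the chip.

```python
import numpy as np

def unary_amplitudes(prices, n_bins):
    """Map sampled asset prices to the n unary-basis amplitudes sqrt(p_i)."""
    counts, _ = np.histogram(prices, bins=n_bins)
    probs = counts / counts.sum()       # p_i = paths in bin i / total paths
    return np.sqrt(probs)               # amplitude of unary element |i>

rng = np.random.default_rng(0)
prices = rng.lognormal(mean=0.0, sigma=0.1, size=10_000)  # simulated paths
amps = unary_amplitudes(prices, n_bins=3)
```

By construction the squared amplitudes sum to one, so the resulting vector is a valid quantum state in the unary representation.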

Figure 2.Mapping of asset prices to unary basis. (a) Classical Monte Carlo paths partitioned into different unary bases. (b) Probability density function (PDF) according to the defined unary basis. (c) Payoff value calculated according to the PDF and asset prices.

Figure 3.Photonic chip design for the unary option pricing algorithm. (a) Algorithmic model of unary option pricing. The input state consists of an $n$-dimensional qudit and a two-dimensional ancilla. The following modules are contained: $\mathcal{D}$, distribution loading; $\mathcal{P}$, payoff calculation; $\mathcal{Q}$, quantum operator for amplitude estimation. The amplification module $\mathcal{Q}$ is performed sequentially by ${\mathcal{S}}_{\psi}\to {\mathcal{P}}^{\dagger}\to {\mathcal{D}}^{\dagger}\to {\mathcal{S}}_{0}\to \mathcal{D}\to \mathcal{P}$. The expected payoff is obtained by measuring the ancilla. (b) Optical circuit model obtained by transforming the algorithmic model to linear optical operators. Each element of the unary basis is represented by two waveguides, extending the $n$-bin unary basis to a $2n$-dimensional Hilbert space. Relevant linear optical operators $\mathrm{swp}$, ${R}_{y}(\theta )$, and $\mathrm{XZX}$ are listed with their waveguide structures. (c) Photonic chip design and architecture. The chip is designed by transforming the optical path model into waveguide structures and realizes the distribution loading, payoff calculation, and amplitude estimation sequentially. The distribution loading is trained as a GAN embedded in the machine learning module.

Figure 3(b) depicts the optical circuit model, whereby each module of the unary algorithm is mapped to a linear optical operator. We represent the high-dimensional state by path encoding a single photon using $n$ optical waveguides. The superposition of a single photon traveling through different waveguides directly encodes the unary basis. This high-dimensional state can be written as $|\psi \rangle ={\sum}_{i=0}^{n-1}\sqrt{{p}_{i}}|i\rangle $, where ${p}_{i}$ represents the probability of observing a photon in the waveguide mode $|i\rangle $, and these probabilities satisfy ${\sum}_{i=0}^{n-1}{p}_{i}=1$. The payoff calculation requires an ancilla qubit to store the expected return for each asset price, expanding the Hilbert space of the algorithm to dimension $2n$. To avoid non-local controlled gates in the photonic chip implementation, we instead add an ancillary waveguide to each of the $n$ unary waveguide modes to represent the effect of the ancilla qubit. Each element of the unary basis is now represented by two waveguides. This way, the controlled operations of the original algorithm are converted to linear transformations on the optical circuit. The architecture of the photonic processor with the detailed chip design is shown in Fig. 3(c), which replaces each linear optical operator with the corresponding waveguide structure. The entire chip is reconfigurable via wire bonds and integrated thermo-optic phase shifters; see the experimental setup in Appendix B.

In the distribution loading module $\mathcal{D}$, a single photon is injected into the chip from a waveguide in the middle of the circuit, which encodes the ancilla in its $|0\rangle $ state; e.g., for a three-asset case, the initial input state can be written as the tensor product of the middle unary qudit and the ancilla qubit, $[0,1,0]\otimes [1,0]=[0,0,1,0,0,0]$. The distribution of asset prices is then uploaded to the different waveguides using a linear-depth circuit. This distribution loading circuit spreads the superposition to neighboring basis elements using swp operators: $$\mathrm{swp}=\left(\begin{array}{cccc}I& & & \\ & \sqrt{p}& \sqrt{1-p}& \\ & \sqrt{1-p}& -\sqrt{p}& \\ & & & I\end{array}\right)\otimes I,$$where $p$ depends on the target distribution. The procedure is repeated until the edge of the circuit is reached. The distribution loading module can be reconfigured to obtain any target probability distribution in the unary representation. Precisely, given $n$ bins, the depth of the circuit is always $\lfloor (n+1)/2\rfloor $, and the loading of any known probability distribution onto the unary basis depends on $(n-1)$ splitting parameters $p$. The generator of a GAN is embedded in this module and employed to capture the probability distribution underlying given market data; the details are presented in the next section.
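The loading procedure can be checked numerically. The chip spreads amplitude outward from the middle waveguide; for clarity, this sketch uses an equivalent left-to-right cascade of two-mode splitters with the same $n-1$ splitting parameters. The three-bin target distribution is an illustrative assumption.

```python
import numpy as np

def load_distribution(target_probs):
    """State vector produced by a cascade of two-mode swp-style splitters."""
    n = len(target_probs)
    state = np.zeros(n)
    state[0] = 1.0                        # single photon enters the first mode
    remaining = 1.0
    for i in range(n - 1):
        p = target_probs[i] / remaining   # splitting parameter for this step
        # 2x2 block acting on modes (i, i+1), as in the swp operator above
        bs = np.array([[np.sqrt(p),     np.sqrt(1 - p)],
                       [np.sqrt(1 - p), -np.sqrt(p)]])
        state[i:i+2] = bs @ state[i:i+2]
        remaining -= target_probs[i]
    return state

target = np.array([0.2, 0.5, 0.3])
state = load_distribution(target)
```

After the cascade, the squared amplitudes reproduce the target probabilities, confirming that $n-1$ splitting parameters suffice for an $n$-bin distribution.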

The payoff calculation module $\mathcal{P}$ encodes the expected payoff as the probability of measuring the photon in the waveguides encoding the ancilla in state $|1\u27e9$, using rotation operations between the two waveguides of each element of the unary basis. The rotations encode the expected return for each asset price in the distribution. This action, labeled $\mathcal{P}$, can be written as $$P=\left(\begin{array}{cccc}{M}_{0}& & & \\ & {M}_{1}& & \\ & & \ddots & \\ & & & {M}_{n-1}\end{array}\right),\phantom{\rule[-0.0ex]{1em}{0.0ex}}{M}_{i}=\left(\begin{array}{cc}\mathrm{cos}{\theta}_{i}& -\mathrm{sin}{\theta}_{i}\\ \mathrm{sin}{\theta}_{i}& \mathrm{cos}{\theta}_{i}\end{array}\right)$$for a $2n$-waveguide, $n$-bin example.
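The action of $\mathcal{P}$ can be verified with a small dense-matrix sketch for the three-bin example: a block-diagonal matrix of $2\times 2$ rotations $M_i$ acts on each waveguide pair, and the probability of finding the photon in an ancilla-$|1\rangle$ mode equals $\sum_i p_i \sin^2\theta_i$. The probabilities and angles here are illustrative assumptions.

```python
import numpy as np

probs = np.array([0.2, 0.5, 0.3])     # loaded distribution (illustrative)
thetas = np.array([0.0, 0.4, 0.9])    # payoff rotation angles (illustrative)

# Block-diagonal payoff operator P: one rotation M_i per unary element
P = np.zeros((6, 6))
for i, t in enumerate(thetas):
    P[2*i:2*i+2, 2*i:2*i+2] = [[np.cos(t), -np.sin(t)],
                               [np.sin(t),  np.cos(t)]]

state = np.zeros(6)
state[0::2] = np.sqrt(probs)          # ancilla initialized to |0> in each pair
out = P @ state
payoff_prob = np.sum(out[1::2]**2)    # photon found in an ancilla-|1> mode
```

The measured probability matches the weighted sum $\sum_i p_i \sin^2\theta_i$, which is the quantity the amplitude estimation module subsequently amplifies.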

A quantum amplitude estimation module $\mathcal{Q}$ is applied to achieve quantum speedups. Various amplitude estimation techniques have been presented that are friendly to NISQ devices [31–33]. Here, we implement an amplitude estimation algorithm without quantum phase estimation in the photonic circuit, following the technique used in Ref. [9]. Increasing steps of amplitude amplification are applied to estimate the relevant amplitudes with up to a square-root advantage over sampling from the original distribution. This amplification module $\mathcal{Q}$ is performed by applying the following operators. First, ${\mathcal{S}}_{\psi}$ identifies the amplitudes that encode the expected payoff and reverses their signs. Explicitly, for the three-asset example at hand, this operation is ${\mathcal{S}}_{\psi}=\mathrm{diag}(1,-1,1,-1,1,-1)$, realized experimentally by applying a phase shift of $\pi $ on the second waveguide of each element of the unary basis. Then, the original operations are reversed; that is, the inverse of the payoff calculator ${\mathcal{P}}^{\dagger}$ and of the distribution loading ${\mathcal{D}}^{\dagger}$ are applied. An operator ${\mathcal{S}}_{0}$ follows, which reverses the sign of the initial state of the computation; experimentally, it is applied by introducing a phase shift of $\pi $ to the waveguide where the photon was introduced. The last step is to repeat the distribution loading $\mathcal{D}$ and the payoff calculator $\mathcal{P}$ modules. The amplitude amplification operator $\mathcal{Q}=\mathcal{P}\cdot \mathcal{D}\cdot {\mathcal{S}}_{0}\cdot {\mathcal{D}}^{\dagger}\cdot {\mathcal{P}}^{\dagger}\cdot {\mathcal{S}}_{\psi}$ is repeated a different number of times, and the results are processed to estimate the expected payoff. This technique provides up to a quadratic speedup over ordinary sampling in the number of calls to the $\mathcal{D}$ and $\mathcal{P}$ operators needed to reach the same confidence level; see theoretical derivations in Appendix C.
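The full amplification sequence can be checked numerically for the three-bin example. As an assumption for the sketch, $\mathcal{D}$ is built by QR completion as any unitary whose first column loads $\sqrt{p_i}$ (standing in for the swp cascade), and the probabilities and angles are illustrative; the check confirms that $m$ rounds of $\mathcal{Q}$ rotate the measured payoff amplitude to $\sin^2((2m+1)\alpha)$.

```python
import numpy as np

probs = np.array([0.2, 0.5, 0.3])       # loaded distribution (illustrative)
thetas = np.array([0.0, 0.4, 0.9])      # payoff rotation angles (illustrative)

# D: any unitary with first column sqrt(p_i), extended by the 2-mode ancilla
q, _ = np.linalg.qr(np.column_stack([np.sqrt(probs), np.eye(3)[:, :2]]))
if q[0, 0] < 0:                          # fix QR sign convention
    q = -q
D = np.kron(q, np.eye(2))

P = np.zeros((6, 6))                     # block-diagonal payoff rotations
for i, t in enumerate(thetas):
    P[2*i:2*i+2, 2*i:2*i+2] = [[np.cos(t), -np.sin(t)],
                               [np.sin(t),  np.cos(t)]]

S_psi = np.diag([1., -1., 1., -1., 1., -1.])  # pi shift on ancilla-|1> modes
init = np.zeros(6)
init[0] = 1.0                                 # photon enters one waveguide
S_0 = np.eye(6) - 2.0 * np.outer(init, init)  # pi shift on the input mode

A = P @ D                                     # loading followed by payoff
Q = P @ D @ S_0 @ D.T @ P.T @ S_psi           # amplification operator

a = float(np.sum(probs * np.sin(thetas)**2))  # target payoff probability
alpha = np.arcsin(np.sqrt(a))
state = A @ init
for m in range(1, 4):
    state = Q @ state                         # one round of amplification
    assert np.isclose(np.sum(state[1::2]**2), np.sin((2*m + 1)*alpha)**2)
```

Because the measured quantity grows as $\sin^2((2m+1)\alpha)$ rather than linearly in the number of samples, processing results at several values of $m$ yields the quadratic reduction in circuit calls described above.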

3. GAN FOR DISTRIBUTION UPLOADING

A GAN is implemented in the distribution loading module with on-chip training for real-time noise perception. The goal of the GAN is to obtain an intelligent generator at the chip parameter level that captures the probability distribution behind the given market data without simulating enormous random paths, accumulating data statistics, and then fitting them into the chip architecture. With the GAN, we can efficiently load the classical data, i.e., the probability distribution underlying market data, into quantum states and obtain more precise payoff calculations with the presented unary option pricing methods.

GANs train a generator (G) to synthesize semantically meaningful data from standard signal distributions, as well as a discriminator (D) to distinguish real samples in the training dataset from fake ones produced by the generator [34], as depicted in Fig. 4(a). As its adversary, the generator aims at deceiving the discriminator by producing more realistic samples. Training a GAN involves the search for a Nash equilibrium of a two-player game between a generative and a discriminative network, which can be formulated as $$\underset{G}{\mathrm{min}}\,\underset{D}{\mathrm{max}}\,{\mathbb{E}}_{x\sim {p}_{\mathrm{real}}}(\mathrm{log}({D}_{\varphi}(x)))+{\mathbb{E}}_{z\sim {p}_{z}}(\mathrm{log}(1-{D}_{\varphi}({G}_{\theta}(z)))),$$where the generative network ${G}_{\theta}$ takes noisy samples $z$ from a normal or uniform distribution ${p}_{z}$ as input, and $x$ comes from the real distribution ${p}_{\mathrm{real}}$. The discriminative network ${D}_{\varphi}$ tries to distinguish the generated (fake) sample ${G}_{\theta}(z)$ from the real sample $x$ by projecting its output to $\{0,1\}$. $\theta $ and $\varphi $ are the free parameters that construct the generator and discriminator. The training procedure is complete when the generator wins the adversarial game, that is, when the discriminator cannot make a better decision than random guessing on the validity of a sample.
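The minimax objective can be evaluated numerically. The sketch below uses a toy sigmoid discriminator and an affine generator, both hypothetical stand-ins for the networks in this work, just to show how the two expectation terms of the game value are computed from samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def discriminator(x, phi):              # D_phi: probability a sample is real
    return 1.0 / (1.0 + np.exp(-(phi[0] * x + phi[1])))

def generator(z, theta):                # G_theta: toy affine map of the noise
    return theta[0] * z + theta[1]

phi, theta = np.array([1.0, 0.0]), np.array([0.5, 2.0])
x = rng.normal(loc=2.0, scale=0.5, size=1000)   # samples from p_real
z = rng.normal(size=1000)                        # noise samples from p_z

# Monte Carlo estimate of the game value: D maximizes it, G minimizes it
value = (np.mean(np.log(discriminator(x, phi)))
         + np.mean(np.log(1.0 - discriminator(generator(z, theta), phi))))
```

Both logarithms are of probabilities in $(0,1)$, so the value is always negative; training drives $D$ toward the maximum and $G$ toward the minimum of this quantity.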

Figure 4.GAN on the photonic chip for precise asset distribution uploading. (a) Algorithm of GAN, composed of a generator and a discriminator. (b) Generator implemented by a variational photonic circuit, which is trained on-chip in real time. The probability distributions accumulated on the waveguide paths are used as fake samples. Real samples are the training targets taken from market data in real applications. (c) Classical discriminator consisting of sequential convolutional layers and trained by a gradient descent algorithm. The discriminator aims to distinguish the source of the input sample, from the generator or a real distribution. The cost function is calculated from the discriminator output and used to train the discriminator itself and the generator. (d) The generator is trained by an evolutionary optimization procedure where populations (e.g., different configurations of the generator ansatz) are generated, evaluated, and iterated. The evaluation is accomplished using the scores granted by the discriminator. New generations are produced via the operators of selection, crossover, and mutation of current populations.

We develop a hybrid GAN implementation that consists of a generator network in the photonic chip, a classical discriminator network, and a control system that communicates between the classical computer and the photonic chip, all depicted in Figs. 4(b)–4(d). The generator is parameterized by the angles of the phase shifters, which are reconfigurable through the thermo-optic effect induced by applying small electrical powers to the integrated heaters. Instead of a noise distribution as input, we utilize the uncertainty of photons appearing at different waveguide modes to achieve equivalent randomness for the generator. The fake samples are the probability distributions of the photons at the different waveguide modes. The real samples are drawn from the desired probability distribution, a log-normal or normal distribution for the examples presented in Fig. 5. Fake and real samples sequentially enter the classical discriminator to obtain the classification results. The discriminator is a classical neural network implemented with TensorFlow. We next discuss the training of the GAN with data samples drawn from the log-normal distribution and the normal distribution.

Figure 5.Experimental training performance of the GAN under Wasserstein distance. (a), (c) Comparison between the probability distributions obtained experimentally from the generator (solid line with data points) and the target distribution (histogram). (b), (d) Evolution of the ${\ell}_{2}$ norm between the fake and real samples with increasing training iterations. (a), (b) Log-normal distribution; (c), (d) normal distribution.

The training process of the GAN in a photonic chip introduces two challenges: the difficulty of obtaining gradients due to the stochastic nature of measurements, and the tendency of the discriminator to easily overpower the generator. To circumvent these problems, we propose a hybrid training strategy, where the generator is optimized under a gradient-free evolutionary algorithm, while the classical discriminator uses a gradient descent optimizer. Additionally, the Wasserstein distance [35,36] is used to train the GAN, which changes the dynamic between the generator and the discriminator. In this scheme, the discriminator acts as a critic rather than a classifier: it aims to give higher scores to real instances than to fake ones, effectively alleviating the problem of unstable GAN training.
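The gradient-free generator update can be sketched in simulation. The phase-shifter settings are the genome; a toy critic score (a stand-in for the Wasserstein discriminator) supplies the fitness; and each generation applies the selection, crossover, and mutation operators of Fig. 4(d). The chip response, target distribution, population size, and mutation scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # stand-in "real" distribution

def generator_output(angles):
    """Toy stand-in for the chip: phase settings -> mode probabilities."""
    amps = np.sin(angles)
    return amps**2 / np.sum(amps**2)

def critic_score(fake):
    """Stand-in critic: a higher score means the sample looks more real."""
    return -np.linalg.norm(fake - target)

def evolve(population, elite=4, sigma=0.05):
    """One generation: keep elites, then recombine and mutate them."""
    scores = [critic_score(generator_output(g)) for g in population]
    elites = [population[i] for i in np.argsort(scores)[-elite:]]
    children = list(elites)                     # elitism: keep the best as-is
    while len(children) < len(population):
        a, b = rng.choice(elite, size=2, replace=False)
        mask = rng.random(5) < 0.5              # crossover of two elite genomes
        child = np.where(mask, elites[a], elites[b])
        children.append(child + sigma * rng.normal(size=5))   # mutation
    return children

population = [rng.uniform(0.1, np.pi - 0.1, size=5) for _ in range(20)]
for _ in range(200):
    population = evolve(population)
best = max(population, key=lambda g: critic_score(generator_output(g)))
```

No gradients of the chip response are ever required; only measured output distributions and their critic scores drive the update, which is what makes the scheme compatible with on-chip training.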

4. RESULTS AND DISCUSSION

A. GAN Results

Our chip can accommodate the entire option pricing process of distribution loading, payoff calculation, and amplitude estimation for three option assets. However, a three-bin distribution is too simple to demonstrate the ability to implement GANs on a photonic chip, so we employ a chip that supports up to eight bins to demonstrate the generation of probability distributions. Figure 5(a) shows the probability distribution of the generator output compared to the real log-normal distribution. Figure 5(b) shows the convergence of the ${\ell}_{2}$ norm between the fake and real samples over 100 training iterations. For generator output $g$ and real distribution $x$, the ${\ell}_{2}$ norm is defined as $${\ell}_{2}=\sqrt{\sum _{i=1}^{m}{({x}_{i}-{g}_{i})}^{2}}.$$The results for a target normal distribution are shown in Figs. 5(c) and 5(d). For both the log-normal and normal distributions, the final ${\ell}_{2}$ norm between the generator output and the real distribution stabilizes at $-18\ \mathrm{dB}$.
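The convergence metric above is straightforward to compute; the sketch below assumes the $10\log_{10}$ convention for the decibel figure quoted in the text, and the sample vectors are illustrative.

```python
import numpy as np

def l2_norm_db(real, fake):
    """l2 norm between two distributions, expressed in decibels (10*log10)."""
    l2 = np.sqrt(np.sum((np.asarray(real) - np.asarray(fake)) ** 2))
    return 10.0 * np.log10(l2)   # under this convention, -18 dB ~ l2 of 0.016

real = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
fake = real + 0.007              # small residual error in every bin
```

For the residual shown, the metric lands below $-15\ \mathrm{dB}$, i.e., the generated and target distributions differ by less than a few percent in aggregate.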

By training this generative model directly on the photonic chip, we bypass the need to solve the BSM equations while capturing the nuances that the simplified method overlooks. Concurrently, it incorporates environmental elements that are hard to model, such as cross talk and chip imperfections into the GAN training. Another feature of using GANs for the amplitude distribution step is that we can tailor the variational ansatz to construct short-depth circuits for a given degree of accuracy, even in a more general case with multiple photons.

B. Unary Option Pricing Results

As a proof of principle, the fabricated photonic chip supports an option pricing problem with three asset values, whose schematic diagram is shown in Fig. 6(a). The chip has six waveguide inputs: each pair represents one element of the unary basis together with the ancilla qubit state. The chip is divided into distribution loading, payoff calculation, and then $m$ runs of amplitude amplification. To stay within the depth constraints of this proof of concept, the unitary matrices of the circuit modules are multiplied together and the product is uploaded to the photonic chip at constant depth. The single-photon measurement is performed at the waveguide modes that represent ancilla state $|1\rangle $ for asset prices larger than the strike value. The comparison between the theoretical payoff and the experimental estimate is shown in Fig. 6(b), with increasing iterations of amplitude estimation. The performance of the amplitude estimation is shown in Figs. 6(c) and 6(d). In Fig. 6(c), the dotted line represents the theoretical payoff expectation, the solid line with data points represents the experimental results, and the shaded area represents the standard deviation (STD) of 50 measurements performed in each step of amplitude estimation. The progression of $m$ from 0 to 50 ($m=0$ being a classical sampling of the payoff calculation) demonstrates the convergence of the STD. Similarly, in Fig. 6(d), we visualize the convergence of the payoff error with more amplitude estimation runs. The amplitude estimation improves the accuracy of the expected payoff for a given number of circuit runs.

Figure 6.Experimental results of option pricing with three asset values. (a) Illustration of the optical chip with payoff calculation and amplitude estimation modules. Operator $\mathcal{Q}$ is repeated up to $m$ ($m\le 50$) times. The payoff is measured on waveguides that encode the ancilla in state $|1\rangle $ when the asset price is larger than the pre-defined strike value. (b) Comparison between theoretical expectations and experimental results of the payoff, represented in angles. The raw angles $(2m+1)\theta $ are shifted back to the original angles $\theta $, and the differences from theoretical expectations are recorded as errors. (c) Standard deviation (STD) of the expected payoff with increasing iterations of the amplitude estimation module. The STD converges from the initial $\sim 0.2$ to less than 0.004. Iterations from 20 to 50 are zoomed in. (d) Error in payoff estimation between theoretical and experimental results, with increasing iterations of amplitude estimation, showing a speedup in convergence compared to the Monte Carlo method.

The structure of the unary algorithm allows a simple but efficient design of photonic chips, especially when loading probability distributions into quantum registers, since only local interactions between neighboring waveguides are required. This is not possible in the binary alternative, where high connectivity is required to exploit the exponentially large Hilbert space; see the unary and binary comparison in Appendix D. The optical circuit that implements the unary approach requires a number of waveguides that scales linearly with the required precision, which matches the remarkable scalability of photonic chips. Moreover, avoiding controlled operations by using ancilla waveguides bypasses one of the main bottlenecks of photonic chips in quantum computing: the obstacle of realizing photon–photon interactions.

Speedup is achieved in our work much as in proposals for quantum search without entanglement [37,38], whereby a polynomial speedup of unstructured search is achieved with a single photon at the cost of exponential resources. In particular, isomorphisms exist between a system of $n$ qubits and a qudit residing in a ${2}^{n}$-dimensional Hilbert space (the systems considered in that thought experiment and in our implementation) [39]; thus the unary implementation on a photonic chip displays entanglement in the path encoding of the single photon. Coherent light can achieve a similar effect with a high sampling rate, which is advantageous in near-term use cases, with some trade-offs for the random behavior of single photons in the generator part of the GAN.

The presented avenue to achieve speedup in option pricing is scalable in the photonic chip. It transforms the unary algorithm’s need for increasing qubits into a need for waveguide paths, which are highly scalable in photonic chips. For further scalability, in the presented experiment, the photon detectors placed in ancilla waveguides could be combined into a single one, as only the counts of photons in any ancilla qubit are needed, hence significantly reducing the resources needed to scale this approach to meaningful problems. The energy efficiency of photonic chips also promises a relevant advantage beyond a complexity separation between quantum and classical algorithms. Given an energy budget instead of a shot budget, the photonic implementation of the unary approach to option pricing can yield a significant advantage in the number of operations performed.

5. CONCLUSION

This work is the first demonstration of photonic chips for financial applications. As a proof of concept, we implement the unary option pricing algorithm in a photonic chip for European options, including the generation of the amplitude distribution of the asset value, the evaluation of the expected return, and amplitude estimation. We demonstrate high accuracy in calculating the payoff function, as well as the effectiveness of amplitude estimation in reducing the number of evaluations needed to reach the same degree of accuracy as classical sampling. The unary representation remarkably simplifies the structure and reduces the depth of quantum circuits in the linear optical circuit implementation. Such photonic devices could eventually be an eco-friendly alternative to electronic circuits. Furthermore, we demonstrate on-chip training of a GAN that successfully captures important market dynamics in real-life scenarios, bypassing both the simplified assumptions in the BSM model that limit its accuracy and the computational burden of solving differential equations. The photonic chip could potentially be employed for pricing other options, paving the way for dedicated processors in finance applications.

APPENDIX A: EUROPEAN OPTION PRICING MODEL

The Black–Scholes model is a standard economic model for the evolution of asset prices in financial markets and underlies the European option pricing problem. In this model, the evolution of the asset price ${S}_{T}$ at time $T$ is determined by two market properties, the interest rate $r$ and the volatility $\sigma $, through the stochastic differential equation $$\mathrm{d}{S}_{T}={S}_{T}r\mathrm{d}T+{S}_{T}\sigma \mathrm{d}{W}_{T},$$where ${W}_{T}$ is a Brownian process: a continuous stochastic evolution starting at ${W}_{0}=0$ with independent Gaussian increments. Specifically, let $\mathcal{N}(\mu ,{\sigma}_{s})$ be a normal distribution with mean $\mu $ and STD ${\sigma}_{s}$; then the increment between two steps of the Brownian process is ${W}_{T}-{W}_{S}\sim \mathcal{N}(0,T-S)$, for $T>S$. The stochastic differential equation can be solved to first order, with solution $${S}_{T}={S}_{0}{e}^{\left(r-\frac{{\sigma}^{2}}{2}\right)T}{e}^{\sigma {W}_{T}}\sim {e}^{\mathcal{N}\left(\left(r-\frac{{\sigma}^{2}}{2}\right)T,\sigma \sqrt{T}\right)},$$which is a log-normal distribution. This solution procedure is valid for the simplified European option model; for more practical cases, an analytical solution does not exist, and even numerical simulation is costly. To get the expected return, a payoff calculation block is integrated over the resulting probability distribution. The payoff function is $$f({S}_{T},K)=\mathrm{max}(0,{S}_{T}-K),$$producing the expected payoff $$C({S}_{T},K)={\int}_{K}^{\infty}({S}_{T}-K)p({S}_{T})\mathrm{d}{S}_{T},$$where $K$ is the strike and $p({S}_{T})$ is the log-normal probability density of the asset price.
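The log-normal solution above can be checked by direct Monte Carlo: sample $W_T\sim\mathcal{N}(0,T)$, form $S_T$, and average the payoff $f(S_T,K)$. The parameter values are illustrative assumptions, and the estimate is the undiscounted expected payoff.

```python
import numpy as np

S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0   # illustrative parameters
rng = np.random.default_rng(3)

W_T = np.sqrt(T) * rng.normal(size=200_000)          # Brownian increment W_T
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)  # log-normal prices
payoff = np.maximum(0.0, S_T - K)                    # f(S_T, K)
estimate = payoff.mean()                             # Monte Carlo payoff
stderr = payoff.std(ddof=1) / np.sqrt(len(payoff))   # ~ 1/sqrt(N) convergence
```

The $1/\sqrt{N}$ scaling of the standard error is exactly the classical convergence rate that the amplitude estimation module improves upon quadratically.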

APPENDIX B: EXPERIMENTAL SETUP AND SINGLE-PHOTON GENERATION

The entire packaged chip is shown in Fig. 7. Each phase shifter is independently controlled by an electronic current driver with 1-kHz frequency and 12-bit resolution. Output photons are filtered via wavelength division multiplexing (WDM) to remove the residual pump photons, and then detected by superconducting nanowire single-photon detectors (SNSPDs) (PhotonSpot, 100 Hz dark counts, 85% efficiency). Polarization controllers are placed before the SNSPDs, as the detectors are polarization sensitive. A time tagger (Swabian Instruments) is used to count the single-photon events and supports more than 40 million events per second. A temperature controller stabilizes the chip temperature and reduces thermal fluctuations caused by possible cross talk.

A degenerate photon pair is used in our experiment. The pump laser is generated by an Ultrafast Optical Clocks device (PriTel) with a repetition rate of 500 MHz, central wavelength of 1550.116 nm, and bandwidth of 1.9 nm. A dual-pump scheme is employed to generate pairs of identical photons on chip via a degenerate spontaneous four-wave mixing (SFWM) process. On the chip, the desired state $|\psi \rangle =|11\rangle $ is generated from the two-photon N00N state $|\psi \rangle =\frac{1}{\sqrt{2}}(|20\rangle +|02\rangle )$ by configuring the phase value $\theta =\pi /2$ when interfering the two photons.

APPENDIX C: THEORY OF UNARY OPTION PRICING

By solving the aforementioned BSM model, the probability density function of the asset price is described by a log-normal distribution. We map this continuous price distribution onto $n$ discrete values, which become the amplitudes of $n$ orthogonal basis states, by using a probability loading operator $D$ acting on an initial state $|{\psi}_{\mathrm{ini}}\rangle $ as $$D|{\psi}_{\mathrm{ini}}\rangle =\sum _{i=0}^{n-1}\sqrt{{p}_{i}}{|{\psi}_{i}\rangle}_{n},$$where each state $|{\psi}_{i}\rangle $ represents a discrete asset price value ${S}_{i}$, and ${p}_{i}$ is the corresponding probability. These basis states are orthogonal, so $\langle {\psi}_{i}|{\psi}_{j}\rangle ={\delta}_{ij}$. The payoff is obtained by accumulating the asset values weighted by their corresponding probabilities. The payoff of the European option in this discrete scenario simplifies to $$C({S}_{T},K)=\sum _{i=0}^{n-1}{p}_{i}\cdot f({S}_{i},K)=\sum _{i:{S}_{i}>K}{p}_{i}\cdot ({S}_{i}-K),$$where $K$ is the strike price. The rotation angles, normalized by the maximum asset price ${S}_{\mathrm{max}}$, are given by $${\theta}_{i}=\mathrm{arcsin}\left(\sqrt{\mathrm{max}\left(0,\frac{{S}_{i}-K}{{S}_{\mathrm{max}}-K}\right)}\right).$$This payoff calculation is mapped to the quantum model by introducing an ancilla qubit into the original quantum state, followed by a controlled rotation gate $\mathrm{CR}$ defined as $$\mathrm{CR}=\sum _{i=0}^{n-1}|{\psi}_{i}\rangle \langle {\psi}_{i}|\otimes {R}_{y}(2{\theta}_{i}).$$
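The discretized payoff and angle definitions can be verified directly: computing $\theta_i$ from the asset prices and checking that $\sum_i p_i \sin^2\theta_i$ equals the normalized payoff $C/(S_{\max}-K)$. The prices, probabilities, and strike below are illustrative assumptions.

```python
import numpy as np

S = np.array([90.0, 100.0, 110.0])   # discrete asset prices S_i (illustrative)
p = np.array([0.3, 0.4, 0.3])        # loaded probabilities p_i (illustrative)
K, S_max = 95.0, S.max()

# theta_i with the in-the-money ratio clipped to [0, 1] before arcsin
ratio = np.clip((S - K) / (S_max - K), 0.0, 1.0)
thetas = np.arcsin(np.sqrt(ratio))

# discrete payoff: sum over bins with S_i > K
payoff = np.sum(p[S > K] * (S[S > K] - K))
normalized = np.sum(p * np.sin(thetas)**2)           # what the chip measures
```

The measured quantity is the payoff rescaled by $S_{\max}-K$, so the monetary payoff is recovered by one classical multiplication after readout.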

Then, the expected payoff of the option is encoded in the amplitude of the ancilla qubit in the form
$$|\psi\rangle=\mathrm{CR}\cdot\sum_{i=0}^{n-1}\sqrt{p_i}\,|\psi_i\rangle\otimes|0\rangle=\sum_{i=0}^{n-1}\sqrt{p_i}\cos\theta_i\,|\psi_i\rangle|0\rangle+\sqrt{p_i}\sin\theta_i\,|\psi_i\rangle|1\rangle.\tag{C5}$$
By measuring the ancilla qubit in the basis state $|1\rangle$, we obtain
$$|\langle 1|\psi\rangle|^2=\sum_{i=0}^{n-1}p_i\cdot\sin^2\theta_i=\frac{C(S_T,K)}{S_{\mathrm{max}}-K}.\tag{C6}$$
Thus, the payoff of the option can be directly read out from the measurement statistics of the ancilla qubit in $|1\rangle$. Next, we explain how the amplitude estimation works. The payoff calculation [Eq. (C5)] can be written compactly as
$$\mathrm{CR}\cdot D\cdot|\psi_{\mathrm{ini}}\rangle|0\rangle=\cos\alpha\,|\psi_a\rangle|0\rangle+\sin\alpha\,|\psi_b\rangle|1\rangle,\tag{C7}$$
where $\alpha$ is the normalized parameter, and $|\psi_a\rangle$ and $|\psi_b\rangle$ are the normalized states
$$|\psi_a\rangle=\frac{1}{\cos\alpha}\sum_{i=0}^{n-1}\sqrt{p_i}\cos\theta_i\,|\psi_i\rangle,\qquad|\psi_b\rangle=\frac{1}{\sin\alpha}\sum_{i=0}^{n-1}\sqrt{p_i}\sin\theta_i\,|\psi_i\rangle.\tag{C8}$$
The ancilla qubit functions as an indicator that identifies the useful state. The amplitude amplification step begins by applying an oracle operator $S_\psi$ of the form
$$S_\psi=I-2\sum_{i=0}^{n-1}|\psi_i\rangle\langle\psi_i|\otimes|0\rangle\langle 0|\tag{C9}$$
to produce a sign change conditioned on the ancilla qubit being in $|0\rangle$, marking the states for amplitude estimation.
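The ancilla readout above can be checked numerically: the sketch below uses a hypothetical four-level discretized distribution (all probabilities, prices, and the strike are illustrative) and verifies that the probability of measuring the ancilla in $|1\rangle$ equals the normalized expected payoff.

```python
import numpy as np

# Hypothetical discretized inputs (n = 4 levels): probabilities p_i,
# prices S_i, and strike K. Values are illustrative only.
p = np.array([0.1, 0.3, 0.4, 0.2])
S = np.array([1.0, 1.5, 2.0, 2.5])
K, S_max = 1.4, S.max()

# Rotation angles, clipped to zero below the strike.
theta = np.arcsin(np.sqrt(np.clip(S - K, 0.0, None) / (S_max - K)))

# State after CR . D |psi_ini>|0>: amplitude sqrt(p_i) cos(theta_i) on
# |psi_i>|0> and sqrt(p_i) sin(theta_i) on |psi_i>|1>.
amp0 = np.sqrt(p) * np.cos(theta)
amp1 = np.sqrt(p) * np.sin(theta)

# Probability of finding the ancilla in |1> ...
prob_1 = np.sum(amp1**2)

# ... equals the expected payoff normalized by (S_max - K).
payoff = np.sum(p * np.clip(S - K, 0.0, None))
assert np.isclose(prob_1, payoff / (S_max - K))
```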
Then, we apply the inverse of the payoff calculation $\mathrm{CR}$ and of the distribution loading operator $D$, followed by another sign-flip operation $S_0$ on the initial state,
$$S_0=I-2|\psi_{\mathrm{ini}}\rangle\langle\psi_{\mathrm{ini}}|\otimes|0\rangle\langle 0|,\tag{C10}$$
and in the last step apply $D$ and $\mathrm{CR}$ again, so that the amplitude estimation operator $Q$ can be written as
$$Q=\mathrm{CR}\cdot D\cdot S_0\cdot D^{\dagger}\cdot\mathrm{CR}^{\dagger}\cdot S_\psi.\tag{C11}$$
By repeating the $Q$ operator $m$ times, the full amplitude estimation can be represented as
$$Q^m\cdot\mathrm{CR}\cdot D\cdot|\psi_{\mathrm{ini}}\rangle|0\rangle=\cos((2m+1)\alpha)\,|\psi_a\rangle|0\rangle+\sin((2m+1)\alpha)\,|\psi_b\rangle|1\rangle.\tag{C12}$$
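Restricted to the two-dimensional subspace spanned by $|\psi_a\rangle|0\rangle$ and $|\psi_b\rangle|1\rangle$, the operator $Q$ acts as a rotation by $2\alpha$. The short sketch below (with an arbitrary illustrative $\alpha$) verifies the $(2m+1)\alpha$ amplification numerically.

```python
import numpy as np

# Illustrative normalized parameter alpha (not a measured value).
alpha = 0.15

def rot(a):
    """2D rotation matrix by angle a."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

# CR . D |psi_ini>|0> written in the {|psi_a>|0>, |psi_b>|1>} basis:
# a rotation of the reference state |psi_a>|0> by alpha.
state = rot(alpha) @ np.array([1.0, 0.0])

# Each application of Q rotates the state by a further 2*alpha.
Q = rot(2 * alpha)
m = 3
for _ in range(m):
    state = Q @ state

# After m rounds, the |1> amplitude is sin((2m+1) * alpha).
assert np.isclose(state[1]**2, np.sin((2 * m + 1) * alpha)**2)
```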

Therefore, measuring the ancilla qubit in the basis state $|1\rangle$ after repeated amplitude amplification yields $\sin^2((2m+1)\alpha)$, from which we infer the payoff of the option with improved accuracy. The amplitude estimation scheme we use here is an iterative approach [9]. This procedure is based on the theory of confidence intervals for binomial distributions [40] and uses samples with an increasing number of amplitude amplification [4] steps to better estimate the value of the target amplitude. The quantum amplitude estimation algorithm achieves a quadratic speedup in estimation-error scaling compared to classical Monte Carlo option pricing:
$$\mathcal{O}\!\left(\frac{1}{\sqrt{m}}\right)\to\mathcal{O}\!\left(\frac{1}{m}\right),$$
where $m$ is the number of quantum samples used. The comparison between the quantum and classical scaling factors is depicted in Fig. 8, exhibiting a trend that aligns well with our experimental results shown in Fig. 6(d).
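The two error scalings can be compared directly. In the sketch below, the Monte Carlo error is the standard error of a Bernoulli estimate, while the $\pi/m$ bound for amplitude estimation is an illustrative Heisenberg-limited prefactor, not a constant fitted to our data.

```python
import numpy as np

def mc_error(p, m):
    """Classical MC standard error of a Bernoulli estimate: O(1/sqrt(m))."""
    return np.sqrt(p * (1 - p) / m)

def ae_error(m):
    """Amplitude-estimation error bound: O(1/m).
    The prefactor pi is illustrative, not taken from the paper."""
    return np.pi / m

# Increasing m by 100x shrinks the MC error ~10x but the AE error ~100x.
p = 0.3
print(mc_error(p, 100) / mc_error(p, 10_000))
print(ae_error(100) / ae_error(10_000))
```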

Figure 8. Simulation of the scaling of quantum AE and classical MC.

The utilization of a unary approach [9], instead of the commonly adopted binary approach [6,11], distinguishes this work from other quantum approaches. The key advantage of the unary method is that all the quantum operations required by the option pricing algorithm can be implemented with a linear optical circuit. In contrast, the binary approach relies on two-qubit controlled operations, which cannot be achieved deterministically on a photonic chip. The comparison between the unary and binary approaches is summarized in Table 1: the unary method offers a simple chip architecture, amplitude estimation without phase estimation, a linear gate count, accurate distribution loading, and robustness in the payoff computation.

Table 1. Unary and Binary Comparison

| Aspect | Unary Approach | Binary Approach |
| --- | --- | --- |
| Representation | Intuitive: single symbol repeated multiple times | Compact: base-2 system with 0 and 1 symbols |
| Chip architecture | Simple: first-nearest-neighbor connectivity | Complex: full connectivity |
| Amplitude estimation | Without phase estimation: feasible in linear optical circuits | Phase estimation required: not feasible in linear optical circuits |
| Gate count | Linear (advantageous for near-term devices with $<100$ qubits) | Logarithmic, requiring Toffoli gates |
| Distribution loading error due to single-qubit error | KL divergence of $10^{-3}$, one order of magnitude lower | KL divergence of $10^{-2}$ |
| Payoff deviation due to single-qubit error | $\sim 25\%$, 10% more robust | $\sim 35\%$ |

[1] F. Black, M. Scholes. The pricing of options and corporate liabilities. Foundations of CCA and Equity Valuation, 3-21(2019).

[34] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio. Generative adversarial networks. Advances in Neural Information Processing Systems, 27, 2672-2680(2014).

[35] M. Arjovsky, S. Chintala, L. Bottou. Wasserstein generative adversarial networks. International Conference on Machine Learning, 214-223(2017).