
The Development of Quantum Machine Learning

Published on Jan 27, 2022

1. Early Developments

Archaeological evidence indicates that humans have been counting for about 50,000 years (Eves, 1983). Since around 300 B.C., when early counting tools such as the abacus came into use, computing machines have come a long way. The major breakthrough, however, came in the mid-twentieth century with developments in the semiconductor industry and the invention of the transistor (Shockley et al., 1956). The transistor revolutionized the computing industry, became the building block of standard computers and other digital devices, and ultimately ushered humanity into the digital era. Yet the industry soon confronted a fundamental question: how far can the number of transistors in a dense integrated circuit (IC) grow? This question was first addressed by Gordon Moore (Moore, 1964).

While modern computers rely on the von Neumann architecture, with distinct memory and central processing units (CPUs) handling input and output (von Neumann, 1993), key features of the computation process itself are captured by the Turing machine (Turing, 1969). The Turing machine is an abstract device consisting of an infinitely long tape whose squares are initially blank. At each time step, the machine's head reads the symbol on its current square, may overwrite it, and then either halts or moves to an adjacent square, depending on the machine's current state. Despite its simplicity, the Turing machine is a universal computing machine: it can simulate any given algorithm, no matter how complicated. Moreover, the Turing machine also underpins the concept of computational complexity via the extended Church-Turing thesis, namely, that any reasonable computational model can be simulated by a Turing machine with only polynomial overhead. (As discussed below, this thesis is believed to be refuted by quantum computers.) A physically realized Turing machine, however, would be far too slow for real-world problems. These limits on the practical power of Turing machines, together with the difficulty of further increasing the number of transistors in an IC, have made the quest for novel and efficient paradigms of computation inevitable.
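The read-write-move cycle described above can be sketched in a few lines of Python (an illustrative addition, not part of the original formulation; the sparse-tape representation and the bit-inverting example machine are hypothetical choices):

```python
def run_turing_machine(tape, rules, state="start", head=0, max_steps=100):
    """Tiny Turing machine: at each step, read the symbol under the head,
    write a new symbol, move left or right, and change state; halt when
    no rule applies to the current (state, symbol) pair."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten squares are blank "_"
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in rules:  # no matching rule: halt
            break
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: invert every bit, then halt at the first blank square.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_turing_machine("1011", invert))  # 0100
```

Even this toy machine illustrates the universality argument: more elaborate rule tables can simulate arbitrary algorithms, at the cost of many slow tape steps.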

Figure 1. Schematic picture of different paradigms of computing. The vertical axis indicates computing technology, ranging from complementary metal-oxide-semiconductor, CMOS (IBM chip) (IBM, n.d.), to non-CMOS technology (the single-molecule transistor can be considered among the smallest possible [Thiele et al., 2014]). The horizontal axis indicates the computing architecture, from the standard von Neumann architecture with distinct CPU and memory units to non-von Neumann architectures such as brain-inspired or neuromorphic computing (Esser et al., 2016; Körber et al., 2020). Quantum computers can be considered a novel technology beyond CMOS with a non-von Neumann architecture (Ball, 2021).

2. Quantum Computers: A Radical Paradigm of Computing

One radically different model of computation that has attracted growing attention is quantum computing, which harnesses fundamental laws of quantum physics such as superposition and entanglement (Wilczek, 2016) to target problems whose many degrees of freedom and complexity lie beyond the scope of classical computing. At the heart of quantum computers are qubits, the units of information processing. A qubit is a quantum state that can exist in superposition: one complex amplitude is attributed to the state being 0 and another to its being 1. These amplitudes can interfere with each other constructively (adding up) or destructively (canceling each other), leading to different measurement outcomes. The power of quantum algorithms arises from the fact that one can choreograph a specific pattern of constructive and destructive interference such that amplitudes cancel for the wrong answers and amplify for the right answer. The key, of course, is to find such a choreography without knowing the answer in advance. In a nutshell, a quantum algorithm exploits entanglement and superposition to explore different pathways and constructs clever interference patterns among these pathways to boost the probability of the correct computation result, potentially leading to exponential computational power (Aaronson, 2021).
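A minimal state-vector simulation in numpy makes the interference picture concrete (an illustrative addition, not tied to any particular hardware): applying a Hadamard gate twice to a qubit makes the two paths toward the outcome 1 cancel destructively, so the qubit returns to 0 with certainty.

```python
import numpy as np

# Single-qubit state |0>, represented as a vector of complex amplitudes.
ket0 = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

superposition = H @ ket0            # amplitudes (1/sqrt(2), 1/sqrt(2))
after_second_H = H @ superposition  # the two paths to |1> interfere

# Measurement probabilities are the absolute squares of the amplitudes.
probs = np.abs(after_second_H) ** 2
print(probs)  # the |1> amplitude cancels destructively: [1. 0.]
```

The same bookkeeping, scaled up to many entangled qubits, is what a quantum algorithm choreographs, and what makes classical simulation exponentially expensive.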

Since 1982, when Feynman proposed the idea of using quantum systems to do computation, the field has seen tremendous progress, with many industrial and academic leaders driving it forward and proposing various quantum systems as building blocks of promising quantum computers, such as superconducting qubits (IBM, Google, USTC, Rigetti), trapped ions (IonQ), and trapped Rydberg atoms (QuEra). While fault-tolerant quantum computing with near-perfect logical qubits remains a milestone yet to be reached, with dozens or even hundreds of noisy qubits already available (Arute et al., 2019; Ball, 2021; Ebadi et al., 2021; Pednault et al., 2019; Scholl et al., 2021; Wu et al., 2021; Zhang et al., 2017; Zhu et al., 2021), the field has entered an era of promising quantum advantage, driving many researchers and scientists to target some of humanity's most challenging problems, such as climate change, clean energy, and drug discovery.

3. Machine Learning and Brain-Inspired Computing

While traditional computers are capable of tremendously complicated tasks, it is hard for them to perform seemingly simple tasks such as pattern or voice recognition, let alone to invent new things, which a child can do easily. In fact, our own brain is a sophisticated machine consisting of roughly 100 billion neurons and about 125 trillion synapses, with each neuron connected to on the order of 10,000 to 100,000 other neurons. In comparison to computers, which perform a single task with high precision, our brain is capable of performing multiple tasks simultaneously with lower precision, which also makes it very energy efficient. In addition, in contrast to computers, which are built on the von Neumann architecture (von Neumann, 1993), the brain does not have a fixed topology, and neurons make new connections whenever a person learns something new.

Current machine learning models are in part based on a model of brain cell interaction developed as early as 1943. In that year, the first mathematical model of an artificial neuron was proposed by McCulloch and Pitts (the MCP neuron; McCulloch & Pitts, 1943). The MCP neuron takes N excitatory binary inputs (only 0 or 1), computes the weighted sum over the inputs, and fires (outputs 1) only if that sum exceeds some threshold θ. With this simple model one can implement Boolean functions such as OR, AND, and NOT. Despite this versatility, its limitations prevent it from implementing the XOR Boolean function or any other function that is not linearly separable. A major improvement over the MCP neuron model came only 15 years later, when Frank Rosenblatt proposed the "perceptron," inspired by the Hebbian theory of synaptic plasticity (Rosenblatt, 1957).
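The thresholded weighted sum above fits in a few lines of Python (a minimal sketch; the particular weights and thresholds are illustrative choices, and the firing condition is written as reaching the threshold):

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fire (output 1) iff the weighted sum
    of the binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# AND: fires only when both inputs are 1.
AND = lambda x1, x2: mcp_neuron([x1, x2], [1, 1], threshold=2)
# OR: fires when at least one input is 1.
OR = lambda x1, x2: mcp_neuron([x1, x2], [1, 1], threshold=1)

print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
```

No choice of weights and threshold reproduces XOR, because a single linear threshold cannot separate its two classes of inputs.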

By relaxing some of the MCP rules, such as allowing noninteger or even negative weights (accounting for inhibitory neurons and unequal contributions of inputs), Rosenblatt established an efficient algorithm enabling the perceptron to learn the correct weights from data. Leveraging an existing machine learning algorithm developed by Arthur Samuel at IBM in 1959, Rosenblatt devised a supervised learning algorithm to perform binary classification. The success of his algorithm created much excitement in the late 1950s and early 1960s. However, the limitations of the simple perceptron model on nonlinear classification and real-world problems led to what came to be known as the "AI winter" (Minsky & Papert, 2017).
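The learning rule can be sketched as follows; this is the generic textbook formulation of the perceptron update in numpy, not Rosenblatt's original hardware implementation, and the toy data set (the OR function) is an illustrative choice:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Perceptron rule: nudge the weights toward each misclassified point."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred      # -1, 0, or +1
            w += lr * error * xi       # Hebbian-style weight update
            b += lr * error
    return w, b

# Linearly separable toy data: the OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 1, 1, 1]
```

On linearly separable data like this, the rule provably converges; on XOR it cycles forever, which is exactly the limitation Minsky and Papert emphasized.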

Over the next few decades, many parallel efforts in machine learning were underway, such as pattern recognition, the advent of multilayer feed-forward neural networks and backpropagation (Rumelhart, 1986), boosting techniques (Schapire, 1990), and long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997), but the main breakthrough became possible only in 2012, as more powerful computers and larger data sets became available. Building on prior seminal works by Geoffrey Hinton and Yann LeCun (LeCun et al., 1998; Rumelhart, 1986), several major breakthroughs in the performance and accuracy of deep neural networks followed, leading to outstanding results in speech recognition (Microsoft, Google, IBM, and the University of Toronto), cat recognition in video (Google Brain), and the winning performance of AlexNet in the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC).

The current state of machine learning spans many concepts, such as supervised and unsupervised learning, reinforcement learning, and new algorithms for robotics. Today we can store exabytes of data in the cloud, train deep learning models such as GPT-3, and use advanced artificial intelligence algorithms such as those that play Go and chess. These breakthroughs have led to many advances in the technology world, to the point that even self-driving cars are an achievable reality in the near future.

Figure 2. A bird's-eye view of the quantum machine learning landscape. The first letter in each square indicates the type of data or problem, while the second indicates the model applied to the data. In this landscape, the orange squares, indicating the use of quantum models on either classical or quantum data or of classical models on quantum data, can be considered quantum machine learning, while the blue square, indicating the use of classical models on classical data, corresponds to traditional machine learning.

4. Machine Learning Meets Quantum Computing!

Since quantum mechanics is intrinsically probabilistic and high dimensional, the interface of quantum computing and machine learning has become one of the most active fields (Biamonte et al., 2017; Carleo et al., 2019; Sarma et al., 2019; Wittek, 2014). Unlike traditional computational problems, machine learning involves two extra ingredients besides the algorithm, namely, the data and the model. Consequently, depending on whether the model and the data are quantum or classical, one can imagine a landscape for quantum machine learning as shown in Figure 2. In this landscape, quantum machine learning consists of applying quantum models to either classical or quantum data (top orange squares) or classical models to quantum data (bottom orange square), while applying classical models to classical data (bottom blue square) is considered classical machine learning.

Figure 3. One can consider three different domains in the history of quantum machine learning. The first domain was initiated by the Harrow-Hassidim-Lloyd (HHL) algorithm in 2008, harnessing quantum computers to accelerate linear algebra computation; algorithms in the second domain adopt a hybrid approach leveraging parameterized quantum circuits; and the third domain uses the inherent interactions and dynamics of quantum systems as a direct resource for learning, trying to mimic brain-like computation.

Intrinsically, quantum computing can either speed up computation via quantum algorithms performing fast linear algebra, or provide new machine learning models whose inference and training are intractable for classical computers but which are expected to be more expressive. Arguably, the HHL algorithm, developed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd in 2008 to solve linear systems of equations on a quantum computer, can be considered the starting point of the field of quantum machine learning (Harrow et al., 2009). Since then, the field has seen many developments, and one can distinguish three distinct domains of quantum machine learning (see Figure 3).

The first domain has dealt with traditional machine learning processes, especially those based on linear algebra computation, where the computation itself is done following a quantum algorithm; algorithms in the second domain use a parameterized quantum circuit (PQC) model as a machine learning hypothesis class and the training process is done by variational optimization on those parameters; the third domain takes advantage of the similarity between some physical systems, which are naturally realizable in near-term experiments, and neural networks. This suggests a new type of quantum machine learning, namely, brain-inspired quantum machine learning as depicted in Figure 3. In what follows, we discuss these three domains of quantum machine learning.

4.1. First Domain: Speed Up Via Quantum Linear Algebra

The first domain of quantum machine learning is based on two observations: (1) many traditional machine learning processes require linear algebra computations on high-dimensional vectors; (2) the mathematics behind quantum mechanics is basically linear algebra in $2^n$ dimensions, requiring only $n$ qubits. More concretely, calculations can be formulated via a mapping of an $N$-dimensional vector $\vec v$ to a quantum state of $\log_2 N$ qubits whose basis states are labeled by $q_\alpha$:

$$\vec v \;=\; (v_1,\ldots,v_N) \quad\Rightarrow\quad |v\rangle \;=\; \frac{1}{\sqrt{\sum_i v_i^2}}\sum_i v_i|i\rangle, \quad\text{where}\quad |i\rangle \;\equiv\; |\underbrace{q_1 q_2 q_3 \ldots}_{\log_2 N \;\text{terms}}\rangle.$$
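The exponential compression in this mapping can be sketched classically (an illustrative numpy snippet computing the normalization above, not an actual qRAM; the example vector is an arbitrary choice):

```python
import numpy as np

def amplitude_encode(v):
    """Encode an N-dimensional real vector into the amplitudes of
    log2(N) qubits, normalized so the amplitudes square-sum to one."""
    v = np.asarray(v, dtype=float)
    n_qubits = int(np.log2(len(v)))
    assert len(v) == 2 ** n_qubits, "length must be a power of two"
    state = v / np.linalg.norm(v)  # the 1/sqrt(sum v_i^2) factor
    return n_qubits, state

# An 8-dimensional vector fits in the amplitudes of just 3 qubits.
n, psi = amplitude_encode([1, 2, 3, 4, 5, 6, 7, 8])
print(n)  # 3
```

A billion-dimensional vector would need only about 30 qubits, which is the source of the claimed exponential advantage, provided the state can actually be loaded efficiently.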

A quantum device achieving this mapping is called quantum random access memory (qRAM; Giovannetti et al., 2008). As mentioned earlier, one of the most basic problems in linear algebra is solving linear equations. In terms of quantum states, the goal of a quantum algorithm is to find a vector $|x\rangle$ such that

$$|x\rangle \;\propto\; A^{-1}|b\rangle$$

given $A$ and $|b\rangle$. The aforementioned HHL algorithm (Harrow et al., 2009) can solve this task in $\text{poly}(\log N)$ time under certain conditions on $A$, whereas any classical algorithm requires time at least proportional to $N$. Many other linear algebra problems can more or less be reduced to this one, for example, low-rank decomposition. Machine learning algorithms like principal component analysis (PCA), support vector machines (SVM), and recommendation systems have been 'quantized' (i.e., made quantum), and exponential speedups have been claimed thanks to the very efficient encoding of a high-dimensional vector into a quantum state (Kerenidis & Prakash, 2016; Lloyd et al., 2014; Rebentrost et al., 2014). There remain, however, a few caveats: (1) the input requires a qRAM, and it is still unknown how to realize one efficiently in any quantum computation model (Aaronson, 2015); (2) the output is a quantum state, and due to wave function collapse in quantum measurement, each run yields only one specific bitstring, with probability given by the absolute square of the corresponding coefficient. The latter caveat has been resolved for the quantum recommendation system, but the former remains open. In fact, if a similar assumption is made for classical computation (namely, the ability to encode a high-dimensional vector with nonnegative entries into a samplable probability distribution), some of these quantum algorithms can be translated into the classical domain, meaning the time complexity is also $\text{poly}(\log N)$ classically (Tang, 2019, 2021).
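For contrast, the task HHL targets can be stated classically in a few lines (the Hermitian matrix and vector below are illustrative choices); a direct solve like this costs time growing at least linearly with $N$, which is exactly the cost the quantum algorithm aims to avoid:

```python
import numpy as np

# The HHL task: given Hermitian A and |b>, prepare |x> proportional to A^{-1}|b>.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # illustrative Hermitian matrix
b = np.array([1.0, 0.0])        # amplitudes of |b>

x = np.linalg.solve(A, b)       # classical A^{-1} b, cost polynomial in N
ket_x = x / np.linalg.norm(x)   # quantum states are unit vectors

print(np.allclose(A @ x, b))    # True
```

Note that the quantum output is the state $|x\rangle$ itself, not the list of its entries; reading out all $N$ amplitudes would forfeit the speedup, which is the input/output caveat discussed above.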

Figure 4. Schematic of variational quantum algorithms as a hybrid architecture leveraging parameterized quantum circuits (PQCs) and classical optimization. A specific PQC can be trained over its circuit parameters via classical optimization for a specific problem and cost function; the optimized values are then fed back into the circuit, and the procedure repeats iteratively until the desired accuracy is achieved.

4.2. Second Domain: Variational Quantum Algorithms

Taking the above caveats into account has led to a second domain of quantum machine learning comprising hybrid classical-quantum schemes. Reaching the fault-tolerant regime requires breakthrough progress in hardware development and error-correction schemes; instead, using already-existing machines with hundreds of noisy qubits (Ball, 2021), a family of variational quantum algorithms has been developed, targeting quantum simulation, optimization, and machine learning problems. These algorithms extract useful knowledge by measuring observables constructed over a parameterized quantum circuit (PQC), whose parameters can be updated iteratively, much like the parameters of classical neural networks (see Figure 4). The first category of such algorithms is the quantum approximate optimization algorithm (QAOA), developed by Farhi et al. in 2014, which aims to find solutions of certain optimization problems via the dynamical evolution of a PQC. The second category is variational quantum eigensolvers (VQEs), first introduced by McClean et al. (McClean et al., 2016). The goal of a VQE is to estimate the lowest eigenvalue of a target Hamiltonian, which has potential applications in, for example, quantum chemistry (Kandala et al., 2017). In addition, quantum neural networks (QNNs; Farhi & Neven, 2018) and quantum convolutional neural networks (QCNNs; Cong et al., 2019) are among the most interesting examples of such novel optimization and inference tools. While these variational algorithms on PQCs have shown great promise, random initialization of the circuit parameters becomes problematic because of the exponential dimension of the Hilbert space and the resulting complexity of gradient estimation: it has been shown that the gradient becomes exponentially small in the number of qubits, a phenomenon known as 'barren plateaus,' which makes training fail (McClean et al., 2018). Addressing this issue and proposing mitigation strategies has become one of the most active areas in quantum machine learning (Cerezo et al., 2021; Marrero et al., 2021; Patti et al., 2021; Pesah et al., 2021).
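A minimal single-qubit sketch of the variational loop, assuming a toy Hamiltonian $H = Z$ and a one-parameter ansatz $R_y(\theta)|0\rangle$ (illustrative choices, simulated in numpy rather than run on hardware); the gradient uses the standard parameter-shift rule:

```python
import numpy as np

Z = np.array([[1.0, 0.0],
              [0.0, -1.0]])  # toy target Hamiltonian (one qubit)

def ansatz(theta):
    """One-parameter PQC: Ry(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Cost function: the expectation <psi|H|psi> = cos(theta)."""
    psi = ansatz(theta)
    return psi @ Z @ psi

def parameter_shift_grad(theta):
    """Gradient of the PQC expectation via the parameter-shift rule."""
    return 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))

# Classical optimization loop: gradient descent on the circuit parameter.
theta, lr = 0.1, 0.4
for _ in range(200):
    theta -= lr * parameter_shift_grad(theta)

print(round(energy(theta), 4))  # converges to the ground energy -1.0
```

With one qubit, the landscape is benign; the barren-plateau result says that for random deep circuits on many qubits, gradients like the one computed here shrink exponentially, so each parameter-shift evaluation needs exponentially many measurement shots to resolve.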

One important category of machine learning is generative modeling, which aims to learn the underlying probability distribution of a given data set and to generate new samples from it. Inspired by the probabilistic nature of quantum mechanics, a novel generative model known as the Born machine was introduced in 2018 (Cheng et al., 2018; Han et al., 2018); it uses a quantum state representation and learns the joint probabilities over such quantum degrees of freedom according to

$$p_{\boldsymbol{\theta}}(x) = \frac{|\psi_{\boldsymbol{\theta}}(x)|^{2}}{\mathcal{N}}, \qquad (4.1)$$

where $\mathcal{N} = \sum_x |\psi_{\boldsymbol{\theta}}(x)|^2$ is the normalization of the quantum state, ensuring the probabilities sum to one, and $\boldsymbol{\theta}$ denotes the parameters of the model. One promising class of quantum-inspired models for training Born machines is the family of tensor networks, in particular the one-dimensional factorization known as matrix product states (MPS). The expressive power of an MPS arises from the quantum correlations that can be generated in many-body quantum systems with a given topology, allowing it to capture the underlying correlations of quantum data (Najafi et al., 2020).
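A toy Born machine sampler following Eq. 4.1 can be written directly (the fixed two-qubit amplitudes below are a hypothetical stand-in for a trained $\psi_{\boldsymbol\theta}$):

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-qubit "wavefunction" psi(x) over bitstrings x in {00, 01, 10, 11};
# any (possibly unnormalized) complex amplitudes serve for illustration.
psi = np.array([0.5, 0.1j, -0.3, 0.8 + 0.2j])

# Born rule, Eq. 4.1: p(x) = |psi(x)|^2 / N, with N = sum_x |psi(x)|^2.
probs = np.abs(psi) ** 2
probs /= probs.sum()

# Generating new data = sampling bitstrings from the learned distribution.
samples = rng.choice(["00", "01", "10", "11"], size=5, p=probs)
print(samples)
```

Training a real Born machine adjusts the amplitudes (e.g., via an MPS parameterization) so that the sampled distribution matches the data distribution.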

Another quantum-circuit generalization of classical machine learning models is the basis-enhanced Bayesian quantum circuit (BBQC; Gao et al., 2021). The model is based on quantum circuits that naturally encode a Bayesian network when the output is measured in the computational basis. By generalizing the measurement basis, the resulting generative model can demonstrate a quantum advantage. In particular, when applied to sequence-modeling problems, it can be shown that the advantage originates from quantum contextuality, a generalization of the famous quantum nonlocality (the latter confirmed by the famous Bell tests; Bell, 1964; Hensen et al., 2015).

Figure 5. Schematic picture of classical recurrent neural networks (RNNs; top) and quantum neurons (bottom). In both cases, the inputs are local biases, the interneural connections can take arbitrary values, and a set of neurons is used for readout. Top: It is known that while many neurons are excitatory, with positive interactions (green), some are inhibitory (red), in a roughly 1-to-4 ratio. Bottom: A quantum recurrent neural network (qRNN) made from an array of atoms. In a Rydberg system, one can encode excitatory and inhibitory interactions by using different principal quantum numbers.

4.3. Third Domain: Brain-Inspired Quantum Machine Learning

So far we have discussed how 'quantum' could improve machine learning and how a hybrid quantum-classical architecture could combine the advantages of both regimes. However, harnessing these advantages often relies on fault-tolerant quantum computing and, in the case of first-domain algorithms, sometimes requires qRAM, while the second domain of quantum machine learning can suffer from training bottlenecks and scalability issues (McClean et al., 2018). On the other hand, as discussed in section 3, our own brain is a powerful computing machine whose large neural connectivity enables efficient computation of many parallel tasks in a noisy environment. Thus, the question of whether we can harness existing noisy quantum hardware for learning tasks resembling brain-inspired computation becomes crucial.

While much remains to be understood about the potential computational advantages of these approaches and their origins, in a recent paper Bravo et al. (2021) propose an extension of classical RNNs to qRNNs (see Figure 5), utilizing the Hamiltonian dynamics of ensembles of two-level systems and showing that one recovers a classical RNN in certain regimes. The qRNNs, however, exhibit new features that aid both classical and quantum computation, including training speedups. Furthermore, with the particular choice of an array of Rydberg atoms as a fully controllable and programmable system (Bernien et al., 2017), the different types of interactions in Rydberg atoms arising from different principal quantum numbers allow one to realize positive and negative interactions, translated into neural language as excitatory and inhibitory neurons (Singh et al., 2022), which are believed to be crucial for a wide category of biological tasks. They were furthermore able to show that a qRNN is capable of learning several cognitive tasks, such as multitasking, decision making, and long-term memory. How useful this direction will be remains to be proven: while these and similar studies have shown great promise for brain-inspired quantum machine learning algorithms, many questions, such as scalability, robustness to noise, and quantum advantages in the form of speedups, still need a good deal of further investigation.

5. Conclusion

On the one hand, quantum computing and machine learning are two of the leading technologies of our time. On the other hand, humanity today is more than ever facing some of its biggest challenges, namely, climate change, potentially dwindling energy resources, and the need to discover new drugs and biological technologies. While each of the aforementioned technologies has the potential to revolutionize our lives in its own way, the question of whether we can gain an advantage by merging them is crucial. The computational paradigms of machine learning and quantum computing are different: machine learning finds optimal solutions heuristically, while quantum computing harnesses quantum laws, allowing for more efficient and powerful computation than classical computers.

In this article, we have discussed three waves of quantum machine learning, each harnessing a particular aspect of quantum computers and targeting particular problems. The first exploits the power of quantum computers to work with high-dimensional data and to speed up linear algebra, but faces input/output caveats due to the rules of quantum measurement. The second domain circumvents this problem by using a hybrid architecture, performing optimization on a classical computer while evaluating parameterized states on a quantum circuit chosen for the particular problem. Finally, the third domain is inspired by brain-like computation and uses the natural interactions and unitary dynamics of a given quantum system as a resource for learning.

As quantum machine learning, and artificial intelligence and quantum-based technologies in general, grow rapidly, novel algorithms and applications become possible. Thus, what has been discussed so far is likely just the tip of the iceberg. Looking forward, each of the different domains of quantum machine learning is expected to advance rapidly.

On the one hand, achieving the fault-tolerant regime and gaining easy access to qRAM would be a breakthrough for implementing circuit-based quantum algorithms in general and for quantum machine learning in particular. On the other hand, there are many open questions regarding efficient classical algorithms yet to be discovered, and general theories of quantum-enhanced learning seem similarly important and challenging. And while there are many ongoing efforts in variational quantum algorithms, many tasks remain to be investigated, such as characterizing the optimization landscape or determining how efficiently these methods scale. In particular, given the early successes of brain-inspired quantum learning and the versatility of the platforms on which it can run, there seem to be many methods waiting to be discovered and many questions to be answered: What kind of advantage can be gained? What kinds of quantum systems are suitable for learning a specific problem or data set? We are at the beginning of an exciting journey of discovery!

Disclosure Statement

Khadijeh Najafi, Susanne F. Yelin, and Xun Gao have no financial or non-financial disclosures to share for this article.


Aaronson, S. (2015). Read the fine print. Nature Physics, 11(4), 291–293.

Aaronson, S. (2021, June 8). What makes quantum computing so hard to explain? Quanta Magazine.

Arute, F., Arya, K., Babbush, R., Bacon, D., Bardin, J. C., Barends, R., Biswas, R., Boixo, S., Brandao, F. G., Buell, D. A., Burkett, B., Chen, Y., Chen, Z., Chiaro, B., Collins, R., Courtney, W., Dunsworth, A., Farhi, E., Foxen, B., . . . Martinis, J. M. (2019). Quantum supremacy using a programmable superconducting processor. Nature, 574(7779), 505–510.

Ball, P. (2021). First quantum computer to pack 100 qubits enters crowded race. Nature, 599(7886), 542.

Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Fizika, 1(3), 195.

Bernien, H., Schwartz, S., Keesling, A., Levine, H., Omran, A., Pichler, H., Choi, S., Zibrov, A. S., Endres, M., Greiner, M., Vuletić, V., & Lukin, M. D. (2017). Probing many-body dynamics on a 51-atom quantum simulator. Nature, 551(7682), 579–584.

Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., & Lloyd, S. (2017). Quantum machine learning. Nature, 549(7671), 195–202.

Bravo, R. A., Najafi, K., Gao, X., & Yelin, S. F. (2021). Quantum reservoir computing using arrays of Rydberg atoms. arXiv.

Carleo, G., Cirac, I., Cranmer, K., Daudet, L., Schuld, M., Tishby, N., Vogt-Maranto, L., & Zdeborová, L. (2019). Machine learning and the physical sciences. Reviews of Modern Physics, 91(4), Article 045002.

Cerezo, M., Sone, A., Volkoff, T., Cincio, L., & Coles, P. J. (2021). Cost function dependent barren plateaus in shallow parametrized quantum circuits. Nature Communications, 12(1), Article 1791.

Cheng, S., Chen, J., & Wang, L. (2018). Information perspective to probabilistic modeling: Boltzmann machines versus Born machines. Entropy, 20(8), 583.

Cong, I., Choi, S., & Lukin, M. D. (2019). Quantum convolutional neural networks. Nature Physics, 15(12), 1273–1278.

Ebadi, S., Wang, T. T., Levine, H., Keesling, A., Semeghini, G., Omran, A., Bluvstein, D., Samajdar, R., Pichler, H., Ho, W. W., Choi, S., Sachdev, S., Greiner, M., Vuletić, V., & Lukin, M. D. (2021). Quantum phases of matter on a 256-atom programmable quantum simulator. Nature, 595(7866), 227–232.

Esser, S. K., Merolla, P. A., Arthur, J. V., Cassidy, A. S., Appuswamy, R., Andreopoulos, A., Berg, D. J., McKinstry, J. L., Melano, T., Barch, D. R., di Nolfo, C., Datta, P., Amir, A., Taba, B., Flickner, M. D., & Modha, D. S. (2016). Convolutional networks for fast, energy-efficient neuromorphic computing. Proceedings of the National Academy of Sciences, 113(41), 11441–11446.

Eves, H. (1983). An introduction to the history of mathematics. Saunders College Publishing. 

Farhi, E., Goldstone, J., & Gutmann, S. (2014). A quantum approximate optimization algorithm. arXiv.

Farhi, E., & Neven, H. (2018). Classification with quantum neural networks on near term processors. arXiv.

Fujii, K., & Nakajima, K. (2021). Quantum reservoir computing: A reservoir approach toward quantum machine learning on near-term quantum devices. In K. Nakajima, & I. Fischer (Eds.), Reservoir computing (pp. 423–450). Springer.

Gao, X., Anschuetz, E. R., Wang, S.-T., Cirac, J. I., & Lukin, M. D. (2021). Enhancing generative models via quantum correlations. arXiv.

Giovannetti, V., Lloyd, S., & Maccone, L. (2008). Quantum random access memory. Physical Review Letters, 100(16), Article 160501.

Gonzalez-Raya, T., Cheng, X.-H., Egusquiza, I. L., Chen, X., Sanz, M., & Solano, E. (2019). Quantized single-ion-channel Hodgkin-Huxley model for quantum neurons. Physical Review Applied, 12(1), Article 014037.

Gonzalez-Raya, T., Solano, E., & Sanz, M. (2020). Quantized three-ion-channel neuron model for neural action potentials. Quantum, 4, 224.

Govia, L. C. G., Ribeill, G. J., Rowlands, G. E., Krovi, H. K., & Ohki, T. A. (2021). Quantum reservoir computing with a single nonlinear oscillator. Physical Review Research, 3(1), Article 013077.

Han, Z.-Y., Wang, J., Fan, H., Wang, L., & Zhang, P. (2018). Unsupervised generative modeling using matrix product states. Physical Review X, 8(3), Article 031012.

Harrow, A. W., Hassidim, A., & Lloyd, S. (2009). Quantum algorithm for linear systems of equations. Physical Review Letters, 103(15), Article 150502.

Hensen, B., Bernien, H., Dréau, A. E., Reiserer, A., Kalb, N., Blok, M. S., Ruitenberg, J., Vermeulen, R. F., Schouten, R. N., Abellán, C., Amaya, W., Pruneri, V., Mitchell, M. W., Markham, M., Twitchen, D. J., Elkouss, D., Wehner, S., Taminiau, T. H., & Hanson, R. (2015). Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres. Nature, 526(7575), 682–686.

Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780.

IBM. (n.d.).

Kandala, A., Mezzacapo, A., Temme, K., Takita, M., Brink, M., Chow, J. M., & Gambetta, J. M. (2017). Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 549(7671), 242–246.

Kerenidis, I., & Prakash, A. (2016). Quantum recommendation systems. arXiv.

Kiraly, B., Knol, E. J., van Weerdenburg, W. M., Kappen, H. J., & Khajetoorians, A. A. (2021). An atomic Boltzmann machine capable of self-adaption. Nature Nanotechnology, 16(4), 414–420.

Körber, L., Schultheiss, K., Hula, T., Verba, R., Faßbender, J., Kákay, A., & Schultheiss, H. (2020). Nonlocal stimulation of three-magnon splitting in a magnetic vortex. Physical Review Letters, 125(20), Article 207203.

LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.

Lloyd, S., Mohseni, M., & Rebentrost, P. (2014). Quantum principal component analysis. Nature Physics, 10(9), 631–633.

Marković, D., & Grollier, J. (2020). Quantum neuromorphic computing. Applied Physics Letters, 117(15), Article 150501.

Marrero, C. O., Kieferová, M., & Wiebe, N. (2021). Entanglement-induced barren plateaus. PRX Quantum, 2(4), Article 040316.

McClean, J. R., Boixo, S., Smelyanskiy, V. N., Babbush, R., & Neven, H. (2018). Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1), Article 4812.

McClean, J. R., Romero, J., Babbush, R., & Aspuru-Guzik, A. (2016). The theory of variational hybrid quantum-classical algorithms. New Journal of Physics, 18(2), Article 023023.

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4), 115–133.

Minsky, M., & Papert, S. A. (2017). Perceptrons: An introduction to computational geometry (Original work published 1969). MIT Press.

Moore, G. (1964). The future of integrated electronics. Fairchild Semiconductor internal publication, 2.

Najafi, K., Azizi, A., Stoudenmire, M., Gao, X., Yelin, S., Lukin, M., & Mohseni, M. (2020). Limitations of gradient-based born machines over tensor networks on learning quantum nonlocality. In Proceedings of the 34th Conference on Neural Information Processing Systems.

Nakajima, K., Fujii, K., Negoro, M., Mitarai, K., & Kitagawa, M. (2019). Boosting computational power through spatial multiplexing in quantum reservoir computing. Physical Review Applied, 11(3), Article 034021.

Patti, T. L., Najafi, K., Gao, X., & Yelin, S. F. (2021). Entanglement devised barren plateau mitigation. Physical Review Research, 3(3), Article 033090.

Pednault, E., Gunnels, J. A., Nannicini, G., Horesh, L., & Wisnieff, R. (2019). Leveraging secondary storage to simulate deep 54-qubit sycamore circuits. arXiv.

Pesah, A., Cerezo, M., Wang, S., Volkoff, T., Sornborger, A. T., & Coles, P. J. (2021). Absence of barren plateaus in quantum convolutional neural networks. Physical Review X, 11(4), Article 041011.

Pfeiffer, P., Egusquiza, I., Di Ventra, M., Sanz, M., & Solano, E. (2016). Quantum memristors. Scientific Reports, 6(1), Article 29507.

Rebentrost, P., Mohseni, M., & Lloyd, S. (2014). Quantum support vector machine for big data classification. Physical Review Letters, 113(13), Article 130503.

Rosenblatt, F. (1957). The perceptron, a perceiving and recognizing automaton project para. Cornell Aeronautical Laboratory.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536.

Sarma, S. D., Deng, D.-L., & Duan, L.-M. (2019). Machine learning meets quantum physics. Physics Today, 72(3), 48.

Schapire, R. E. (1990). The strength of weak learnability. Machine Learning, 5(2), 197–227.

Scholl, P., Schuler, M., Williams, H. J., Eberharter, A. A., Barredo, D., Schymik, K.-N., Lienhard, V., Henry, L.-P., Lang, T. C., Lahaye, T., Läuchli, A. M., & Browaeys, A. (2021). Quantum simulation of 2D antiferromagnets with hundreds of Rydberg atoms. Nature, 595(7866), 233–238.

Shockley, W., Bardeen, J., & Brattain, W. (1956). Nobel Prize in Physics for 1956: Dr. W. Shockley, Prof. J. Bardeen and Dr. W. H. Brattain. Nature, 178, 1149.

Singh, K., Anand, S., Pocklington, A., Kemp, J. T., & Bernien, H. (2022). Dual-element, two-dimensional atom array with continuous-mode operation. Physical Review X, 12(1), Article 011040.

Tang, E. (2019). A quantum-inspired classical algorithm for recommendation systems. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (pp. 217–228).

Tang, E. (2021). Quantum principal component analysis only achieves an exponential speedup because of its state preparation assumptions. Physical Review Letters, 127(6), Article 060503.

Thiele, S., Balestro, F., Ballou, R., Klyatskaya, S., Ruben, M., & Wernsdorfer, W. (2014). Electrically driven nuclear spin resonance in single-molecule magnets. Science, 344(6188), 1135–1138.

Torrontegui, E., & Garcıa-Ripoll, J. J. (2019). Unitary quantum perceptron as efficient universal approximator. Europhysics Letters, 125(3), Article 30004.

Turing, A. (1969). Intelligent machinery (1948). In B. J. Copeland (Ed.), The Essential Turing (pp. 395–432).

von Neumann, J. (1993). First draft of a report on the EDVAC. IEEE Annals of the History of Computing, 15(4), 27–75.

Wilczek, F. (2016, April 28). Entanglement made simple. Quanta Magazine.

Wittek, P. (2014). Quantum machine learning: What quantum computing means to data mining. Academic Press.

Wu, Y., Bao, W.-S., Cao, S., Chen, F., Chen, M.-C., Chen, X., Chung, T.-H., Deng, H., Du, Y., Fan, D., Gong, M., Guo, C., Guo, C., Guo, S., Han, L., Hong, L., Huang, H.-L., Huo, Y.-H., Li, L., . . . Pan, J.-W. (2021). Strong quantum computational advantage using a superconducting quantum processor. Physical Review Letters, 127(18), Article 180501.

Zhang, J., Pagano, G., Hess, P. W., Kyprianidis, A., Becker, P., Kaplan, H., Gorshkov, A. V., Gong, Z.-X., & Monroe, C. (2017). Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator. Nature, 551(7682), 601–604.

Zhu, Q., Cao, S., Chen, F., Chen, M.-C., Chen, X., Chung, T.-H., Deng, H., Du, Y., Fan, D., Gong, M., Guo, C., Chu, G., Guo, S., Han, L., Hong, L., Huang, H.-L., Huo, Y.-H., Li, L., Li, N., . . . Pan, J.-W. (2021). Quantum computational advantage via 60-qubit 24-cycle random circuit sampling. Science Bulletin, 67(3), 240–245.

©2022 Khadijeh Najafi, Susanne F. Yelin, and Xun Gao. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
