Quantum Machine Learning without Measurements

We propose a quantum machine learning algorithm for efficiently solving a class of problems encoded in quantum controlled unitary operations. The central physical mechanism of the protocol is the iteration of a quantum time-delayed equation that introduces feedback in the dynamics and eliminates the need for intermediate measurements. The performance of the quantum algorithm is analyzed by comparing the results obtained in numerical simulations with the outcome of classical machine learning methods for the same problem. The use of time-delayed equations enhances the toolbox of the field of quantum machine learning, which may enable unprecedented applications in quantum technologies.

Introduction

One of the main consequences of the revolution in computation sciences, started by Alan Turing, Konrad Zuse and John von Neumann, among others,1,2 is that computers are capable of substituting for us and improving on our performance in an increasing number of tasks. This is due to advances in the development of complex algorithms and to the technological refinement allowing for faster processing and larger storage. One of the goals in this area, in the frame of bio-inspired technologies, is the design of algorithms that provide computers with human-like capacities such as image and speech recognition, as well as preliminary steps in some aspects related to creativity. These achievements would enable us to interact with computers in a more efficient manner. This research, together with other similar projects, is carried out in the field of artificial intelligence.3 In particular, researchers in the area of machine learning (ML), inside artificial intelligence, are devoted to the design of algorithms responsible for training the machine with data, such that it is able to find a given optimal relation according to specified criteria.4

More precisely, ML is divided into three main lines depending on the nature of the protocol. In supervised learning, the goal is to teach the machine a known function without explicitly introducing it in its code. In unsupervised learning, the goal is that the machine develops the ability to classify data by grouping it in different subsets depending on its characteristics. In reinforcement learning, the goal is that the machine selects a sequence of actions depending on its interaction with an environment for an optimal transition from the initial to the final state.

The previous ML techniques have also been studied in the quantum regime, in a field called quantum machine learning,5–12 with two main motivations. The first one is to exploit the promised speedup of quantum protocols to improve the already existing classical ones. The second one is to develop genuinely quantum machine learning protocols and combine them with other quantum computational tasks. Apart from quantum machine learning, fields like quantum neural networks, or the more general quantum artificial intelligence, have also addressed similar problems.13–17

Here, we introduce a quantum machine learning algorithm for finding the optimal control state of a multitask controlled unitary operation. It is based on a sequentially applied time-delayed equation that allows one to implement feedback-driven dynamics without the need for intermediate measurements. The purely quantum encoding permits speeding up the training process by evaluating all possible choices in parallel. Finally, we analyze the performance of the algorithm by comparing the ideal solution with the one obtained by the algorithm.
Results

Quantum Machine Learning Algorithm

The first step in the description of the algorithm is the definition of the concept of multitask controlled unitary operations U. In essence, these do not differ from ordinary controlled operations, but the multitask label is chosen to emphasize that more than two operations on the target subspace are in principle possible. Mathematically, we define them as

U = \sum_{i=1}^{d} |c_i\rangle\langle c_i| \otimes s_i,     (1)

where |c_i⟩ denotes the control state, and s_i is the reduced or effective unitary operation that U performs on the target subspace when the control is in |c_i⟩. Our algorithm is appropriate when U is experimentally implementable but its internal structure, |c_i⟩ and s_i, is unknown. The goal is to find the optimal |c_i⟩ for fixed input and output states, |in⟩ and |out⟩, in the target subspace.

The protocol consists in sequentially reapplying the same dynamics in such a way that the initial state in the signal subspace is always |in⟩, while the initial state in the control subspace is the output of the previous cycle. The equation modeling the dynamics is

\frac{d}{dt}|\psi(t)\rangle = -i\left[\theta(t-t_i)\,\theta(t_f-t)\,\kappa_1 H_1|\psi(t)\rangle + \kappa_2 H_2\big(|\psi(t)\rangle - |\psi(t-\delta)\rangle\big)\right].     (2)

In this equation, θ is the Heaviside function, H_1 is the Hamiltonian giving rise to U, with U = e^{-i\kappa_1 H_1(t_f - t_i)}, and H_2 is the Hamiltonian connecting the input and output states, with κ_1 and κ_2 the coupling constants of each Hamiltonian. We point out that this evolution cannot be realized with ordinary unitary or dissipative techniques. Nevertheless, recent studies of time-delayed equations provide all the ingredients for the implementation of this kind of process.18–20 Pending future experimental analyses of the scalability of the presented examples, the inclusion of time-delayed terms in the evolution equation is a realistic approach within the technological framework provided by current quantum platforms. Another important feature of Eq. (2) is that it only acquires physical meaning once the output is normalized.

Regarding the behavior of the equation, each term has a specific role in the learning algorithm. The mechanism is inspired by the most intuitive classical technique for solving this problem, namely the comparison between the input and output states together with the corresponding modification of the control state. Here, the first Hamiltonian produces the input-output transition, while the second Hamiltonian produces the reward by populating the control states responsible for the desired modification of the target subspace. The structure of H_2 guarantees that only the population in the control |c_i⟩ associated with the optimal s_i is increased,

H_2 = \mathbb{1} \otimes \big(-i\,|\mathrm{in}\rangle\langle \mathrm{out}| + i\,|\mathrm{out}\rangle\langle \mathrm{in}|\big).     (3)

Notice that, while this Hamiltonian does not contain explicit information about |c_i⟩, the solution of the problem, its multiplication by the feedback term, |ψ(t)⟩ − |ψ(t−δ)⟩, is responsible for introducing the reward as an intrinsic part of the dynamics. This is a convenient approach because it eliminates the measurements required during the training phase. We would also like to point out the similarity between the effect of this term in our quantum evolution and gradient-ascent techniques in algorithms for artificial intelligence.3 A possible strategy to perform the learning protocol would be to feed the system with random control states, measure each result, and combine them to obtain the final solution.
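To make the roles of the two Hamiltonians in Eq. (2) concrete, the following is a minimal numerical sketch of the feedback dynamics for a toy instance of our own choosing: two control states with s_1 = 1 and s_2 = X acting on a single target qubit, with |in⟩ = |0⟩ and |out⟩ = |1⟩. All parameter values (κ_1, κ_2, δ, the time step) are illustrative, and the Euler discretization with constant pre-history |ψ(t < 0)⟩ = |ψ(0)⟩ is an assumption made for the sketch, not a prescription from the protocol itself.

```python
import numpy as np

# --- Toy instance (illustrative, not from the paper) -----------------------
# Two control states |c1>, |c2>; target is a single qubit.
# s1 = identity, s2 = X, so only |c2> maps |in> = |0> to |out> = |1>.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

ket_in  = np.array([1, 0], dtype=complex)   # |in>  = |0>
ket_out = np.array([0, 1], dtype=complex)   # |out> = |1>
c1 = np.array([1, 0], dtype=complex)        # |c1>
c2 = np.array([0, 1], dtype=complex)        # |c2>

# H1 generates U = |c1><c1| (x) 1 + |c2><c2| (x) X via U = exp(-i k1 H1 (tf-ti)),
# using the convention k1 (tf - ti) = pi/2 adopted later in the text.
H1 = np.kron(np.outer(c2, c2.conj()), X - I2)

# H2 = 1 (x) (-i|in><out| + i|out><in|), Eq. (3).
H2 = np.kron(I2, -1j * np.outer(ket_in, ket_out.conj())
                 + 1j * np.outer(ket_out, ket_in.conj()))

# --- Euler integration of the time-delayed equation, Eq. (2) ---------------
k1, k2 = 1.0, 0.5          # coupling constants (illustrative values)
ti, tf = 0.0, np.pi / 2    # window where H1 acts, so that k1*(tf - ti) = pi/2
delay  = 0.1               # delay delta (illustrative value)
dt     = 1e-3              # integration step (illustrative value)
steps  = int((tf - ti) / dt)
n_del  = int(delay / dt)

# Initial state: uniform superposition of the controls, target in |in>.
psi0 = np.kron((c1 + c2) / np.sqrt(2), ket_in)
history = [psi0.copy()]

for n in range(steps):
    t = n * dt
    psi = history[-1]
    psi_del = history[n - n_del] if n >= n_del else history[0]  # psi(t - delta)
    window = 1.0 if ti <= t <= tf else 0.0                      # theta(t-ti) theta(tf-t)
    dpsi = -1j * (window * k1 * H1 @ psi + k2 * H2 @ (psi - psi_del))
    history.append(psi + dt * dpsi)

psi_f = history[-1]
psi_f /= np.linalg.norm(psi_f)  # the output only acquires physical meaning once normalized

# Population of each control state, summed over the target qubit.
pops = psi_f.reshape(2, 2)
print("P(c1) =", np.sum(np.abs(pops[0])**2))
print("P(c2) =", np.sum(np.abs(pops[1])**2))
```

The delayed term enters only through the difference |ψ(t)⟩ − |ψ(t−δ)⟩, so storing the full state history and indexing it n_del steps back is enough to reproduce the feedback without any intermediate measurement.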
Rather than resorting to such a sampling strategy, however, we have discovered that it suffices to initialize the control subspace in a superposition of the elements of the basis. We would like to remark that this purely quantum feature significantly reduces the required resources, because a single initial state replaces a set of random states large enough to cover all possible solutions.

The specific example we address is the excitation transport produced by the controlled-SWAP gate. In this scenario, the complete system is an n-node network, where each node is composed of a control and a target qubit. The control states are in a superposition of open and closed, |o⟩ and |c⟩, while the target qubits are written in the standard {|0⟩, |1⟩} basis, denoting the absence or presence of excitations. We define U, the multitask controlled unitary operation, to implement the SWAP gate between connected nodes only if all the controls of the corresponding nodes are in the open state, |o⟩. See Fig. 1 for a graphical representation of the simplest cases, the two- and three-node line networks. The explicit formula for U_2 is

U_2 = \big(|cc\rangle\langle cc| + |co\rangle\langle co| + |oc\rangle\langle oc|\big) \otimes \mathbb{1} + |oo\rangle\langle oo| \otimes s_{12},     (4)

where s_{ij} represents the SWAP gate between qubits i and j. Although we have employed unitary operations for illustration purposes, the equation requires their translation into Hamiltonians. In order to do so, we first select κ_1(t_f − t_i) to be π/2 and calculate the matrix logarithm, obtaining H_1 = |oo⟩⟨oo| ⊗ h_{12}. Denoting by σ_k the Pauli matrices, h_{ij} for i < j reads

h_{ij} = \frac{1}{2}\left(\sum_{k=1}^{3} \mathbb{1}^{\otimes(i-1)} \otimes \sigma_k \otimes \mathbb{1}^{\otimes(j-i-1)} \otimes \sigma_k \otimes \mathbb{1}^{\otimes(n-j)} - \mathbb{1}^{\otimes n}\right).
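As a cross-check of this Hamiltonian construction for the two-node network, the short sketch below builds h_{12}, H_1 and U_2 explicitly and verifies the relation in the direction opposite to the matrix logarithm, i.e. that exp(−iπ/2 H_1) reproduces Eq. (4). The qubit ordering (control 1, control 2, target 1, target 2) and the use of SciPy's matrix exponential are choices made for illustration only.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the single-qubit identity.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# h12 = (1/2) (sum_k sigma_k (x) sigma_k - 1 (x) 1) on the two target qubits.
h12 = 0.5 * (sum(np.kron(s, s) for s in (sx, sy, sz)) - np.kron(I2, I2))

# Control basis for the two nodes: closed |c> and open |o>.
ket_c = np.array([1, 0], dtype=complex)
ket_o = np.array([0, 1], dtype=complex)
oo = np.kron(ket_o, ket_o)

# H1 = |oo><oo| (x) h12, acting on (control 1, control 2, target 1, target 2).
H1 = np.kron(np.outer(oo, oo.conj()), h12)

# U2 from Eq. (4): SWAP on the targets only when both controls are open.
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
P_rest = np.eye(4, dtype=complex) - np.outer(oo, oo.conj())
U2 = np.kron(P_rest, np.eye(4, dtype=complex)) + np.kron(np.outer(oo, oo.conj()), swap)

# With k1 (tf - ti) = pi/2 the Hamiltonian reproduces the controlled SWAP.
U_from_H1 = expm(-1j * np.pi / 2 * H1)
print(np.allclose(U_from_H1, U2))  # True
```

On the subspace where both controls are open, h_{12} has eigenvalue 0 on the triplet and −2 on the singlet, so exp(−iπ/2 h_{12}) acts as +1 and −1 respectively, which is exactly the SWAP gate; on every other control state H_1 vanishes and the evolution is the identity.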

References

[1] G. Gutoski et al., Process tomography for unitary quantum channels, arXiv:1309.0840 (2013).

[2] F. Petruccione et al., An introduction to quantum machine learning, Contemporary Physics (2014).

[3] E. Solano et al., Advanced-Retarded Differential Equations in Quantum Photonic Systems, Scientific Reports (2016).

[4] C. Papadimitriou et al., Introduction to the Theory of Computation (2018).

[5] A. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem (1937).

[6] M. Day et al., Advances in quantum machine learning, arXiv:1512.02900 (2015).

[7] E. Bagan et al., Quantum learning without quantum memory, Scientific Reports (2011).

[8] D. Ventura et al., Quantum Neural Networks (2000).

[9] E. M. Gurari, Introduction to the Theory of Computation (1989).

[10] M. B. Plenio et al., Scalable reconstruction of unitary processes and Hamiltonians, arXiv:1411.6379 (2014).

[11] V. Dunjko et al., Quantum speedup for active learning agents, arXiv:1401.4997 (2014).

[12] S. Whalen, Open quantum systems with time-delayed interactions (2015).

[13] A. Widjaja et al., Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, IEEE Transactions on Neural Networks (2003).

[14] K. Cummings et al., Introduction to the Theory (2015).

[15] I. Deutsch et al., Quantum process tomography of unitary and near-unitary maps, arXiv:1404.2877 (2014).

[16] P. Zoller et al., Photonic Circuits with Time Delays and Quantum Feedback, Physical Review Letters (2016).

[17] M. Kubát, An Introduction to Machine Learning, Springer International Publishing (2017).

[18] N. J. Nilsson, Artificial Intelligence, IFIP Congress (1974).

[19] H.-J. Briegel et al., Quantum-enhanced machine learning, Physical Review Letters (2016).

[20] L. K. Grover, A fast quantum mechanical algorithm for database search, STOC '96 (1996).

[21] M. Schuld et al., The quest for a Quantum Neural Network, Quantum Information Processing (2014).

[22] I. Oshurko, Quantum Machine Learning, Quantum Computing (2020).

[23] A. Kapoor et al., Quantum Perceptron Models, NIPS (2016).