
Quantum state preparation protocol for encoding classical data into the amplitudes of a quantum information processing register's wave function

S. Ashhab · July 29, 2021 · DOI: 10.1103/PhysRevResearch.4.013091
Physics

Abstract

We present a protocol for encoding N real numbers stored in N memory registers into the amplitudes of the quantum superposition that describes the state of log2 N qubits. This task is one of the main steps in quantum machine learning algorithms applied to classical data. The protocol combines partial CNOT gate rotations with probabilistic projection onto the desired state. Both the number of additional ancilla qubits used during the implementation of the protocol and the number of quantum gates scale linearly with the number of qubits in the processing register, and hence logarithmically with N. The average time needed to successfully perform the encoding scales logarithmically with the number of qubits, in addition to being inversely proportional to the acceptable error in the encoded amplitudes. It also depends on the structure of the data set, in such a way that the protocol is most efficient for non-sparse data.

Introduction

Quantum computing devices have made great progress towards the construction of a quantum computer whose computing power exceeds that of any existing classical computer [1–3]. In particular, a clear quantum advantage over classical computers was recently demonstrated using superconducting devices [4, 5]. Multi-order-of-magnitude increases in the number of qubits and in computing power are expected in the coming few years. On the software side, new algorithms are continually being developed for future quantum computers [6, 7]. In particular, as machine learning techniques become increasingly prevalent, researchers are exploring the potential for quantum computers to offer a computational advantage using similar techniques [8, 9]. There have been a large number of proposals for using quantum computers to perform machine learning tasks, as well as a few proof-of-principle experimental demonstrations of such tasks [10–13].

Quantum machine learning algorithms operate on data stored in the form of a quantum superposition in the state of a quantum information processing register. In some cases the initial state can be encoded easily into the processing register. For example, the data could be a quantum state that results from easily reproducible quantum dynamics, e.g. a quantum simulation of a physical system. In this case it could be practically impossible to translate the data into classical form (because of the exponentially large Hilbert space) but easy to take the prepared quantum state and perform quantum machine-learning analysis on it.

The situation is starkly different when dealing with input data that is provided in classical form and does not necessarily have any relation to quantum mechanical quantities. Assuming that the data is described by a set of N real numbers {c0, c1, ..., cN−1}, one first needs to encode this data into the quantum state of a quantum register. In this case, encoding the classical data into the quantum processor can be the most challenging step in running the machine-learning algorithm.

A conceptually natural encoding of the data, which is used for example in the quantum support vector machine [14] and allows a straightforward evaluation of the distance between two data points, is amplitude encoding. This encoding can be described as preparing the state

|Ψ⟩ = (1/𝒩) Σ_{i=0}^{N−1} c_i |i⟩,

where the |i⟩ are the computational basis states of the log2 N qubits and 𝒩 = (Σ_i c_i²)^{1/2} is a normalization factor.
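
To make the amplitude-encoding target concrete, here is a minimal NumPy sketch that computes the normalized amplitudes for a small, made-up data set and checks that the resulting state assigns measurement probability c_i²/𝒩² to basis state |i⟩. It illustrates only the target state defined above, not the circuit-level protocol proposed in the paper; the example data and variable names are illustrative.

```python
import numpy as np

# Illustrative classical data set: N = 8 real numbers c_0, ..., c_7,
# so the processing register needs log2(N) = 3 qubits.
c = np.array([0.3, -1.2, 0.7, 0.0, 2.1, -0.5, 1.4, 0.9])
N = len(c)
n_qubits = int(np.log2(N))

# Amplitude-encoding target: |Psi> = (1/norm) * sum_i c_i |i>,
# with norm = sqrt(sum_i c_i^2) so that the state is normalized.
norm = np.linalg.norm(c)
amplitudes = c / norm

# Sanity checks: the amplitudes define a valid quantum state, and a
# computational-basis measurement returns |i> with probability c_i^2 / norm^2.
assert np.isclose(np.sum(amplitudes**2), 1.0)
probabilities = amplitudes**2

print(f"{n_qubits} qubits encode N = {N} values")
print("amplitudes:", np.round(amplitudes, 4))
print("measurement probabilities:", np.round(probabilities, 4))
```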

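The abstract also notes that the protocol relies on probabilistic projection, so each encoding attempt succeeds only with some probability and is repeated until it does, which is where the dependence of the average encoding time on the data set enters. The sketch below illustrates only the generic repeat-until-success bookkeeping (the average number of attempts approaches 1/p_success); the per-attempt success probability p_success is a free parameter here, not the value derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def attempts_until_success(p_success: float) -> int:
    """Count repetitions of a probabilistic projection until one succeeds."""
    attempts = 1
    while rng.random() >= p_success:
        attempts += 1
    return attempts

# The average number of attempts approaches 1 / p_success, so a low
# per-attempt success probability directly inflates the average time
# needed to complete the encoding.
for p in (0.8, 0.5, 0.2):
    mean_attempts = np.mean([attempts_until_success(p) for _ in range(20_000)])
    print(f"p_success = {p:.1f}: average attempts ~ {mean_attempts:.2f} (expected {1/p:.2f})")
```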