EP-PQM: Efficient Parametric Probabilistic Quantum Memory With Fewer Qubits and Gates
Abstract
Machine learning (ML) classification tasks can be carried out on a quantum computer (QC) using probabilistic quantum memory (PQM) and its extension, parametric PQM (P-PQM), by calculating the Hamming distance between an input pattern and a database of $r$ patterns containing $z$ features with $a$ distinct attributes. For PQM and P-PQM to correctly compute the Hamming distance, features must be encoded using one-hot encoding, which is memory intensive for multiattribute datasets with $a>2$. We can represent multiattribute data more compactly by replacing one-hot encoding with label encoding; both encodings yield the same Hamming distance. Implementing this replacement on a classical computer is trivial. However, replacing these encoding schemes on a QC is not straightforward because PQM and P-PQM operate at the bit level rather than at the feature level (a feature is represented by a binary string of 0’s and 1’s). We present an enhanced P-PQM, called efficient P-PQM (EP-PQM), that allows label encoding of data stored in a PQM data structure and reduces the circuit depth of the data storage and retrieval procedures. We show implementations for an ideal QC and a noisy intermediate-scale quantum (NISQ) device. Our complexity analysis shows that the EP-PQM approach requires $O(z \log_2(a))$ qubits, as opposed to $O(za)$ qubits for P-PQM. EP-PQM also requires fewer gates, reducing the gate count from $O(rza)$ to $O(rz \log_2(a))$. For five datasets, we demonstrate that training an ML classification model using EP-PQM requires 48% to 77% fewer qubits than P-PQM for datasets with $a>2$. EP-PQM also reduces circuit depth by 60% to 96%, depending on the dataset; with a decomposed circuit, the reduction is even greater, ranging from 94% to 99%. Because EP-PQM requires less space, it can train on and classify larger datasets than previous PQM implementations on NISQ devices. Furthermore, reducing the number of gates speeds up the classification and reduces the noise associated with deep quantum circuits. Thus, EP-PQM brings us closer to scalable ML on an NISQ device.
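To make the encoding trade-off concrete, the following is a minimal classical sketch (not part of the paper; all names and the example data are illustrative). It shows that one-hot encoding reproduces the feature-level distance at the bit level (each feature mismatch contributes exactly 2 to the bitwise Hamming distance), while label encoding carries the same per-feature comparison in $z\lceil\log_2(a)\rceil$ bits instead of $za$ bits, mirroring the qubit savings claimed for EP-PQM.

```python
# Illustrative sketch only: contrasts one-hot and label encodings for
# z features, each with `a` distinct attribute values, on a classical
# computer. Names (one_hot, label, hamming) are ours, not the paper's.
from math import ceil, log2

def one_hot(value: int, a: int) -> str:
    """Encode attribute `value` (0..a-1) as an a-bit one-hot string."""
    return "".join("1" if i == value else "0" for i in range(a))

def label(value: int, a: int) -> str:
    """Encode attribute `value` as a ceil(log2(a))-bit binary string."""
    width = max(1, ceil(log2(a)))
    return format(value, f"0{width}b")

def hamming(x: str, y: str) -> int:
    """Bitwise Hamming distance between equal-length bit strings."""
    return sum(c1 != c2 for c1, c2 in zip(x, y))

a = 5                      # distinct attribute values per feature (a > 2)
pattern = [0, 3, 4]        # input pattern with z = 3 features
database_row = [0, 2, 4]   # one stored pattern; only feature 1 differs

# Feature-level distance: number of mismatching features (here, 1).
mismatches = sum(p != q for p, q in zip(pattern, database_row))

# One-hot: every feature mismatch adds exactly 2 to the bitwise
# Hamming distance, so bit-level PQM recovers the feature-level distance.
oh_dist = sum(hamming(one_hot(p, a), one_hot(q, a))
              for p, q in zip(pattern, database_row))
assert oh_dist == 2 * mismatches

# Label encoding: comparing codes feature by feature (rather than bit by
# bit, which is what EP-PQM's circuit enables) yields the same distance.
lbl_dist = sum(label(p, a) != label(q, a)
               for p, q in zip(pattern, database_row))
assert lbl_dist == mismatches

print("one-hot bits per row:", len(pattern) * a)              # z*a      = 15
print("label bits per row:  ", len(pattern) * ceil(log2(a)))  # z*log2(a) = 9
```

Note that a naive bitwise Hamming distance over label codes would not equal the feature-level distance (e.g., codes 000 and 011 differ in two bits but represent a single feature mismatch); this is precisely why swapping the encodings inside a bit-level quantum memory is nontrivial and requires the circuit-level changes the paper introduces.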