On Dequantization of Supervised Quantum Machine Learning via Random Fourier Features
Abstract
In the quest for quantum advantage, a central question is under what conditions classical algorithms can achieve performance comparable to quantum algorithms, a concept known as dequantization. Random Fourier features (RFFs) have demonstrated potential for dequantizing certain quantum neural networks (QNNs) applied to regression tasks, but their applicability to other learning problems and architectures has remained unexplored. In this work, we derive bounds on the true risk gap between classical RFF models and quantum models for regression and classification tasks with both QNN and quantum kernel architectures. Furthermore, we provide sufficient conditions under which this gap is small and the quantum model can therefore be dequantized via the RFF method. We support our findings with numerical experiments that illustrate the practical dequantization of existing quantum kernel-based methods. Our findings not only broaden the applicability of RFF-based dequantization but also enhance the understanding of potential quantum advantages in practical machine-learning tasks.
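For context, the RFF method referred to in the abstract builds on the classical Rahimi–Recht construction: a shift-invariant kernel is approximated by an inner product of low-dimensional random cosine features. The sketch below is a generic illustration of that construction for a Gaussian kernel, not the paper's specific quantum-model approximation; the bandwidth `sigma`, feature count `D`, and input dimension `d` are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 5, 2000, 1.0  # input dim, number of random features, kernel bandwidth (illustrative)

# Sample frequencies from the spectral density of the Gaussian kernel
# and phases uniformly, per the standard RFF construction.
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    """Random Fourier feature map: phi(x) @ phi(y) approximates
    exp(-||x - y||^2 / (2 * sigma^2)) with error O(1/sqrt(D))."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))
approx = float(phi(x) @ phi(y))
```

A classical linear model trained on such features can then stand in for a kernel machine, which is the mechanism behind RFF-based dequantization arguments.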