Quantum Brain

Almost fault-tolerant quantum machine learning with drastic overhead reduction

Haiyue Kang, Younghun Kim, Eromanga Adermann, M. Sevior, Muhammad Usman · July 25, 2025 · DOI: 10.1088/2058-9565/ae2157
Physics


Abstract

Errors in the current generation of quantum processors pose a significant challenge to practical-scale implementations of quantum machine learning (QML): they lead to trainability issues arising from noise-induced barren plateaus, as well as performance degradation due to noise accumulation in deep circuits even when QML models are free from barren plateaus. Quantum error correction (QEC) protocols are being developed to overcome hardware noise, but their extremely high spacetime overheads, mainly due to magic state distillation, make them infeasible for near-term practical implementation. This work proposes the idea of partial QEC for QML models and identifies a sweet spot where distillations are omitted to significantly reduce overhead. By assuming error-corrected two-qubit Controlled-Z gates (Clifford operations), we demonstrate that the QML models remain trainable even when single-qubit gates are subjected to ≈0.2% depolarizing noise, corresponding to a gate error rate of ≈0.13% under randomized benchmarking. Further analysis based on various noise models, such as phase-damping and thermal-dissipation channels at low temperature, indicates that the QML models are trainable independently of the mean angle of over-rotation, or can even be improved by thermal damping that purifies a quantum state away from depolarization. While it may take several years to build quantum processors capable of fully fault-tolerant QML, our work proposes a resource-efficient solution for trainable and high-accuracy QML implementations in noisy environments.
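The quoted conversion from ≈0.2% depolarizing noise to a ≈0.13% randomized-benchmarking gate error is consistent with the common convention in which "depolarizing noise with probability p" means a uniformly random Pauli error (X, Y, or Z) is applied with probability p, giving an average gate error of 2p/3 on one qubit. A minimal sketch of that conversion, assuming this convention (the paper may define its noise parameter differently):

```python
import numpy as np

# Kraus operators of a single-qubit depolarizing channel under the
# random-Pauli convention: with probability p, a uniformly random
# Pauli error (X, Y, or Z) is applied; otherwise the state is untouched.
p = 0.002  # ~0.2% depolarizing noise, as quoted in the abstract
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - p) * I] + [np.sqrt(p / 3) * P for P in (X, Y, Z)]

# Average gate fidelity from Kraus operators (d = 2 for one qubit):
#   F_avg = (sum_i |Tr K_i|^2 + d) / (d^2 + d)
d = 2
F_avg = (sum(abs(np.trace(K)) ** 2 for K in kraus) + d) / (d**2 + d)
r = 1 - F_avg  # average gate error, the quantity RB estimates

print(f"average gate error r = {r:.4%}")  # ~0.13%, matching the abstract
```

Under this convention F_avg = 1 - 2p/3, so p = 0.2% yields r ≈ 0.133%, in line with the ≈0.13% figure.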
