Quantum Brain

Quantum Data Breach: Reusing Training Dataset by Untrusted Quantum Clouds

S. Upadhyay, Swaroop Ghosh · July 19, 2024 · DOI: 10.1109/ISQED65160.2025.11014467
Physics · Computer Science


Abstract

Quantum computing (QC) has the potential to revolutionize fields like machine learning, security, and healthcare. Quantum machine learning (QML) has emerged as a promising area that enhances learning algorithms using quantum computers. However, QML models are lucrative targets due to their high training costs and long training times; the scarcity of quantum resources and long wait times further exacerbate the challenge. Additionally, QML providers may rely on third-party quantum clouds to host the model, exposing both the models and the training data. As QML-as-a-Service (QMLaaS) becomes more prevalent, this reliance on third-party quantum clouds poses a significant threat. This paper shows that an adversary in the quantum cloud can use white-box access to the QML model during training to extract the state preparation circuit (which contains the training data) along with the labels. The extracted training data can be reused to train a clone model or sold for profit. We propose a suite of techniques to prune and fix incorrect labels. Results show that ≈90% of labels can be extracted correctly, and the same model trained on the adversarially extracted data achieves ≈90% accuracy, closely matching the accuracy achieved when trained on the original data. To mitigate this threat, we propose masking labels/classes and modifying the cost function for label obfuscation, reducing adversarial label-prediction accuracy by ≈70%.
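The abstract's key observation is that the state preparation circuit itself encodes the classical training data, so white-box access to the circuit leaks that data. As a minimal sketch (assuming a simple angle-encoding scheme, which the paper does not necessarily use), each normalized feature maps to a rotation angle; an adversary who reads the circuit parameters can simply invert that map:

```python
import numpy as np

# Hypothetical angle encoding: each feature x in [0, 1] becomes a
# single-qubit RY rotation angle theta = pi * x in the state
# preparation circuit. The names and the encoding are illustrative
# assumptions, not the paper's exact scheme.

def encode(features):
    """Map normalized features to rotation angles (radians)."""
    return np.pi * np.asarray(features, dtype=float)

def extract(angles):
    """Adversary's inversion: recover features from circuit angles."""
    return np.asarray(angles, dtype=float) / np.pi

features = np.array([0.2, 0.8, 0.5])
angles = encode(features)      # parameters visible in the circuit
recovered = extract(angles)    # training data leaked to the adversary
assert np.allclose(recovered, features)
```

Because the encoding is deterministic and invertible, the rotation angles are as sensitive as the raw data, which is why the paper's defense obfuscates labels and the cost function rather than relying on the circuit to hide anything.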
