Quantum Error Propagation
Abstract
Data poisoning attacks on machine learning models aim to manipulate the data used for model training such that the trained model behaves in the attacker's favour. In classical models such as deep neural networks, long chains of dot products allow errors injected by an attacker to propagate and accumulate. But what about quantum models? We hypothesise that, in quantum machine learning, error propagation is limited for two reasons. First, in quantum computing, data is encoded as qubits, whose states are confined to the Bloch sphere. Second, quantum information processing happens via the application of unitary operators, which preserve norms. To test this hypothesis, we investigate the extent to which error propagation, and thus poisoning attacks, affect quantum machine learning.
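As a quick numerical illustration of the two mechanisms contrasted above (a sketch, not code from the paper; the depth, dimension, and perturbation size are illustrative choices), the following NumPy snippet compares how a small input perturbation evolves through a chain of random linear layers versus a chain of random unitaries. Unitaries preserve the 2-norm, so the distance between a clean and a perturbed quantum state stays constant, whereas random weight matrices are free to amplify it.

```python
# Illustrative sketch: error propagation in a deep linear chain vs.
# norm-preserving unitary evolution. All parameters are made up for
# demonstration and are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
dim, depth, eps = 8, 20, 1e-3

# Classical case: each layer is a random matrix, whose singular values
# may exceed 1, so an injected perturbation can grow layer by layer.
x = rng.standard_normal(dim)
x_clean = x.copy()
x_poisoned = x + eps * rng.standard_normal(dim)
for _ in range(depth):
    W = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    x_clean, x_poisoned = W @ x_clean, W @ x_poisoned
classical_gap = np.linalg.norm(x_poisoned - x_clean)

# Quantum case: states are unit vectors, and every layer is unitary,
# so the distance between clean and perturbed states is preserved.
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)
phi = psi + eps * rng.standard_normal(dim)
phi /= np.linalg.norm(phi)
initial_gap = np.linalg.norm(phi - psi)
for _ in range(depth):
    # Random unitary via QR decomposition of a complex Gaussian matrix,
    # with the diagonal phases of R fixed for a Haar-like distribution.
    Q, R = np.linalg.qr(rng.standard_normal((dim, dim))
                        + 1j * rng.standard_normal((dim, dim)))
    U = Q * (np.diag(R) / np.abs(np.diag(R)))
    psi, phi = U @ psi, U @ phi
final_gap = np.linalg.norm(phi - psi)

print(f"classical gap after {depth} layers: {classical_gap:.3e}")
print(f"quantum gap: {initial_gap:.3e} -> {final_gap:.3e}")
# The quantum gap is unchanged up to floating-point rounding.
```

The key line is the final comparison: `final_gap` equals `initial_gap` to machine precision regardless of depth, mirroring the norm-preservation argument, while the classical gap depends entirely on the spectra of the sampled weight matrices.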