Correcting quantum errors one gradient step at a time
Abstract
In this work, we introduce a general, gradient-based method that optimises codewords for a given noise channel and fixed recovery map. We differentiate the fidelity and descend on the complex codeword coefficients using finite-difference Wirtinger gradients, with soft penalties that promote orthonormality. We validate the gradients with symmetry checks on the XXX/ZZZ repetition codes and the $[[5, 1, 3]]$ code, then demonstrate substantial gains under isotropic Pauli noise with Petz recovery: at noise strength 0.05, fidelity improves from 0.783 to 0.915 within 100 steps. The procedure is deterministic, highly parallelisable, and scalable.
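To make the recipe concrete, the following is a minimal NumPy sketch of one piece of the abstract's pipeline: a central finite-difference Wirtinger gradient combined with a soft orthonormality penalty on the codeword matrix, optimised by plain gradient descent. The `toy_fidelity` placeholder, the penalty weight `lam`, the step size `eta`, and all function names are illustrative assumptions; the paper's actual fidelity would be computed through the noise channel and a fixed Petz recovery map, which is not reproduced here.

```python
import numpy as np

def wirtinger_grad(f, z, h=1e-6):
    """Central finite-difference Wirtinger gradient of a real-valued f(z).
    Returns df/dRe(z) + 1j*df/dIm(z), the steepest-ascent direction in C^n."""
    g = np.zeros_like(z)
    for k in range(z.size):
        e = np.zeros(z.size, dtype=complex)
        e[k] = 1.0
        dfx = (f(z + h * e) - f(z - h * e)) / (2 * h)            # d/d Re(z_k)
        dfy = (f(z + 1j * h * e) - f(z - 1j * h * e)) / (2 * h)  # d/d Im(z_k)
        g[k] = dfx + 1j * dfy
    return g

def orthonormality_penalty(C):
    """Soft penalty ||C^dagger C - I||_F^2 pushing the codeword
    matrix C (columns = codewords) toward an isometry."""
    G = C.conj().T @ C
    return np.linalg.norm(G - np.eye(G.shape[0])) ** 2

def loss(z, fidelity_fn, dim, n_codewords, lam=10.0):
    """Minimised objective: negative fidelity plus soft orthonormality penalty."""
    C = z.reshape(dim, n_codewords)
    return -fidelity_fn(C) + lam * orthonormality_penalty(C)

# Usage sketch with a toy fidelity (overlap with a fixed random isometry);
# dim = 8, k = 2 mimics encoding one logical qubit into three physical qubits.
rng = np.random.default_rng(0)
dim, k = 8, 2
target = np.linalg.qr(rng.normal(size=(dim, k)) + 1j * rng.normal(size=(dim, k)))[0]
toy_fidelity = lambda C: abs(np.trace(target.conj().T @ C)) ** 2 / k ** 2

z = (rng.normal(size=dim * k) + 1j * rng.normal(size=dim * k)) / np.sqrt(2 * dim)
eta = 0.05
for step in range(200):
    g = wirtinger_grad(lambda w: loss(w, toy_fidelity, dim, k), z)
    z = z - eta * g   # descend along the Wirtinger gradient
```

Each of the `2 * dim * n_codewords` finite-difference evaluations is independent, which is what makes the deterministic procedure described in the abstract straightforward to parallelise across objective evaluations.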