Scalable accuracy gains from postselection in quantum error correcting codes
Abstract
Decoding stabilizer codes such as the surface and toric codes involves evaluating free-energy differences in a disordered statistical mechanics model, in which the randomness comes from the observed pattern of error syndromes. We study the statistical distribution of logical failure rates across observed syndromes in the toric code, and show that, within the coding phase, logical failures are predominantly caused by exponentially unlikely syndromes. Therefore, postselecting on not seeing these exponentially unlikely syndrome patterns offers a scalable accuracy gain. The logical error rate can be suppressed from $p_f$ to $p_f^b$ with $b \geq 2$ in general; in the specific case of the toric code with perfect syndrome measurements, we find numerically that $b = 3.1(1)$. Our arguments apply to general topological stabilizer codes, and can be extended to more general settings as long as the decoding failure probability obeys a large deviation principle.