Neural-Network-Based Design of Approximate Gottesman-Kitaev-Preskill Code
Abstract
Gottesman-Kitaev-Preskill (GKP) encoding holds promise for continuous-variable fault-tolerant quantum computing. While the ideal GKP code is unphysical and therefore impractical, approximate versions provide viable alternatives. Conventional approximate GKP codewords are superpositions of multiple large-amplitude squeezed coherent states. This structure ensures correctability against single-photon loss and dephasing at short times, but it also makes the codewords harder to prepare. To mitigate this tradeoff, we use a neural network to generate optimal approximate GKP states, enabling effective error correction with only a few squeezed coherent states. We find that these optimized GKP codes outperform the best conventional ones while requiring fewer squeezed coherent states and maintaining simple, generalized stabilizer operators. Specifically, at a squeezing level of 9.55 dB, the optimized codes outperform the conventional ones using only one-third as many squeezed coherent states. This optimization drastically reduces the complexity of the codewords while improving error correctability.
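To make the "superposition of squeezed coherent states" picture concrete, the sketch below builds a conventional approximate GKP |0> codeword in the position representation as a finite comb of Gaussian peaks spaced by 2√π, with a Gaussian envelope suppressing the large-amplitude peaks. This is an illustration of the standard approximate construction only, not the paper's neural-network-optimized states; the peak count and grid are arbitrary choices, and the squeezing parameter Δ is set from the 9.55 dB level quoted in the abstract via Δ² = 10^(−dB/10).

```python
import numpy as np

SQUEEZING_DB = 9.55                  # squeezing level quoted in the abstract
delta = 10 ** (-SQUEEZING_DB / 20)   # Delta such that Delta^2 = 10^(-dB/10)

def gkp_zero_wavefunction(x, delta, n_peaks=3):
    """Unnormalized approximate-GKP |0> wavefunction psi(x):
    a superposition of 2*n_peaks + 1 squeezed coherent (Gaussian) peaks
    centered on the even grid points 2*n*sqrt(pi)."""
    psi = np.zeros_like(x, dtype=float)
    for n in range(-n_peaks, n_peaks + 1):
        center = 2 * n * np.sqrt(np.pi)
        # Gaussian envelope: large-amplitude peaks enter with smaller weight
        envelope = np.exp(-0.5 * delta**2 * center**2)
        # Each peak is a position-squeezed Gaussian of width delta
        psi += envelope * np.exp(-((x - center) ** 2) / (2 * delta**2))
    return psi

x = np.linspace(-12.0, 12.0, 4001)
psi = gkp_zero_wavefunction(x, delta)
dx = x[1] - x[0]
psi /= np.sqrt(np.sum(psi**2) * dx)  # normalize on the grid
```

The tradeoff the abstract describes is visible here: a larger `n_peaks` (more squeezed coherent states) gives better protection against loss and dephasing but a harder state to prepare, which is exactly the cost the neural-network optimization reduces.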