Beyond Single-Shot Fidelity: Chernoff-Based Throughput Optimization in Superconducting Qubit Readout
Abstract
Single-shot fidelity is the standard benchmark for superconducting qubit readout, yet it does not directly minimize the wall-clock time needed to certify a quantum state. We treat the dispersive measurement record as a stochastic communication channel and compute the classical Chernoff information governing the multi-shot error exponent, using a trajectory model that incorporates T1 relaxation with full cavity memory. The integration time that maximizes single-shot fidelity and the time that minimizes total certification time do not coincide. For representative transmon parameters and hardware overheads, the throughput-optimal window is longer, cutting certification time by roughly 9-11%, with the gain saturating near 1.13x in the high-readout-power and high-overhead regime. Benchmarking the extracted classical information against the unit-efficiency Gaussian Chernoff limit defines an information-extraction efficiency: dispersive schemes capture ~45% at short integration times, dropping to eta_info(tau_rate) ~ 12% at tau_rate ~ 1.22 us as T1-induced trajectory smearing accumulates. These results connect readout calibration directly to the operational objective of minimizing certification time in high-throughput superconducting processors.
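The core quantities above can be illustrated with a minimal numerical sketch. The code below is not the paper's trajectory model: it assumes a toy equal-variance Gaussian model for the integrated dispersive record (ignoring T1 and cavity memory), computes the classical Chernoff information C = -min_s log ∫ p^s q^(1-s) dx by grid search, checks it against the Gaussian closed form (μ0-μ1)²/(8σ²), and then converts it into a shot budget and certification time via the error-exponent relation P_err ≈ exp(-nC). All parameter values (mu, sigma, eps, tau, t_overhead) are hypothetical placeholders.

```python
import numpy as np

def chernoff_information(logp, logq, xs):
    """Chernoff information C(P,Q) = -min_{0<s<1} log ∫ p(x)^s q(x)^(1-s) dx,
    approximated by grid integration over xs and a grid search over s."""
    dx = xs[1] - xs[0]
    s_grid = np.linspace(0.01, 0.99, 197)  # includes s = 0.5 exactly
    exponents = [np.log(np.sum(np.exp(s * logp + (1.0 - s) * logq)) * dx)
                 for s in s_grid]
    return -min(exponents)

def log_gauss(x, m, s):
    # Log-density of N(m, s^2)
    return -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))

# Toy dispersive readout: integrated record ~ N(+mu, sigma^2) for one qubit
# state and N(-mu, sigma^2) for the other (no T1, no cavity memory).
mu, sigma = 1.0, 2.0
xs = np.linspace(-30.0, 30.0, 20001)
C = chernoff_information(log_gauss(xs, +mu, sigma), log_gauss(xs, -mu, sigma), xs)

# Equal-variance Gaussian closed form: C = (mu0 - mu1)^2 / (8 sigma^2)
C_exact = (2.0 * mu) ** 2 / (8.0 * sigma ** 2)

# Multi-shot discrimination error decays as exp(-n C), so certifying a state
# to error eps needs roughly n = ln(1/eps) / C shots; wall-clock certification
# time then includes the per-shot hardware overhead, not just integration.
eps = 1e-9          # target certification error (hypothetical)
tau = 0.5           # integration window in us (hypothetical)
t_overhead = 1.0    # per-shot reset/latency overhead in us (hypothetical)
n_shots = np.log(1.0 / eps) / C
t_certify = n_shots * (tau + t_overhead)
print(f"C = {C:.4f} (exact {C_exact:.4f}), shots = {n_shots:.0f}, "
      f"certification time ~ {t_certify:.1f} us")
```

Because the overhead term (tau + t_overhead) multiplies the shot count while C depends only on tau, sweeping tau in this objective is what separates the fidelity-optimal window (which extremizes the per-shot quantity alone) from the throughput-optimal one; reproducing the paper's specific "longer window" result would require its full T1-aware trajectory model for C(tau).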