Reducing Circuit Depth in Lindblad Simulation via Step-Size Extrapolation
Abstract
We study algorithmic error mitigation via Richardson-style extrapolation for quantum simulations of open quantum systems modelled by the Lindblad equation. Focusing on two specific first-order quantum algorithms, we perform a backward-error analysis to obtain a step-size expansion of the density operator with explicit coefficient bounds. These bounds supply the smoothness needed to analyze Richardson extrapolation, allowing us to bound both the deterministic bias and the shot-noise variance that arise in post-processing. For Lindblad dynamics with generator norm bounded by $l$, our main theorem shows that an $n=\Omega(\log(1/\varepsilon))$-point extrapolator reduces the maximum circuit depth required for accuracy $\varepsilon$ from the polynomial $\mathcal{O}((lT)^{2}/\varepsilon)$ to the polylogarithmic $\mathcal{O}((lT)^{2} \log l \log^2(1/\varepsilon))$, an exponential improvement in~$1/\varepsilon$, while keeping the sampling complexity at the standard $1/\varepsilon^{2}$ level; this extends analogous results for Hamiltonian simulation to Lindblad simulation. Several numerical experiments illustrate the practical viability of the method.
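To make the extrapolation idea concrete, the sketch below is a minimal classical toy (not the paper's quantum algorithm): a first-order Euler-style approximation of a scalar dissipative decay $e^{-lT}$ stands in for one observable of a first-order Lindblad solver, and an $n$-point Richardson extrapolator (Lagrange weights evaluated at step size $h \to 0$) combines runs at several step sizes to cancel the leading error terms. The function names, the choice of nodes, and the scalar model are illustrative assumptions; shot noise and circuit depth are not modelled.

```python
import numpy as np


def first_order_values(rate, T, step_sizes):
    """First-order (Euler-style) approximations of exp(-rate*T), one per step size h.

    This is a scalar stand-in for the step-size expansion A(h) = A + c1*h + c2*h^2 + ...
    that a first-order Lindblad integrator produces for an observable.
    """
    vals = []
    for h in step_sizes:
        n_steps = int(round(T / h))          # chosen so T/h is an integer below
        vals.append((1.0 - rate * h) ** n_steps)
    return np.array(vals)


def richardson_weights(step_sizes):
    """Weights of polynomial extrapolation to h -> 0 (Lagrange basis evaluated at 0)."""
    h = np.asarray(step_sizes, dtype=float)
    w = np.ones_like(h)
    for i in range(len(h)):
        for j in range(len(h)):
            if j != i:
                w[i] *= h[j] / (h[j] - h[i])
    return w                                  # weights sum to 1 and cancel h, ..., h^(n-1)


if __name__ == "__main__":
    rate, T = 1.0, 1.0
    exact = np.exp(-rate * T)
    # n = 4 extrapolation nodes h = T/4, ..., T/7 (all divide T exactly)
    hs = [T / k for k in (4, 5, 6, 7)]
    vals = first_order_values(rate, T, hs)
    w = richardson_weights(hs)
    print("smallest-h error   :", abs(vals[-1] - exact))
    print("extrapolated error :", abs(np.dot(w, vals) - exact))
```

Run as-is, the extrapolated error is several orders of magnitude below the best single-step-size error, which is the mechanism the abstract invokes: increasing the number of nodes $n$ with $\log(1/\varepsilon)$ lets the deepest circuit stay short while post-processing removes the residual step-size bias.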