Quantum optimization with exact geodesic transport
Abstract
We introduce an architecture for variational quantum algorithms that can be efficiently trained via parameter updates along exact geodesics on the Riemannian state manifold. This features a parameter-optimal circuit ansatz which supersedes known quantum natural gradient methods by removing expensive estimations of the metric tensor and provably reducing gradient estimation costs by $62.5\%$. Moreover, the framework naturally incorporates conjugate gradients as a built-in feature, giving an accelerated descent method with convergence guarantees that we dub exact geodesic transport with conjugate gradients. Numerical benchmarks against state-of-the-art variational methods for ground-state preparation of molecular Hamiltonians and $1$-dimensional spin chains (both with and without particle-number conservation) up to $n=16$ qubits show reductions of over one order of magnitude in the number of optimization steps, with global convergence even for degenerate cases and competitive quantum-resource scalings. In addition, we perform proof-of-principle demonstrations on IonQ's Forte quantum processor, showcasing deployment of pre-trained circuits for the $H_{3}^{+}$ molecule and experimental training for $H_{2}$. Our work enables quantum machine learning applications with shorter training runtimes, with implications at the interface of quantum simulation, differential geometry, and optimal control theory.
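To make the central idea concrete, the sketch below illustrates descent along exact geodesics of the pure-state manifold in a toy statevector setting: under the Fubini-Study metric, geodesics through a normalized state are great circles, so an update of the form $\cos(t)\,|\psi\rangle + \sin(t)\,|d\rangle$ (with $|d\rangle$ a unit tangent direction orthogonal to $|\psi\rangle$) stays exactly on the manifold without any metric-tensor estimation. This is only an assumed, minimal illustration of geodesic updates on the state sphere, not the paper's parameter-optimal circuit ansatz or its conjugate-gradient variant; all function names and the toy Hamiltonian are hypothetical.

```python
# Minimal sketch of gradient descent along exact great-circle geodesics of the
# pure-state sphere (Fubini-Study metric). Illustrative only; not the paper's
# circuit-level construction.
import numpy as np

def project_tangent(psi, grad):
    """Project a Euclidean gradient onto the tangent space at psi
    (i.e. remove the component along psi itself)."""
    return grad - psi * np.vdot(psi, grad)

def geodesic_step(psi, direction, eta):
    """Exact geodesic update: psi(eta) = cos(eta*|d|) psi + sin(eta*|d|) d/|d|,
    which remains exactly normalized when d is tangent at psi."""
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        return psi
    unit = direction / norm
    return np.cos(eta * norm) * psi + np.sin(eta * norm) * unit

# Toy problem: minimize <psi|H|psi> for a random Hermitian H on two qubits.
rng = np.random.default_rng(0)
dim = 4
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

eta = 0.1
for _ in range(200):
    grad = H @ psi                            # Euclidean gradient of the energy (up to a factor 2)
    tangent = project_tangent(psi, grad)      # Riemannian gradient direction
    psi = geodesic_step(psi, -tangent, eta)   # descend along the exact geodesic
    psi /= np.linalg.norm(psi)                # guard against round-off drift

print("geodesic descent energy:", np.real(np.vdot(psi, H @ psi)))
print("exact ground energy:    ", np.min(np.linalg.eigvalsh(H)))
```

In this toy version the step stays on the state manifold by construction, which is the property the abstract attributes to the full method; the paper's contribution is realizing such updates through a trainable circuit ansatz with reduced gradient-estimation cost and a built-in conjugate-gradient acceleration.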