Lower Bounds for Learning Hamiltonians from Time Evolution
Abstract
Learning about a Hamiltonian $H$ from its time evolution $e^{-iHt}$ is a fundamental task in quantum science. A flurry of recent work has developed powerful new algorithms with provable guarantees for this task, for a variety of natural settings. Despite this, relatively little is known about lower bounds for learning Hamiltonians. In particular, in the natural setting where we assume $H$ is a $k$-local Hamiltonian on $n$ qubits, all existing algorithms require total evolution time at least $n^{\Omega(k)}$ to learn the parameters of $H$, and it remained open whether one could obtain even faster algorithms -- or at the very least, if one could obtain better runtimes for simpler tasks, such as estimating a single designated coefficient of the Hamiltonian. In this work we show the answer is essentially no, by obtaining strong lower bounds for these problems. We find that not only do $k$-local Hamiltonians require $n^{\Omega(k)}$ time evolution or interactions to learn, but also that in several senses, learning anything about a Hamiltonian is just as hard as learning everything. In particular, we find the same $n^{\Omega(k)}$ lower bound holds for learning a single coefficient of a $k$-local Hamiltonian $H$, even if the rest of $H$ is already known. We also show an $n^{\Omega(k)}$ lower bound for the task of effective Hamiltonian learning, where one seeks only to learn a unitary that approximately implements time evolution of $H$. Several related lower bounds, such as for general sparse (but not necessarily local) $H$, are also given. On the technical side, we make a new connection between Hamiltonian learning lower bounds and the analysis of Boolean functions, where we introduce a novel extremal property that may be of independent interest.
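To make the setting concrete, here is a minimal sketch (not from the paper) of the object being learned: a $k$-local Hamiltonian built from Pauli terms, and the time-evolution unitary $e^{-iHt}$ that a learner can query. The qubit count, coefficients, and term choices below are illustrative assumptions.

```python
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3  # number of qubits (illustrative)
# A 2-local Hamiltonian: each term acts nontrivially on at most k = 2 qubits.
# The learner's goal is to recover the coefficients (0.8, 0.5, 0.3) from
# access to e^{-iHt} alone.
terms = [
    (0.8, [Z, Z, I2]),   # 0.8 * Z_0 Z_1
    (0.5, [I2, Z, Z]),   # 0.5 * Z_1 Z_2
    (0.3, [X, I2, I2]),  # 0.3 * X_0
]
H = sum(c * kron_all(ops) for c, ops in terms)

# Time evolution via diagonalization: e^{-iHt} = V e^{-i diag(E) t} V^dagger.
t = 1.0
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# Sanity check: U is unitary.
print(np.allclose(U @ U.conj().T, np.eye(2**n)))
```

The paper's lower bounds say that even recovering a single one of the coefficients above, with the rest known, requires $n^{\Omega(k)}$ total evolution time in the worst case.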